[
{
"msg_contents": "Hi,\n\nWe are running a postgres streaming replica with\nmax_standby_streaming_delay set to 900 seconds (15min). We encountered an\nissue in our environment where we had a long running query that was running\nagainst the replica for just over 4 hours causing 4 hours of replication\nlag. Looking at pg_stat_activity for this query it was stuck in\nClient:ClientWrite wait state for pretty much all of this time (it ran for\nless than 1 minute before going into ClientWrite wait state. We capture\npg_stat_activity every minute and only first capture shows a DataFileRead\nwait and there was only 1 other capture during the 4 hours where it was\nactive with no wait event). From what we could tell the client process\ntried to send cancellation and disconnected (our client application uses\nnpgsql) so there was no process to consume these results and after manually\ncancelling the query the replication lag came back down so this query was\ndefinitely the cause of the lag.\n\nQuestion: Why did the max_standby_streaming_delay setting not cancel this\nquery?\n\nI looked at the code in standby.c and if there are conflicting locks it\nshould be cancelled. Unfortunately at the time this issue occurred we\nweren't collecting pg_locks to see what locks are being held but given the\nstate the query was in would it have released the ACCESS SHARE lock it\nacquired while executing the query given it just has to send data to the\nclient now? I would think if it still held this lock then the query would\nbe cancelled. If it didn't hold it anymore then maybe that is why\nmax_standby_streaming_delay setting didn't cause it to be cancelled. Any\nother ideas?\n\nThanks,\nBen\n\nHi,We are running a postgres streaming replica with max_standby_streaming_delay set to 900 seconds (15min). We encountered an issue in our environment where we had a long running query that was running against the replica for just over 4 hours causing 4 hours of replication lag. 
Looking at pg_stat_activity for this query it was stuck in Client:ClientWrite wait state for pretty much all of this time (it ran for less than 1 minute before going into ClientWrite wait state. We capture pg_stat_activity every minute and only first capture shows a DataFileRead wait and there was only 1 other capture during the 4 hours where it was active with no wait event). From what we could tell the client process tried to send cancellation and disconnected (our client application uses npgsql) so there was no process to consume these results and after manually cancelling the query the replication lag came back down so this query was definitely the cause of the lag.Question: Why did the max_standby_streaming_delay setting not cancel this query?I looked at the code in standby.c and if there are conflicting locks it should be cancelled. Unfortunately at the time this issue occurred we weren't collecting pg_locks to see what locks are being held but given the state the query was in would it have released the ACCESS SHARE lock it acquired while executing the query given it just has to send data to the client now? I would think if it still held this lock then the query would be cancelled. If it didn't hold it anymore then maybe that is why max_standby_streaming_delay setting didn't cause it to be cancelled. Any other ideas?Thanks,Ben",
"msg_date": "Wed, 1 Nov 2023 12:58:17 -0400",
"msg_from": "Ben Snaidero <[email protected]>",
"msg_from_op": true,
"msg_subject": "max_standby_streaming_delay setting not cancelling query on replica"
}
]
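The situation Ben describes can be checked for on a live standby. The following is a hedged sketch (not something posted in the thread) against the standard pg_stat_activity and pg_settings views: it lists backends stuck waiting to send results to their client, alongside the configured standby delay, which is the first thing to compare against when replay lag builds up behind an idle reader.

```sql
-- Sketch: spot backends on a standby that sit in the Client/ClientWrite
-- wait state (i.e. blocked sending results to a client that may be gone),
-- and show how long each has been running next to the configured
-- max_standby_streaming_delay (reported in milliseconds).
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       now() - query_start AS query_runtime,
       (SELECT setting
        FROM pg_settings
        WHERE name = 'max_standby_streaming_delay') AS standby_delay_ms
FROM pg_stat_activity
WHERE wait_event_type = 'Client'
  AND wait_event = 'ClientWrite'
ORDER BY query_runtime DESC;
```

A backend found this way can then be cancelled manually with `SELECT pg_cancel_backend(pid);`, which is what resolved the lag in Ben's case.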
[
{
"msg_contents": "Hello!\n\nFound that src/test/modules/test_misc/t/003_check_guc.pl will crash if an extension\nthat adds own GUCs was loaded into memory.\nSo it is now impossible to run a check-world with loaded extension libraries.\n\nReproduction:\ncd src/test/modules/test_misc\nexport EXTRA_INSTALL=\"contrib/pg_stat_statements\"\nexport TEMP_CONFIG=$(pwd)/pg_stat_statements_temp.conf\necho -e \"shared_preload_libraries = 'pg_stat_statements'\" > $TEMP_CONFIG\necho \"compute_query_id = 'regress'\" >> $TEMP_CONFIG\nmake check PROVE_TESTS='t/003_check_guc.pl'\n\n# +++ tap check in src/test/modules/test_misc +++\nt/003_check_guc.pl .. 1/?\n# Failed test 'no parameters missing from postgresql.conf.sample'\n# at t/003_check_guc.pl line 81.\n# got: '5'\n# expected: '0'\n# Looks like you failed 1 test of 3.\n\nMaybe exclude such GUCs from this test?\nFor instance, like that:\n\n--- a/src/test/modules/test_misc/t/003_check_guc.pl\n+++ b/src/test/modules/test_misc/t/003_check_guc.pl\n@@ -19,7 +19,7 @@ my $all_params = $node->safe_psql(\n \"SELECT name\n FROM pg_settings\n WHERE NOT 'NOT_IN_SAMPLE' = ANY (pg_settings_get_flags(name)) AND\n- name <> 'config_file'\n+ name <> 'config_file' AND name NOT LIKE '%.%'\n ORDER BY 1\");\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 2 Nov 2023 00:28:05 +0300",
"msg_from": "\"Anton A. Melnikov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "003_check_guc.pl crashes if some extensions were loaded."
},
{
"msg_contents": "On Thu, Nov 02, 2023 at 12:28:05AM +0300, Anton A. Melnikov wrote:\n> Found that src/test/modules/test_misc/t/003_check_guc.pl will crash if an extension\n> that adds own GUCs was loaded into memory.\n> So it is now impossible to run a check-world with loaded extension libraries.\n\nRight. That's annoying, so let's fix it.\n\n> --- a/src/test/modules/test_misc/t/003_check_guc.pl\n> +++ b/src/test/modules/test_misc/t/003_check_guc.pl\n> @@ -19,7 +19,7 @@ my $all_params = $node->safe_psql(\n> \"SELECT name\n> FROM pg_settings\n> WHERE NOT 'NOT_IN_SAMPLE' = ANY (pg_settings_get_flags(name)) AND\n> - name <> 'config_file'\n> + name <> 'config_file' AND name NOT LIKE '%.%'\n> ORDER BY 1\");\n\nWouldn't it be better to add a qual as of \"category <> 'Customized\nOptions'\"? That's something arbitrarily assigned for all custom GUCs\nand we are sure that none of them will exist in\npostgresql.conf.sample. There's also no guarantee that out-of-core\ncustom GUCs will include a dot in their name (even if I know that\nmaintainers close to the community adopt this convention and are\nrather careful about that).\n--\nMichael",
"msg_date": "Thu, 2 Nov 2023 07:53:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 003_check_guc.pl crashes if some extensions were loaded."
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Nov 02, 2023 at 12:28:05AM +0300, Anton A. Melnikov wrote:\n>> \"SELECT name\n>> FROM pg_settings\n>> WHERE NOT 'NOT_IN_SAMPLE' = ANY (pg_settings_get_flags(name)) AND\n>> - name <> 'config_file'\n>> + name <> 'config_file' AND name NOT LIKE '%.%'\n>> ORDER BY 1\");\n\n> Wouldn't it be better to add a qual as of \"category <> 'Customized\n> Options'\"?\n\n+1, seems like a cleaner answer.\n\n> That's something arbitrarily assigned for all custom GUCs\n> and we are sure that none of them will exist in\n> postgresql.conf.sample. There's also no guarantee that out-of-core\n> custom GUCs will include a dot in their name (even if I know that\n> maintainers close to the community adopt this convention and are\n> rather careful about that).\n\nActually we do force that, see valid_custom_variable_name().\nBut I think your idea is better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Nov 2023 19:29:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 003_check_guc.pl crashes if some extensions were loaded."
},
{
"msg_contents": "On Wed, Nov 01, 2023 at 07:29:51PM -0400, Tom Lane wrote:\n> Actually we do force that, see valid_custom_variable_name().\n> But I think your idea is better.\n\nAh, indeed, thanks. I didn't recall this was the case.\n--\nMichael",
"msg_date": "Thu, 2 Nov 2023 08:37:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 003_check_guc.pl crashes if some extensions were loaded."
},
{
"msg_contents": "On 02.11.2023 01:53, Michael Paquier wrote:> On Thu, Nov 02, 2023 at 12:28:05AM +0300, Anton A. Melnikov wrote:\n>> Found that src/test/modules/test_misc/t/003_check_guc.pl will crash if an extension\n>> that adds own GUCs was loaded into memory.\n>> So it is now impossible to run a check-world with loaded extension libraries.\n> \n> Right. That's annoying, so let's fix it.\n\nThanks!\n\nOn 02.11.2023 02:29, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> Wouldn't it be better to add a qual as of \"category <> 'Customized\n>> Options'\"?\n> \n> +1, seems like a cleaner answer.\n\nAlso agreed. That is a better variant!\n\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 2 Nov 2023 07:08:20 +0300",
"msg_from": "\"Anton A. Melnikov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 003_check_guc.pl crashes if some extensions were loaded."
}
]
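Putting the thread's conclusion together: the test query in 003_check_guc.pl would filter custom GUCs by category rather than by a dot in the name. A sketch of the resulting query (the consensus fix as discussed, not a verbatim committed patch):

```sql
-- Sketch of the 003_check_guc.pl query with Michael's suggested qual:
-- custom GUCs are all assigned the category 'Customized Options', so
-- they can be excluded without relying on the name containing a dot.
SELECT name
FROM pg_settings
WHERE NOT 'NOT_IN_SAMPLE' = ANY (pg_settings_get_flags(name))
  AND name <> 'config_file'
  AND category <> 'Customized Options'
ORDER BY 1;
```

As Tom notes, valid_custom_variable_name() does in fact force a dot into custom GUC names, but filtering by category is independent of that naming convention.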
[
{
"msg_contents": "The upper planner was pathified many years ago now. That was a large\nchunk of work and because of that, the union planner was not properly\npathified in that effort. A small note was left in\nrecurse_set_operations() to mention about future work.\n\nYou can see this lack of pathification in make_union_unique() and\nchoose_hashed_setop(). There are heuristics in there to decide the\nmethod to use instead of creating paths and letting add_path() decide\nwhat's faster.\n\nI've been working on improving this for the past few weeks and I'm not\nquite as far along as I'd hoped, but what I have is perhaps worthy of\nsharing. For now, I've only improved UNIONs.\n\nA UNION plan can now look like:\n\n# explain (costs off) select * from a union select * from a;\n QUERY PLAN\n---------------------------------------------------\n Unique\n -> Merge Append\n Sort Key: a.a\n -> Index Only Scan using a_pkey on a\n -> Index Only Scan using a_pkey on a a_1\n\nPreviously we'd have only considered Append -> Hash Aggregate or via\nAppend -> Sort -> Unique\n\nTo make this work, the query_planner() needs to know about setops, so\nI've passed those down via the standard_qp_extra struct so that we can\nchoose pathkeys for the setops.\n\nOne part that still needs work is the EquivalanceClass building.\nBecause we only build the final targetlist for the Append after\nplanning all the append child queries, I ended up having to populate\nthe EquivalanceClasses backwards, i.e children first. add_eq_member()\ndetermines if you're passing a child member by checking if parent !=\nNULL. Since I don't have a parent EquivalenceMember to pass,\nem_is_child gets set wrongly, and that causes problems because\nec_has_const can get set to true when it shouldn't. This is a problem\nas it can make a PathKey redundant when it isn't. I wonder if I'll\nneed to change the signature of add_eq_member() and add an \"is_child\"\nbool to force the EM to be a child em... 
Needs more thought...\n\nI've not worked on the creation of Incremental Sort paths yet, or done\nany path plumbing work to have UNION consider Gather Merge -> Unique\non already sorted paths. I think to make similar improvements to\nEXCEPT and INTERSECT we'd need a node executor node. Perhaps\nnodeMergeAppendSetops.c which can be configured to do EXCEPT or\nINTERSECT. It could also perhaps handle UNION too then we can use\nthat instead of a Merge Append -> Unique. That might save doing some\nslot copying and improve performance further. I'm not planning on\ndoing that for the first stage. I only intend to improve UNION for\nthat and we have all the executor nodes to make that work already.\n\nAnyway, I've attached my WIP patch for this.\n\nDavid",
"msg_date": "Thu, 2 Nov 2023 12:42:51 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Properly pathify the union planner"
},
{
"msg_contents": "On Thu, 2 Nov 2023 at 12:42, David Rowley <[email protected]> wrote:\n> One part that still needs work is the EquivalanceClass building.\n> Because we only build the final targetlist for the Append after\n> planning all the append child queries, I ended up having to populate\n> the EquivalanceClasses backwards, i.e children first. add_eq_member()\n> determines if you're passing a child member by checking if parent !=\n> NULL. Since I don't have a parent EquivalenceMember to pass,\n> em_is_child gets set wrongly, and that causes problems because\n> ec_has_const can get set to true when it shouldn't. This is a problem\n> as it can make a PathKey redundant when it isn't. I wonder if I'll\n> need to change the signature of add_eq_member() and add an \"is_child\"\n> bool to force the EM to be a child em... Needs more thought...\n\nI've spent more time working on this and I ended up solving the above\nproblem by delaying the subquery path creation on the union children\nuntil after we've built the top-level targetlist. This allows the\nparent eclasses to be correctly added before adding members for the\nunion children. (See build_setop_child_paths() in the patch)\n\nThere's been quite a bit of progress in other areas too. 
Incremental\nsorts now work:\n\n# create table t1(a int primary key, b int not null);\n# create table t2(a int primary key, b int not null);\n# insert into t1 select x,x from generate_Series(1,1000000)x;\n# insert into t2 select x,x from generate_Series(1,1000000)x;\n# vacuum analyze t1,t2;\n\n\n# explain (costs off) select * from t1 union select * from t2;\n QUERY PLAN\n--------------------------------------------------\n Unique\n -> Merge Append\n Sort Key: t1.a, t1.b\n -> Incremental Sort\n Sort Key: t1.a, t1.b\n Presorted Key: t1.a\n -> Index Scan using t1_pkey on t1\n -> Incremental Sort\n Sort Key: t2.a, t2.b\n Presorted Key: t2.a\n -> Index Scan using t2_pkey on t2\n(11 rows)\n\nHowever, I've not yet made the MergeAppend UNIONs work when the\ndatatypes don't match on either side of the UNION. For now, the\nreason this does not work is due to convert_subquery_pathkeys() being\nunable to find the pathkey targets in the targetlist. The actual\ntargets can't be found due to the typecast. I wondered if this could\nbe fixed by adding an additional projection path to the subquery when\nthe output columns don't match the setop->colTypes, but I'm a bit put\noff by the comment in transformSetOperationTree:\n\n> * For all non-UNKNOWN-type cases, we verify coercibility but we\n> * don't modify the child's expression, for fear of changing the\n> * child query's semantics.\n\nI assume that's worried about the semantics of things like WHERE\nclauses, so maybe the projection path in the subquery would be ok. I\nneed to spend more time on that.\n\nAnother problem I hit was add_path() pfreeing a Path that I needed.\nThis was happening due to how I'm building the final paths in the\nsubquery when setop_pathkeys are set. Because I want to always\ninclude the cheapest_input_path to allow that path to be used in\nhash-based UNIONs, I also want to provide sorted paths so that\nMergeAppend has something to work with. 
I found cases where I'd\nadd_path() the cheapest_input_path to the final rel then also attempt\nto sort that path. Unfortunately, add_path() found the unsorted path\nand the sorted path fuzzily the same cost and opted to keep the sorted\none due to it having better pathkeys. add_path() then pfree'd the\ncheapest_input_path which meant the Sort's subpath was gone which\nobviously caused issues in createplan.c.\n\nFor now, as a temporary fix, I've just #ifdef'd out the code in\nadd_path() that's pfreeing the old path. I have drafted a patch that\nrefcounts Paths, but I'm unsure if that's the correct solution as I'm\nonly maintaining the refcounts in add_path() and add_partial_path(). I\nthink a true correct solution would bump the refcount when a path is\nused as some other path's subpath. That would mean having to\nrecursively pfree paths up until we find one with a refcount>0. Seems\na bit expensive for add_path() to do.\n\nI've attached the updated patch. This one is probably ready for\nsomeone to test out. There will be more work to do, however.\n\nDavid",
"msg_date": "Fri, 24 Nov 2023 11:29:26 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 3:59 AM David Rowley <[email protected]> wrote:\n>\n> Another problem I hit was add_path() pfreeing a Path that I needed.\n> This was happening due to how I'm building the final paths in the\n> subquery when setop_pathkeys are set. Because I want to always\n> include the cheapest_input_path to allow that path to be used in\n> hash-based UNIONs, I also want to provide sorted paths so that\n> MergeAppend has something to work with. I found cases where I'd\n> add_path() the cheapest_input_path to the final rel then also attempt\n> to sort that path. Unfortunately, add_path() found the unsorted path\n> and the sorted path fuzzily the same cost and opted to keep the sorted\n> one due to it having better pathkeys. add_path() then pfree'd the\n> cheapest_input_path which meant the Sort's subpath was gone which\n> obviously caused issues in createplan.c.\n>\n> For now, as a temporary fix, I've just #ifdef'd out the code in\n> add_path() that's pfreeing the old path. I have drafted a patch that\n> refcounts Paths, but I'm unsure if that's the correct solution as I'm\n> only maintaining the refcounts in add_path() and add_partial_path(). I\n> think a true correct solution would bump the refcount when a path is\n> used as some other path's subpath. That would mean having to\n> recursively pfree paths up until we find one with a refcount>0. Seems\n> a bit expensive for add_path() to do.\n\nPlease find my proposal to refcount paths at [1]. I did that to reduce\nthe memory consumed by partitionwise joins. I remember another thread\nwhere freeing a path that was referenced by upper sort path created\nminor debugging problem. [2]. I paused my work on my proposal since\nthere didn't seem enough justification. But it looks like the\nrequirement is coming up repeatedly. I am willing to resume my work if\nit's needed. The email lists next TODOs. 
As to making the add_path()\nexpensive, I didn't find any noticeable impact on planning time.\n\n\n[1] https://www.postgresql.org/message-id/CAExHW5tUcVsBkq9qT%3DL5vYz4e-cwQNw%3DKAGJrtSyzOp3F%3DXacA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAM2%2B6%3DUC1mcVtM0Y_LEMBEGHTM58HEkqHPn7vau_V_YfuZjEGg%40mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 24 Nov 2023 11:13:04 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Fri, 24 Nov 2023 at 18:43, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Fri, Nov 24, 2023 at 3:59 AM David Rowley <[email protected]> wrote:\n> > For now, as a temporary fix, I've just #ifdef'd out the code in\n> > add_path() that's pfreeing the old path. I have drafted a patch that\n> > refcounts Paths, but I'm unsure if that's the correct solution as I'm\n> > only maintaining the refcounts in add_path() and add_partial_path(). I\n> > think a true correct solution would bump the refcount when a path is\n> > used as some other path's subpath. That would mean having to\n> > recursively pfree paths up until we find one with a refcount>0. Seems\n> > a bit expensive for add_path() to do.\n>\n> Please find my proposal to refcount paths at [1]. I did that to reduce\n> the memory consumed by partitionwise joins. I remember another thread\n> where freeing a path that was referenced by upper sort path created\n> minor debugging problem. [2]. I paused my work on my proposal since\n> there didn't seem enough justification. But it looks like the\n> requirement is coming up repeatedly. I am willing to resume my work if\n> it's needed. The email lists next TODOs. As to making the add_path()\n> expensive, I didn't find any noticeable impact on planning time.\n\nI missed that thread. Thanks for pointing it out.\n\nI skim read your patch and I see it does seem to have the workings for\ntracking refcounts when the pack is a subpath of another path. I\nimagine that would allow the undocumented hack that is \"if\n(!IsA(old_path, IndexPath))\" in add_path() to disappear.\n\nI wondered if the problem of pfreeing paths that are in the pathlist\nof another relation could be fixed in another way. 
If we have an\nAdoptedPath path type that just inherits the costs from its single\nsubpath and we wrap a Path up in one of these before we do add_path()\na Path which is not parented by the relation we're adding the path to,\nsince we don't recursively pfree() Paths in add_path(), we'd only ever\npfree the AdoptedPath rather than pfreeing a Path that directly exists\nin another relations pathlist.\n\nAnother simpler option would be just don't pfree the Path if the Path\nparent is not the add_path rel.\n\nDavid\n\n> [1] https://www.postgresql.org/message-id/CAExHW5tUcVsBkq9qT%3DL5vYz4e-cwQNw%3DKAGJrtSyzOp3F%3DXacA%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CAM2%2B6%3DUC1mcVtM0Y_LEMBEGHTM58HEkqHPn7vau_V_YfuZjEGg%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 29 Nov 2023 08:24:57 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 6:29 AM David Rowley <[email protected]> wrote:\n\n> I've attached the updated patch. This one is probably ready for\n> someone to test out. There will be more work to do, however.\n\n\nI just started reviewing this patch and haven't looked through all the\ndetails yet. Here are some feedbacks that came to my mind. Post them\nfirst so that I don’t forget them after the holidays.\n\n* I think we should update truncate_useless_pathkeys() to account for\nthe ordering requested by the query's set operation; otherwise we may\nnot get a subquery's path with the expected pathkeys. For instance,\n\ncreate table t (a int, b int);\ncreate index on t (a, b);\nset enable_hashagg to off;\n\n-- on v1 patch\nexplain (costs off)\n(select * from t order by a) UNION (select * from t order by a);\n QUERY PLAN\n------------------------------------------------------------\n Unique\n -> Merge Append\n Sort Key: t.a, t.b\n -> Incremental Sort\n Sort Key: t.a, t.b\n Presorted Key: t.a\n -> Index Only Scan using t_a_b_idx on t\n -> Incremental Sort\n Sort Key: t_1.a, t_1.b\n Presorted Key: t_1.a\n -> Index Only Scan using t_a_b_idx on t t_1\n(11 rows)\n\n-- after accounting for setop_pathkeys in truncate_useless_pathkeys()\nexplain (costs off)\n(select * from t order by a) UNION (select * from t order by a);\n QUERY PLAN\n------------------------------------------------------\n Unique\n -> Merge Append\n Sort Key: t.a, t.b\n -> Index Only Scan using t_a_b_idx on t\n -> Index Only Scan using t_a_b_idx on t t_1\n(5 rows)\n\n* I understand that we need to sort (full or incremental) the paths of\nthe subqueries to meet the ordering required for setop_pathkeys, so that\nMergeAppend has something to work with. 
Currently in the v1 patch this\nsorting is performed during the planning phase of the subqueries (in\ngrouping_planner).\n\nAnd we want to add the subquery's cheapest_total_path as-is to allow\nthat path to be used in hash-based UNIONs, and we also want to add a\nsorted path on top of cheapest_total_path. And then we may encounter\nthe issue you mentioned earlier regarding add_path() potentially freeing\nthe cheapest_total_path, leaving the Sort's subpath gone.\n\nI'm thinking that maybe it'd be better to move the work of sorting the\nsubquery's paths to the outer query level, specifically within the\nbuild_setop_child_paths() function, just before we stick SubqueryScanPath\non top of the subquery's paths. I think this is better because:\n\n1. This minimizes the impact on subquery planning and reduces the\nfootprint within the grouping_planner() function as much as possible.\n\n2. This can help avoid the aforementioned add_path() issue because the\ntwo involved paths will be structured as:\n\n cheapest_path -> subqueryscan\nand\n cheapest_path -> sort -> subqueryscan\n\nIf the two paths cost fuzzily the same and add_path() decides to keep\nthe second one due to it having better pathkeys and pfree the first one,\nit would not be a problem.\n\nBTW, I haven't looked through the part involving partial paths, but I\nthink we can do the same to partial paths.\n\n\n* I noticed that in generate_union_paths() we use a new function\nbuild_setop_pathkeys() to build the 'union_pathkeys'. I wonder why we\ndon't simply utilize make_pathkeys_for_sortclauses() since we already\nhave the grouplist for the setop's output columns.\n\nTo assist the discussion I've attached a diff file that includes all the\nchanges above.\n\nThanks\nRichard",
"msg_date": "Tue, 6 Feb 2024 17:05:33 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "Hi,\n\n> * I think we should update truncate_useless_pathkeys() to account for\n> the ordering requested by the query's set operation;\n\nNice catch.\n\n> I'm thinking that maybe it'd be better to move the work of sorting the\n> subquery's paths to the outer query level, specifically within the\n> build_setop_child_paths() function, just before we stick SubqueryScanPath\n> on top of the subquery's paths. I think this is better because:\n>\n> 1. This minimizes the impact on subquery planning and reduces the\n> footprint within the grouping_planner() function as much as possible.\n>\n> 2. This can help avoid the aforementioned add_path() issue because the\n> two involved paths will be structured as:\n>\n> cheapest_path -> subqueryscan\n> and\n> cheapest_path -> sort -> subqueryscan\n>\n> If the two paths cost fuzzily the same and add_path() decides to keep\n> the second one due to it having better pathkeys and pfree the first one,\n> it would not be a problem.\n\nThis is a smart idea, it works because you create a two different\nsubqueryscan for the cheapest_input_path.\n\nFWIW, I found we didn't create_sort_path during building a merge join\npath, instead it just cost the sort and add it to the cost of mergejoin\npath only and note this path needs a presorted data. At last during the\ncreate_mergejoin_*plan*, it create the sort_plan really. As for the\nmergeappend case, could we use the similar strategy? with this way, we\nmight simpliy the code to use MergeAppend node since the caller just\nneed to say I want to try MergeAppend with the given pathkeys without\nreally creating the sort by themselves. 
\n\n(Have a quick glance of initial_cost_mergejoin and\ncreate_mergejoin_plan, looks incremental sort doesn't work with mergejoin?)\n\n>\n> To assist the discussion I've attached a diff file that includes all the\n> changes above.\n\n+ */\n+static int\n+pathkeys_useful_for_setop(PlannerInfo *root, List *pathkeys)\n+{\n+\tint\t\t\tn_common_pathkeys;\n+\n+\tif (root->setop_pathkeys == NIL)\n+\t\treturn 0;\t\t\t\t/* no special setop ordering requested */\n+\n+\tif (pathkeys == NIL)\n+\t\treturn 0;\t\t\t\t/* unordered path */\n+\n+\t(void) pathkeys_count_contained_in(root->setop_pathkeys, pathkeys,\n+\t\t\t\t\t\t\t\t\t &n_common_pathkeys);\n+\n+\treturn n_common_pathkeys;\n+}\n\nThe two if-clauses looks unnecessary, it should be handled by\npathkeys_count_contained_in already. The same issue exists in\npathkeys_useful_for_ordering as well. Attached patch fix it in master.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 07 Feb 2024 06:29:28 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Tue, 6 Feb 2024 at 22:05, Richard Guo <[email protected]> wrote:\n> I'm thinking that maybe it'd be better to move the work of sorting the\n> subquery's paths to the outer query level, specifically within the\n> build_setop_child_paths() function, just before we stick SubqueryScanPath\n> on top of the subquery's paths. I think this is better because:\n>\n> 1. This minimizes the impact on subquery planning and reduces the\n> footprint within the grouping_planner() function as much as possible.\n>\n> 2. This can help avoid the aforementioned add_path() issue because the\n> two involved paths will be structured as:\n\nYes, this is a good idea. I agree with both of your points.\n\nI've taken your suggested changes with minor fixups and expanded on it\nto do the partial paths too. I've also removed almost all of the\nchanges to planner.c.\n\nI fixed a bug where I was overwriting the union child's\nTargetEntry.ressortgroupref without consideration that it might be set\nfor some other purpose in the subquery. I wrote\ngenerate_setop_child_grouplist() to handle this which is almost like\ngenerate_setop_grouplist() except it calls assignSortGroupRef() to\nfigure out the next free tleSortGroupRef, (or reuse the existing one\nif the TargetEntry already has one set).\n\nEarlier, I pushed a small comment change to pathnode.c in order to\nshrink this patch down a little. It was also a chance that could be\nmade in isolation of this work.\n\nv2 attached.\n\nDavid",
"msg_date": "Thu, 15 Feb 2024 17:30:47 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Wed, 7 Feb 2024 at 12:05, Andy Fan <[email protected]> wrote:\n> +static int\n> +pathkeys_useful_for_setop(PlannerInfo *root, List *pathkeys)\n> +{\n> + int n_common_pathkeys;\n> +\n> + if (root->setop_pathkeys == NIL)\n> + return 0; /* no special setop ordering requested */\n> +\n> + if (pathkeys == NIL)\n> + return 0; /* unordered path */\n> +\n> + (void) pathkeys_count_contained_in(root->setop_pathkeys, pathkeys,\n> + &n_common_pathkeys);\n> +\n> + return n_common_pathkeys;\n> +}\n>\n> The two if-clauses looks unnecessary, it should be handled by\n> pathkeys_count_contained_in already. The same issue exists in\n> pathkeys_useful_for_ordering as well. Attached patch fix it in master.\n\nI agree. I'd rather not have those redundant checks in\npathkeys_useful_for_setop(), and I do want those functions to be as\nsimilar as possible. So I think adjusting it in master is a good\nidea.\n\nI've pushed your patch.\n\nDavid\n\n\n",
"msg_date": "Thu, 15 Feb 2024 18:03:43 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "\nDavid Rowley <[email protected]> writes:\n\n>>\n>> The two if-clauses looks unnecessary, it should be handled by\n>> pathkeys_count_contained_in already. The same issue exists in\n>> pathkeys_useful_for_ordering as well. Attached patch fix it in master.\n>\n> I agree. I'd rather not have those redundant checks in\n> pathkeys_useful_for_setop(), and I do want those functions to be as\n> similar as possible. So I think adjusting it in master is a good\n> idea.\n>\n> I've pushed your patch.\n>\nThanks for the pushing!\n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Sun, 18 Feb 2024 17:07:50 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Thu, 15 Feb 2024 at 17:30, David Rowley <[email protected]> wrote:\n>\n> On Tue, 6 Feb 2024 at 22:05, Richard Guo <[email protected]> wrote:\n> > I'm thinking that maybe it'd be better to move the work of sorting the\n> > subquery's paths to the outer query level, specifically within the\n> > build_setop_child_paths() function, just before we stick SubqueryScanPath\n> > on top of the subquery's paths. I think this is better because:\n> >\n> > 1. This minimizes the impact on subquery planning and reduces the\n> > footprint within the grouping_planner() function as much as possible.\n> >\n> > 2. This can help avoid the aforementioned add_path() issue because the\n> > two involved paths will be structured as:\n>\n> Yes, this is a good idea. I agree with both of your points.\n\n> v2 attached.\n\nIf anyone else or if you want to take another look, let me know soon.\nOtherwise, I'll assume that's the reviews over and I can take another\nlook again.\n\nIf nobody speaks up before Monday next week (11th), New Zealand time,\nI'm going to be looking at this again from the point of view of\ncommitting it.\n\nThanks\n\nDavid\n\n\n",
"msg_date": "Fri, 8 Mar 2024 00:16:05 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 7:16 PM David Rowley <[email protected]> wrote:\n\n> On Thu, 15 Feb 2024 at 17:30, David Rowley <[email protected]> wrote:\n> >\n> > On Tue, 6 Feb 2024 at 22:05, Richard Guo <[email protected]> wrote:\n> > > I'm thinking that maybe it'd be better to move the work of sorting the\n> > > subquery's paths to the outer query level, specifically within the\n> > > build_setop_child_paths() function, just before we stick\n> SubqueryScanPath\n> > > on top of the subquery's paths. I think this is better because:\n> > >\n> > > 1. This minimizes the impact on subquery planning and reduces the\n> > > footprint within the grouping_planner() function as much as possible.\n> > >\n> > > 2. This can help avoid the aforementioned add_path() issue because the\n> > > two involved paths will be structured as:\n> >\n> > Yes, this is a good idea. I agree with both of your points.\n>\n> > v2 attached.\n>\n> If anyone else or if you want to take another look, let me know soon.\n> Otherwise, I'll assume that's the reviews over and I can take another\n> look again.\n\n\nHi David,\n\nI would like to have another look, but it might take several days.\nWould that be too late?\n\nThanks\nRichard",
"msg_date": "Thu, 7 Mar 2024 19:38:50 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Fri, 8 Mar 2024 at 00:39, Richard Guo <[email protected]> wrote:\n> I would like to have another look, but it might take several days.\n> Would that be too late?\n\nPlease look. Several days is fine. I'd like to start looking on Monday\nor Tuesday next week.\n\nThanks\n\nDavid\n\n\n",
"msg_date": "Fri, 8 Mar 2024 16:30:55 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Fri, Mar 8, 2024 at 11:31 AM David Rowley <[email protected]> wrote:\n\n> On Fri, 8 Mar 2024 at 00:39, Richard Guo <[email protected]> wrote:\n> > I would like to have another look, but it might take several days.\n> > Would that be too late?\n>\n> Please look. Several days is fine. I'd like to start looking on Monday\n> or Tuesday next week.\n\n\nI've had another look, and here are some comments that came to my mind.\n\n* There are cases where the setop_pathkeys of a subquery does not match\nthe union_pathkeys generated in generate_union_paths() for sorting by\nthe whole target list. In such cases, even if we have explicitly sorted\nthe paths of the subquery to meet the ordering required for its\nsetop_pathkeys, we cannot find appropriate ordered paths for\nMergeAppend. For instance,\n\nexplain (costs off)\n(select a, b from t t1 where a = b) UNION (select a, b from t t2 where a =\nb);\n QUERY PLAN\n-----------------------------------------------------------\n Unique\n -> Sort\n Sort Key: t1.a, t1.b\n -> Append\n -> Index Only Scan using t_a_b_idx on t t1\n Filter: (a = b)\n -> Index Only Scan using t_a_b_idx on t t2\n Filter: (a = b)\n(8 rows)\n\n(Assume t_a_b_idx is a btree index on t(a, b))\n\nIn this query, the setop_pathkeys of the subqueries includes only one\nPathKey because 'a' and 'b' are in the same EC inside the subqueries,\nwhile the union_pathkeys of the whole query includes two PathKeys, one\nfor each target entry. After we convert the setop_pathkeys to outer\nrepresentation, we'd notice that it does not match union_pathkeys.\nConsequently, we are unable to recognize that the index scan paths are\nalready appropriately sorted, leading us to miss the opportunity to\nutilize MergeAppend.\n\nNot sure if this case is common enough to be worth paying attention to.\n\n* In build_setop_child_paths() we also create properly sorted partial\npaths, which seems not necessary because we do not support parallel\nmerge append, right?\n\n* Another is minor and relates to cosmetic matters. When we unique-ify\nthe result of a UNION, we take the number of distinct groups as equal to\nthe total input size. For the Append path and Gather path, we use\n'dNumGroups', which is 'rows' of the Append path. For the MergeAppend\nwe use 'rows' of the MergeAppend path. I believe they are supposed to\nbe the same, but I think it'd be better to keep them consistent: either\nuse 'dNumGroups' for all the three kinds of paths, or use 'path->rows'\nfor each path.\n\nThanks\nRichard",
"msg_date": "Mon, 11 Mar 2024 14:56:15 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Mon, 11 Mar 2024 at 19:56, Richard Guo <[email protected]> wrote:\n> * There are cases where the setop_pathkeys of a subquery does not match\n> the union_pathkeys generated in generate_union_paths() for sorting by\n> the whole target list. In such cases, even if we have explicitly sorted\n> the paths of the subquery to meet the ordering required for its\n> setop_pathkeys, we cannot find appropriate ordered paths for\n> MergeAppend. For instance,\n>\n> explain (costs off)\n> (select a, b from t t1 where a = b) UNION (select a, b from t t2 where a = b);\n> QUERY PLAN\n> -----------------------------------------------------------\n> Unique\n> -> Sort\n> Sort Key: t1.a, t1.b\n> -> Append\n> -> Index Only Scan using t_a_b_idx on t t1\n> Filter: (a = b)\n> -> Index Only Scan using t_a_b_idx on t t2\n> Filter: (a = b)\n> (8 rows)\n>\n> (Assume t_a_b_idx is a btree index on t(a, b))\n>\n> In this query, the setop_pathkeys of the subqueries includes only one\n> PathKey because 'a' and 'b' are in the same EC inside the subqueries,\n> while the union_pathkeys of the whole query includes two PathKeys, one\n> for each target entry. After we convert the setop_pathkeys to outer\n> representation, we'd notice that it does not match union_pathkeys.\n> Consequently, we are unable to recognize that the index scan paths are\n> already appropriately sorted, leading us to miss the opportunity to\n> utilize MergeAppend.\n>\n> Not sure if this case is common enough to be worth paying attention to.\n\nI've spent almost all day looking into this, which is just enough work\nto satisfy myself this *is* future work rather than v17 work.\n\nThe reason I feel this is future work rather than work for this patch\nis that this is already a limitation of subqueries in general and it's\nnot unique to union child queries.\n\nFor example:\n\ncreate table ab(a int, b int, primary key(a,b));\ninsert into ab select x,y from generate_series(1,100)x,generate_series(1,100)y;\nvacuum analyze ab;\nexplain select * from (select a,b from ab where a = 1 order by a,b\nlimit 10) order by a,b;\n\nThe current plan for this is:\n\n QUERY PLAN\n-------------------------------------------------\n Sort\n Sort Key: ab.a, ab.b\n -> Limit\n -> Index Only Scan using ab_pkey on ab\n Index Cond: (a = 1)\n(5 rows)\n\nThe additional sort isn't required but is added because the outer\nquery requires the pathkeys {a,b} and the inner query only has the\npathkey {b}. {a} is removed due to it being redundant because of the\nconst member. The outer query does not know about the redundant\npathkeys so think the subquery is only sorted by {b} therefore adds\nthe sort on \"a\", \"b\".\n\nThe attached 0001 patch (renamed as .txt so it's ignored by the CFbot)\nadjusts convert_subquery_pathkeys() to have it look a bit deeper and\ntry harder to match the path to the outer query's query_pathkeys.\nAfter patching with that, the plan becomes:\n\n QUERY PLAN\n-------------------------------------------\n Limit\n -> Index Only Scan using ab_pkey on ab\n Index Cond: (a = 1)\n(3 rows)\n\nThe patch is still incomplete as the matching is quite complex for the\ncase you mentioned with a=b. It's a bit late here to start making\nthat work, but I think the key to make that work is to give\nsubquery_matches_pathkeys() an extra parameter or 2 for where to start\nworking on each list and recursively call itself where I've left the\nTODO comment in the function and on the recursive call, try the next\nquery_pathkeys and the same sub_pathkey. If the recursive call\nfunction returns false, continue on trying to match the normal way. If\nit returns true, return true.\n\nThere'd be a bit more work elsewhere to do to make this work for the\ngeneral case. For example:\n\nexplain (costs off) select * from (select a,b from ab where a = 1\noffset 0) order by a,b;\n\nstill produces the following plan with the patched version:\n\n QUERY PLAN\n-------------------------------------------\n Sort\n Sort Key: ab.a, ab.b\n -> Index Only Scan using ab_pkey on ab\n Index Cond: (a = 1)\n(4 rows)\n\nthe reason for this is that the subquery does not know the outer query\nwould like the index path to have the pathkeys {a,b}. I've not\nlooked at this issue in detail but I suspect we could do something\nsomewhere like set_subquery_pathlist() to check if the outer query's\nquery_pathkeys are all Vars from the subquery. I think we'd need to\ntrawl through the eclasses to ensure that each query_pathkeys eclasses\ncontains a member mentioning only a Var from the subquery and if so,\nget the subquery to set those pathkeys.\n\nAnyway, this is all at least PG18 work so I'll raise a thread about it\naround June. The patch is included in case you want to mess around\nwith it. I'd be happy if you want to look into the subquery pathkey\nissue portion of the work. I won't be giving this much more focus\nuntil after the freeze.\n\n> * In build_setop_child_paths() we also create properly sorted partial\n> paths, which seems not necessary because we do not support parallel\n> merge append, right?\n\nYeah. Thanks for noticing that. Removing all that saves quite a bit more code.\n\n> * Another is minor and relates to cosmetic matters. When we unique-ify\n> the result of a UNION, we take the number of distinct groups as equal to\n> the total input size. For the Append path and Gather path, we use\n> 'dNumGroups', which is 'rows' of the Append path. For the MergeAppend\n> we use 'rows' of the MergeAppend path. I believe they are supposed to\n> be the same, but I think it'd be better to keep them consistent: either\n> use 'dNumGroups' for all the three kinds of paths, or use 'path->rows'\n> for each path.\n\nYeah, that should use dNumGroups. Well spotted.\n\nI've attached v3.\n\nDavid",
"msg_date": "Tue, 12 Mar 2024 23:21:27 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Tue, 12 Mar 2024 at 23:21, David Rowley <[email protected]> wrote:\n> I've attached v3.\n\nI spent quite a bit more time looking at this.\n\nI discovered that the dNumGroups wasn't being set as it should have\nbeen for INTERSECT and EXCEPT queries. There was a plan change as a\nresult of this. I've fixed this by adjusting where dNumGroups is set.\nIt must be delayed until after the setop child paths have been\ncreated.\n\nAside from this, the changes I made were mostly cosmetic. However, I\ndid notice that I wasn't setting the union child RelOptInfo's\nec_indexes in add_setop_child_rel_equivalences(). I also discovered\nthat we're not doing that properly for the top-level RelOptInfo for\nthe UNION query prior to this change. The reason is that due to the\nVar.varno==0 for the top-level UNION targetlist. The code in\nget_eclass_for_sort_expr() effectively misses this relation due to\n\"while ((i = bms_next_member(newec->ec_relids, i)) > 0)\". This\nhappens to be good because there is no root->simple_rel_array[0]\nentry, so happens to prevent that code crashing. It seems ok that\nthe ec_indexes are not set for the top-level set RelOptInfo as\nget_eclass_for_sort_expr() does not make use of ec_indexes while\nsearching for an existing EquivalenceClass. Maybe we should fix this\nvarno == 0 hack and adjust get_eclass_for_sort_expr() so that it makes\nuse of the ec_indexes.\n\nIt's possible to see this happen with a query such as:\n\nSELECT oid FROM pg_class UNION SELECT oid FROM pg_class ORDER BY oid;\n\nI didn't see that as a reason not to push this patch as this occurs\nboth with and without this change, so I've now pushed this patch.\n\nThank you and Andy for reviewing this.\n\nDavid\n\n\n",
"msg_date": "Mon, 25 Mar 2024 14:43:54 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 9:44 AM David Rowley <[email protected]> wrote:\n\n> It seems ok that\n> the ec_indexes are not set for the top-level set RelOptInfo as\n> get_eclass_for_sort_expr() does not make use of ec_indexes while\n> searching for an existing EquivalenceClass. Maybe we should fix this\n> varno == 0 hack and adjust get_eclass_for_sort_expr() so that it makes\n> use of the ec_indexes.\n>\n> It's possible to see this happen with a query such as:\n>\n> SELECT oid FROM pg_class UNION SELECT oid FROM pg_class ORDER BY oid;\n\n\nI see what you said. Yeah, there might be some optimization\npossibilities in this area. And I agree that this should not be a\nblocker in pushing this patch.\n\n\n> I didn't see that as a reason not to push this patch as this occurs\n> both with and without this change, so I've now pushed this patch.\n\n\nGreat to see this patch has been pushed!\n\nThanks\nRichard",
"msg_date": "Mon, 25 Mar 2024 14:05:07 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "Hello David,\n\n25.03.2024 04:43, David Rowley wrote:\n> I didn't see that as a reason not to push this patch as this occurs\n> both with and without this change, so I've now pushed this patch.\n\nPlease look at a new assertion failure, that is triggered by the following\nquery:\nSELECT count(*) FROM (\n WITH q1(x) AS (SELECT 1)\n SELECT FROM q1 UNION SELECT FROM q1\n) qu;\n\nTRAP: failed Assert(\"lg != NULL\"), File: \"planner.c\", Line: 7941, PID: 1133017\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 26 Mar 2024 20:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 06:00, Alexander Lakhin <[email protected]> wrote:\n> SELECT count(*) FROM (\n> WITH q1(x) AS (SELECT 1)\n> SELECT FROM q1 UNION SELECT FROM q1\n> ) qu;\n>\n> TRAP: failed Assert(\"lg != NULL\"), File: \"planner.c\", Line: 7941, PID: 1133017\n\nThanks for finding that.\n\nThere's something weird going on with the UNION child subquery's\nsetOperations field. As far as I understand, and from reading the\nexisting comments, this should only be set for the top-level union.\n\nBecause this field is set, it plans the CTE thinking it's a UNION\nchild and breaks when it can't find a SortGroupClause for the CTE's\ntarget list item.\n\nI'll keep digging. As far as I see the setOperations field is only set\nin transformSetOperationStmt(). I'm guessing we must be doing a\ncopyObject() somewhere and accidentally picking up the parent's\nsetOperations.\n\nDavid\n\n\n",
"msg_date": "Wed, 27 Mar 2024 11:22:53 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 6:23 AM David Rowley <[email protected]> wrote:\n\n> Because this field is set, it plans the CTE thinking it's a UNION\n> child and breaks when it can't find a SortGroupClause for the CTE's\n> target list item.\n\n\nRight. The problem here is that we mistakenly think that the CTE query\nis a subquery for the set operation and thus store the SetOperationStmt\nin its qp_extra. Currently the code for the check is:\n\n /*\n * Check if we're a subquery for a set operation. If we are, store\n * the SetOperationStmt in qp_extra.\n */\n if (root->parent_root != NULL &&\n root->parent_root->parse->setOperations != NULL &&\n IsA(root->parent_root->parse->setOperations, SetOperationStmt))\n qp_extra.setop =\n (SetOperationStmt *) root->parent_root->parse->setOperations;\n else\n qp_extra.setop = NULL;\n\nThis check cannot tell if the subquery is for a set operation or a CTE,\nbecause its parent might have setOperations set in both cases. Hmm, is\nthere any way to differentiate between the two?\n\nThanks\nRichard",
"msg_date": "Wed, 27 Mar 2024 11:14:52 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 16:15, Richard Guo <[email protected]> wrote:\n> if (root->parent_root != NULL &&\n> root->parent_root->parse->setOperations != NULL &&\n> IsA(root->parent_root->parse->setOperations, SetOperationStmt))\n> qp_extra.setop =\n> (SetOperationStmt *) root->parent_root->parse->setOperations;\n> else\n> qp_extra.setop = NULL;\n>\n> This check cannot tell if the subquery is for a set operation or a CTE,\n> because its parent might have setOperations set in both cases. Hmm, is\n> there any way to differentiate between the two?\n\nAs far as I see, there's nothing to go on... well unless you counted\ncanSetTag, which is false for the CTE (per analyzeCTE())... but that's\ncertainly not the fix.\n\nI did wonder when first working on this patch if subquery_planner()\nshould grow an extra parameter, or maybe consolidate some existing\nones by passing some struct that provides the planner with a bit more\ncontext about the query. A few of the existing parameters are likely\ncandidates for being in such a struct. e.g. hasRecursion and\ntuple_fraction. A SetOperationStmt could go in there too.\n\nThe other CTE thread about the PathKey change you worked on highlights\nthat something like this could be useful. I posted in [1] about this.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvrF53ErmonnpO77eDiJm7PyReZ+nD=4FSsSOmaKx6+MuQ@mail.gmail.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 22:47:54 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 22:47, David Rowley <[email protected]> wrote:\n> I did wonder when first working on this patch if subquery_planner()\n> should grow an extra parameter, or maybe consolidate some existing\n> ones by passing some struct that provides the planner with a bit more\n> context about the query. A few of the existing parameters are likely\n> candidates for being in such a struct. e.g. hasRecursion and\n> tuple_fraction. A SetOperationStmt could go in there too.\n\nThe attached is roughly what I had in mind. I've not taken the time\nto see what comments need to be updated, so the attached aims only to\nassist discussion.\n\nDavid",
"msg_date": "Wed, 27 Mar 2024 23:34:09 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 6:34 PM David Rowley <[email protected]> wrote:\n\n> On Wed, 27 Mar 2024 at 22:47, David Rowley <[email protected]> wrote:\n> > I did wonder when first working on this patch if subquery_planner()\n> > should grow an extra parameter, or maybe consolidate some existing\n> > ones by passing some struct that provides the planner with a bit more\n> > context about the query. A few of the existing parameters are likely\n> > candidates for being in such a struct. e.g. hasRecursion and\n> > tuple_fraction. A SetOperationStmt could go in there too.\n>\n> The attached is roughly what I had in mind. I've not taken the time\n> to see what comments need to be updated, so the attached aims only to\n> assist discussion.\n\n\nI like this idea. And there may be future applications for having such\na struct if we want to pass down additional information to subquery\nplanning, such as ordering requirements from outer query.\n\nThanks\nRichard",
"msg_date": "Thu, 28 Mar 2024 10:48:03 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> On Wed, Mar 27, 2024 at 6:34 PM David Rowley <[email protected]> wrote:\n>> The attached is roughly what I had in mind. I've not taken the time\n>> to see what comments need to be updated, so the attached aims only to\n>> assist discussion.\n\n> I like this idea.\n\nI haven't studied the underlying problem yet, so I'm not quite\nbuying into whether we need this struct at all ... but assuming\nwe do, I feel like \"PlannerContext\" is a pretty poor name.\nThere's basically nothing to distinguish it from \"PlannerInfo\",\nnot to mention that readers would likely assume it's a memory\ncontext of some sort.\n\nPerhaps \"SubqueryContext\" or the like would be better? It\nstill has the conflict with memory contexts though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Mar 2024 22:56:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Thu, 28 Mar 2024 at 15:56, Tom Lane <[email protected]> wrote:\n> I haven't studied the underlying problem yet, so I'm not quite\n> buying into whether we need this struct at all ...\n\nThe problem is, when planning a UNION child query, we want to try and\nproduce some Paths that would suit the top-level UNION query so that a\nMerge Append -> Unique can be used rather than a Append -> Sort ->\nUnique or Append -> Hash Aggregate.\n\nThe problem is informing the UNION child query about what it is. I\nthought I could do root->parent_root->parse->setOperations for a UNION\nchild to know what it is, but that breaks for a query such as:\n\nWITH q1(x) AS (SELECT 1)\n SELECT FROM q1 UNION SELECT FROM q1\n\nas the CTE also has root->parent_root->parse->setOperations set and in\nthe above case, that's a problem as there's some code that tries to\nmatch the non-resjunk child targetlists up with the SetOperationStmt's\nSortGroupClauses, but there's a mismatch for the CTE. The actual\nUNION children should have a 1:1 match for non-junk columns.\n\n> but assuming\n> we do, I feel like \"PlannerContext\" is a pretty poor name.\n> There's basically nothing to distinguish it from \"PlannerInfo\",\n> not to mention that readers would likely assume it's a memory\n> context of some sort.\n>\n> Perhaps \"SubqueryContext\" or the like would be better? It\n> still has the conflict with memory contexts though.\n\nMaybe something with \"Parameters\" in the name?\n\nDavid\n\n\n",
"msg_date": "Thu, 28 Mar 2024 16:06:46 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> The problem is informing the UNION child query about what it is. I\n> thought I could do root->parent_root->parse->setOperations for a UNION\n> child to know what it is, but that breaks for a query such as:\n\nYeah, having grouping_planner poke into the parent level\ndoesn't seem like a great idea here. I continue to not like\nthe name \"PlannerContext\" but I agree passing down the setop\nexplicitly is the way to go.\n\n>> Perhaps \"SubqueryContext\" or the like would be better? It\n>> still has the conflict with memory contexts though.\n\n> Maybe something with \"Parameters\" in the name?\n\nSubqueryParameters might be OK. Or SubqueryPlannerExtra?\nSince this is a bespoke struct that will probably only ever\nbe used with subquery_planner, naming it after that function\nseems like a good idea. (And, given that fact and the fact\nthat it's not a Node, I'm not sure it belongs in pathnodes.h.\nWe could just declare it in planner.h.)\n\nSome minor comments now that I've looked at 66c0185a3 a little:\n\n* Near the head of grouping_planner is this bit:\n\n if (parse->setOperations)\n {\n /*\n * If there's a top-level ORDER BY, assume we have to fetch all the\n * tuples. This might be too simplistic given all the hackery below\n * to possibly avoid the sort; but the odds of accurate estimates here\n * are pretty low anyway. XXX try to get rid of this in favor of\n * letting plan_set_operations generate both fast-start and\n * cheapest-total paths.\n */\n if (parse->sortClause)\n root->tuple_fraction = 0.0;\n\nI'm pretty sure this comment is mine, but it's old enough that I don't\nrecall exactly what I had in mind. Still, it seems like your patch\nhas addressed precisely the issue of generating fast-start plans for\nsetops. Should we now remove this reset of tuple_fraction?\n\n* generate_setop_child_grouplist does this:\n\n /* assign a tleSortGroupRef, or reuse the existing one */\n sgc->tleSortGroupRef = assignSortGroupRef(tle, targetlist);\n tle->ressortgroupref = sgc->tleSortGroupRef;\n\nThat last line is redundant and confusing. It is not this code's\ncharter to change ressortgroupref.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Mar 2024 15:36:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "I wrote:\n> David Rowley <[email protected]> writes:\n>> Maybe something with \"Parameters\" in the name?\n\n> SubqueryParameters might be OK. Or SubqueryPlannerExtra?\n> Since this is a bespoke struct that will probably only ever\n> be used with subquery_planner, naming it after that function\n> seems like a good idea.\n\nOn third thought, I'm not at all convinced that we even want to\ninvent this struct as compared to just adding another parameter\nto subquery_planner. The problem with a struct is what happens\nthe next time we need to add a parameter? If we add yet another\nfunction parameter, we can count on the compiler to complain\nabout call sites that didn't get the memo. Adding a field\nwithin an existing struct provokes no such warning, leading\nto situations with uninitialized fields that might accidentally\nwork during testing, but fail the minute they get to the field.\n\nIf you do want to go this direction, a minimum safety requirement\nwould be to have an ironclad rule that callers memset the whole\nstruct to zero before filling it, so that any not-set fields\nwill at least have predictable values. But I don't see the\npoint really.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Mar 2024 15:53:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Properly pathify the union planner"
},
{
"msg_contents": "On Fri, 29 Mar 2024 at 08:53, Tom Lane <[email protected]> wrote:\n> On third thought, I'm not at all convinced that we even want to\n> invent this struct as compared to just adding another parameter\n> to subquery_planner. The problem with a struct is what happens\n> the next time we need to add a parameter? If we add yet another\n> function parameter, we can count on the compiler to complain\n> about call sites that didn't get the memo. Adding a field\n> within an existing struct provokes no such warning, leading\n> to situations with uninitialized fields that might accidentally\n> work during testing, but fail the minute they get to the field.\n\nI agree it's best to break callers that don't update their code to\nconsider passing or not passing a SetOperationStmt. I've just\ncommitted a fix to do it that way. This also seems to be the path of\nleast resistance, which also appeals.\n\nI opted to add a new test alongside the existing tests which validate\nset operations with an empty SELECT list work. The new tests include\nthe variation that the set operation has both a materialized and\nnon-materialized CTE as a child. This was only a problem with a\nmaterialized CTE, but I opted to include a non-materialized one as I\ndon't expect that we'll get this exact problem again. I was just keen\non getting more coverage with a couple of cheap tests.\n\nThanks for your input on this. I'll review your other comments shortly.\n\nDavid\n\n\n",
"msg_date": "Tue, 2 Apr 2024 12:22:38 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Properly pathify the union planner"
}
]
[
{
"msg_contents": "Hi hackers:\n\nI got a basebackup using pg_basebackup -R. After that, I created a restore\npoint named test on primary, and set recovery_target_name to test,\nrecovery_target_action to shutdown in standby datadir. I got a failure\nstartup message after 'pg_ctl start -D $standby_datadir'. I think it is\nnot a failure, and makes users nervous, especially for newbies.\n\nMy thought is to generate a recovery.done file if the postmaster receives\nexit code 3 from the startup process. When postmaster exits, pg_ctl will\ngive a more friendly message to users.",
"msg_date": "Thu, 2 Nov 2023 14:50:14 +0800",
"msg_from": "Crisp Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "make pg_ctl more friendly"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-02 14:50:14 +0800, Crisp Lee wrote:\n> I got a basebackup using pg_basebackup -R. After that, I created a restore\n> point named test on primary, and set recovery_target_name to test,\n> recovery_target_action to shutdown in standby datadir. I got a failure\n> startup message after 'pg_ctl start -D $standby_datadir'. I think it is\n> not a failure, and makes users nervous, especially for newbies.\n> \n> My thought is to generate a recovery.done file if the postmaster receives\n> exit code 3 from the startup process. When postmaster exits, pg_ctl will\n> give a more friendly message to users.\n\nI think we can detect this without any additional state - pg_ctl already\naccesses pg_control (via get_control_dbstate()). We should be able to detect\nyour case by issuing a different warning if\n\na) get_control_dbstate() at the start was *not* DB_SHUTDOWNED\nb) get_control_dbstate() at the end is DB_SHUTDOWNED\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Nov 2023 18:56:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
    "msg_contents": "How to judge from 'DB_SHUTDOWNED' that PITR ends normally? 'DB_SHUTDOWNED'\nis just a state, it could not give more meaning, so I reuse the\nrecovery.done.\n\nOn Sat, Nov 4, 2023 at 9:56 AM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-11-02 14:50:14 +0800, Crisp Lee wrote:\n> > I got a basebackup using pg_basebackup -R. After that, I created a\n> restore\n> > point named test on primary, and set recovery_target_name to test,\n> > recovery_target_action to shutdown in standby datadir. I got a failure\n> > startup message after 'pg_ctl start -D $standby_datadir'. I think it is\n> > not a failure, and makes users nervous, especially for newbies.\n> >\n> > My thought is to generate a recovery.done file if the postmaster receives\n> > exit code 3 from the startup process. When postmaster exits, pg_ctl will\n> > give a more friendly message to users.\n>\n> I think we can detect this without any additional state - pg_ctl already\n> accesses pg_control (via get_control_dbstate()). We should be able to\n> detect\n> your case by issuing a different warning if\n>\n> a) get_control_dbstate() at the start was *not* DB_SHUTDOWNED\n> b) get_control_dbstate() at the end is DB_SHUTDOWNED\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Thu, 9 Nov 2023 09:29:32 +0800",
"msg_from": "Crisp Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-09 09:29:32 +0800, Crisp Lee wrote:\n> How to judge from 'DB_SHUTDOWNED' that PITR ends normally? 'DB_SHUTDOWNED'\n> is just a state, it could not give more meaning, so I reuse the\n> recovery.done.\n\nDB_SHUTDOWNED cannot be encountered while recovery is ongoing. If there was a\nhard crash, you'd see DB_IN_ARCHIVE_RECOVERY or such, if the command was shut\ndown orderly before PITR has finished, you'd see DB_SHUTDOWNED_IN_RECOVERY.\n\n- Andres\n\n\n",
"msg_date": "Wed, 8 Nov 2023 17:32:50 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
    "msg_contents": "Hi,\n\nI know it. But my question is not that. I did a PITR operation with\nrecovery_target_name and recovery_target_action('shutdown'). The PITR\nprocess was very short and the PITR was done before pg_ctl check. The\npostmaster shutdown due to recovery_target_action, and there was no crash.\nBut pg_ctl told me about startup failure. I think the startup had\nsucceeded and the result was not a exception. pg_ctl should tell users\nabout detailed messages.\n\nOn Thu, Nov 9, 2023 at 9:32 AM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-11-09 09:29:32 +0800, Crisp Lee wrote:\n> > How to judge from 'DB_SHUTDOWNED' that PITR ends normally?\n> 'DB_SHUTDOWNED'\n> > is just a state, it could not give more meaning, so I reuse the\n> > recovery.done.\n>\n> DB_SHUTDOWNED cannot be encountered while recovery is ongoing. If there\n> was a\n> hard crash, you'd see DB_IN_ARCHIVE_RECOVERY or such, if the command was\n> shut\n> down orderly before PITR has finished, you'd see DB_SHUTDOWNED_IN_RECOVERY.\n>\n> - Andres\n>",
"msg_date": "Thu, 9 Nov 2023 09:56:50 +0800",
"msg_from": "Crisp Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "On Thu, Nov 9, 2023 at 9:57 AM Crisp Lee <[email protected]> wrote:\n>\n> Hi,\n>\n> I know it. But my question is not that. I did a PITR operation with recovery_target_name and recovery_target_action('shutdown'). The PITR process was very short and the PITR was done before pg_ctl check. The postmaster shutdown due to recovery_target_action, and there was no crash. But pg_ctl told me about startup failure. I think the startup had succeeded and the result was not a exception. pg_ctl should tell users about detailed messages.\n>\n> On Thu, Nov 9, 2023 at 9:32 AM Andres Freund <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> On 2023-11-09 09:29:32 +0800, Crisp Lee wrote:\n>> > How to judge from 'DB_SHUTDOWNED' that PITR ends normally? 'DB_SHUTDOWNED'\n>> > is just a state, it could not give more meaning, so I reuse the\n>> > recovery.done.\n>>\n>> DB_SHUTDOWNED cannot be encountered while recovery is ongoing. If there was a\n>> hard crash, you'd see DB_IN_ARCHIVE_RECOVERY or such, if the command was shut\n>> down orderly before PITR has finished, you'd see DB_SHUTDOWNED_IN_RECOVERY.\n>>\n>> - Andres\n\nAfter a PITR shutdown, the db state should be *shut down in recovery*, try the\npatch attached.\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 9 Nov 2023 15:08:11 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "On Thu, Nov 9, 2023 at 3:08 PM Junwang Zhao <[email protected]> wrote:\n>\n> On Thu, Nov 9, 2023 at 9:57 AM Crisp Lee <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > I know it. But my question is not that. I did a PITR operation with recovery_target_name and recovery_target_action('shutdown'). The PITR process was very short and the PITR was done before pg_ctl check. The postmaster shutdown due to recovery_target_action, and there was no crash. But pg_ctl told me about startup failure. I think the startup had succeeded and the result was not a exception. pg_ctl should tell users about detailed messages.\n> >\n> > On Thu, Nov 9, 2023 at 9:32 AM Andres Freund <[email protected]> wrote:\n> >>\n> >> Hi,\n> >>\n> >> On 2023-11-09 09:29:32 +0800, Crisp Lee wrote:\n> >> > How to judge from 'DB_SHUTDOWNED' that PITR ends normally? 'DB_SHUTDOWNED'\n> >> > is just a state, it could not give more meaning, so I reuse the\n> >> > recovery.done.\n> >>\n> >> DB_SHUTDOWNED cannot be encountered while recovery is ongoing. If there was a\n> >> hard crash, you'd see DB_IN_ARCHIVE_RECOVERY or such, if the command was shut\n> >> down orderly before PITR has finished, you'd see DB_SHUTDOWNED_IN_RECOVERY.\n> >>\n> >> - Andres\n>\n> After a PITR shutdown, the db state should be *shut down in recovery*, try the\n> patch attached.\n>\n\nprevious patch has some format issues, V2 attached.\n\n>\n> --\n> Regards\n> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 9 Nov 2023 15:19:06 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
    "msg_contents": "Hi,\n\nI thought the PITR shutdown was DB_SHUTDOWN. I made a mistake. The v2\nattach looks good.\n\nOn Thu, Nov 9, 2023 at 3:19 PM Junwang Zhao <[email protected]> wrote:\n\n> On Thu, Nov 9, 2023 at 3:08 PM Junwang Zhao <[email protected]> wrote:\n> >\n> > On Thu, Nov 9, 2023 at 9:57 AM Crisp Lee <[email protected]>\n> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I know it. But my question is not that. I did a PITR operation with\n> recovery_target_name and recovery_target_action('shutdown'). The PITR\n> process was very short and the PITR was done before pg_ctl check. The\n> postmaster shutdown due to recovery_target_action, and there was no crash.\n> But pg_ctl told me about startup failure. I think the startup had\n> succeeded and the result was not a exception. pg_ctl should tell users\n> about detailed messages.\n> > >\n> > > On Thu, Nov 9, 2023 at 9:32 AM Andres Freund <[email protected]>\n> wrote:\n> > >>\n> > >> Hi,\n> > >>\n> > >> On 2023-11-09 09:29:32 +0800, Crisp Lee wrote:\n> > >> > How to judge from 'DB_SHUTDOWNED' that PITR ends normally?\n> 'DB_SHUTDOWNED'\n> > >> > is just a state, it could not give more meaning, so I reuse the\n> > >> > recovery.done.\n> > >>\n> > >> DB_SHUTDOWNED cannot be encountered while recovery is ongoing. If\n> there was a\n> > >> hard crash, you'd see DB_IN_ARCHIVE_RECOVERY or such, if the command\n> was shut\n> > >> down orderly before PITR has finished, you'd see\n> DB_SHUTDOWNED_IN_RECOVERY.\n> > >>\n> > >> - Andres\n> >\n> > After a PITR shutdown, the db state should be *shut down in recovery*,\n> try the\n> > patch attached.\n> >\n>\n> previous patch has some format issues, V2 attached.\n>\n> >\n> > --\n> > Regards\n> > Junwang Zhao\n>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n>",
"msg_date": "Thu, 9 Nov 2023 15:32:42 +0800",
"msg_from": "Crisp Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "Hi,\n\nThank you for working on this! I agree that the current message is not friendly.\n\nOn Thu, 9 Nov 2023 at 10:19, Junwang Zhao <[email protected]> wrote:\n>\n> On Thu, Nov 9, 2023 at 3:08 PM Junwang Zhao <[email protected]> wrote:\n> >\n> > After a PITR shutdown, the db state should be *shut down in recovery*, try the\n> > patch attached.\n> >\n>\n> previous patch has some format issues, V2 attached.\n\nv2-0001-PITR-shutdown-should-not-report-error-by-pg_ctl.patch:\n\n- \"Examine the log output.\\n\"),\n+ \"Examine the log output\\n\"),\n progname);\n\nI don't think that this is needed.\n\nOther than that, the patch looks good and I confirm that after PITR shutdown:\n\n\"PITR shutdown\"\n\"update configuration for startup again if needed\"\n\nmessage shows up, instead of:\n\n\"pg_ctl: could not start server\"\n\"Examine the log output.\".\n\nnitpick: It would be better if the order of the error message cases\nand enums is the same ( i.e. 'POSTMASTER_RECOVERY_SHUTDOWN' before\n'POSTMASTER_FAILED' in enum )\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 9 Jan 2024 16:22:49 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "Hi Nazir,\n\nOn Tue, Jan 9, 2024 at 9:23 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> Thank you for working on this! I agree that the current message is not friendly.\n>\n> On Thu, 9 Nov 2023 at 10:19, Junwang Zhao <[email protected]> wrote:\n> >\n> > On Thu, Nov 9, 2023 at 3:08 PM Junwang Zhao <[email protected]> wrote:\n> > >\n> > > After a PITR shutdown, the db state should be *shut down in recovery*, try the\n> > > patch attached.\n> > >\n> >\n> > previous patch has some format issues, V2 attached.\n>\n> v2-0001-PITR-shutdown-should-not-report-error-by-pg_ctl.patch:\n>\n> - \"Examine the log output.\\n\"),\n> + \"Examine the log output\\n\"),\n> progname);\n>\n> I don't think that this is needed.\nThere seems to be no common sense for the ending dot when using\nwrite_stderr, so I will leave this not changed.\n\n>\n> Other than that, the patch looks good and I confirm that after PITR shutdown:\n>\n> \"PITR shutdown\"\n> \"update configuration for startup again if needed\"\n>\n> message shows up, instead of:\n>\n> \"pg_ctl: could not start server\"\n> \"Examine the log output.\".\n>\n> nitpick: It would be better if the order of the error message cases\n> and enums is the same ( i.e. 'POSTMASTER_RECOVERY_SHUTDOWN' before\n> 'POSTMASTER_FAILED' in enum )\nAgreed, fixed in V3\n\n>\n> --\n> Regards,\n> Nazir Bilal Yavuz\n> Microsoft\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Wed, 10 Jan 2024 11:33:03 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "Hi,\n\nOn Wed, 10 Jan 2024 at 06:33, Junwang Zhao <[email protected]> wrote:\n>\n> Hi Nazir,\n>\n> On Tue, Jan 9, 2024 at 9:23 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > v2-0001-PITR-shutdown-should-not-report-error-by-pg_ctl.patch:\n> >\n> > - \"Examine the log output.\\n\"),\n> > + \"Examine the log output\\n\"),\n> > progname);\n> >\n> > I don't think that this is needed.\n> There seems to be no common sense for the ending dot when using\n> write_stderr, so I will leave this not changed.\n>\n> >\n> > Other than that, the patch looks good and I confirm that after PITR shutdown:\n> >\n> > \"PITR shutdown\"\n> > \"update configuration for startup again if needed\"\n> >\n> > message shows up, instead of:\n> >\n> > \"pg_ctl: could not start server\"\n> > \"Examine the log output.\".\n> >\n> > nitpick: It would be better if the order of the error message cases\n> > and enums is the same ( i.e. 'POSTMASTER_RECOVERY_SHUTDOWN' before\n> > 'POSTMASTER_FAILED' in enum )\n> Agreed, fixed in V3\n\nThank you for the update. It looks good to me.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 11 Jan 2024 15:21:27 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "+\tPOSTMASTER_RECOVERY_SHUTDOWN,\n\nPerhaps this should be POSTMASTER_SHUTDOWN_IN_RECOVERY to match the state\nin the control file?\n\n+\t\t\tcase POSTMASTER_RECOVERY_SHUTDOWN:\n+\t\t\t\tprint_msg(_(\"PITR shutdown\\n\"));\n+\t\t\t\tprint_msg(_(\"update configuration for startup again if needed\\n\"));\n+\t\t\t\tbreak;\n\nI'm not sure I agree that this is a substantially friendlier message. From\na quick skim of the thread, it seems like you want to avoid sending a scary\nerror message if Postgres was intentionally shut down while in recovery.\nIf I got this particular message, I think I would be worried that something\nwent wrong during my point-in-time restore, and I'd be scrambling to figure\nout what configuration this message wants me to update.\n\nIf I'm correctly interpreting the intent here, it might be worth fleshing\nout the messages a bit more. For example, instead of \"PITR shutdown,\"\nperhaps we could say \"shut down while in recovery.\" And maybe we should\npoint to the specific settings in the latter message.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 15 Jan 2024 15:39:32 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "Hi Nathan,\n\nOn Tue, Jan 16, 2024 at 5:39 AM Nathan Bossart <[email protected]> wrote:\n>\n> + POSTMASTER_RECOVERY_SHUTDOWN,\n>\n> Perhaps this should be POSTMASTER_SHUTDOWN_IN_RECOVERY to match the state\n> in the control file?\n\nAgreed\n\n>\n> + case POSTMASTER_RECOVERY_SHUTDOWN:\n> + print_msg(_(\"PITR shutdown\\n\"));\n> + print_msg(_(\"update configuration for startup again if needed\\n\"));\n> + break;\n>\n> I'm not sure I agree that this is a substantially friendlier message. From\n> a quick skim of the thread, it seems like you want to avoid sending a scary\n> error message if Postgres was intentionally shut down while in recovery.\n> If I got this particular message, I think I would be worried that something\n> went wrong during my point-in-time restore, and I'd be scrambling to figure\n> out what configuration this message wants me to update.\n>\n> If I'm correctly interpreting the intent here, it might be worth fleshing\n> out the messages a bit more. For example, instead of \"PITR shutdown,\"\n> perhaps we could say \"shut down while in recovery.\"\n\nMake sense. Fixed. See V4\n\n> And maybe we should\n> point to the specific settings in the latter message.\n\nI've changed this latter message to:\nupdate recovery target settings for startup again if needed\n\nWhat do you think?\n\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Tue, 16 Jan 2024 10:32:55 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "I think this needs more comments. First, in the WaitPMResult enum, we\ncurrently have three values -- READY, STILL_STARTING, FAILED. These are\nall pretty self-explanatory. But this patch adds SHUTDOWN_IN_RECOVERY,\nand that's not at all self-explanatory. Did postmaster start or not?\nThe enum value's name doesn't make that clear. So I'd do something like\n\ntypedef enum\n{\n\tPOSTMASTER_READY,\n\tPOSTMASTER_STILL_STARTING,\n\t/*\n\t * postmaster no longer running, because it stopped after recovery\n\t * completed.\n\t */\n\tPOSTMASTER_SHUTDOWN_IN_RECOVERY,\n\tPOSTMASTER_FAILED,\n} WaitPMResult;\n\nMaybe put the comments in wait_for_postmaster_start instead.\n\nSecondly, the patch proposes to add new text to be returned by\ndo_start() when this happens, which would look like this:\n\n waiting for server to start... shut down while in recovery\n update recovery target settings for startup again if needed\n\nIs this crystal clear? I'm not sure. How about something like this?\n\n waiting for server to start... done, but not running\n server shut down because of recovery target settings\n\nvariations on the first phrase:\n\n\"done, no longer running\"\n\"done, but no longer running\"\n\"done, automatically shut down\"\n\"done, automatically shut down after recovery\"\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Now I have my system running, not a byte was off the shelf;\nIt rarely breaks and when it does I fix the code myself.\nIt's stable, clean and elegant, and lightning fast as well,\nAnd it doesn't cost a nickel, so Bill Gates can go to hell.\"\n\n\n",
"msg_date": "Wed, 17 Jan 2024 09:53:58 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
    "msg_contents": "Hi Alvaro,\n\nOn Wed, Jan 17, 2024 at 4:54 PM Alvaro Herrera <[email protected]> wrote:\n>\n> I think this needs more comments. First, in the WaitPMResult enum, we\n> currently have three values -- READY, STILL_STARTING, FAILED. These are\n> all pretty self-explanatory. But this patch adds SHUTDOWN_IN_RECOVERY,\n> and that's not at all self-explanatory. Did postmaster start or not?\n> The enum value's name doesn't make that clear. So I'd do something like\n>\n> typedef enum\n> {\n> POSTMASTER_READY,\n> POSTMASTER_STILL_STARTING,\n> /*\n> * postmaster no longer running, because it stopped after recovery\n> * completed.\n> */\n> POSTMASTER_SHUTDOWN_IN_RECOVERY,\n> POSTMASTER_FAILED,\n> } WaitPMResult;\n>\n> Maybe put the comments in wait_for_postmaster_start instead.\n\nI put the comments in WaitPMResult since we need to add two\nof those if in wait_for_postmaster_start.\n\n>\n> Secondly, the patch proposes to add new text to be returned by\n> do_start() when this happens, which would look like this:\n>\n> waiting for server to start... shut down while in recovery\n> update recovery target settings for startup again if needed\n>\n> Is this crystal clear? I'm not sure. How about something like this?\n>\n> waiting for server to start... done, but not running\n> server shut down because of recovery target settings\n\nAgreed.\n>\n> variations on the first phrase:\n>\n> \"done, no longer running\"\n> \"done, but no longer running\"\n> \"done, automatically shut down\"\n> \"done, automatically shut down after recovery\"\n\nI chose the last one because it gives information to users.\nSee V5, thanks for the wording ;)\n\n>\n> --\n> Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n> \"Now I have my system running, not a byte was off the shelf;\n> It rarely breaks and when it does I fix the code myself.\n> It's stable, clean and elegant, and lightning fast as well,\n> And it doesn't cost a nickel, so Bill Gates can go to hell.\"\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Wed, 17 Jan 2024 17:33:34 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
    "msg_contents": "On Wed, 2024-01-17 at 17:33 +0800, Junwang Zhao wrote:\n> On Wed, Jan 17, 2024 at 4:54 PM Alvaro Herrera <[email protected]> wrote:\n> > I think this needs more comments. First, in the WaitPMResult enum, we\n> > currently have three values -- READY, STILL_STARTING, FAILED. These are\n> > all pretty self-explanatory. But this patch adds SHUTDOWN_IN_RECOVERY,\n> > and that's not at all self-explanatory. Did postmaster start or not?\n> > The enum value's name doesn't make that clear. So I'd do something like\n> > \n> > typedef enum\n> > {\n> > POSTMASTER_READY,\n> > POSTMASTER_STILL_STARTING,\n> > /*\n> > * postmaster no longer running, because it stopped after recovery\n> > * completed.\n> > */\n> > POSTMASTER_SHUTDOWN_IN_RECOVERY,\n> > POSTMASTER_FAILED,\n> > } WaitPMResult;\n> > \n> > Maybe put the comments in wait_for_postmaster_start instead.\n> \n> I put the comments in WaitPMResult since we need to add two\n> of those if in wait_for_postmaster_start.\n\nI don't think that any comment is needed; the name says it all.\n\n> > Secondly, the patch proposes to add new text to be returned by\n> > do_start() when this happens, which would look like this:\n> > \n> > waiting for server to start... shut down while in recovery\n> > update recovery target settings for startup again if needed\n> > \n> > Is this crystal clear? I'm not sure. How about something like this?\n> > \n> > waiting for server to start... done, but not running\n> > server shut down because of recovery target settings\n> > \n> > variations on the first phrase:\n> > \n> > \"done, no longer running\"\n> > \"done, but no longer running\"\n> > \"done, automatically shut down\"\n> > \"done, automatically shut down after recovery\"\n> \n> I chose the last one because it gives information to users.\n> See V5, thanks for the wording ;)\n\nWhy not just leave it at plain \"done\"?\nAfter all, the server was started successfully.\nThe second message should make sufficiently clear that the server has stopped.\n\n\nI didn't like the code duplication introduced by the patch, so I rewrote\nthat part a bit.\n\nAttached is my suggestion.\n\nI'll set the status to \"waiting for author\"; if you are fine with my version,\nI think that the patch is \"ready for committer\".\n\nYours,\nLaurenz Albe",
"msg_date": "Tue, 09 Jul 2024 21:59:32 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
    "msg_contents": "Hi, Laurenz\n\nOn Wed, Jul 10, 2024 at 3:59 AM Laurenz Albe <[email protected]> wrote:\n>\n> On Wed, 2024-01-17 at 17:33 +0800, Junwang Zhao wrote:\n> > On Wed, Jan 17, 2024 at 4:54 PM Alvaro Herrera <[email protected]> wrote:\n> > > I think this needs more comments. First, in the WaitPMResult enum, we\n> > > currently have three values -- READY, STILL_STARTING, FAILED. These are\n> > > all pretty self-explanatory. But this patch adds SHUTDOWN_IN_RECOVERY,\n> > > and that's not at all self-explanatory. Did postmaster start or not?\n> > > The enum value's name doesn't make that clear. So I'd do something like\n> > >\n> > > typedef enum\n> > > {\n> > > POSTMASTER_READY,\n> > > POSTMASTER_STILL_STARTING,\n> > > /*\n> > > * postmaster no longer running, because it stopped after recovery\n> > > * completed.\n> > > */\n> > > POSTMASTER_SHUTDOWN_IN_RECOVERY,\n> > > POSTMASTER_FAILED,\n> > > } WaitPMResult;\n> > >\n> > > Maybe put the comments in wait_for_postmaster_start instead.\n> >\n> > I put the comments in WaitPMResult since we need to add two\n> > of those if in wait_for_postmaster_start.\n>\n> I don't think that any comment is needed; the name says it all.\n>\n> > > Secondly, the patch proposes to add new text to be returned by\n> > > do_start() when this happens, which would look like this:\n> > >\n> > > waiting for server to start... shut down while in recovery\n> > > update recovery target settings for startup again if needed\n> > >\n> > > Is this crystal clear? I'm not sure. How about something like this?\n> > >\n> > > waiting for server to start... done, but not running\n> > > server shut down because of recovery target settings\n> > >\n> > > variations on the first phrase:\n> > >\n> > > \"done, no longer running\"\n> > > \"done, but no longer running\"\n> > > \"done, automatically shut down\"\n> > > \"done, automatically shut down after recovery\"\n> >\n> > I chose the last one because it gives information to users.\n> > See V5, thanks for the wording ;)\n>\n> Why not just leave it at plain \"done\"?\n> After all, the server was started successfully.\n> The second message should make sufficiently clear that the server has stopped.\n>\n>\n> I didn't like the code duplication introduced by the patch, so I rewrote\n> that part a bit.\n>\n> Attached is my suggestion.\n\nThe patch LGTM.\n\n>\n> I'll set the status to \"waiting for author\"; if you are fine with my version,\n> I think that the patch is \"ready for committer\".\n\nI've set it to \"ready for committer\", thanks.\n\n>\n> Yours,\n> Laurenz Albe\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 10 Jul 2024 10:45:34 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "\n\nOn 2024/07/10 11:45, Junwang Zhao wrote:\n>> Attached is my suggestion.\n> \n> The patch LGTM.\n\n+\t\t\tcase POSTMASTER_SHUTDOWN_IN_RECOVERY:\n+\t\t\t\tprint_msg(_(\" done\\n\"));\n+\t\t\t\tprint_msg(_(\"server shut down because of recovery target settings\\n\"));\n\n\"because of recovery target settings\" isn't always accurate.\nFor example, if the DBA shuts down the server during recovery,\nPOSTMASTER_SHUTDOWN_IN_RECOVERY can be returned regardless of\nthe recovery target settings. Should we change the message to\nsomething like \"server shut down in recovery\" for accuracy?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Jul 2024 02:47:59 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "Junwang Zhao <[email protected]> writes:\n> On Wed, Jul 10, 2024 at 3:59 AM Laurenz Albe <[email protected]> wrote:\n>> Attached is my suggestion.\n\n> The patch LGTM.\n\n>> I'll set the status to \"waiting for author\"; if you are fine with my version,\n>> I think that the patch is \"ready for committer\".\n\n> I've set it to \"ready for committer\", thanks.\n\nPushed, thanks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2024 13:50:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "Fujii Masao <[email protected]> writes:\n> \"because of recovery target settings\" isn't always accurate.\n> For example, if the DBA shuts down the server during recovery,\n> POSTMASTER_SHUTDOWN_IN_RECOVERY can be returned regardless of\n> the recovery target settings. Should we change the message to\n> something like \"server shut down in recovery\" for accuracy?\n\nHmm, I just pushed it with Laurenz's wording. I don't mind\nif we change it again, but I'm not sure that there's much\nwrong with it as it stands. Keep in mind that the context\nis the DBA doing \"pg_ctl start\". It seems unlikely that\nhe/she would concurrently do \"pg_ctl stop\". Even if that\ndid happen, do we really need to phrase the message to account\nfor it?\n\nI like Laurenz's wording because it points the user in the\ndirection of the settings that would need adjustment if an\nimmediate shutdown wasn't what was expected/wanted. If we\njust say \"shut down in recovery\", that may be accurate,\nbut it offers little help as to what to do next.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2024 13:58:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
},
{
"msg_contents": "\n\nOn 2024/07/19 2:58, Tom Lane wrote:\n> Fujii Masao <[email protected]> writes:\n>> \"because of recovery target settings\" isn't always accurate.\n>> For example, if the DBA shuts down the server during recovery,\n>> POSTMASTER_SHUTDOWN_IN_RECOVERY can be returned regardless of\n>> the recovery target settings. Should we change the message to\n>> something like \"server shut down in recovery\" for accuracy?\n> \n> Hmm, I just pushed it with Laurenz's wording. I don't mind\n> if we change it again, but I'm not sure that there's much\n> wrong with it as it stands. Keep in mind that the context\n> is the DBA doing \"pg_ctl start\". It seems unlikely that\n> he/she would concurrently do \"pg_ctl stop\". Even if that\n> did happen, do we really need to phrase the message to account\n> for it?\n> \n> I like Laurenz's wording because it points the user in the\n> direction of the settings that would need adjustment if an\n> immediate shutdown wasn't what was expected/wanted. If we\n> just say \"shut down in recovery\", that may be accurate,\n> but it offers little help as to what to do next.\n\nI was thinking the scenario where \"pg_ctl -w start\" exits due to\na recovery target setting, especially with recovery_target_action=shutdown,\ncan happen not so many times. This is because the server typically\ncan reach PM_STATUS_READY or PM_STATUS_STANDBY,\nand pg_ctl exits normally before the recovery target is reached.\n\nOn the other thand, if users start the crash recovery and find\nmisconfiguration of parameter requiring a server restart,\nthey might shut down the server during recovery to fix it.\nIn this case, mentioning \"recovery target\" could be confusing.\nThis scenario also might not be so common, but seems a bit more\nlikely than the recovery target case. 
I understand this might be\na minority opinion, though..\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Jul 2024 21:13:13 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make pg_ctl more friendly"
}
] |
[
{
"msg_contents": "Hi all!\n\nCurrently, only unnamed prepared statements are supported by psql with the\n\\bind command and it's not possible to create or use named prepared statements\nthrough extended protocol.\n\nThis patch introduces 2 additional commands: \\parse and \\bindx.\n\\parse allows us to issue a Parse message to create a named prepared statement\nthrough extended protocol.\n\\bindx allows to bind and execute a named prepared statement through extended\nprotocol.\n\nThe main goal is to provide more ways to test extended protocol in\nregression tests\nsimilarly to what \\bind is doing.\n\nRegards,\nAnthonin",
"msg_date": "Thu, 2 Nov 2023 10:52:36 +0100",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add additional extended protocol commands to psql: \\parse and\n \\bindx"
},
{
"msg_contents": "On Thu, 2 Nov 2023 at 10:52, Anthonin Bonnefoy\n<[email protected]> wrote:\n> The main goal is to provide more ways to test extended protocol in\n> regression tests\n> similarly to what \\bind is doing.\n\nI think this is a great addition. One thing that I think should be\nadded for completeness though is the ability to deallocate the\nprepared statement using PQsendClosePrepared. Other than that the\nchanges look great.\n\nAlso a tiny nitpick: stmt_name should be replaced with STMT_NAME in\nthis line of the help message.\n\n> + HELP0(\" \\\\bindx stmt_name [PARAM]...\\n\"\n\n\n",
"msg_date": "Sat, 13 Jan 2024 15:37:31 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "Hi,\n\nThanks for the review and comments.\n\n> One thing that I think should be added for completeness though is the\n> ability to deallocate the prepared statement using\n> PQsendClosePrepared. Other than that the changes look great.\nGood point, I've added the \\close command.\n\n> Also a tiny nitpick: stmt_name should be replaced with STMT_NAME in\n> this line of the help message.\nFixed\n\n\nOn Sat, Jan 13, 2024 at 3:37 PM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Thu, 2 Nov 2023 at 10:52, Anthonin Bonnefoy\n> <[email protected]> wrote:\n> > The main goal is to provide more ways to test extended protocol in\n> > regression tests\n> > similarly to what \\bind is doing.\n>\n> I think this is a great addition. One thing that I think should be\n> added for completeness though is the ability to deallocate the\n> prepared statement using PQsendClosePrepared. Other than that the\n> changes look great.\n>\n> Also a tiny nitpick: stmt_name should be replaced with STMT_NAME in\n> this line of the help message.\n>\n> > + HELP0(\" \\\\bindx stmt_name [PARAM]...\\n\"",
"msg_date": "Tue, 16 Jan 2024 09:13:17 +0100",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "Looks really good now. One thing I noticed is that \\bindx doesn't call\nignore_slash_options if it's not in an active branch. Afaict it\nshould. I do realize the same is true for plain \\bind, but it seems\nlike a bug there too.\n\n\n",
"msg_date": "Tue, 16 Jan 2024 10:37:22 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "One more usability thing. I think \\parse and \\close should not require\na \\g to send the message. You can do that by returning PSQL_CMD_SEND\ninstead of PSQL_CMD_SKIP_LIN.\nI feel like the main point of requiring \\g for \\bind and \\bindx is so\nyou can also use \\gset or \\gexec. But since \\parse and \\close don't\nreturn any rows that argument does not apply to them.\n\nAnd regarding the docs. I think the examples for \\bindx and \\close\nshould use \\parse instead of PREPARE. ISTM that people will likely\nwant to use the extended query protocol for preparing and executing,\nnot a mix of them. I know that it's possible to do that, but I think\nthe examples should cover the most common use case.\n\n\n",
"msg_date": "Tue, 16 Jan 2024 13:51:59 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "On Tue, 16 Jan 2024 at 10:37, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> Looks really good now. One thing I noticed is that \\bindx doesn't call\n> ignore_slash_options if it's not in an active branch. Afaict it\n> should. I do realize the same is true for plain \\bind, but it seems\n> like a bug there too.\n\n\nTo cover this case with tests you add your net commands to the big\nlist of meta commands in the \"\\if false\" block on around line 1000 of\nsrc/test/regress/sql/psql.sql\n\n\n",
"msg_date": "Tue, 16 Jan 2024 13:57:44 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 10:37:22AM +0100, Jelte Fennema-Nio wrote:\n> I do realize the same is true for plain \\bind, but it seems\n> like a bug there too.\n\nHmm. ignore_slash_options() is used to make the difference between\nactive and inactive branches with \\if. I was playing a bit with\npsql.sql but I don't really see a difference if for example adding\nsome \\bind commands (say a valid SELECT $1 \\bind 4) in the big \"\\if\nfalse\" that all the command types (see \"vars and backticks\").\n\nPerhaps I am missing a trick?\n--\nMichael",
"msg_date": "Wed, 17 Jan 2024 16:28:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql:\n \\parse and \\bindx"
},
{
"msg_contents": "> I do realize the same is true for plain \\bind, but it seems\n> like a bug there too.\n\nThe unscanned bind's parameters are discarded later in the\nHandleSlashCmds functions. So adding the ignore_slash_options() for\ninactive branches scans and discards them earlier. I will add it to\nmatch what's done in the other commands but I don't think it's\ntestable as the behaviour is the same unless I miss something.\n\nI did add the \\bind, \\bindx, \\close and \\parse to the inactive branch\ntests to complete the list.\n\n> One more usability thing. I think \\parse and \\close should not require\n> a \\g to send the message. You can do that by returning PSQL_CMD_SEND\n> instead of PSQL_CMD_SKIP_LIN\n\nChanged.\n\n> I think the examples for \\bindx and \\close\n> should use \\parse instead of PREPARE\n\nDone. I had to rely on manual PREPARE for my first tests and it leaked\nin the docs.",
"msg_date": "Wed, 17 Jan 2024 10:05:33 +0100",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 10:05:33AM +0100, Anthonin Bonnefoy wrote:\n> > I do realize the same is true for plain \\bind, but it seems\n> > like a bug there too.\n> \n> The unscanned bind's parameters are discarded later in the\n> HandleSlashCmds functions. So adding the ignore_slash_options() for\n> inactive branches scans and discards them earlier. I will add it to\n> match what's done in the other commands but I don't think it's\n> testable as the behaviour is the same unless I miss something.\n\nHmm. So it does not lead to any user-visible changes, right? I can\nget your argument about being consistent in the code across the board\nfor all the backslash commands, though.\n\n> I did add the \\bind, \\bindx, \\close and \\parse to the inactive branch\n> tests to complete the list.\n\nCould you split the bits for \\bind into a separate patch, please?\nThis requires a separate evaluation, especially if this had better be\nbackpatched.\n--\nMichael",
"msg_date": "Thu, 18 Jan 2024 10:29:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql:\n \\parse and \\bindx"
},
{
"msg_contents": "> Hmm. So it does not lead to any user-visible changes, right?\n\n From what I can tell, there's no change in the behaviour. All paths\nwould eventually go through HandleSlashCmds's cleaning logic. This is\nalso mentioned in ignore_slash_options's comment.\n\n* Read and discard \"normal\" slash command options.\n*\n* This should be used for inactive-branch processing of any slash command\n* that eats one or more OT_NORMAL, OT_SQLID, or OT_SQLIDHACK parameters.\n* We don't need to worry about exactly how many it would eat, since the\n* cleanup logic in HandleSlashCmds would silently discard any extras anyway.\n\n> Could you split the bits for \\bind into a separate patch, please?\n> This requires a separate evaluation, especially if this had better be\n> backpatched.\n\nDone. patch 1 adds ignore_slash_options to bind. patch 2 adds the new\n\\bindx, \\close and \\parse commands.",
"msg_date": "Thu, 18 Jan 2024 09:25:16 +0100",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 09:25:16AM +0100, Anthonin Bonnefoy wrote:\n> From what I can tell, there's no change in the behaviour. All paths\n> would eventually go through HandleSlashCmds's cleaning logic. This is\n> also mentioned in ignore_slash_options's comment.\n\nYeah, I can confirm that. I would be really tempted to backpatch that\nbecause that's a bug: we have to call ignore_slash_options() for\ninactive branches when a command parses options with OT_NORMAL. Now,\nI cannot break things, either.\n\n> Done. patch 1 adds ignore_slash_options to bind. patch 2 adds the new\n> \\bindx, \\close and \\parse commands.\n\n0001 has been applied on HEAD.\n--\nMichael",
"msg_date": "Fri, 19 Jan 2024 14:20:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql:\n \\parse and \\bindx"
},
{
"msg_contents": "On Fri, 19 Jan 2024 at 10:50, Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jan 18, 2024 at 09:25:16AM +0100, Anthonin Bonnefoy wrote:\n> > From what I can tell, there's no change in the behaviour. All paths\n> > would eventually go through HandleSlashCmds's cleaning logic. This is\n> > also mentioned in ignore_slash_options's comment.\n>\n> Yeah, I can confirm that. I would be really tempted to backpatch that\n> because that's a bug: we have to call ignore_slash_options() for\n> inactive branches when a command parses options with OT_NORMAL. Now,\n> I cannot break things, either.\n>\n> > Done. patch 1 adds ignore_slash_options to bind. patch 2 adds the new\n> > \\bindx, \\close and \\parse commands.\n>\n> 0001 has been applied on HEAD.\n\nSince the 0001 patch has been applied, sending only 0002 as v5-0001 so\nthat CFBot can apply and run.\n\nRegards,\nVignesh",
"msg_date": "Sat, 27 Jan 2024 08:56:53 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "Hi,\n\nshall we do something about this patch? It seems to be in a pretty good\nshape (pretty much RFC, based on quick review), the cfbot is still\nhappy, and there seems to be agreement this is a nice feature.\n\nMichael, I see you've reviewed the patch in January. Do you agree / plan\nto get it committed, or should I take a look?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Jul 2024 00:17:44 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 12:17:44AM +0200, Tomas Vondra wrote:\n> shall we do something about this patch? It seems to be in a pretty good\n> shape (pretty much RFC, based on quick review), the cfbot is still\n> happy, and there seems to be agreement this is a nice feature.\n> \n> Michael, I see you've reviewed the patch in January. Do you agree / plan\n> to get it committed, or should I take a look?\n\nThis feel off my radar a bit, thanks for the reminder :)\n\nI have a local branch dating back from January where this patch is\nsitting, with something like 50% of the code reviewed. I'd still need\nto look at the test coverage, but I did like the proposed patch a lot\nbased on my notes.\n\nI may be able to come back to that if not next week, then the week\nafter that. If you want to handle it yourself before that, that's\nfine by me.\n--\nMichael",
"msg_date": "Fri, 19 Jul 2024 11:23:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql:\n \\parse and \\bindx"
},
{
"msg_contents": "On 7/19/24 04:23, Michael Paquier wrote:\n> On Fri, Jul 19, 2024 at 12:17:44AM +0200, Tomas Vondra wrote:\n>> shall we do something about this patch? It seems to be in a pretty good\n>> shape (pretty much RFC, based on quick review), the cfbot is still\n>> happy, and there seems to be agreement this is a nice feature.\n>>\n>> Michael, I see you've reviewed the patch in January. Do you agree / plan\n>> to get it committed, or should I take a look?\n> \n> This feel off my radar a bit, thanks for the reminder :)\n> \n> I have a local branch dating back from January where this patch is\n> sitting, with something like 50% of the code reviewed. I'd still need\n> to look at the test coverage, but I did like the proposed patch a lot\n> based on my notes.\n> \n> I may be able to come back to that if not next week, then the week\n> after that. If you want to handle it yourself before that, that's\n> fine by me.\n\nOK, if you're already half-way through the review, I'll leave it up to\nyou. I don't think we need to rush, and I'd have to learn about all the\npsql stuff first anyway.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Jul 2024 15:28:44 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 03:28:44PM +0200, Tomas Vondra wrote:\n> OK, if you're already half-way through the review, I'll leave it up to\n> you. I don't think we need to rush, and I'd have to learn about all the\n> psql stuff first anyway.\n\nIt took me a couple of days to get back to it, but attached is what I\nhave finished with. This was mostly OK, except for a few things:\n- \\close was inconsistent with the other two commands, where no\nargument was treated as the unnamed prepared statement. I think that\nthis should be made consistent with \\parse and \\bindx, requiring an\nargument, where '' is the unnamed statement.\n- The docs did not mention the case of the unnamed statement, so added\nsome notes about that.\n- Some free() calls were not needed in the command executions, where\npsql_scan_slash_option() returns NULL.\n- Tests missing when no argument is provided for the new commands.\n\nOne last thing I have found really confusing is that this leads to the\naddition of two more status flags in pset for the close and parse\nparts, with \\bind and \\bindx sharing the third one while deciding\nwhich path to use depending on if the statement name is provided.\nThat's fragile. I think that it would be much cleaner to put all that\nbehind an enum, falling back to PQsendQuery() by default. I am\nattaching that as 0002, for clarity, but my plan is to merge both 0001\nand 0002 together.\n--\nMichael",
"msg_date": "Wed, 24 Jul 2024 14:04:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql:\n \\parse and \\bindx"
},
{
"msg_contents": "Hi,\n\n> It took me a couple of days to get back to it, but attached is what I\n> have finished with. This was mostly OK, except for a few things:\n> - \\close was inconsistent with the other two commands, where no\n> argument was treated as the unnamed prepared statement. I think that\n> this should be made consistent with \\parse and \\bindx, requiring an\n> argument, where '' is the unnamed statement.\n> - The docs did not mention the case of the unnamed statement, so added\n> some notes about that.\n> - Some free() calls were not needed in the command executions, where\n> psql_scan_slash_option() returns NULL.\n> - Tests missing when no argument is provided for the new commands.\n>\n> One last thing I have found really confusing is that this leads to the\n> addition of two more status flags in pset for the close and parse\n> parts, with \\bind and \\bindx sharing the third one while deciding\n> which path to use depending on if the statement name is provided.\n> That's fragile. I think that it would be much cleaner to put all that\n> behind an enum, falling back to PQsendQuery() by default. I am\n> attaching that as 0002, for clarity, but my plan is to merge both 0001\n> and 0002 together.\n\nI reviewed and tested v6. I believe it's ready to be merged.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 24 Jul 2024 15:19:52 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "On 24.07.24 07:04, Michael Paquier wrote:\n> This commit introduces three additional commands: \\parse, \\bindx and\n> \\close.\n> \\parse creates a prepared statement using extended protocol.\n> \\bindx binds and execute an existing prepared statement using extended\n> protocol.\n> \\close closes an existing prepared statement using extended protocol.\n\nThis commit message confused me, because I don't think this is what the \n\\bindx command actually does. AFAICT, it only binds, it does not \nexecute. At least that is what the documentation in the content of the \npatch appears to indicate.\n\nI'm not sure \\bindx is such a great name. The \"x\" stands for \"I ran out \nof ideas\". ;-) Maybe \\bind_named or \\bindn or something like that. Or \nuse the existing \\bind with a -name argument?\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 17:33:07 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bindx"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 05:33:07PM +0200, Peter Eisentraut wrote:\n> This commit message confused me, because I don't think this is what the\n> \\bindx command actually does. AFAICT, it only binds, it does not execute.\n> At least that is what the documentation in the content of the patch appears\n> to indicate.\n\nYep. FWIW, I always edit these before commit, and noticed that it was\nincorrect. Just took the original message for now.\n\n> I'm not sure \\bindx is such a great name. The \"x\" stands for \"I ran out of\n> ideas\". ;-) Maybe \\bind_named or \\bindn or something like that. Or use the\n> existing \\bind with a -name argument?\n\nNot sure that I like much the additional option embedded in the\nexisting command; I'd rather keep a separate command for each libpq\ncall, that seems cleaner. So I would be OK with your suggested\n\\bind_named. Fine by me to be outvoted, of course.\n--\nMichael",
"msg_date": "Thu, 25 Jul 2024 11:19:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql:\n \\parse and \\bind"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 05:33:07PM +0200, Peter Eisentraut wrote:\n> This commit message confused me, because I don't think this is what the\n> \\bindx command actually does. AFAICT, it only binds, it does not execute.\n> At least that is what the documentation in the content of the patch appears\n> to indicate.\n\nUnless I misunderstand the remark, \\bindx will call\nPQsendQueryPrepared which will bind then execute the query, similar to\nwhat \\bind is doing (except \\bind also parses the query).\n\n> I'm not sure \\bindx is such a great name. The \"x\" stands for \"I ran out of\n> ideas\". ;-)\n\nThat's definitely what happened :). \\bind would have been a better fit\nbut it was already used.\n\nOn Thu, Jul 25, 2024 at 4:19 AM Michael Paquier <[email protected]> wrote:\n> Not sure that I like much the additional option embedded in the\n> existing command; I'd rather keep a separate command for each libpq\n> call, that seems cleaner. So I would be OK with your suggested\n> \\bind_named. Fine by me to be outvoted, of course.\n\n+1 keeping this as a separate command and using \\bind_named. \\bind has\na different behaviour as it also parses the query so keeping them as\nseparate commands would probably avoid some confusion.\n\n\n",
"msg_date": "Thu, 25 Jul 2024 08:45:28 +0200",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bind"
},
{
"msg_contents": "On Thu, 25 Jul 2024 at 08:45, Anthonin Bonnefoy\n<[email protected]> wrote:\n> +1 keeping this as a separate command and using \\bind_named. \\bind has\n> a different behaviour as it also parses the query so keeping them as\n> separate commands would probably avoid some confusion.\n\n+1 on naming it \\bind_named\n\n@Anthonin are you planning to update the patch accordingly?\n\n\n",
"msg_date": "Wed, 21 Aug 2024 00:00:06 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bind"
},
{
"msg_contents": "On Wed, Aug 21, 2024 at 12:00 AM Jelte Fennema-Nio <[email protected]> wrote:\n> @Anthonin are you planning to update the patch accordingly?\n\nHere's the patch with \\bindx renamed to \\bind_named.\n\nI've also made a small change to Michael's refactoring in 0002 by\ninitialising success to false in ExecQueryAndProcessResults. There was\na compiler warning about success possibly used uninitialized[1].\n\n[1] https://cirrus-ci.com/task/6207675187331072",
"msg_date": "Wed, 21 Aug 2024 09:29:04 +0200",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bind"
},
{
"msg_contents": "On Wed, Aug 21, 2024 at 09:29:04AM +0200, Anthonin Bonnefoy wrote:\n> Here's the patch with \\bindx renamed to \\bind_named.\n\nLooks OK to me. I have spent more time double-checking the whole, and\nit looks like we're there, so applied. Now let's play with it in more\nregression tests. Note that the refactoring patch has been merged\nwith the original one in a single commit.\n\n> I've also made a small change to Michael's refactoring in 0002 by\n> initialising success to false in ExecQueryAndProcessResults. There was\n> a compiler warning about success possibly used uninitialized[1].\n> \n> [1] https://cirrus-ci.com/task/6207675187331072\n\nAh, thanks! I've missed this one. I see where my mistake was.\n--\nMichael",
"msg_date": "Thu, 22 Aug 2024 16:33:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql:\n \\parse and \\bind"
},
{
"msg_contents": "Hello Michael and Anthonin,\n\n22.08.2024 10:33, Michael Paquier wrote:\n> Looks OK to me. I have spent more time double-checking the whole, and\n> it looks like we're there, so applied. Now let's play with it in more\n> regression tests. Note that the refactoring patch has been merged\n> with the original one in a single commit.\n\nPlease look at an assertion failure, caused by \\bind_named:\nregression=# SELECT $1 \\parse s\n\\bind_named s\n\nregression=# \\bind_named\n\\bind_named: missing required argument\nregression=# 1 \\g\npsql: common.c:1501: ExecQueryAndProcessResults: Assertion `pset.stmtName != ((void *)0)' failed.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 17 Sep 2024 18:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bind"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 5:00 PM Alexander Lakhin <[email protected]> wrote:\n>\n> Please look at an assertion failure, caused by \\bind_named:\n> regression=# SELECT $1 \\parse s\n> \\bind_named s\n>\n> regression=# \\bind_named\n> \\bind_named: missing required argument\n> regression=# 1 \\g\n> psql: common.c:1501: ExecQueryAndProcessResults: Assertion `pset.stmtName != ((void *)0)' failed.\n\nThanks for the report.\n\nLooking at the failure, it seems like the issue was already present\nwith \\bind, though there was no assertion failure: repeatedly calling\n\\bind would allocate new stmtName/bind_params and leak them at the\nstart of exec_command_bind.\n\nI've joined a patch to clean the psql extended state at the start of\nevery extended protocol backslash command, freeing the allocated\nvariables and resetting the send_mode. Another possible approach would\nbe to return an error when there's already an existing state instead\nof overwriting it.",
"msg_date": "Wed, 18 Sep 2024 09:42:43 +0200",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bind"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 09:42:43AM +0200, Anthonin Bonnefoy wrote:\n> Looking at the failure, it seems like the issue was already present\n> with \\bind, though there was no assertion failure: repeatedly calling\n> \\bind would allocate new stmtName/bind_params and leak them at the\n> start of exec_command_bind.\n\nIndeed. That's a bad idea to do that in the client. We'd better\nback-patch that.\n\n> I've joined a patch to clean the psql extended state at the start of\n> every extended protocol backslash command, freeing the allocated\n> variables and resetting the send_mode. Another possible approach would\n> be to return an error when there's already an existing state instead\n> of overwriting it.\n\nI'll double-check all that tomorrow, but what you have looks like it is\ngoing in the right direction.\n--\nMichael",
"msg_date": "Wed, 18 Sep 2024 18:08:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql:\n \\parse and \\bind"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 06:08:54PM +0900, Michael Paquier wrote:\n> On Wed, Sep 18, 2024 at 09:42:43AM +0200, Anthonin Bonnefoy wrote:\n>> I've joined a patch to clean the psql extended state at the start of\n>> every extended protocol backslash command, freeing the allocated\n>> variables and resetting the send_mode. Another possible approach would\n>> be to return an error when there's already an existing state instead\n>> of overwriting it.\n> \n> I'll double-check all that tomorrow, but what you have looks like it is\n> going in the right direction.\n\nAnd done down to v16, with one logic for HEAD and something simpler\nfor \\bind in v16 and v17.\n\nIssuing an error if there is a state does not sound like a good idea\nat this stage because it would suddenly break scripts that expect\nmultiple commands of \\bind to prioritize the last one. If that was\nsomething only on HEAD, I would have considered that as a serious\noption, but not with v16 in mind for \\bind.\n--\nMichael",
"msg_date": "Thu, 19 Sep 2024 16:30:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql:\n \\parse and \\bind"
},
{
"msg_contents": "On Thu, 19 Sept 2024 at 09:30, Michael Paquier <[email protected]> wrote:\n> Issuing an error if there is a state does not sound like a good idea\n> at this stage because it would suddenly break scripts that expect\n> multiple commands of \\bind to prioritize the last one.\n\nSeems like a good idea to add a simple test for that behaviour then.\nSee attached.\n\nOn Thu, 19 Sept 2024 at 09:30, Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Sep 18, 2024 at 06:08:54PM +0900, Michael Paquier wrote:\n> > On Wed, Sep 18, 2024 at 09:42:43AM +0200, Anthonin Bonnefoy wrote:\n> >> I've joined a patch to clean the psql extended state at the start of\n> >> every extended protocol backslash command, freeing the allocated\n> >> variables and resetting the send_mode. Another possible approach would\n> >> be to return an error when there's already an existing state instead\n> >> of overwriting it.\n> >\n> > I'll double-check all that tomorrow, but what you have looks like it is\n> > going in the right direction.\n>\n> And done down to v16, with one logic for HEAD and something simpler\n> for \\bind in v16 and v17.\n>\n> Issuing an error if there is a state does not sound like a good idea\n> at this stage because it would suddenly break scripts that expect\n> multiple commands of \\bind to prioritize the last one. If that was\n> something only on HEAD, I would have considered that as a serious\n> option, but not with v16 in mind for \\bind.\n> --\n> Michael",
"msg_date": "Thu, 19 Sep 2024 10:53:07 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql: \\parse\n and \\bind"
},
{
"msg_contents": "On Thu, Sep 19, 2024 at 10:53:07AM +0200, Jelte Fennema-Nio wrote:\n> Seems like a good idea to add a simple test for that behaviour then.\n> See attached.\n\nThanks. The same can be said for \\bind_named, so I have added\nsomething for both \\bind and \\bind_named.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2024 08:59:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add additional extended protocol commands to psql:\n \\parse and \\bind"
}
] |
[
{
"msg_contents": "Any new patches will now need to be submitted to the January\ncommitfest. (Normally we'd give advance notice of that change, but\nwe're a bit behind this time.)\n\nI have started going through the patch entries looking for out-of-date\nstatuses and filling in missing author fields. Currently we're at:\nNeeds review: 210. Waiting on Author: 42. Ready for Committer: 29.\nCommitted: 55. Withdrawn: 10. Returned with Feedback: 1. Total: 347.\n\nNext week I will also look at patches new to this CF and try to\nidentify any that could be either committed or returned quickly.\n\nNow is a good time to check if your patch still applies and passes\ntests. http://cfbot.cputube.org/ shows 81 patches needing rebase. I\ndon't know what is usual, but that seems like a pretty large number,\nso I won't spam the list requesting rebase on each thread. Instead I\nplan to do so on a case-by-case basis where I think a patch has some\nmomentum and needs a little push.\n\nIf you have submitted a patch this cycle and have not yet reviewed a\npatch, we encourage you to sign up to do so. If you are actively\nreviewing, we are grateful! We perennially have plenty of code, but a\nshortage of good review.\n\n--\nJohn Naylor\n\n\n",
"msg_date": "Thu, 2 Nov 2023 18:46:36 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest 2023-11 has started"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on something else, I noticed $SUBJECT: commit b7eda3e0e\nmoved XidInMVCCSnapshot() from tqual.c into snapmgr.c, but follow-up\ncommit c91560def updated this reference incorrectly:\n\n@@ -1498,7 +1498,7 @@ GetMaxSnapshotSubxidCount(void)\n * information may not be available. If we find any overflowed subxid arrays,\n * we have to mark the snapshot's subxid data as overflowed, and extra work\n * *may* need to be done to determine what's running (see XidInMVCCSnapshot()\n- * in tqual.c).\n+ * in heapam_visibility.c).\n\nAttached is a small patch for that: s/heapam_visibility.c/snapmgr.c/.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Thu, 2 Nov 2023 21:40:35 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Incorrect file reference in comment in procarray.c"
},
{
"msg_contents": "> On 2 Nov 2023, at 13:40, Etsuro Fujita <[email protected]> wrote:\n\n> Attached is a small patch for that: s/heapam_visibility.c/snapmgr.c/.\n\nNo objections to the patch, the change is correct. However, with git grep and\nctags and other ways of discovery I wonder if we're not better off avoiding\nsuch references to filenames which are prone to going stale (and do from time\nto time).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 2 Nov 2023 14:20:55 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect file reference in comment in procarray.c"
},
{
"msg_contents": "On Thu, Nov 2, 2023 at 10:20 PM Daniel Gustafsson <[email protected]> wrote:\n> > On 2 Nov 2023, at 13:40, Etsuro Fujita <[email protected]> wrote:\n> > Attached is a small patch for that: s/heapam_visibility.c/snapmgr.c/.\n>\n> No objections to the patch, the change is correct. However, with git grep and\n> ctags and other ways of discovery I wonder if we're not better off avoiding\n> such references to filenames which are prone to going stale (and do from time\n> to time).\n\nAgreed. As XidInMVCCSnapshot() is an extern function, such a tool\nwould allow the reader to easily find the source file that contains\nthe definition of that function.\n\nThanks for the comment!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 3 Nov 2023 18:53:57 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect file reference in comment in procarray.c"
},
{
"msg_contents": "On Fri, Nov 3, 2023 at 6:53 PM Etsuro Fujita <[email protected]> wrote:\n> On Thu, Nov 2, 2023 at 10:20 PM Daniel Gustafsson <[email protected]> wrote:\n> > > On 2 Nov 2023, at 13:40, Etsuro Fujita <[email protected]> wrote:\n> > > Attached is a small patch for that: s/heapam_visibility.c/snapmgr.c/.\n> >\n> > No objections to the patch, the change is correct. However, with git grep and\n> > ctags and other ways of discovery I wonder if we're not better off avoiding\n> > such references to filenames which are prone to going stale (and do from time\n> > to time).\n>\n> Agreed. As XidInMVCCSnapshot() is an extern function, such a tool\n> would allow the reader to easily find the source file that contains\n> the definition of that function.\n\nPushed, and back-patched, like that.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 13 Nov 2023 19:20:43 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect file reference in comment in procarray.c"
}
] |
[
{
"msg_contents": "This proposal showcases the speed-up provided to popcount feature when using AVX512 registers. The intent is to share the preliminary results with the community and get feedback for adding avx512 support for popcount. \n \nRevisiting the previous discussion/improvements around this feature, I have created a micro-benchmark based on the pg_popcount() in PostgreSQL's current implementations for x86_64 using the newer AVX512 intrinsics. Playing with this implementation has improved performance up to 46% on Intel's Sapphire Rapids platform on AWS. Such gains will benefit scenarios relying on popcount.\n \nMy setup:\n \nMachine: AWS EC2 m7i - 16vcpu, 64gb RAM\nOS : Ubuntu 22.04\nGCC: 11.4 and 12.3 with flags \"-mavx -mavx512vpopcntdq -mavx512vl -march=native -O2\".\n\n1. I copied the pg_popcount() implementation into a new C/C++ project using cmake/make.\n\ta. Software only and\n\tb. SSE 64 bit version\n2. I created an implementation using the following AVX512 intrinsics:\n\ta. _mm512_popcnt_epi64()\n\tb. _mm512_reduce_add_epi64()\n3. I tested random bit streams from 64 MiB to 1024 MiB in length (5 sizes; repeatable with RNG seed [std::mt19937_64])\n4. I tested 5 seeds for each input buffer size and averaged 100 runs each (5*5*100=2500 pg_popcount() calls on a single thread)\n5. 
Data: <See Attached picture.>\n\nThe code I wrote uses the 64-bit solution or SW on the memory not aligned to a 512-bit boundary in memory:\n \n///////////////////////////////////////////////////////////////////////\n// 512-bit intrisic implementation (AVX512VPOPCNTDQ + AVX512F)\nuint64_t popcount_512_impl(const char *bytes, int byteCount) {\n#ifdef __AVX__\n uint64_t result = 0;\n uint64_t remainder = ((uint64_t)bytes) % 64;\n result += popcount_64_impl(bytes, remainder);\n byteCount -= remainder;\n bytes += remainder;\n uint64_t vectorCount = byteCount / 64;\n remainder = byteCount % 64;\n __m512i *vectors = (__m512i *)bytes;\n __m512i rv;\n while (vectorCount--) {\n rv = _mm512_popcnt_epi64(*(vectors++));\n result += _mm512_reduce_add_epi64(rv);\n }\n bytes = (const char *)vectors;\n result += popcount_64_impl(bytes, remainder);\n return result;\n#else\n return popcount_64_impl(bytes, byteCount);\n#endif\n}\n \nThere are further optimizations that can be applied here, but for demonstration I added the __AVX__ macro and if not fall back to the original implementations in PostgreSQL.\n \nThe 46% improvement in popcount is worthy of discussion considering the previous popcount 64-bit SSE and SW implementations. \n \n Thanks,\nPaul Amonson",
"msg_date": "Thu, 2 Nov 2023 14:22:10 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, 2 Nov 2023 at 15:22, Amonson, Paul D <[email protected]> wrote:\n>\n> This proposal showcases the speed-up provided to popcount feature when using AVX512 registers. The intent is to share the preliminary results with the community and get feedback for adding avx512 support for popcount.\n>\n> Revisiting the previous discussion/improvements around this feature, I have created a micro-benchmark based on the pg_popcount() in PostgreSQL's current implementations for x86_64 using the newer AVX512 intrinsics. Playing with this implementation has improved performance up to 46% on Intel's Sapphire Rapids platform on AWS. Such gains will benefit scenarios relying on popcount.\n\nHow does this compare to older CPUs, and more mixed workloads? IIRC,\nthe use of AVX512 (which I believe this instruction to be included in)\nhas significant implications for core clock frequency when those\ninstructions are being executed, reducing overall performance if\nthey're not a large part of the workload.\n\n> My setup:\n>\n> Machine: AWS EC2 m7i - 16vcpu, 64gb RAM\n> OS : Ubuntu 22.04\n> GCC: 11.4 and 12.3 with flags \"-mavx -mavx512vpopcntdq -mavx512vl -march=native -O2\".\n>\n> 1. I copied the pg_popcount() implementation into a new C/C++ project using cmake/make.\n> a. Software only and\n> b. SSE 64 bit version\n> 2. I created an implementation using the following AVX512 intrinsics:\n> a. _mm512_popcnt_epi64()\n> b. _mm512_reduce_add_epi64()\n> 3. I tested random bit streams from 64 MiB to 1024 MiB in length (5 sizes; repeatable with RNG seed [std::mt19937_64])\n\nApart from the two type functions bytea_bit_count and bit_bit_count\n(which are not accessed in postgres' own systems, but which could want\nto cover bytestreams of >BLCKSZ) the only popcount usages I could find\nwere on objects that fit on a page, i.e. <8KiB in size. 
How does\nperformance compare for bitstreams of such sizes, especially after any\nCPU clock implications are taken into account?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 3 Nov 2023 12:16:05 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, Nov 03, 2023 at 12:16:05PM +0100, Matthias van de Meent wrote:\n> On Thu, 2 Nov 2023 at 15:22, Amonson, Paul D <[email protected]> wrote:\n>> This proposal showcases the speed-up provided to popcount feature when\n>> using AVX512 registers. The intent is to share the preliminary results\n>> with the community and get feedback for adding avx512 support for\n>> popcount.\n>>\n>> Revisiting the previous discussion/improvements around this feature, I\n>> have created a micro-benchmark based on the pg_popcount() in\n>> PostgreSQL's current implementations for x86_64 using the newer AVX512\n>> intrinsics. Playing with this implementation has improved performance up\n>> to 46% on Intel's Sapphire Rapids platform on AWS. Such gains will\n>> benefit scenarios relying on popcount.\n\nNice. I've been testing out AVX2 support in src/include/port/simd.h, and\nthe results look promising there, too. I intend to start a new thread for\nthat (hopefully soon), but one open question I don't have a great answer\nfor yet is how to detect support for newer intrinsics. So far, we've been\nable to use function pointers (e.g., popcount, crc32c) or deduce support\nvia common predefined compiler macros (e.g., we assume SSE2 is supported if\nthe compiler is targeting 64-bit x86). But the former introduces a\nperformance penalty, and we probably want to inline most of this stuff,\nanyway. And the latter limits us to stuff that has been around for a\ndecade or two.\n\nLike I said, I don't have any proposals yet, but assuming we do want to\nsupport newer intrinsics, either open-coded or via auto-vectorization, I\nsuspect we'll need to gather consensus for a new policy/strategy.\n\n> Apart from the two type functions bytea_bit_count and bit_bit_count\n> (which are not accessed in postgres' own systems, but which could want\n> to cover bytestreams of >BLCKSZ) the only popcount usages I could find\n> were on objects that fit on a page, i.e. <8KiB in size. 
How does\n> performance compare for bitstreams of such sizes, especially after any\n> CPU clock implications are taken into account?\n\nYeah, the previous optimizations in this area appear to have used ANALYZE\nas the benchmark, presumably because of visibilitymap_count(). I briefly\nattempted to measure the difference with and without AVX512 support, but I\nhaven't noticed any difference thus far. One complication for\nvisibilitymap_count() is that the data passed to pg_popcount64() is masked,\nwhich requires a couple more instructions when you're using the intrinsics.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 6 Nov 2023 20:22:40 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> Like I said, I don't have any proposals yet, but assuming we do want to\n> support newer intrinsics, either open-coded or via auto-vectorization, I\n> suspect we'll need to gather consensus for a new policy/strategy.\n\nYeah. The function-pointer solution kind of sucks, because for the\nsort of operation we're considering here, adding a call and return\nis probably order-of-100% overhead. Worse, it adds similar overhead\nfor everyone who doesn't get the benefit of the optimization. (One\nof the key things you want to be able to say, when trying to sell\na maybe-it-helps-or-maybe-it-doesnt optimization to the PG community,\nis \"it doesn't hurt anyone who's not able to benefit\".) And you\ncan't argue that that overhead is negligible either, because if it\nis then we're all wasting our time even discussing this. So we need\na better technology, and I fear I have no good ideas about what.\n\nYour comment about vectorization hints at one answer: if you can\namortize the overhead across multiple applications of the operation,\nthen it doesn't hurt so much. But I'm not sure how often we can\nmake that answer work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Nov 2023 21:52:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Nov 06, 2023 at 09:52:58PM -0500, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n> > Like I said, I don't have any proposals yet, but assuming we do want to\n> > support newer intrinsics, either open-coded or via auto-vectorization, I\n> > suspect we'll need to gather consensus for a new policy/strategy.\n> \n> Yeah. The function-pointer solution kind of sucks, because for the\n> sort of operation we're considering here, adding a call and return\n> is probably order-of-100% overhead. Worse, it adds similar overhead\n> for everyone who doesn't get the benefit of the optimization.\n\nThe glibc/gcc \"ifunc\" mechanism was designed to solve this problem of choosing\na function implementation based on the runtime CPU, without incurring function\npointer overhead. I would not attempt to use AVX512 on non-glibc systems, and\nI would use ifunc to select the desired popcount implementation on glibc:\nhttps://gcc.gnu.org/onlinedocs/gcc-4.8.5/gcc/Function-Attributes.html\n\n\n",
"msg_date": "Mon, 6 Nov 2023 19:15:01 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Nov 06, 2023 at 07:15:01PM -0800, Noah Misch wrote:\n> On Mon, Nov 06, 2023 at 09:52:58PM -0500, Tom Lane wrote:\n>> Nathan Bossart <[email protected]> writes:\n>> > Like I said, I don't have any proposals yet, but assuming we do want to\n>> > support newer intrinsics, either open-coded or via auto-vectorization, I\n>> > suspect we'll need to gather consensus for a new policy/strategy.\n>> \n>> Yeah. The function-pointer solution kind of sucks, because for the\n>> sort of operation we're considering here, adding a call and return\n>> is probably order-of-100% overhead. Worse, it adds similar overhead\n>> for everyone who doesn't get the benefit of the optimization.\n> \n> The glibc/gcc \"ifunc\" mechanism was designed to solve this problem of choosing\n> a function implementation based on the runtime CPU, without incurring function\n> pointer overhead. I would not attempt to use AVX512 on non-glibc systems, and\n> I would use ifunc to select the desired popcount implementation on glibc:\n> https://gcc.gnu.org/onlinedocs/gcc-4.8.5/gcc/Function-Attributes.html\n\nThanks, that seems promising for the function pointer cases. I'll plan on\ntrying to convert one of the existing ones to use it. BTW it looks like\nLLVM has something similar [0].\n\nIIUC this unfortunately wouldn't help for cases where we wanted to keep\nstuff inlined, such as is_valid_ascii() and the functions in pg_lfind.h,\nunless we applied it to the calling functions, but that doesn't sound\nparticularly maintainable.\n\n[0] https://llvm.org/docs/LangRef.html#ifuncs\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 6 Nov 2023 21:59:26 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Nov 06, 2023 at 09:59:26PM -0600, Nathan Bossart wrote:\n> On Mon, Nov 06, 2023 at 07:15:01PM -0800, Noah Misch wrote:\n> > On Mon, Nov 06, 2023 at 09:52:58PM -0500, Tom Lane wrote:\n> >> Nathan Bossart <[email protected]> writes:\n> >> > Like I said, I don't have any proposals yet, but assuming we do want to\n> >> > support newer intrinsics, either open-coded or via auto-vectorization, I\n> >> > suspect we'll need to gather consensus for a new policy/strategy.\n> >> \n> >> Yeah. The function-pointer solution kind of sucks, because for the\n> >> sort of operation we're considering here, adding a call and return\n> >> is probably order-of-100% overhead. Worse, it adds similar overhead\n> >> for everyone who doesn't get the benefit of the optimization.\n> > \n> > The glibc/gcc \"ifunc\" mechanism was designed to solve this problem of choosing\n> > a function implementation based on the runtime CPU, without incurring function\n> > pointer overhead. I would not attempt to use AVX512 on non-glibc systems, and\n> > I would use ifunc to select the desired popcount implementation on glibc:\n> > https://gcc.gnu.org/onlinedocs/gcc-4.8.5/gcc/Function-Attributes.html\n> \n> Thanks, that seems promising for the function pointer cases. I'll plan on\n> trying to convert one of the existing ones to use it. BTW it looks like\n> LLVM has something similar [0].\n> \n> IIUC this unfortunately wouldn't help for cases where we wanted to keep\n> stuff inlined, such as is_valid_ascii() and the functions in pg_lfind.h,\n> unless we applied it to the calling functions, but that doesn't ѕound\n> particularly maintainable.\n\nAgreed, it doesn't solve inline cases. If the gains are big enough, we should\nmove toward packages containing N CPU-specialized copies of the postgres\nbinary, with bin/postgres just exec'ing the right one.\n\n\n",
"msg_date": "Mon, 6 Nov 2023 21:53:15 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Nov 06, 2023 at 09:53:15PM -0800, Noah Misch wrote:\n> On Mon, Nov 06, 2023 at 09:59:26PM -0600, Nathan Bossart wrote:\n>> On Mon, Nov 06, 2023 at 07:15:01PM -0800, Noah Misch wrote:\n>> > The glibc/gcc \"ifunc\" mechanism was designed to solve this problem of choosing\n>> > a function implementation based on the runtime CPU, without incurring function\n>> > pointer overhead. I would not attempt to use AVX512 on non-glibc systems, and\n>> > I would use ifunc to select the desired popcount implementation on glibc:\n>> > https://gcc.gnu.org/onlinedocs/gcc-4.8.5/gcc/Function-Attributes.html\n>> \n>> Thanks, that seems promising for the function pointer cases. I'll plan on\n>> trying to convert one of the existing ones to use it. BTW it looks like\n>> LLVM has something similar [0].\n>> \n>> IIUC this unfortunately wouldn't help for cases where we wanted to keep\n>> stuff inlined, such as is_valid_ascii() and the functions in pg_lfind.h,\n>> unless we applied it to the calling functions, but that doesn't ѕound\n>> particularly maintainable.\n> \n> Agreed, it doesn't solve inline cases. If the gains are big enough, we should\n> move toward packages containing N CPU-specialized copies of the postgres\n> binary, with bin/postgres just exec'ing the right one.\n\nI performed a quick test with ifunc on my x86 machine that ordinarily uses\nthe runtime checks for the CRC32C code, and I actually see a consistent\n3.5% regression for pg_waldump -z on 100M 65-byte records. I've attached\nthe patch used for testing.\n\nThe multiple-copies-of-the-postgres-binary idea seems interesting. That's\nprobably not something that could be enabled by default, but perhaps we\ncould add support for a build option.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 7 Nov 2023 14:14:41 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Sorry for the late response here. We spent some time researching and measuring the frequency impact of AVX512 instructions used here.\r\n\r\n>How does this compare to older CPUs, and more mixed workloads? IIRC,\r\nthe use of AVX512 (which I believe this instruction to be included in)\r\nhas significant implications for core clock frequency when those\r\ninstructions are being executed, reducing overall performance if\r\nthey're not a large part of the workload.\r\n\r\nAVX512 has light and heavy instructions. While the heavy AVX512 instructions have clock frequency implications, the light instructions not so much. See [0] for more details. We captured EMON data for the benchmark used in this work, and see that the instructions are using the licensing level not meant for heavy AVX512 operations. This means the instructions for popcount : _mm512_popcnt_epi64(), _mm512_reduce_add_epi64() are not going to have any significant impact on CPU clock frequency. \r\nClock frequency impact aside, we measured the same benchmark for gains on older Intel hardware and observe up to 18% better performance on Intel Icelake. On older intel hardware, the popcntdq 512 instruction is not present so it won’t work. If clock frequency is not affected, rest of workload should not be impacted in the case of mixed workloads. \r\n\r\n>Apart from the two type functions bytea_bit_count and bit_bit_count\r\n(which are not accessed in postgres' own systems, but which could want\r\nto cover bytestreams of >BLCKSZ) the only popcount usages I could find\r\nwere on objects that fit on a page, i.e. <8KiB in size. How does\r\nperformance compare for bitstreams of such sizes, especially after any\r\nCPU clock implications are taken into account?\r\n\r\nTesting this on smaller block sizes < 8KiB shows that AVX512 compared to the current 64bit behavior shows slightly lower performance, but with a large variance. We cannot conclude much from it. 
The testing with ANALYZE benchmark by Nathan also points to no visible impact as a result of using AVX512. The gains on larger dataset is easily evident, with less variance. \r\nWhat are your thoughts if we introduce AVX512 popcount for smaller sizes as an optional feature initially, and then test it more thoroughly over time on this particular use case? \r\n\r\nRegarding enablement, following the other responses related to function inlining, using ifunc and enabling future intrinsic support, it seems a concrete solution would require further discussion. We’re attaching a patch to enable AVX512, which can use AVX512 flags during build. For example:\r\n >make -E CFLAGS_AVX512=\"-mavx -mavx512dq -mavx512vpopcntdq -mavx512vl -march=icelake-server -DAVX512_POPCNT=1\"\r\n\r\nThoughts or feedback on the approach in the patch? This solution should not impact anyone who doesn’t use the feature i.e. AVX512. Open to additional ideas if this doesn’t seem like the right approach here. \r\n\r\n[0] https://lemire.me/blog/2018/09/07/avx-512-when-and-how-to-use-these-new-instructions/\r\n\r\n-----Original Message-----\r\nFrom: Nathan Bossart <[email protected]> \r\nSent: Tuesday, November 7, 2023 12:15 PM\r\nTo: Noah Misch <[email protected]>\r\nCc: Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; Amonson, Paul D <[email protected]>; [email protected]; Shankaran, Akash <[email protected]>\r\nSubject: Re: Popcount optimization using AVX512\r\n\r\nOn Mon, Nov 06, 2023 at 09:53:15PM -0800, Noah Misch wrote:\r\n> On Mon, Nov 06, 2023 at 09:59:26PM -0600, Nathan Bossart wrote:\r\n>> On Mon, Nov 06, 2023 at 07:15:01PM -0800, Noah Misch wrote:\r\n>> > The glibc/gcc \"ifunc\" mechanism was designed to solve this problem \r\n>> > of choosing a function implementation based on the runtime CPU, \r\n>> > without incurring function pointer overhead. 
I would not attempt \r\n>> > to use AVX512 on non-glibc systems, and I would use ifunc to select the desired popcount implementation on glibc:\r\n>> > https://gcc.gnu.org/onlinedocs/gcc-4.8.5/gcc/Function-Attributes.ht\r\n>> > ml\r\n>> \r\n>> Thanks, that seems promising for the function pointer cases. I'll \r\n>> plan on trying to convert one of the existing ones to use it. BTW it \r\n>> looks like LLVM has something similar [0].\r\n>> \r\n>> IIUC this unfortunately wouldn't help for cases where we wanted to \r\n>> keep stuff inlined, such as is_valid_ascii() and the functions in \r\n>> pg_lfind.h, unless we applied it to the calling functions, but that \r\n>> doesn't sound particularly maintainable.\r\n> \r\n> Agreed, it doesn't solve inline cases. If the gains are big enough, \r\n> we should move toward packages containing N CPU-specialized copies of \r\n> the postgres binary, with bin/postgres just exec'ing the right one.\r\n\r\nI performed a quick test with ifunc on my x86 machine that ordinarily uses the runtime checks for the CRC32C code, and I actually see a consistent 3.5% regression for pg_waldump -z on 100M 65-byte records. I've attached the patch used for testing.\r\n\r\nThe multiple-copies-of-the-postgres-binary idea seems interesting. That's probably not something that could be enabled by default, but perhaps we could add support for a build option.\r\n\r\n--\r\nNathan Bossart\r\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 15 Nov 2023 20:27:57 +0000",
"msg_from": "\"Shankaran, Akash\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 08:27:57PM +0000, Shankaran, Akash wrote:\n> AVX512 has light and heavy instructions. While the heavy AVX512\n> instructions have clock frequency implications, the light instructions\n> not so much. See [0] for more details. We captured EMON data for the\n> benchmark used in this work, and see that the instructions are using the\n> licensing level not meant for heavy AVX512 operations. This means the\n> instructions for popcount : _mm512_popcnt_epi64(),\n> _mm512_reduce_add_epi64() are not going to have any significant impact on\n> CPU clock frequency.\n>\n> Clock frequency impact aside, we measured the same benchmark for gains on\n> older Intel hardware and observe up to 18% better performance on Intel\n> Icelake. On older intel hardware, the popcntdq 512 instruction is not\n> present so it won’t work. If clock frequency is not affected, rest of\n> workload should not be impacted in the case of mixed workloads. \n\nThanks for sharing your analysis.\n\n> Testing this on smaller block sizes < 8KiB shows that AVX512 compared to\n> the current 64bit behavior shows slightly lower performance, but with a\n> large variance. We cannot conclude much from it. The testing with ANALYZE\n> benchmark by Nathan also points to no visible impact as a result of using\n> AVX512. The gains on larger dataset is easily evident, with less\n> variance.\n>\n> What are your thoughts if we introduce AVX512 popcount for smaller sizes\n> as an optional feature initially, and then test it more thoroughly over\n> time on this particular use case? \n\nI don't see any need to rush this. At the very earliest, this feature\nwould go into v17, which doesn't enter feature freeze until April 2024.\nThat seems like enough time to complete any additional testing you'd like\nto do. However, if you are seeing worse performance with this patch, then\nit seems unlikely that we'd want to proceed.\n\n> Thoughts or feedback on the approach in the patch? 
This solution should\n> not impact anyone who doesn’t use the feature i.e. AVX512. Open to\n> additional ideas if this doesn’t seem like the right approach here. \n\nIt's true that it wouldn't impact anyone not using the feature, but there's\nalso a decent chance that this code goes virtually untested. As I've\nstated elsewhere [0], I think we should ensure there's buildfarm coverage\nfor this kind of architecture-specific stuff.\n\n[0] https://postgr.es/m/20230726043707.GB3211130%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Nov 2023 15:48:53 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Sorry for the late response. We did some further testing and research on our end, and ended up modifying the AVX512 based algorithm for popcount. We removed a scalar dependency and accumulate the results of popcnt instruction in a zmm register, only performing the reduce add at the very end, similar to [0].\r\n\r\nWith the updated patch, we observed significant improvements and handily beat the previous popcount algorithm performance. No regressions in any scenario are observed:\r\nPlatform: Intel Xeon Platinum 8360Y (Icelake) for data sizes 1kb - 64kb.\r\nMicrobenchmark: 2x - 3x gains presently vs 19% previously, on the same microbenchmark described initially in this thread. \r\n\r\nPG testing: \r\nSQL bit_count() calls popcount. Using a Postgres benchmark calling \"select bit_count(bytea(col1)) from mytable\" on a table with ~2M text rows, each row 1-12kb in size, we observe (only comparing with 64bit PG implementation, which is the fastest): \r\n\r\n1. Entire benchmark using AVX512 implementation vs PG 64-bit impl runs 6-13% faster. \r\n2. Reduce time spent on pg_popcount() method in postgres server during the benchmark: \r\n\to\t64bit (current PG): 29.5% \r\n\to\tAVX512: \t\t 3.3%\r\n3. Reduce number of samples processed by popcount: \r\n\to\t64bit (current PG): 2.4B samples\r\n\to\tAVX512: \t285M samples\r\n\r\nCompile above patch (on a machine supporting AVX512 vpopcntdq) using: make all CFLAGS_AVX512=\"-DHAVE__HW_AVX512_POPCNT -mavx -mavx512vpopcntdq -mavx512f -march=native\r\nAttaching flamegraphs and patch for above observations. 
\r\n\r\n[0] https://github.com/WojciechMula/sse-popcount/blob/master/popcnt-avx512-vpopcnt.cpp\r\n\r\nThanks,\r\nAkash Shankaran",
"msg_date": "Thu, 25 Jan 2024 05:43:41 +0000",
"msg_from": "\"Shankaran, Akash\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On 2024-Jan-25, Shankaran, Akash wrote:\n\n> With the updated patch, we observed significant improvements and\n> handily beat the previous popcount algorithm performance. No\n> regressions in any scenario are observed:\n> Platform: Intel Xeon Platinum 8360Y (Icelake) for data sizes 1kb - 64kb.\n> Microbenchmark: 2x - 3x gains presently vs 19% previously, on the same\n> microbenchmark described initially in this thread. \n\nThese are great results.\n\nHowever, it would be much better if the improved code were available for\nall relevant builds and activated if a CPUID test determines that the\nrelevant instructions are available, instead of requiring a compile-time\nflag -- which most builds are not going to use, thus wasting the\nopportunity for running the optimized code.\n\nI suppose this would require patching pg_popcount64_choose() to be more\nspecific. Looking at the existing code, I would also consider renaming\nthe \"_fast\" variants to something like pg_popcount32_asml/\npg_popcount64_asmq so that you can name the new one pg_popcount64_asmdq\nor such. (Or maybe leave the 32-bit version alone as \"fast/slow\", since\nthere's no third option for that one -- or do I misread?)\n\nI also think this needs to move the CFLAGS-decision-making elsewhere;\nasking the user to get it right is too much of a burden. Is it workable\nto simply verify compiler support for the additional flags needed, and\nif so add them to a new CFLAGS_BITUTILS variable or such? We already\nhave the CFLAGS_CRC model that should be easy to follow. Should be easy\nenough to mostly copy what's in configure.ac and meson.build, right?\n\nFinally, the matter of using ifunc as proposed by Noah seems to be still\nin the air, with no patches offered for the popcount family. Given that\nNathan reports [1] a performance decrease, maybe we should set that\nthought aside for now and continue to use function pointers. 
It's worth\nkeeping in mind that popcount is already using function pointers (at\nleast in the case where we try to use POPCNT directly), so patching to\nselect between three options instead of between two wouldn't be a\nregression.\n\n[1] https://postgr.es/m/20231107201441.GA898662@nathanxps13\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. Alexandre)\n\n\n",
"msg_date": "Thu, 25 Jan 2024 10:49:09 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On 2024-Jan-25, Alvaro Herrera wrote:\n\n> Finally, the matter of using ifunc as proposed by Noah seems to be still\n> in the air, with no patches offered for the popcount family.\n\nOh, I just realized that the patch as currently proposed is placing the\noptimized popcount code in the path that does not require going through\na function pointer. So the performance increase is probably coming from\nboth avoiding jumping through the pointer as well as from the improved\ninstruction.\n\nThis suggests that finding a way to make the ifunc stuff work (with good\nperformance) is critical to this work.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The ability of users to misuse tools is, of course, legendary\" (David Steele)\nhttps://postgr.es/m/[email protected]\n\n\n",
"msg_date": "Fri, 26 Jan 2024 07:42:33 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi All,\r\n\r\n> However, it would be much better if the improved code were available for\r\n> all relevant builds and activated if a CPUID test determines that the\r\n> relevant instructions are available, instead of requiring a compile-time\r\n> flag -- which most builds are not going to use, thus wasting the\r\n> opportunity for running the optimized code.\r\n \r\nThis makes sense. I addressed the feedback, and am attaching an updated patch. Patch also addresses your feedback of autconf configurations by adding CFLAG support. I tested the runtime check for AVX512 on multiple processors with and without AVX512 and it detected or failed to detect the feature as expected.\r\n \r\n> Looking at the existing code, I would also consider renaming\r\n> the \"_fast\" variants to something like pg_popcount32_asml/\r\n> pg_popcount64_asmq so that you can name the new one pg_popcount64_asmdq\r\n> or such.\r\n \r\nI left out the renaming, as it made sense to keep the fast/slow naming for readability.\r\n \r\n> Finally, the matter of using ifunc as proposed by Noah seems to be still\r\n> in the air, with no patches offered for the popcount family. Given that\r\n> Nathan reports [1] a performance decrease, maybe we should set that\r\n> thought aside for now and continue to use function pointers.\r\n \r\nSince there are improvements without it (results below), I agree with you to continue using function pointers.\r\n \r\nI collected data on machines with, and without AVX512 support, using a table with 1M rows and performing SQL bit_count() on a char column containing (84bytes, 4KiB, 8KiB, 16KiB).\r\n * On non-AVX 512 hardware: no regression or impact at runtime with code built with AVX 512 support in the binary between the patched and unpatched servers.\r\n * On AVX512 hardware: the max improvement I saw was 17% but was averaged closer to 6.5% on a bare-metal machine. 
The benefit is lower on smaller cloud VMs on AWS (1 - 3%)\r\n \r\nIf the patch looks good, please suggest next steps on committing it.\r\n \r\nPaul",
"msg_date": "Tue, 6 Feb 2024 18:16:23 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "Hello,\n\nThis looks quite reasonable. On my machine, I get the compiler test to\npass so I get a \"yes\" in configure; but of course my CPU doesn't support\nthe instructions so I get the slow variant. So here's the patch again\nwith some minor artifacts fixed.\n\nI have the following review notes:\n\n1. we use __get_cpuid_count and __cpuidex by relying on macros\nHAVE__GET_CPUID and HAVE__CPUID respectively; but those macros are (in\nthe current Postgres source) only used and tested for __get_cpuid and\n__cpuid respectively. So unless there's some reason to be certain that\n__get_cpuid_count is always present when __get_cpuid is present, and\nthat __cpuidex is present when __cpuid is present, I think we need to\nadd new configure tests and new HAVE_ macros for these.\n\n2. we rely on <immintrin.h> being present with no AC_CHECK_HEADER()\ntest. We currently don't use this header anywhere, so I suppose we need\na test for this one as well. (Also, I suppose if we don't have\nimmintrin.h we can skip the rest of it?)\n\n3. We do the __get_cpuid_count/__cpuidex test and we also do a xgetbv\ntest. The comment there claims that this is to check the results for\nconsistency. But ... how would we know that the results are ever\ninconsistent? As far as I understand, if they were, we would silently\nbecome slower. Is this really what we want? I'm confused about this\ncoding. Maybe we do need both tests to succeed? In that case, just\nreword the comment.\n\nI think if both tests are each considered reliable on its own, then we\ncould either choose one of them and stick with it, ignoring the other;\nor we could use one as primary and then in a USE_ASSERT_CHECKING block\nverify that the other matches and throw a WARNING if not (but what would\nthat tell us?). Or something like that ... not sure.\n\n4. 
It needs meson support, which I suppose consists of copying the\nc-compiler.m4 test into meson.build, mimicking what the tests for CRC\ninstructions do.\n\n\nI started a CI run with this patch applied,\nhttps://cirrus-ci.com/build/4912499619790848\nbut because Meson support is missing, the compile failed\nimmediately:\n\n[10:08:48.825] ccache cc -Isrc/port/libpgport_srv.a.p -Isrc/include -I../src/include -Isrc/include/utils -fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -g -fno-strict-aliasing -fwrapv -fexcess-precision=standard -D_GNU_SOURCE -Wmissing-prototypes -Wpointer-arith -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local -Wformat-security -Wdeclaration-after-statement -Wno-format-truncation -Wno-stringop-truncation -fPIC -pthread -DBUILDING_DLL -MD -MQ src/port/libpgport_srv.a.p/pg_bitutils.c.o -MF src/port/libpgport_srv.a.p/pg_bitutils.c.o.d -o src/port/libpgport_srv.a.p/pg_bitutils.c.o -c ../src/port/pg_bitutils.c\n[10:08:48.825] ../src/port/pg_bitutils.c: In function ‘pg_popcount512_fast’:\n[10:08:48.825] ../src/port/pg_bitutils.c:270:11: warning: AVX512F vector return without AVX512F enabled changes the ABI [-Wpsabi]\n[10:08:48.825] 270 | __m512i accumulator = _mm512_setzero_si512();\n[10:08:48.825] | ^~~~~~~~~~~\n[10:08:48.825] In file included from /usr/lib/gcc/x86_64-linux-gnu/10/include/immintrin.h:55,\n[10:08:48.825] from ../src/port/pg_bitutils.c:22:\n[10:08:48.825] /usr/lib/gcc/x86_64-linux-gnu/10/include/avx512fintrin.h:339:1: error: inlining failed in call to ‘always_inline’ ‘_mm512_setzero_si512’: target specific option mismatch\n[10:08:48.825] 339 | _mm512_setzero_si512 (void)\n[10:08:48.825] | ^~~~~~~~~~~~~~~~~~~~\n[10:08:48.825] ../src/port/pg_bitutils.c:270:25: note: called from here\n[10:08:48.825] 270 | __m512i accumulator = _mm512_setzero_si512();\n[10:08:48.825] | ^~~~~~~~~~~~~~~~~~~~~~\n\n\nThanks\n\n-- \nÁlvaro Herrera Breisgau, 
Deutschland — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)",
"msg_date": "Wed, 7 Feb 2024 11:13:14 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "I happened to notice by chance that John Naylor had posted an extension\nto measure performance of popcount here:\nhttps://postgr.es/m/CAFBsxsE7otwnfA36Ly44zZO+b7AEWHRFANxR1h1kxveEV=ghLQ@mail.gmail.com\n\nThis might be useful as a base for a new one to verify the results of\nthe proposed patch in machines with relevant instruction support.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"We're here to devour each other alive\" (Hobbes)\n\n\n",
"msg_date": "Wed, 7 Feb 2024 20:53:40 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Álvaro,\r\n\r\nAll feedback is now completed. I added the additional checks for the new APIs and a separate check for the header to autoconf.\r\n\r\nAbout the double check for AVX 512 I added a large comment explaining why both are needed. There are cases where the CPU ZMM# registers are not exposed by the OS or hypervisor even if the CPU supports AVX512.\r\n\r\nThe big change is adding all old and new build support to meson. I am new to meson/ninja so please review carefully.\r\n\r\nThanks,\r\nPaul\r\n\r\n-----Original Message-----\r\nFrom: Alvaro Herrera <[email protected]> \r\nSent: Wednesday, February 7, 2024 2:13 AM\r\nTo: Amonson, Paul D <[email protected]>\r\nCc: Shankaran, Akash <[email protected]>; Nathan Bossart <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; [email protected]\r\nSubject: Re: Popcount optimization using AVX512\r\n\r\nHello,\r\n\r\nThis looks quite reasonable. On my machine, I get the compiler test to pass so I get a \"yes\" in configure; but of course my CPU doesn't support the instructions so I get the slow variant. So here's the patch again with some minor artifacts fixed.\r\n\r\nI have the following review notes:\r\n\r\n1. we use __get_cpuid_count and __cpuidex by relying on macros HAVE__GET_CPUID and HAVE__CPUID respectively; but those macros are (in the current Postgres source) only used and tested for __get_cpuid and __cpuid respectively. So unless there's some reason to be certain that __get_cpuid_count is always present when __get_cpuid is present, and that __cpuidex is present when __cpuid is present, I think we need to add new configure tests and new HAVE_ macros for these.\r\n\r\n2. we rely on <immintrin.h> being present with no AC_CHECK_HEADER() test. We currently don't use this header anywhere, so I suppose we need a test for this one as well. (Also, I suppose if we don't have immintrin.h we can skip the rest of it?)\r\n\r\n3. 
We do the __get_cpuid_count/__cpuidex test and we also do a xgetbv test. The comment there claims that this is to check the results for consistency. But ... how would we know that the results are ever inconsistent? As far as I understand, if they were, we would silently become slower. Is this really what we want? I'm confused about this coding. Maybe we do need both tests to succeed? In that case, just reword the comment.\r\n\r\nI think if both tests are each considered reliable on its own, then we could either choose one of them and stick with it, ignoring the other; or we could use one as primary and then in a USE_ASSERT_CHECKING block verify that the other matches and throw a WARNING if not (but what would that tell us?). Or something like that ... not sure.\r\n\r\n4. It needs meson support, which I suppose consists of copying the\r\nc-compiler.m4 test into meson.build, mimicking what the tests for CRC instructions do.\r\n\r\n\r\nI started a CI run with this patch applied,\r\nhttps://cirrus-ci.com/build/4912499619790848\r\nbut because Meson support is missing, the compile failed\r\nimmediately:\r\n\r\n[10:08:48.825] ccache cc -Isrc/port/libpgport_srv.a.p -Isrc/include -I../src/include -Isrc/include/utils -fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -g -fno-strict-aliasing -fwrapv -fexcess-precision=standard -D_GNU_SOURCE -Wmissing-prototypes -Wpointer-arith -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local -Wformat-security -Wdeclaration-after-statement -Wno-format-truncation -Wno-stringop-truncation -fPIC -pthread -DBUILDING_DLL -MD -MQ src/port/libpgport_srv.a.p/pg_bitutils.c.o -MF src/port/libpgport_srv.a.p/pg_bitutils.c.o.d -o src/port/libpgport_srv.a.p/pg_bitutils.c.o -c ../src/port/pg_bitutils.c [10:08:48.825] ../src/port/pg_bitutils.c: In function ‘pg_popcount512_fast’:\r\n[10:08:48.825] ../src/port/pg_bitutils.c:270:11: warning: AVX512F vector 
return without AVX512F enabled changes the ABI [-Wpsabi]\r\n[10:08:48.825] 270 | __m512i accumulator = _mm512_setzero_si512();\r\n[10:08:48.825] | ^~~~~~~~~~~\r\n[10:08:48.825] In file included from /usr/lib/gcc/x86_64-linux-gnu/10/include/immintrin.h:55,\r\n[10:08:48.825] from ../src/port/pg_bitutils.c:22:\r\n[10:08:48.825] /usr/lib/gcc/x86_64-linux-gnu/10/include/avx512fintrin.h:339:1: error: inlining failed in call to ‘always_inline’ ‘_mm512_setzero_si512’: target specific option mismatch\r\n[10:08:48.825] 339 | _mm512_setzero_si512 (void)\r\n[10:08:48.825] | ^~~~~~~~~~~~~~~~~~~~\r\n[10:08:48.825] ../src/port/pg_bitutils.c:270:25: note: called from here\r\n[10:08:48.825] 270 | __m512i accumulator = _mm512_setzero_si512();\r\n[10:08:48.825] | ^~~~~~~~~~~~~~~~~~~~~~\r\n\r\n\r\nThanks\r\n\r\n-- \r\nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\r\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)",
"msg_date": "Fri, 9 Feb 2024 17:39:46 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-26 07:42:33 +0100, Alvaro Herrera wrote:\n> This suggests that finding a way to make the ifunc stuff work (with good\n> performance) is critical to this work.\n\nIfuncs are effectively implemented as a function call via a pointer, they're\nnot magic, unfortunately. The sole trick they provide is that you don't\nmanually have to use the function pointer.\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Fri, 9 Feb 2024 10:24:32 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-09 17:39:46 +0000, Amonson, Paul D wrote:\n\n> diff --git a/meson.build b/meson.build\n> index 8ed51b6aae..1e7a4dc942 100644\n> --- a/meson.build\n> +++ b/meson.build\n> @@ -1773,6 +1773,45 @@ elif cc.links('''\n> endif\n> \n> \n> +# XXX: The configure.ac check for __cpuidex() is broken, we don't copy that\n> +# here. To prevent problems due to two detection methods working, stop\n> +# checking after one.\n\nThis seems like a bogus copy-paste.\n\n\n> +if cc.links('''\n> + #include <cpuid.h>\n> + int main(int arg, char **argv)\n> + {\n> + unsigned int exx[4] = {0, 0, 0, 0};\n> + __get_cpuid_count(7, 0, &exx[0], &exx[1], &exx[2], &exx[3]);\n> + }\n> + ''', name: '__get_cpuid_count',\n> + args: test_c_args)\n> + cdata.set('HAVE__GET_CPUID_COUNT', 1)\n> +elif cc.links('''\n> + #include <intrin.h>\n> + int main(int arg, char **argv)\n> + {\n> + unsigned int exx[4] = {0, 0, 0, 0};\n> + __cpuidex(exx, 7, 0);\n> + }\n> + ''', name: '__cpuidex',\n> + args: test_c_args)\n> + cdata.set('HAVE__CPUIDEX', 1)\n> +endif\n> +\n> +\n> +# Check for header immintrin.h\n> +if cc.links('''\n> + #include <immintrin.h>\n> + int main(int arg, char **argv)\n> + {\n> + return 1701;\n> + }\n> + ''', name: '__immintrin',\n> + args: test_c_args)\n> + cdata.set('HAVE__IMMINTRIN', 1)\n> +endif\n\nDo these all actually have to link? 
Invoking the linker is slow.\n\nI think you might be able to just use cc.has_header_symbol().\n\n\n\n> +###############################################################\n> +# AVX 512 POPCNT Intrinsic check\n> +###############################################################\n> +have_avx512_popcnt = false\n> +cflags_avx512_popcnt = []\n> +if host_cpu == 'x86_64'\n> + prog = '''\n> + #include <immintrin.h>\n> + #include <stdint.h>\n> + void main(void)\n> + {\n> + __m512i tmp __attribute__((aligned(64)));\n> + __m512i input = _mm512_setzero_si512();\n> + __m512i output = _mm512_popcnt_epi64(input);\n> + uint64_t cnt = 999;\n> + _mm512_store_si512(&tmp, output);\n> + cnt = _mm512_reduce_add_epi64(tmp);\n> + /* return computed value, to prevent the above being optimized away */\n> + return cnt == 0;\n> + }'''\n\nDoes this work with msvc?\n\n\n> + if cc.links(prog, name: '_mm512_setzero_si512, _mm512_popcnt_epi64, _mm512_store_si512, and _mm512_reduce_add_epi64 with -mavx512vpopcntdq -mavx512f',\n\nThat's a very long line in the output, how about using the avx feature name or\nsomething?\n\n\n\n> diff --git a/src/port/Makefile b/src/port/Makefile\n> index dcc8737e68..6a01a7d89a 100644\n> --- a/src/port/Makefile\n> +++ b/src/port/Makefile\n> @@ -87,6 +87,11 @@ pg_crc32c_sse42.o: CFLAGS+=$(CFLAGS_CRC)\n> pg_crc32c_sse42_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n> pg_crc32c_sse42_srv.o: CFLAGS+=$(CFLAGS_CRC)\n> \n> +# Newer Intel processors can use AVX-512 POPCNT Capabilities (01/30/2024)\n> +pg_bitutils.o: CFLAGS+=$(CFLAGS_AVX512_POPCNT)\n> +pg_bitutils_shlib.o: CFLAGS+=$(CFLAGS_AVX512_POPCNT)\n> +pg_bitutils_srv.o:CFLAGS+=$(CFLAGS_AVX512_POPCNT)\n> +\n> # all versions of pg_crc32c_armv8.o need CFLAGS_CRC\n> pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC)\n> pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n> diff --git a/src/port/meson.build b/src/port/meson.build\n> index 69b30ab21b..1c48a3b07e 100644\n> --- a/src/port/meson.build\n> +++ b/src/port/meson.build\n> @@ -184,6 +184,7 @@ foreach 
name, opts : pgport_variants\n> link_with: cflag_libs,\n> c_pch: pch_c_h,\n> kwargs: opts + {\n> + 'c_args': opts.get('c_args', []) + cflags_avx512_popcnt,\n> 'dependencies': opts['dependencies'] + [ssl],\n> }\n> )\n\nThis will build all of pgport with the avx flags, which wouldn't be correct, I\nthink? The compiler might inject automatic uses of avx512 in places, which\nwould cause problems, no?\n\nWhile you don't do the same for make, isn't even just using the avx512 for all\nof pg_bitutils.c broken for exactly that reason? That's why the existing code\nbuilds the files for various crc variants as their own file.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Feb 2024 10:34:50 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, Feb 09, 2024 at 10:24:32AM -0800, Andres Freund wrote:\n> On 2024-01-26 07:42:33 +0100, Alvaro Herrera wrote:\n> > This suggests that finding a way to make the ifunc stuff work (with good\n> > performance) is critical to this work.\n> \n> Ifuncs are effectively implemented as a function call via a pointer, they're\n> not magic, unfortunately. The sole trick they provide is that you don't\n> manually have to use the function pointer.\n\nThe IFUNC creators introduced it so glibc could use arch-specific memcpy with\nthe instruction sequence of a non-pointer, extern function call, not the\ninstruction sequence of a function pointer call. I don't know why the\nupthread ifunc_test.patch benchmark found ifunc performing worse than function\npointers. However, it would be odd if toolchains have replaced the original\nIFUNC with something equivalent to or slower than function pointers.\n\n\n",
"msg_date": "Fri, 9 Feb 2024 15:27:57 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-09 15:27:57 -0800, Noah Misch wrote:\n> On Fri, Feb 09, 2024 at 10:24:32AM -0800, Andres Freund wrote:\n> > On 2024-01-26 07:42:33 +0100, Alvaro Herrera wrote:\n> > > This suggests that finding a way to make the ifunc stuff work (with good\n> > > performance) is critical to this work.\n> > \n> > Ifuncs are effectively implemented as a function call via a pointer, they're\n> > not magic, unfortunately. The sole trick they provide is that you don't\n> > manually have to use the function pointer.\n> \n> The IFUNC creators introduced it so glibc could use arch-specific memcpy with\n> the instruction sequence of a non-pointer, extern function call, not the\n> instruction sequence of a function pointer call.\n\nMy understanding is that the ifunc mechanism just avoid the need for repeated\nindirect calls/jumps to implement a single function call, not the use of\nindirect function calls at all. Calls into shared libraries, like libc, are\nindirected via the GOT / PLT, i.e. an indirect function call/jump. Without\nifuncs, the target of the function call would then have to dispatch to the\nresolved function. Ifuncs allow to avoid this repeated dispatch by moving the\ndispatch to the dynamic linker stage, modifying the contents of the GOT/PLT to\npoint to the right function. Thus ifuncs are an optimization when calling a\nfunction in a shared library that's then dispatched depending on the cpu\ncapabilities.\n\nHowever, in our case, where the code is in the same binary, function calls\nimplemented in the main binary directly (possibly via a static library) don't\ngo through GOT/PLT. In such a case, use of ifuncs turns a normal direct\nfunction call into one going through the GOT/PLT, i.e. makes it indirect. The\nsame is true for calls within a shared library if either explicit symbol\nvisibility is used, or -symbolic, -Wl,-Bsymbolic or such is used. 
Therefore\nthere's no efficiency gain of ifuncs over a call via function pointer.\n\n\nThis isn't because ifunc is implemented badly or something - the reason for\nthis is that dynamic relocations aren't typically implemented by patching all\ncallsites (\".text relocations\"), which is what you would need to avoid the\nneed for an indirect call to something that fundamentally cannot be a constant\naddress at link time. The reason text relocations are disfavored is that\nthey can make program startup quite slow, that they require allowing\nmodifications to executable pages which are disliked due to the security\nimplications, and that they make the code non-shareable, as the in-memory\nexecutable code has to differ from the on-disk code.\n\n\nI actually think ifuncs within the same binary are a tad *slower* than plain\nfunction pointer calls, unless -fno-plt is used. Without -fno-plt, an ifunc is\ncalled by 1) a direct call into the PLT, 2) loading the target address from\nthe GOT, 3) making an indirect jump to that address. Whereas a \"plain\nindirect function call\" is just 1) load target address from variable 2) making\nan indirect jump to that address. With -fno-plt the callsites themselves load\nthe address from the GOT.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Feb 2024 20:33:23 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, Feb 09, 2024 at 08:33:23PM -0800, Andres Freund wrote:\n> On 2024-02-09 15:27:57 -0800, Noah Misch wrote:\n> > On Fri, Feb 09, 2024 at 10:24:32AM -0800, Andres Freund wrote:\n> > > On 2024-01-26 07:42:33 +0100, Alvaro Herrera wrote:\n> > > > This suggests that finding a way to make the ifunc stuff work (with good\n> > > > performance) is critical to this work.\n> > > \n> > > Ifuncs are effectively implemented as a function call via a pointer, they're\n> > > not magic, unfortunately. The sole trick they provide is that you don't\n> > > manually have to use the function pointer.\n> > \n> > The IFUNC creators introduced it so glibc could use arch-specific memcpy with\n> > the instruction sequence of a non-pointer, extern function call, not the\n> > instruction sequence of a function pointer call.\n> \n> My understanding is that the ifunc mechanism just avoid the need for repeated\n> indirect calls/jumps to implement a single function call, not the use of\n> indirect function calls at all. Calls into shared libraries, like libc, are\n> indirected via the GOT / PLT, i.e. an indirect function call/jump. Without\n> ifuncs, the target of the function call would then have to dispatch to the\n> resolved function. Ifuncs allow to avoid this repeated dispatch by moving the\n> dispatch to the dynamic linker stage, modifying the contents of the GOT/PLT to\n> point to the right function. Thus ifuncs are an optimization when calling a\n> function in a shared library that's then dispatched depending on the cpu\n> capabilities.\n> \n> However, in our case, where the code is in the same binary, function calls\n> implemented in the main binary directly (possibly via a static library) don't\n> go through GOT/PLT. In such a case, use of ifuncs turns a normal direct\n> function call into one going through the GOT/PLT, i.e. makes it indirect. 
The\n> same is true for calls within a shared library if either explicit symbol\n> visibility is used, or -symbolic, -Wl,-Bsymbolic or such is used. Therefore\n> there's no efficiency gain of ifuncs over a call via function pointer.\n> \n> \n> This isn't because ifunc is implemented badly or something - the reason for\n> this is that dynamic relocations aren't typically implemented by patching all\n> callsites (\".text relocations\"), which is what you would need to avoid the\n> need for an indirect call to something that fundamentally cannot be a constant\n> address at link time. The reason text relocations are disfavored is that\n> they can make program startup quite slow, that they require allowing\n> modifications to executable pages which are disliked due to the security\n> implications, and that they make the code non-shareable, as the in-memory\n> executable code has to differ from the on-disk code.\n> \n> \n> I actually think ifuncs within the same binary are a tad *slower* than plain\n> function pointer calls, unless -fno-plt is used. Without -fno-plt, an ifunc is\n> called by 1) a direct call into the PLT, 2) loading the target address from\n> the GOT, 3) making an an indirect jump to that address. Whereas a \"plain\n> indirect function call\" is just 1) load target address from variable 2) making\n> an indirect jump to that address. With -fno-plt the callsites themselves load\n> the address from the GOT.\n\nThat sounds more accurate than what I wrote. Thanks.\n\n\n",
"msg_date": "Sat, 10 Feb 2024 15:52:38 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "My responses with questions,\r\n\r\n> > +# XXX: The configure.ac check for __cpuidex() is broken, we don't \r\n> > +copy that # here. To prevent problems due to two detection methods \r\n> > +working, stop # checking after one.\r\n>\r\n> This seems like a bogus copy-paste.\r\n\r\nMy bad. Will remove the offending comment. :)\r\n\r\n> > +# Check for header immintrin.h\r\n> ...\r\n> Do these all actually have to link? Invoking the linker is slow.\r\n> I think you might be able to just use cc.has_header_symbol().\r\n\r\nI took this to mean the last of the 3 new blocks. I changed this one to the cc_has_header method. I think I do want the first 2 checking the link as well. If they don't link here they won't link in the actual build.\r\n\r\n> Does this work with msvc?\r\n\r\nI think it will work but I have no way to validate it. I propose we remove the AVX-512 popcount feature from MSVC builds. Sound ok?\r\n\r\n> That's a very long line in the output, how about using the avx feature name or something?\r\n\r\nAgree, will fix.\r\n\r\n> This will build all of pgport with the avx flags, which wouldn't be correct, I think? The compiler might inject automatic uses of avx512 in places, which would cause problems, no?\r\n\r\nThis will take me some time to learn how to do this in meson. Any pointers here would be helpful. \r\n\r\n> While you don't do the same for make, isn't even just using the avx512 for all of pg_bitutils.c broken for exactly that reson? That's why the existing code builds the files for various crc variants as their own file.\r\n\r\nI don't think it's broken, nothing else in pg_bitutils.c will make use of AVX-512, so I am not sure what benefits dividing this up into multiple files will yield beyond code readability as they will all be needed during compile time. 
I prefer to not split if the community agrees to it.\r\n \r\nIf splitting still makes sense, I propose splitting into 3 files: pg_bitutils.c (entry point +sw popcnt implementation), pg_popcnt_choose.c (CPUID and xgetbv check) and pg_popcnt_x86_64_accel.c (64/512bit x86 implementations). \r\nI'm not an expert in meson, but splitting might add complexity to meson.build. \r\n\r\nCould you elaborate if there are other benefits to the split file approach?\r\n\r\nPaul\r\n\r\n\r\n-----Original Message-----\r\nFrom: Andres Freund <[email protected]> \r\nSent: Friday, February 9, 2024 10:35 AM\r\nTo: Amonson, Paul D <[email protected]>\r\nCc: Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Nathan Bossart <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; [email protected]\r\nSubject: Re: Popcount optimization using AVX512\r\n\r\nHi,\r\n\r\nOn 2024-02-09 17:39:46 +0000, Amonson, Paul D wrote:\r\n\r\n> diff --git a/meson.build b/meson.build index 8ed51b6aae..1e7a4dc942 \r\n> 100644\r\n> --- a/meson.build\r\n> +++ b/meson.build\r\n> @@ -1773,6 +1773,45 @@ elif cc.links('''\r\n> endif\r\n> \r\n> \r\n> +# XXX: The configure.ac check for __cpuidex() is broken, we don't \r\n> +copy that # here. 
To prevent problems due to two detection methods \r\n> +working, stop # checking after one.\r\n\r\nThis seems like a bogus copy-paste.\r\n\r\n\r\n> +if cc.links('''\r\n> + #include <cpuid.h>\r\n> + int main(int arg, char **argv)\r\n> + {\r\n> + unsigned int exx[4] = {0, 0, 0, 0};\r\n> + __get_cpuid_count(7, 0, &exx[0], &exx[1], &exx[2], &exx[3]);\r\n> + }\r\n> + ''', name: '__get_cpuid_count',\r\n> + args: test_c_args)\r\n> + cdata.set('HAVE__GET_CPUID_COUNT', 1) elif cc.links('''\r\n> + #include <intrin.h>\r\n> + int main(int arg, char **argv)\r\n> + {\r\n> + unsigned int exx[4] = {0, 0, 0, 0};\r\n> + __cpuidex(exx, 7, 0);\r\n> + }\r\n> + ''', name: '__cpuidex',\r\n> + args: test_c_args)\r\n> + cdata.set('HAVE__CPUIDEX', 1)\r\n> +endif\r\n> +\r\n> +\r\n> +# Check for header immintrin.h\r\n> +if cc.links('''\r\n> + #include <immintrin.h>\r\n> + int main(int arg, char **argv)\r\n> + {\r\n> + return 1701;\r\n> + }\r\n> + ''', name: '__immintrin',\r\n> + args: test_c_args)\r\n> + cdata.set('HAVE__IMMINTRIN', 1)\r\n> +endif\r\n\r\nDo these all actually have to link? 
Invoking the linker is slow.\r\n\r\nI think you might be able to just use cc.has_header_symbol().\r\n\r\n\r\n\r\n> +###############################################################\r\n> +# AVX 512 POPCNT Intrinsic check\r\n> +###############################################################\r\n> +have_avx512_popcnt = false\r\n> +cflags_avx512_popcnt = []\r\n> +if host_cpu == 'x86_64'\r\n> + prog = '''\r\n> + #include <immintrin.h>\r\n> + #include <stdint.h>\r\n> + void main(void)\r\n> + {\r\n> + __m512i tmp __attribute__((aligned(64)));\r\n> + __m512i input = _mm512_setzero_si512();\r\n> + __m512i output = _mm512_popcnt_epi64(input);\r\n> + uint64_t cnt = 999;\r\n> + _mm512_store_si512(&tmp, output);\r\n> + cnt = _mm512_reduce_add_epi64(tmp);\r\n> + /* return computed value, to prevent the above being optimized away */\r\n> + return cnt == 0;\r\n> + }'''\r\n\r\nDoes this work with msvc?\r\n\r\n\r\n> + if cc.links(prog, name: '_mm512_setzero_si512, \r\n> + _mm512_popcnt_epi64, _mm512_store_si512, and _mm512_reduce_add_epi64 \r\n> + with -mavx512vpopcntdq -mavx512f',\r\n\r\nThat's a very long line in the output, how about using the avx feature name or something?\r\n\r\n\r\n\r\n> diff --git a/src/port/Makefile b/src/port/Makefile index \r\n> dcc8737e68..6a01a7d89a 100644\r\n> --- a/src/port/Makefile\r\n> +++ b/src/port/Makefile\r\n> @@ -87,6 +87,11 @@ pg_crc32c_sse42.o: CFLAGS+=$(CFLAGS_CRC)\r\n> pg_crc32c_sse42_shlib.o: CFLAGS+=$(CFLAGS_CRC)\r\n> pg_crc32c_sse42_srv.o: CFLAGS+=$(CFLAGS_CRC)\r\n> \r\n> +# Newer Intel processors can use AVX-512 POPCNT Capabilities \r\n> +(01/30/2024)\r\n> +pg_bitutils.o: CFLAGS+=$(CFLAGS_AVX512_POPCNT)\r\n> +pg_bitutils_shlib.o: CFLAGS+=$(CFLAGS_AVX512_POPCNT)\r\n> +pg_bitutils_srv.o:CFLAGS+=$(CFLAGS_AVX512_POPCNT)\r\n> +\r\n> # all versions of pg_crc32c_armv8.o need CFLAGS_CRC\r\n> pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC)\r\n> pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC) diff --git \r\n> a/src/port/meson.build b/src/port/meson.build 
index \r\n> 69b30ab21b..1c48a3b07e 100644\r\n> --- a/src/port/meson.build\r\n> +++ b/src/port/meson.build\r\n> @@ -184,6 +184,7 @@ foreach name, opts : pgport_variants\r\n> link_with: cflag_libs,\r\n> c_pch: pch_c_h,\r\n> kwargs: opts + {\r\n> + 'c_args': opts.get('c_args', []) + cflags_avx512_popcnt,\r\n> 'dependencies': opts['dependencies'] + [ssl],\r\n> }\r\n> )\r\n\r\nThis will build all of pgport with the avx flags, which wouldn't be correct, I think? The compiler might inject automatic uses of avx512 in places, which would cause problems, no?\r\n\r\nWhile you don't do the same for make, isn't even just using the avx512 for all of pg_bitutils.c broken for exactly that reson? That's why the existing code builds the files for various crc variants as their own file.\r\n\r\n\r\nGreetings,\r\n\r\nAndres Freund\r\n",
"msg_date": "Mon, 12 Feb 2024 20:14:06 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-12 20:14:06 +0000, Amonson, Paul D wrote:\n> > > +# Check for header immintrin.h\n> > ...\n> > Do these all actually have to link? Invoking the linker is slow.\n> > I think you might be able to just use cc.has_header_symbol().\n>\n> I took this to mean the last of the 3 new blocks.\n\nYep.\n\n\n> > Does this work with msvc?\n>\n> I think it will work but I have no way to validate it. I propose we remove the AVX-512 popcount feature from MSVC builds. Sound ok?\n\nCI [1] would be able to test at least building. Including via cfbot,\nautomatically run for each commitfest entry - you can see prior runs at\n[2]. They run on Zen 3 epyc instances, so unfortunately runtime won't be\ntested. If you look at [3], you can see that currently it doesn't seem to be\nconsidered supported at configure time:\n\n...\n[00:23:48.480] Checking if \"__get_cpuid\" : links: NO\n[00:23:48.480] Checking if \"__cpuid\" : links: YES\n...\n[00:23:48.492] Checking if \"x86_64: popcntq instruction\" compiles: NO\n...\n\nUnfortunately CI currently is configured to not upload the build logs if the\nbuild succeeds, so we don't have enough details to see why.\n\n\n> > This will build all of pgport with the avx flags, which wouldn't be correct, I think? The compiler might inject automatic uses of avx512 in places, which would cause problems, no?\n>\n> This will take me some time to learn how to do this in meson. Any pointers\n> here would be helpful.\n\nShould be fairly simple, add it to the replace_funcs_pos and add the relevant\ncflags to pgport_cflags, similar to how it's done for crc.\n\n\n> > While you don't do the same for make, isn't even just using the avx512 for all of pg_bitutils.c broken for exactly that reson? 
That's why the existing code builds the files for various crc variants as their own file.\n>\n> I don't think its broken, nothing else in pg_bitutils.c will make use of\n> AVX-512\n\nYou can't really guarantee that compiler auto-vectorization won't decide to do\nso, no? I wouldn't call it likely, but it's also hard to be sure it won't\nhappen at some point.\n\n\n> If splitting still makes sense, I propose splitting into 3 files: pg_bitutils.c (entry point +sw popcnt implementation), pg_popcnt_choose.c (CPUID and xgetbv check) and pg_popcnt_x86_64_accel.c (64/512bit x86 implementations).\n> I'm not an expert in meson, but splitting might add complexity to meson.build.\n>\n> Could you elaborate if there are other benefits to the split file approach?\n\nIt won't lead to SIGILLs ;)\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://github.com/postgres/postgres/blob/master/src/tools/ci/README\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F47%2F4675\n[3] https://cirrus-ci.com/task/5645112189911040\n\n\n",
"msg_date": "Mon, 12 Feb 2024 12:37:14 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Sat, Feb 10, 2024 at 03:52:38PM -0800, Noah Misch wrote:\n> On Fri, Feb 09, 2024 at 08:33:23PM -0800, Andres Freund wrote:\n>> My understanding is that the ifunc mechanism just avoid the need for repeated\n>> indirect calls/jumps to implement a single function call, not the use of\n>> indirect function calls at all. Calls into shared libraries, like libc, are\n>> indirected via the GOT / PLT, i.e. an indirect function call/jump. Without\n>> ifuncs, the target of the function call would then have to dispatch to the\n>> resolved function. Ifuncs allow to avoid this repeated dispatch by moving the\n>> dispatch to the dynamic linker stage, modifying the contents of the GOT/PLT to\n>> point to the right function. Thus ifuncs are an optimization when calling a\n>> function in a shared library that's then dispatched depending on the cpu\n>> capabilities.\n>> \n>> However, in our case, where the code is in the same binary, function calls\n>> implemented in the main binary directly (possibly via a static library) don't\n>> go through GOT/PLT. In such a case, use of ifuncs turns a normal direct\n>> function call into one going through the GOT/PLT, i.e. makes it indirect. The\n>> same is true for calls within a shared library if either explicit symbol\n>> visibility is used, or -symbolic, -Wl,-Bsymbolic or such is used. Therefore\n>> there's no efficiency gain of ifuncs over a call via function pointer.\n>> \n>> \n>> This isn't because ifunc is implemented badly or something - the reason for\n>> this is that dynamic relocations aren't typically implemented by patching all\n>> callsites (\".text relocations\"), which is what you would need to avoid the\n>> need for an indirect call to something that fundamentally cannot be a constant\n>> address at link time. 
The reason text relocations are disfavored is that\n>> they can make program startup quite slow, that they require allowing\n>> modifications to executable pages which are disliked due to the security\n>> implications, and that they make the code non-shareable, as the in-memory\n>> executable code has to differ from the on-disk code.\n>> \n>> \n>> I actually think ifuncs within the same binary are a tad *slower* than plain\n>> function pointer calls, unless -fno-plt is used. Without -fno-plt, an ifunc is\n>> called by 1) a direct call into the PLT, 2) loading the target address from\n>> the GOT, 3) making an an indirect jump to that address. Whereas a \"plain\n>> indirect function call\" is just 1) load target address from variable 2) making\n>> an indirect jump to that address. With -fno-plt the callsites themselves load\n>> the address from the GOT.\n> \n> That sounds more accurate than what I wrote. Thanks.\n\n+1, thanks for the detailed explanation, Andres.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Feb 2024 14:55:07 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nI am encountering a problem that I don't think I understand. I cannot get the MSVC build to link in CI. I added 2 files to the build, but the linker is complaining that a symbol from the original pg_bitutils.c file is missing (specifically 'pg_popcount'). To my knowledge my changes did not change linking for the offending file and I see the compiles for pg_bitutils.c in all 3 libs in the build. All other builds are compiling.\n\nAny help on this issue would be greatly appreciated.\n\nMy fork is at https://github.com/paul-amonson/postgresql/tree/popcnt_patch and the CI build is at https://cirrus-ci.com/task/4927666021728256.\n\nThanks,\nPaul\n\n-----Original Message-----\nFrom: Andres Freund <[email protected]> \nSent: Monday, February 12, 2024 12:37 PM\nTo: Amonson, Paul D <[email protected]>\nCc: Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Nathan Bossart <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; [email protected]\nSubject: Re: Popcount optimization using AVX512\n\nHi,\n\nOn 2024-02-12 20:14:06 +0000, Amonson, Paul D wrote:\n> > > +# Check for header immintrin.h\n> > ...\n> > Do these all actually have to link? Invoking the linker is slow.\n> > I think you might be able to just use cc.has_header_symbol().\n>\n> I took this to mean the last of the 3 new blocks.\n\nYep.\n\n\n> > Does this work with msvc?\n>\n> I think it will work but I have no way to validate it. I propose we remove the AVX-512 popcount feature from MSVC builds. Sound ok?\n\nCI [1], whould be able to test at least building. Including via cfbot, automatically run for each commitfest entry - you can see prior runs at [2]. They run on Zen 3 epyc instances, so unfortunately runtime won't be tested. 
If you look at [3], you can see that currently it doesn't seem to be considered supported at configure time:\n\n...\n[00:23:48.480] Checking if \"__get_cpuid\" : links: NO [00:23:48.480] Checking if \"__cpuid\" : links: YES ...\n[00:23:48.492] Checking if \"x86_64: popcntq instruction\" compiles: NO ...\n\nUnfortunately CI currently is configured to not upload the build logs if the build succeeds, so we don't have enough details to see why.\n\n\n> > This will build all of pgport with the avx flags, which wouldn't be correct, I think? The compiler might inject automatic uses of avx512 in places, which would cause problems, no?\n>\n> This will take me some time to learn how to do this in meson. Any \n> pointers here would be helpful.\n\nShould be fairly simple, add it to the replace_funcs_pos and add the relevant cflags to pgport_cflags, similar to how it's done for crc.\n\n\n> > While you don't do the same for make, isn't even just using the avx512 for all of pg_bitutils.c broken for exactly that reson? That's why the existing code builds the files for various crc variants as their own file.\n>\n> I don't think its broken, nothing else in pg_bitutils.c will make use \n> of\n> AVX-512\n\nYou can't really guarantee that compiler auto-vectorization won't decide to do so, no? 
I wouldn't call it likely, but it's also hard to be sure it won't happen at some point.\n\n\n> If splitting still makes sense, I propose splitting into 3 files: pg_bitutils.c (entry point +sw popcnt implementation), pg_popcnt_choose.c (CPUID and xgetbv check) and pg_popcnt_x86_64_accel.c (64/512bit x86 implementations).\n> I'm not an expert in meson, but splitting might add complexity to meson.build.\n>\n> Could you elaborate if there are other benefits to the split file approach?\n\nIt won't lead to SIGILLs ;)\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://github.com/postgres/postgres/blob/master/src/tools/ci/README\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F47%2F4675\n[3] https://cirrus-ci.com/task/5645112189911040\n\n\n",
"msg_date": "Wed, 21 Feb 2024 17:35:57 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "Hello again,\n\nThis is now a blocking issue. I can find no reason for the failing behavior of the MSVC build. All other platforms build fine in CI, including the Mac. Since the master branch builds, I assume I changed something critical to linking, but I can't figure out what that would be. Can someone with Windows/MSVC experience help me?\n\n* Code: https://github.com/paul-amonson/postgresql/tree/popcnt_patch\n* CI build: https://cirrus-ci.com/task/4927666021728256\n\nThanks,\nPaul\n\n-----Original Message-----\nFrom: Amonson, Paul D <[email protected]> \nSent: Wednesday, February 21, 2024 9:36 AM\nTo: Andres Freund <[email protected]>\nCc: Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Nathan Bossart <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; [email protected]\nSubject: RE: Popcount optimization using AVX512\n\nHi,\n\nI am encountering a problem that I don't think I understand. I cannot get the MSVC build to link in CI. I added 2 files to the build, but the linker is complaining about the original pg_bitutils.c file is missing (specifically symbol 'pg_popcount'). To my knowledge my changes did not change linking for the offending file and I see the compiles for pg_bitutils.c in all 3 libs in the build. 
All other builds are compiling.\n\nAny help on this issue would be greatly appreciated.\n\nMy fork is at https://github.com/paul-amonson/postgresql/tree/popcnt_patch and the CI build is at https://cirrus-ci.com/task/4927666021728256.\n\nThanks,\nPaul\n\n-----Original Message-----\nFrom: Andres Freund <[email protected]>\nSent: Monday, February 12, 2024 12:37 PM\nTo: Amonson, Paul D <[email protected]>\nCc: Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Nathan Bossart <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; [email protected]\nSubject: Re: Popcount optimization using AVX512\n\nHi,\n\nOn 2024-02-12 20:14:06 +0000, Amonson, Paul D wrote:\n> > > +# Check for header immintrin.h\n> > ...\n> > Do these all actually have to link? Invoking the linker is slow.\n> > I think you might be able to just use cc.has_header_symbol().\n>\n> I took this to mean the last of the 3 new blocks.\n\nYep.\n\n\n> > Does this work with msvc?\n>\n> I think it will work but I have no way to validate it. I propose we remove the AVX-512 popcount feature from MSVC builds. Sound ok?\n\nCI [1], whould be able to test at least building. Including via cfbot, automatically run for each commitfest entry - you can see prior runs at [2]. They run on Zen 3 epyc instances, so unfortunately runtime won't be tested. If you look at [3], you can see that currently it doesn't seem to be considered supported at configure time:\n\n...\n[00:23:48.480] Checking if \"__get_cpuid\" : links: NO [00:23:48.480] Checking if \"__cpuid\" : links: YES ...\n[00:23:48.492] Checking if \"x86_64: popcntq instruction\" compiles: NO ...\n\nUnfortunately CI currently is configured to not upload the build logs if the build succeeds, so we don't have enough details to see why.\n\n\n> > This will build all of pgport with the avx flags, which wouldn't be correct, I think? 
The compiler might inject automatic uses of avx512 in places, which would cause problems, no?\n>\n> This will take me some time to learn how to do this in meson. Any \n> pointers here would be helpful.\n\nShould be fairly simple, add it to the replace_funcs_pos and add the relevant cflags to pgport_cflags, similar to how it's done for crc.\n\n\n> > While you don't do the same for make, isn't even just using the avx512 for all of pg_bitutils.c broken for exactly that reson? That's why the existing code builds the files for various crc variants as their own file.\n>\n> I don't think its broken, nothing else in pg_bitutils.c will make use \n> of\n> AVX-512\n\nYou can't really guarantee that compiler auto-vectorization won't decide to do so, no? I wouldn't call it likely, but it's also hard to be sure it won't happen at some point.\n\n\n> If splitting still makes sense, I propose splitting into 3 files: pg_bitutils.c (entry point +sw popcnt implementation), pg_popcnt_choose.c (CPUID and xgetbv check) and pg_popcnt_x86_64_accel.c (64/512bit x86 implementations).\n> I'm not an expert in meson, but splitting might add complexity to meson.build.\n>\n> Could you elaborate if there are other benefits to the split file approach?\n\nIt won't lead to SIGILLs ;)\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://github.com/postgres/postgres/blob/master/src/tools/ci/README\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F47%2F4675\n[3] https://cirrus-ci.com/task/5645112189911040\n\n\n\n\n",
"msg_date": "Mon, 26 Feb 2024 17:56:50 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "Andres,\n\nAfter consulting some Intel internal experts on MSVC, the linking issue as it stood was not resolved. Instead, I created an MSVC ONLY work-around. This adds one extra function call on the Windows builds (The linker resolves a real function just fine but not a function pointer of the same name). This extra latency does not exist on any of the other platforms. I also believe I addressed all issues raised in the previous reviews. The new pg_popcnt_x86_64_accel.c file is now the ONLY file compiled with the AVX512 compiler flags. I added support for the MSVC compiler flag as well. Both meson and autoconf are updated with the new refactor.\n\nI am attaching the new patch.\n\nPaul\n\n-----Original Message-----\nFrom: Amonson, Paul D <[email protected]> \nSent: Monday, February 26, 2024 9:57 AM\nTo: Amonson, Paul D <[email protected]>; Andres Freund <[email protected]>\nCc: Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Nathan Bossart <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; [email protected]\nSubject: RE: Popcount optimization using AVX512\n\nHello again,\n\nThis is now a blocking issue. I can find no reason for the failing behavior of the MSVC build. All other languages build fine in CI including the Mac. Since the master branch builds, I assume I changed something critical to linking, but I can't figure out what that would be. 
Can someone with Windows/MSVC experience help me?\n\n* Code: https://github.com/paul-amonson/postgresql/tree/popcnt_patch\n* CI build: https://cirrus-ci.com/task/4927666021728256\n\nThanks,\nPaul\n\n-----Original Message-----\nFrom: Amonson, Paul D <[email protected]>\nSent: Wednesday, February 21, 2024 9:36 AM\nTo: Andres Freund <[email protected]>\nCc: Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Nathan Bossart <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; [email protected]\nSubject: RE: Popcount optimization using AVX512\n\nHi,\n\nI am encountering a problem that I don't think I understand. I cannot get the MSVC build to link in CI. I added 2 files to the build, but the linker is complaining about the original pg_bitutils.c file is missing (specifically symbol 'pg_popcount'). To my knowledge my changes did not change linking for the offending file and I see the compiles for pg_bitutils.c in all 3 libs in the build. All other builds are compiling.\n\nAny help on this issue would be greatly appreciated.\n\nMy fork is at https://github.com/paul-amonson/postgresql/tree/popcnt_patch and the CI build is at https://cirrus-ci.com/task/4927666021728256.\n\nThanks,\nPaul\n\n-----Original Message-----\nFrom: Andres Freund <[email protected]>\nSent: Monday, February 12, 2024 12:37 PM\nTo: Amonson, Paul D <[email protected]>\nCc: Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Nathan Bossart <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; [email protected]\nSubject: Re: Popcount optimization using AVX512\n\nHi,\n\nOn 2024-02-12 20:14:06 +0000, Amonson, Paul D wrote:\n> > > +# Check for header immintrin.h\n> > ...\n> > Do these all actually have to link? 
Invoking the linker is slow.\n> > I think you might be able to just use cc.has_header_symbol().\n>\n> I took this to mean the last of the 3 new blocks.\n\nYep.\n\n\n> > Does this work with msvc?\n>\n> I think it will work but I have no way to validate it. I propose we remove the AVX-512 popcount feature from MSVC builds. Sound ok?\n\nCI [1], whould be able to test at least building. Including via cfbot, automatically run for each commitfest entry - you can see prior runs at [2]. They run on Zen 3 epyc instances, so unfortunately runtime won't be tested. If you look at [3], you can see that currently it doesn't seem to be considered supported at configure time:\n\n...\n[00:23:48.480] Checking if \"__get_cpuid\" : links: NO [00:23:48.480] Checking if \"__cpuid\" : links: YES ...\n[00:23:48.492] Checking if \"x86_64: popcntq instruction\" compiles: NO ...\n\nUnfortunately CI currently is configured to not upload the build logs if the build succeeds, so we don't have enough details to see why.\n\n\n> > This will build all of pgport with the avx flags, which wouldn't be correct, I think? The compiler might inject automatic uses of avx512 in places, which would cause problems, no?\n>\n> This will take me some time to learn how to do this in meson. Any \n> pointers here would be helpful.\n\nShould be fairly simple, add it to the replace_funcs_pos and add the relevant cflags to pgport_cflags, similar to how it's done for crc.\n\n\n> > While you don't do the same for make, isn't even just using the avx512 for all of pg_bitutils.c broken for exactly that reson? That's why the existing code builds the files for various crc variants as their own file.\n>\n> I don't think its broken, nothing else in pg_bitutils.c will make use \n> of\n> AVX-512\n\nYou can't really guarantee that compiler auto-vectorization won't decide to do so, no? 
I wouldn't call it likely, but it's also hard to be sure it won't happen at some point.\n\n\n> If splitting still makes sense, I propose splitting into 3 files: pg_bitutils.c (entry point +sw popcnt implementation), pg_popcnt_choose.c (CPUID and xgetbv check) and pg_popcnt_x86_64_accel.c (64/512bit x86 implementations).\n> I'm not an expert in meson, but splitting might add complexity to meson.build.\n>\n> Could you elaborate if there are other benefits to the split file approach?\n\nIt won't lead to SIGILLs ;)\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://github.com/postgres/postgres/blob/master/src/tools/ci/README\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F47%2F4675\n[3] https://cirrus-ci.com/task/5645112189911040",
"msg_date": "Tue, 27 Feb 2024 20:46:06 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "Thanks for the new version of the patch. I didn't see a commitfest entry\nfor this one, and unfortunately I think it's too late to add it for the\nMarch commitfest. I would encourage you to add it to July's commitfest [0]\nso that we can get some routine cfbot coverage.\n\nOn Tue, Feb 27, 2024 at 08:46:06PM +0000, Amonson, Paul D wrote:\n> After consulting some Intel internal experts on MSVC the linking issue as\n> it stood was not resolved. Instead, I created a MSVC ONLY work-around.\n> This adds one extra functional call on the Windows builds (The linker\n> resolves a real function just fine but not a function pointer of the same\n> name). This extra latency does not exist on any of the other platforms. I\n> also believe I addressed all issues raised in the previous reviews. The\n> new pg_popcnt_x86_64_accel.c file is now the ONLY file compiled with the\n> AVX512 compiler flags. I added support for the MSVC compiler flag as\n> well. Both meson and autoconf are updated with the new refactor.\n> \n> I am attaching the new patch.\n\nI think this patch might be missing the new files.\n\n-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))\n\nIME this means that the autoconf you are using has been patched. A quick\nsearch on the mailing lists seems to indicate that it might be specific to\nDebian [1].\n\n-static int\tpg_popcount32_slow(uint32 word);\n-static int\tpg_popcount64_slow(uint64 word);\n+int\tpg_popcount32_slow(uint32 word);\n+int\tpg_popcount64_slow(uint64 word);\n+uint64 pg_popcount_slow(const char *buf, int bytes);\n\nThis patch appears to do a lot of refactoring. Would it be possible to\nbreak out the refactoring parts into a prerequisite patch that could be\nreviewed and committed independently from the AVX512 stuff?\n\n-#if SIZEOF_VOID_P >= 8\n+#if SIZEOF_VOID_P == 8\n \t/* Process in 64-bit chunks if the buffer is aligned. 
*/\n-\tif (buf == (const char *) TYPEALIGN(8, buf))\n+\tif (buf == (const char *)TYPEALIGN(8, buf))\n \t{\n-\t\tconst uint64 *words = (const uint64 *) buf;\n+\t\tconst uint64 *words = (const uint64 *)buf;\n \n \t\twhile (bytes >= 8)\n \t\t{\n@@ -309,9 +213,9 @@ pg_popcount(const char *buf, int bytes)\n \t\t\tbytes -= 8;\n \t\t}\n \n-\t\tbuf = (const char *) words;\n+\t\tbuf = (const char *)words;\n \t}\n-#else\n+#elif SIZEOF_VOID_P == 4\n \t/* Process in 32-bit chunks if the buffer is aligned. */\n \tif (buf == (const char *) TYPEALIGN(4, buf))\n \t{\n\nMost, if not all, of these changes seem extraneous. Do we actually need to\nmore strictly check SIZEOF_VOID_P?\n\n[0] https://commitfest.postgresql.org/48/\n[1] https://postgr.es/m/20230211020042.uthdgj72kp3xlqam%40awork3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Mar 2024 15:44:57 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nFirst, apologies on the patch. Find re-attached updated version.\n \nNow I have some questions....\n#1\n \n> -#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n> +#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31)\n> +<< 31))\n>\n> IME this means that the autoconf you are using has been patched. A quick search on the mailing lists seems to indicate that it might be specific to Debian [1].\n \nI am not sure what the ask is here? I made changes to the configure.ac and ran autoconf2.69 to get builds to succeed. Do you have a separate feedback here? \n \n#2 \nAs for the refactoring, this was done to satisfy previous review feedback about applying the AVX512 CFLAGS to the entire pg_bitutils.c file. Mainly to avoid segfault due to the AVX512 flags. If its ok, I would prefer to make a single commit as the change is pretty small and straight forward.\n \n#3\nI am not sure I understand the comment about the SIZE_VOID_P checks. Aren't they necessary to choose which functions to call based on 32 or 64 bit architectures?\n \n#4\nWould this change qualify for Workflow A as described in [0] and can be picked up by a committer, given it has been reviewed by multiple committers so far? The scope of the change is pretty contained as well. \n \n[0] https://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nThanks,\nPaul\n\n\n-----Original Message-----\nFrom: Nathan Bossart <[email protected]> \nSent: Friday, March 1, 2024 1:45 PM\nTo: Amonson, Paul D <[email protected]>\nCc: Andres Freund <[email protected]>; Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; [email protected]\nSubject: Re: Popcount optimization using AVX512\n\nThanks for the new version of the patch. I didn't see a commitfest entry for this one, and unfortunately I think it's too late to add it for the March commitfest. 
I would encourage you to add it to July's commitfest [0] so that we can get some routine cfbot coverage.\n\nOn Tue, Feb 27, 2024 at 08:46:06PM +0000, Amonson, Paul D wrote:\n> After consulting some Intel internal experts on MSVC the linking issue \n> as it stood was not resolved. Instead, I created a MSVC ONLY work-around.\n> This adds one extra functional call on the Windows builds (The linker \n> resolves a real function just fine but not a function pointer of the \n> same name). This extra latency does not exist on any of the other \n> platforms. I also believe I addressed all issues raised in the \n> previous reviews. The new pg_popcnt_x86_64_accel.c file is now the \n> ONLY file compiled with the\n> AVX512 compiler flags. I added support for the MSVC compiler flag as \n> well. Both meson and autoconf are updated with the new refactor.\n> \n> I am attaching the new patch.\n\nI think this patch might be missing the new files.\n\n-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) \n+<< 31))\n\nIME this means that the autoconf you are using has been patched. A quick search on the mailing lists seems to indicate that it might be specific to Debian [1].\n\n-static int\tpg_popcount32_slow(uint32 word);\n-static int\tpg_popcount64_slow(uint64 word);\n+int\tpg_popcount32_slow(uint32 word);\n+int\tpg_popcount64_slow(uint64 word);\n+uint64 pg_popcount_slow(const char *buf, int bytes);\n\nThis patch appears to do a lot of refactoring. Would it be possible to break out the refactoring parts into a prerequisite patch that could be reviewed and committed independently from the AVX512 stuff?\n\n-#if SIZEOF_VOID_P >= 8\n+#if SIZEOF_VOID_P == 8\n \t/* Process in 64-bit chunks if the buffer is aligned. 
*/\n-\tif (buf == (const char *) TYPEALIGN(8, buf))\n+\tif (buf == (const char *)TYPEALIGN(8, buf))\n \t{\n-\t\tconst uint64 *words = (const uint64 *) buf;\n+\t\tconst uint64 *words = (const uint64 *)buf;\n \n \t\twhile (bytes >= 8)\n \t\t{\n@@ -309,9 +213,9 @@ pg_popcount(const char *buf, int bytes)\n \t\t\tbytes -= 8;\n \t\t}\n \n-\t\tbuf = (const char *) words;\n+\t\tbuf = (const char *)words;\n \t}\n-#else\n+#elif SIZEOF_VOID_P == 4\n \t/* Process in 32-bit chunks if the buffer is aligned. */\n \tif (buf == (const char *) TYPEALIGN(4, buf))\n \t{\n\nMost, if not all, of these changes seem extraneous. Do we actually need to more strictly check SIZEOF_VOID_P?\n\n[0] https://commitfest.postgresql.org/48/\n[1] https://postgr.es/m/20230211020042.uthdgj72kp3xlqam%40awork3.anarazel.de\n\n--\nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 4 Mar 2024 21:39:36 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "(Please don't top-post on the Postgres lists.)\n\nOn Mon, Mar 04, 2024 at 09:39:36PM +0000, Amonson, Paul D wrote:\n> First, apologies on the patch. Find re-attached updated version.\n\nThanks for the new version of the patch.\n\n>> -#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n>> +#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31)\n>> +<< 31))\n>>\n>> IME this means that the autoconf you are using has been patched. A\n>> quick search on the mailing lists seems to indicate that it might be\n>> specific to Debian [1].\n> \n> I am not sure what the ask is here? I made changes to the configure.ac\n> and ran autoconf2.69 to get builds to succeed. Do you have a separate\n> feedback here?\n\nThese LARGE_OFF_T changes are unrelated to the patch at hand and should be\nremoved. This likely means that you are using a patched autoconf that is\nmaking these extra changes.\n \n> As for the refactoring, this was done to satisfy previous review feedback\n> about applying the AVX512 CFLAGS to the entire pg_bitutils.c file. Mainly\n> to avoid segfault due to the AVX512 flags. If its ok, I would prefer to\n> make a single commit as the change is pretty small and straight forward.\n\nOkay. The only reason I suggest this is to ease review. For example, if\nthere is some required refactoring that doesn't involve any functionality\nchanges, it can be advantageous to get that part reviewed and committed\nfirst so that reviewers can better focus on the code for the new feature.\nBut, of course, that isn't necessary and/or isn't possible in all cases.\n\n> I am not sure I understand the comment about the SIZE_VOID_P checks.\n> Aren't they necessary to choose which functions to call based on 32 or 64\n> bit architectures?\n\nYes. My comment was that the patch appeared to make unnecessary changes to\nthis code. 
Perhaps I am misunderstanding something here.\n\n> Would this change qualify for Workflow A as described in [0] and can be\n> picked up by a committer, given it has been reviewed by multiple\n> committers so far? The scope of the change is pretty contained as well.\n\nI think so. I would still encourage you to create an entry for this so\nthat it is automatically tested via cfbot [0].\n\n[0] http://commitfest.cputube.org/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 16:21:18 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
    "msg_contents": "Hi,\r\n\r\nI am not sure what \"top-post\" means but I am not doing anything different but using \"reply to all\" in Outlook. Please enlighten me. 😊\r\n\r\nThis is the new patch with the hand edit to remove the offending lines from the patch file. I did a basic test to make sure the patch would apply and build. It succeeded.\r\n\r\nThanks,\r\nPaul\r\n\r\n-----Original Message-----\r\nFrom: Nathan Bossart <[email protected]> \r\nSent: Monday, March 4, 2024 2:21 PM\r\nTo: Amonson, Paul D <[email protected]>\r\nCc: Andres Freund <[email protected]>; Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; [email protected]\r\nSubject: Re: Popcount optimization using AVX512\r\n\r\n(Please don't top-post on the Postgres lists.)\r\n\r\nOn Mon, Mar 04, 2024 at 09:39:36PM +0000, Amonson, Paul D wrote:\r\n> First, apologies on the patch. Find re-attached updated version.\r\n\r\nThanks for the new version of the patch.\r\n\r\n>> -#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\r\n>> +#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << \r\n>> +31) << 31))\r\n>>\r\n>> IME this means that the autoconf you are using has been patched. A \r\n>> quick search on the mailing lists seems to indicate that it might be \r\n>> specific to Debian [1].\r\n> \r\n> I am not sure what the ask is here? I made changes to the \r\n> configure.ac and ran autoconf2.69 to get builds to succeed. Do you \r\n> have a separate feedback here?\r\n\r\nThese LARGE_OFF_T changes are unrelated to the patch at hand and should be removed. This likely means that you are using a patched autoconf that is making these extra changes.\r\n \r\n> As for the refactoring, this was done to satisfy previous review \r\n> feedback about applying the AVX512 CFLAGS to the entire pg_bitutils.c \r\n> file. Mainly to avoid segfault due to the AVX512 flags. 
If its ok, I \r\n> would prefer to make a single commit as the change is pretty small and straight forward.\r\n\r\nOkay. The only reason I suggest this is to ease review. For example, if there is some required refactoring that doesn't involve any functionality changes, it can be advantageous to get that part reviewed and committed first so that reviewers can better focus on the code for the new feature.\r\nBut, of course, that isn't necessary and/or isn't possible in all cases.\r\n\r\n> I am not sure I understand the comment about the SIZE_VOID_P checks.\r\n> Aren't they necessary to choose which functions to call based on 32 or \r\n> 64 bit architectures?\r\n\r\nYes. My comment was that the patch appeared to make unnecessary changes to this code. Perhaps I am misunderstanding something here.\r\n\r\n> Would this change qualify for Workflow A as described in [0] and can \r\n> be picked up by a committer, given it has been reviewed by multiple \r\n> committers so far? The scope of the change is pretty contained as well.\r\n\r\nI think so. I would still encourage you to create an entry for this so that it is automatically tested via cfbot [0].\r\n\r\n[0] http://commitfest.cputube.org/\r\n\r\n--\r\nNathan Bossart\r\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 5 Mar 2024 16:31:15 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Mar 05, 2024 at 04:31:15PM +0000, Amonson, Paul D wrote:\n> I am not sure what \"top-post\" means but I am not doing anything different\n> but using \"reply to all\" in Outlook. Please enlighten me. 😊\n\nThe following link provides some more information:\n\n\thttps://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 5 Mar 2024 10:37:30 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "-----Original Message-----\r\n>From: Nathan Bossart <[email protected]> \r\n>Sent: Tuesday, March 5, 2024 8:38 AM\r\n>To: Amonson, Paul D <[email protected]>\r\n>Cc: Andres Freund <[email protected]>; Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; >[email protected]\r\n>Subject: Re: Popcount optimization using AVX512\r\n>\r\n>On Tue, Mar 05, 2024 at 04:31:15PM +0000, Amonson, Paul D wrote:\r\n>> I am not sure what \"top-post\" means but I am not doing anything \r\n>> different but using \"reply to all\" in Outlook. Please enlighten me. 😊\r\n>\r\n>The following link provides some more information:\r\n>\r\n>\thttps://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics\r\n>\r\n>--\r\n>Nathan Bossart\r\n>Amazon Web Services: https://aws.amazon.com\r\n\r\nAhhhh.....Ok... guess it's time to thank Microsoft then. ;) Noted I will try to do the \"reduced\" bottom-posting. I might slip up occasionally because it's an Intel habit. Is there a way to make Outlook do the leading \">\" in a reply for the previous message?\r\n\r\nBTW: Created the commit-fest submission.\r\n\r\nPaul\r\n\r\n",
"msg_date": "Tue, 5 Mar 2024 16:52:23 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 04:52:23PM +0000, Amonson, Paul D wrote:\n> -----Original Message-----\n> >From: Nathan Bossart <[email protected]> \n> >Sent: Tuesday, March 5, 2024 8:38 AM\n> >To: Amonson, Paul D <[email protected]>\n> >Cc: Andres Freund <[email protected]>; Alvaro Herrera <[email protected]>; Shankaran, Akash <[email protected]>; Noah Misch <[email protected]>; Tom Lane <[email protected]>; Matthias van de Meent <[email protected]>; >[email protected]\n> >Subject: Re: Popcount optimization using AVX512\n> >\n> >On Tue, Mar 05, 2024 at 04:31:15PM +0000, Amonson, Paul D wrote:\n> >> I am not sure what \"top-post\" means but I am not doing anything \n> >> different but using \"reply to all\" in Outlook. Please enlighten me. 😊\n> >\n> >The following link provides some more information:\n> >\n> >\thttps://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics\n> >\n> >--\n> >Nathan Bossart\n> >Amazon Web Services: https://aws.amazon.com\n> \n> Ahhhh.....Ok... guess it's time to thank Microsoft then. ;) Noted I will try to do the \"reduced\" bottom-posting. I might slip up occasionally because it's an Intel habit. Is there a way to make Outlook do the leading \">\" in a reply for the previous message?\n\nHere is a blog post about how complex email posting can be:\n\n\thttps://momjian.us/main/blogs/pgblog/2023.html#September_8_2023\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 5 Mar 2024 17:18:32 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Mar 05, 2024 at 04:52:23PM +0000, Amonson, Paul D wrote:\n> Noted I will try to do the \"reduced\" bottom-posting. I might slip up\n> occasionally because it's an Intel habit.\n\nNo worries.\n\n> Is there a way to make Outlook do the leading \">\" in a reply for the\n> previous message?\n\nI do not know, sorry. I personally use mutt for the lists.\n\n> BTW: Created the commit-fest submission.\n\nThanks. I intend to provide a more detailed review shortly, as I am aiming\nto get this one committed for v17.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Mar 2024 11:33:18 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On 2024-Mar-04, Amonson, Paul D wrote:\n\n> > -#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n> > +#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31)\n> > +<< 31))\n> >\n> > IME this means that the autoconf you are using has been patched. A\n> > quick search on the mailing lists seems to indicate that it might be\n> > specific to Debian [1].\n> \n> I am not sure what the ask is here? I made changes to the\n> configure.ac and ran autoconf2.69 to get builds to succeed. Do you\n> have a separate feedback here? \n\nSo what happens here is that autoconf-2.69 as shipped by Debian contains\nsome patches on top of the one released by GNU. We use the latter, so\nif you run Debian's, then the generated configure script will contain\nthe differences coming from Debian's version.\n\nReally, I don't think this is very important as a review point, because\nif the configure.ac file is changed in the patch, it's best for the\ncommitter to run autoconf on their own, using a pristine GNU autoconf;\nthe configure file in the submitted patch is not relevant, only\nconfigure.ac matters.\n\nWhat committers do (or should do) is keep an install of autoconf-2.69\nstraight from GNU.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 7 Mar 2024 18:53:12 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Mar 07, 2024 at 06:53:12PM +0100, Alvaro Herrera wrote:\n> Really, I don't think this is very important as a review point, because\n> if the configure.ac file is changed in the patch, it's best for the\n> committer to run autoconf on their own, using a pristine GNU autoconf;\n> the configure file in the submitted patch is not relevant, only\n> configure.ac matters.\n\nAgreed. I didn't intend for this to be a major review point, and I\napologize for the extra noise.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Mar 2024 11:59:55 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "As promised...\n\n> +# Check for Intel AVX512 intrinsics to do POPCNT calculations.\n> +#\n> +PGAC_AVX512_POPCNT_INTRINSICS([])\n> +if test x\"$pgac_avx512_popcnt_intrinsics\" != x\"yes\"; then\n> + PGAC_AVX512_POPCNT_INTRINSICS([-mavx512vpopcntdq -mavx512f])\n> +fi\n> +AC_SUBST(CFLAGS_AVX512_POPCNT)\n\nI'm curious why we need both -mavx512vpopcntdq and -mavx512f. On my\nmachine, -mavx512vpopcntdq alone is enough to pass this test, so if there\nare other instructions required that need -mavx512f, then we might need to\nexpand the test.\n\n> 13 files changed, 657 insertions(+), 119 deletions(-)\n\nI still think it's worth breaking this change into at least 2 patches. In\nparticular, I think there's an opportunity to do the refactoring into\npg_popcnt_choose.c and pg_popcnt_x86_64_accel.c prior to adding the AVX512\nstuff. These changes are likely straightforward, and getting them out of\nthe way early would make it easier to focus on the more interesting\nchanges. IMHO there are a lot of moving parts in this patch.\n\n> +#undef HAVE__GET_CPUID_COUNT\n> +\n> +/* Define to 1 if you have immintrin. */\n> +#undef HAVE__IMMINTRIN\n\nIs this missing HAVE__CPUIDEX?\n\n> uint64\n> -pg_popcount(const char *buf, int bytes)\n> +pg_popcount_slow(const char *buf, int bytes)\n> {\n> uint64 popcnt = 0;\n> \n> -#if SIZEOF_VOID_P >= 8\n> +#if SIZEOF_VOID_P == 8\n> /* Process in 64-bit chunks if the buffer is aligned. */\n> if (buf == (const char *) TYPEALIGN(8, buf))\n> {\n> @@ -311,7 +224,7 @@ pg_popcount(const char *buf, int bytes)\n> \n> buf = (const char *) words;\n> }\n> -#else\n> +#elif SIZEOF_VOID_P == 4\n> /* Process in 32-bit chunks if the buffer is aligned. */\n> if (buf == (const char *) TYPEALIGN(4, buf))\n> {\n\nApologies for harping on this, but I'm still not seeing the need for these\nSIZEOF_VOID_P changes. 
While it's unlikely that this makes any practical\ndifference, I see no reason to more strictly check SIZEOF_VOID_P here.\n\n> + /* Process any remaining bytes */\n> + while (bytes--)\n> + popcnt += pg_number_of_ones[(unsigned char) *buf++];\n> + return popcnt;\n> +#else\n> + return pg_popcount_slow(buf, bytes);\n> +#endif /* USE_AVX512_CODE */\n\nnitpick: Could we call pg_popcount_slow() in a common section for these\n\"remaining bytes?\"\n\n> +#if defined(_MSC_VER)\n> + pg_popcount_indirect = pg_popcount512_fast;\n> +#else\n> + pg_popcount = pg_popcount512_fast;\n> +#endif\n\nThese _MSC_VER sections are interesting. I'm assuming this is the\nworkaround for the MSVC linking issue you mentioned above. I haven't\nlooked too closely, but I wonder if the CRC32C code (see\nsrc/include/port/pg_crc32c.h) is doing something different to avoid this\nissue.\n\nUpthread, Alvaro suggested a benchmark [0] that might be useful. I scanned\nthrough this thread and didn't see any recent benchmark results for the\nlatest form of the patch. I think it's worth verifying that we are still\nseeing the expected improvements.\n\n[0] https://postgr.es/m/202402071953.5c4z7t6kl7ts%40alvherre.pgsql\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Mar 2024 15:36:00 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
    "msg_contents": "> -----Original Message-----\r\n> From: Nathan Bossart <[email protected]>\r\n> Sent: Thursday, March 7, 2024 1:36 PM\r\n> Subject: Re: Popcount optimization using AVX512\r\n\r\nI will be splitting the request into 2 patches. I am attaching the first patch (refactoring only) and I updated the commitfest entry to match this patch. I have a question however:\r\nDo I need to wait for the refactor patch to be merged before I post the AVX portion of this feature in this thread?\r\n\r\n> > + PGAC_AVX512_POPCNT_INTRINSICS([-mavx512vpopcntdq -mavx512f])\r\n> \r\n> I'm curious why we need both -mavx512vpopcntdq and -mavx512f. On my\r\n> machine, -mavx512vpopcntdq alone is enough to pass this test, so if there are\r\n> other instructions required that need -mavx512f, then we might need to\r\n> expand the test.\r\n\r\nFirst, nice catch on the required flags to build! When I changed my algorithm, dependence on the -mavx512f flag was no longer needed. In the second patch (AVX specific) I will fix this.\r\n\r\n> I still think it's worth breaking this change into at least 2 patches. In particular,\r\n> I think there's an opportunity to do the refactoring into pg_popcnt_choose.c\r\n> and pg_popcnt_x86_64_accel.c prior to adding the AVX512 stuff. These\r\n> changes are likely straightforward, and getting them out of the way early\r\n> would make it easier to focus on the more interesting changes. IMHO there\r\n> are a lot of moving parts in this patch.\r\n\r\nAs stated above I am doing this in 2 patches. :)\r\n\r\n> > +#undef HAVE__GET_CPUID_COUNT\r\n> > +\r\n> > +/* Define to 1 if you have immintrin. 
*/ #undef HAVE__IMMINTRIN\r\n> \r\n> Is this missing HAVE__CPUIDEX?\r\n\r\nYes I missed it, I will include in the second patch (AVX specific) of the 2 patches.\r\n\r\n> > uint64\r\n> > -pg_popcount(const char *buf, int bytes)\r\n> > +pg_popcount_slow(const char *buf, int bytes)\r\n> > {\r\n> > uint64 popcnt = 0;\r\n> >\r\n> > -#if SIZEOF_VOID_P >= 8\r\n> > +#if SIZEOF_VOID_P == 8\r\n> > /* Process in 64-bit chunks if the buffer is aligned. */\r\n> > if (buf == (const char *) TYPEALIGN(8, buf))\r\n> > {\r\n> > @@ -311,7 +224,7 @@ pg_popcount(const char *buf, int bytes)\r\n> >\r\n> > buf = (const char *) words;\r\n> > }\r\n> > -#else\r\n> > +#elif SIZEOF_VOID_P == 4\r\n> > /* Process in 32-bit chunks if the buffer is aligned. */\r\n> > if (buf == (const char *) TYPEALIGN(4, buf))\r\n> > {\r\n> \r\n> Apologies for harping on this, but I'm still not seeing the need for these\r\n> SIZEOF_VOID_P changes. While it's unlikely that this makes any practical\r\n> difference, I see no reason to more strictly check SIZEOF_VOID_P here.\r\n\r\nI got rid of the second occurrence as I agree it is not needed but unless you see something I don't how to know which function to call between a 32-bit and 64-bit architecture? Maybe I am missing something obvious? What exactly do you suggest here? 
I am happy to always call either pg_popcount32() or pg_popcount64() with the understanding that it may not be optimal, but I do need to know which to use.\r\n\r\n> > + /* Process any remaining bytes */\r\n> > + while (bytes--)\r\n> > + popcnt += pg_number_of_ones[(unsigned char) *buf++];\r\n> > + return popcnt;\r\n> > +#else\r\n> > + return pg_popcount_slow(buf, bytes);\r\n> > +#endif /* USE_AVX512_CODE */\r\n> \r\n> nitpick: Could we call pg_popcount_slow() in a common section for these\r\n> \"remaining bytes?\"\r\n\r\nAgreed, will fix in the second patch as well.\r\n\r\n> > +#if defined(_MSC_VER)\r\n> > + pg_popcount_indirect = pg_popcount512_fast; #else\r\n> > + pg_popcount = pg_popcount512_fast; #endif\r\n\r\n> These _MSC_VER sections are interesting. I'm assuming this is the\r\n> workaround for the MSVC linking issue you mentioned above. I haven't\r\n> looked too closely, but I wonder if the CRC32C code (see\r\n> src/include/port/pg_crc32c.h) is doing something different to avoid this issue.\r\n\r\nUsing the latest master branch, I see what the needed changes are, I will implement using PGDLLIMPORT macro in the second patch.\r\n\r\n> Upthread, Alvaro suggested a benchmark [0] that might be useful. I scanned\r\n> through this thread and didn't see any recent benchmark results for the latest\r\n> form of the patch. I think it's worth verifying that we are still seeing the\r\n> expected improvements.\r\n\r\nI will get new benchmarks using the same process I used before (from Akash) so I get apples to apples. These are pending completion of the second patch which is still in progress.\r\n\r\nJust a reminder, I asked questions above about 1) multi-part dependent patches and, 2) What specifically to do about the SIZE_VOID_P checks. :)\r\n\r\nThanks,\r\nPaul",
"msg_date": "Mon, 11 Mar 2024 21:59:53 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 09:59:53PM +0000, Amonson, Paul D wrote:\n> I will be splitting the request into 2 patches. I am attaching the first\n> patch (refactoring only) and I updated the commitfest entry to match this\n> patch. I have a question however:\n> Do I need to wait for the refactor patch to be merged before I post the\n> AVX portion of this feature in this thread?\n\nThanks. There's no need to wait to post the AVX portion. I recommend\nusing \"git format-patch\" to construct the patch set for the lists.\n\n>> Apologies for harping on this, but I'm still not seeing the need for these\n>> SIZEOF_VOID_P changes. While it's unlikely that this makes any practical\n>> difference, I see no reason to more strictly check SIZEOF_VOID_P here.\n> \n> I got rid of the second occurrence as I agree it is not needed but unless\n> you see something I don't how to know which function to call between a\n> 32-bit and 64-bit architecture? Maybe I am missing something obvious?\n> What exactly do you suggest here? I am happy to always call either\n> pg_popcount32() or pg_popcount64() with the understanding that it may not\n> be optimal, but I do need to know which to use.\n\nI'm recommending that we don't change any of the code in the pg_popcount()\nfunction (which is renamed to pg_popcount_slow() in your v6 patch). If\npointers are 8 or more bytes, we'll try to process the buffer in 64-bit\nchunks. Else, we'll try to process it in 32-bit chunks. Any remaining\nbytes will be processed one-by-one.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 11 Mar 2024 20:34:36 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "A couple of thoughts on v7-0001:\n\n+extern int pg_popcount32_slow(uint32 word);\n+extern int pg_popcount64_slow(uint64 word);\n\n+/* In pg_popcnt_*_accel source file. */\n+extern int pg_popcount32_fast(uint32 word);\n+extern int pg_popcount64_fast(uint64 word);\n\nCan these prototypes be moved to a header file (maybe pg_bitutils.h)? It\nlooks like these are defined twice in the patch, and while I'm not positive\nthat it's against project policy to declare extern function prototypes in\n.c files, it appears to be pretty rare.\n\n+ 'pg_popcnt_choose.c',\n+ 'pg_popcnt_x86_64_accel.c',\n\nI think we want these to be architecture-specific, i.e., only built for\nx86_64 if the compiler knows how to use the relevant instructions. There\nis a good chance that we'll want to add similar support for other systems.\nThe CRC32C files are probably a good reference point for how to do this.\n\n+#ifdef TRY_POPCNT_FAST\n\nIIUC this macro can be set if either 1) the popcntq test in the\nautoconf/meson scripts passes or 2) we're building with MSVC on x86_64. I\nwonder if it would be better to move the MSVC/x86_64 check to the\nautoconf/meson scripts so that we could avoid surrounding large portions of\nthe popcount code with this macro. This might even be a necessary step\ntowards building these files in an architecture-specific fashion.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Mar 2024 11:39:04 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Nathan Bossart <[email protected]>\n> Sent: Wednesday, March 13, 2024 9:39 AM\n> To: Amonson, Paul D <[email protected]>\n\n> +extern int pg_popcount32_slow(uint32 word); extern int\n> +pg_popcount64_slow(uint64 word);\n> \n> +/* In pg_popcnt_*_accel source file. */ extern int\n> +pg_popcount32_fast(uint32 word); extern int pg_popcount64_fast(uint64\n> +word);\n> \n> Can these prototypes be moved to a header file (maybe pg_bitutils.h)? It\n> looks like these are defined twice in the patch, and while I'm not positive that\n> it's against project policy to declare extern function prototypes in .c files, it\n> appears to be pretty rare.\n\nOriginally, I intentionally did not put these in the header file as I want them to be private, but they are not defined in this .c file hence extern. Now I realize the \"extern\" part is not needed to accomplish my goal. Will fix by removing the \"extern\" keyword.\n\n> + 'pg_popcnt_choose.c',\n> + 'pg_popcnt_x86_64_accel.c',\n> \n> I think we want these to be architecture-specific, i.e., only built for\n> x86_64 if the compiler knows how to use the relevant instructions. There is a\n> good chance that we'll want to add similar support for other systems.\n> The CRC32C files are probably a good reference point for how to do this.\n\nI will look at this for the 'pg_popcnt_x86_64_accel.c' file but the 'pg_popcnt_choose.c' file is intended to be for any platform that may need accelerators including a possible future ARM accelerator.\n \n> +#ifdef TRY_POPCNT_FAST\n> \n> IIUC this macro can be set if either 1) the popcntq test in the autoconf/meson\n> scripts passes or 2) we're building with MSVC on x86_64. I wonder if it would\n> be better to move the MSVC/x86_64 check to the autoconf/meson scripts so\n> that we could avoid surrounding large portions of the popcount code with this\n> macro. This might even be a necessary step towards building these files in an\n> architecture-specific fashion.\n\nI see the point here; however, this will take some time to get right especially since I don't have a Windows box to do compiles on. Should I attempt to do this in this patch?\n\nThanks,\nPaul\n\n\n",
"msg_date": "Wed, 13 Mar 2024 17:52:14 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 05:52:14PM +0000, Amonson, Paul D wrote:\n>> I think we want these to be architecture-specific, i.e., only built for\n>> x86_64 if the compiler knows how to use the relevant instructions. There is a\n>> good chance that we'll want to add similar support for other systems.\n>> The CRC32C files are probably a good reference point for how to do this.\n> \n> I will look at this for the 'pg_popcnt_x86_64_accel.c' file but the\n> 'pg_popcnt_choose.c' file is intended to be for any platform that may\n> need accelerators including a possible future ARM accelerator.\n\nI worry that using the same file for *_choose.c for all architectures would\nbecome rather #ifdef heavy. Since we are already separating out this code\ninto new files, IMO we might as well try to avoid too many #ifdefs, too.\nBut this is admittedly less important right now because there's almost no\nchance of any new architecture support here for v17.\n\n>> +#ifdef TRY_POPCNT_FAST\n>> \n>> IIUC this macro can be set if either 1) the popcntq test in the autoconf/meson\n>> scripts passes or 2) we're building with MSVC on x86_64. I wonder if it would\n>> be better to move the MSVC/x86_64 check to the autoconf/meson scripts so\n>> that we could avoid surrounding large portions of the popcount code with this\n>> macro. This might even be a necessary step towards building these files in an\n>> architecture-specific fashion.\n> \n> I see the point here; however, this will take some time to get right\n> especially since I don't have a Windows box to do compiles on. Should I\n> attempt to do this in this patch?\n\nThis might also be less important given the absence of any imminent new\narchitecture support in this area. I'm okay with it, given we are just\nmaintaining the status quo.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Mar 2024 13:38:11 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Nathan Bossart <[email protected]>\n> Sent: Monday, March 11, 2024 6:35 PM\n> To: Amonson, Paul D <[email protected]>\n\n> Thanks. There's no need to wait to post the AVX portion. I recommend using\n> \"git format-patch\" to construct the patch set for the lists.\n\nAfter exploring git format-patch command I think I understand what you need. Attached.\n \n> > What exactly do you suggest here? I am happy to always call either\n> > pg_popcount32() or pg_popcount64() with the understanding that it may\n> > not be optimal, but I do need to know which to use.\n> \n> I'm recommending that we don't change any of the code in the pg_popcount()\n> function (which is renamed to pg_popcount_slow() in your v6 patch). If\n> pointers are 8 or more bytes, we'll try to process the buffer in 64-bit chunks.\n> Else, we'll try to process it in 32-bit chunks. Any remaining bytes will be\n> processed one-by-one.\n\nOk, we are on the same page now. :) It is already fixed that way in the refactor patch #1.\n\nAs for new performance numbers: I just ran a full suite like I did earlier in the process. My latest results an equivalent to a pgbench scale factor 10 DB with the target column having varying column widths and appropriate random data are 1.2% improvement with a 2.2% Margin of Error at a 98% confidence level. Still seeing improvement and no regressions.\n\nAs stated in the previous separate chain I updated the code removing the extra \"extern\" keywords.\n\nThanks,\nPaul",
"msg_date": "Thu, 14 Mar 2024 19:50:46 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 07:50:46PM +0000, Amonson, Paul D wrote:\n> As for new performance numbers: I just ran a full suite like I did\n> earlier in the process. My latest results an equivalent to a pgbench\n> scale factor 10 DB with the target column having varying column widths\n> and appropriate random data are 1.2% improvement with a 2.2% Margin of\n> Error at a 98% confidence level. Still seeing improvement and no\n> regressions.\n\nWhich test suite did you run? Those numbers seem potentially\nindistinguishable from noise, which probably isn't great for such a large\npatch set.\n\nI ran John Naylor's test_popcount module [0] with the following command on\nan i7-1195G7:\n\n\ttime psql postgres -c 'select drive_popcount(10000000, 1024)'\n\nWithout your patches, this seems to take somewhere around 8.8 seconds.\nWith your patches, it takes 0.6 seconds. (I re-compiled and re-ran the\ntests a couple of times because I had a difficult time believing the amount\nof improvement.)\n\n[0] https://postgr.es/m/CAFBsxsE7otwnfA36Ly44zZO%2Bb7AEWHRFANxR1h1kxveEV%3DghLQ%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 15 Mar 2024 10:06:11 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Nathan Bossart <[email protected]>\n> Sent: Friday, March 15, 2024 8:06 AM\n> To: Amonson, Paul D <[email protected]>\n> Cc: Andres Freund <[email protected]>; Alvaro Herrera <[email protected]\n> ip.org>; Shankaran, Akash <[email protected]>; Noah Misch\n> <[email protected]>; Tom Lane <[email protected]>; Matthias van de\n> Meent <[email protected]>; pgsql-\n> [email protected]\n> Subject: Re: Popcount optimization using AVX512\n> \n> Which test suite did you run? Those numbers seem potentially\n> indistinguishable from noise, which probably isn't great for such a large patch\n> set.\n\nI ran...\n\tpsql -c \"select bitcount(column) from table;\"\n...in a loop with \"column\" widths of 84, 4096, 8192, and 16384 containing random data. There DB has 1 million rows. In the loop before calling the select I have code to clear all system caches. If I omit the code to clear system caches the margin of error remains the same but the improvement percent changes from 1.2% to 14.6% (much less I/O when cached data is available).\n\n> I ran John Naylor's test_popcount module [0] with the following command on\n> an i7-1195G7:\n> \n> \ttime psql postgres -c 'select drive_popcount(10000000, 1024)'\n> \n> Without your patches, this seems to take somewhere around 8.8 seconds.\n> With your patches, it takes 0.6 seconds. (I re-compiled and re-ran the tests a\n> couple of times because I had a difficult time believing the amount of\n> improvement.)\n\nWhen I tested the code outside postgres in a micro benchmark I got 200-300% improvements. Your results are interesting, as it implies more than 300% improvement. Let me do some research on the benchmark you referenced. However, in all cases it seems that there is no regression so should we move forward on merging while I run some more local tests?\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Fri, 15 Mar 2024 15:31:17 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Amonson, Paul D <[email protected]>\n> Sent: Friday, March 15, 2024 8:31 AM\n> To: Nathan Bossart <[email protected]>\n...\n> When I tested the code outside postgres in a micro benchmark I got 200-\n> 300% improvements. Your results are interesting, as it implies more than\n> 300% improvement. Let me do some research on the benchmark you\n> referenced. However, in all cases it seems that there is no regression so should\n> we move forward on merging while I run some more local tests?\n\nWhen running quick test with small buffers (1 to 32K) I see up to about a 740% improvement. This was using my stand-alone micro benchmark outside of PG. My original 200-300% numbers were averaged including sizes up to 512MB which seems to not run as well on large buffers. I will try the referenced micro benchmark on Monday. None of my benchmark testing used the command line \"time\" command. For Postgres is set \"\\timing\" before the run and for the stand-alone benchmark is took timestamps in the code. In all cases I used -O2 for optimization.\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Fri, 15 Mar 2024 17:43:39 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Sat, 16 Mar 2024 at 04:06, Nathan Bossart <[email protected]> wrote:\n> I ran John Naylor's test_popcount module [0] with the following command on\n> an i7-1195G7:\n>\n> time psql postgres -c 'select drive_popcount(10000000, 1024)'\n>\n> Without your patches, this seems to take somewhere around 8.8 seconds.\n> With your patches, it takes 0.6 seconds. (I re-compiled and re-ran the\n> tests a couple of times because I had a difficult time believing the amount\n> of improvement.)\n>\n> [0] https://postgr.es/m/CAFBsxsE7otwnfA36Ly44zZO%2Bb7AEWHRFANxR1h1kxveEV%3DghLQ%40mail.gmail.com\n\nI think most of that will come from getting rid of the indirect\nfunction that currently exists in pg_popcount().\n\nUsing the attached quick hack, the performance using John's test\nmodule goes from:\n\n-- master\npostgres=# select drive_popcount(10000000, 1024);\nTime: 9832.845 ms (00:09.833)\nTime: 9844.460 ms (00:09.844)\nTime: 9858.608 ms (00:09.859)\n\n-- with attached hacky and untested patch\npostgres=# select drive_popcount(10000000, 1024);\nTime: 2539.029 ms (00:02.539)\nTime: 2598.223 ms (00:02.598)\nTime: 2611.435 ms (00:02.611)\n\n--- and with the avx512 patch on an AMD 7945HX CPU:\npostgres=# select drive_popcount(10000000, 1024);\nTime: 564.982 ms\nTime: 556.540 ms\nTime: 554.032 ms\n\nThe following comment seems like it could do with some improvements.\n\n * Use AVX-512 Intrinsics for supported Intel CPUs or fall back the the software\n * loop in pg_bunutils.c and use the best 32 or 64 bit fast methods. If no fast\n * methods are used this will fall back to __builtin_* or pure software.\n\nThere's nothing much specific to Intel here. AMD Zen4 has AVX512.\nPlus \"pg_bunutils.c\" should be \"pg_bitutils.c\" and \"the the\"\n\nHow about just:\n\n * Use AVX-512 Intrinsics on supported CPUs. Fall back the software loop in\n * pg_popcount_slow() when AVX-512 is unavailable.\n\nMaybe it's worth exploring something along the lines of the attached\nbefore doing the AVX512 stuff. It seems like a pretty good speed-up\nand will apply for CPUs without AVX512 support.\n\nDavid",
"msg_date": "Mon, 18 Mar 2024 09:56:32 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 09:56:32AM +1300, David Rowley wrote:\n> Maybe it's worth exploring something along the lines of the attached\n> before doing the AVX512 stuff. It seems like a pretty good speed-up\n> and will apply for CPUs without AVX512 support.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 10:29:02 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Won't I still need the runtime checks? If I compile with a compiler supporting the HW \"feature\" but run on HW without that feature, I will want to avoid faults due to illegal operations. Won't that also affect performance?\n\nPaul\n\n> -----Original Message-----\n> From: Nathan Bossart <[email protected]>\n> Sent: Monday, March 18, 2024 8:29 AM\n> To: David Rowley <[email protected]>\n> Cc: Amonson, Paul D <[email protected]>; Andres Freund\n> <[email protected]>; Alvaro Herrera <[email protected]>; Shankaran,\n> Akash <[email protected]>; Noah Misch <[email protected]>;\n> Tom Lane <[email protected]>; Matthias van de Meent\n> <[email protected]>; [email protected]\n> Subject: Re: Popcount optimization using AVX512\n> \n> On Mon, Mar 18, 2024 at 09:56:32AM +1300, David Rowley wrote:\n> > Maybe it's worth exploring something along the lines of the attached\n> > before doing the AVX512 stuff. It seems like a pretty good speed-up\n> > and will apply for CPUs without AVX512 support.\n> \n> +1\n> \n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:07:40 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 04:07:40PM +0000, Amonson, Paul D wrote:\n> Won't I still need the runtime checks? If I compile with a compiler\n> supporting the HW \"feature\" but run on HW without that feature, I will\n> want to avoid faults due to illegal operations. Won't that also affect\n> performance?\n\nI don't think David was suggesting that we need to remove the runtime\nchecks for AVX512. IIUC he was pointing out that most of the performance\ngain is from removing the function call overhead, which your v8-0002 patch\nalready does for the proposed AVX512 code. We can apply a similar\noptimization for systems without AVX512 by inlining the code for\npg_popcount64() and pg_popcount32().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 11:20:18 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Nathan Bossart <[email protected]>\n> Sent: Monday, March 18, 2024 9:20 AM\n> ...\n> I don't think David was suggesting that we need to remove the runtime checks\n> for AVX512. IIUC he was pointing out that most of the performance gain is\n> from removing the function call overhead, which your v8-0002 patch already\n> does for the proposed AVX512 code. We can apply a similar optimization for\n> systems without AVX512 by inlining the code for\n> pg_popcount64() and pg_popcount32().\n\nOk, got you.\n\nQuestion: I applied the patch for the drive_popcount* functions and rebuilt. The resultant server complains that the function is missing. What is the trick to make this work?\n\nAnother Question: Is there a reason \"time psql\" is used over the Postgres \"\\timing\" command?\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Mon, 18 Mar 2024 17:28:32 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 11:20:18AM -0500, Nathan Bossart wrote:\n> I don't think David was suggesting that we need to remove the runtime\n> checks for AVX512. IIUC he was pointing out that most of the performance\n> gain is from removing the function call overhead, which your v8-0002 patch\n> already does for the proposed AVX512 code. We can apply a similar\n> optimization for systems without AVX512 by inlining the code for\n> pg_popcount64() and pg_popcount32().\n\nHere is a more fleshed-out version of what I believe David is proposing.\nOn my machine, the gains aren't quite as impressive (~8.8s to ~5.2s for the\ntest_popcount benchmark). I assume this is because this patch turns\npg_popcount() into a function pointer, which is what the AVX512 patches do,\ntoo. I left out the 32-bit section from pg_popcount_fast(), but I'll admit\nthat I'm not yet 100% sure that we can assume we're on a 64-bit system\nthere.\n\nIMHO this work is arguably a prerequisite for the AVX512 work, as turning\npg_popcount() into a function pointer will likely regress performance for\nfolks on systems without AVX512 otherwise.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Mar 2024 12:30:04 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 05:28:32PM +0000, Amonson, Paul D wrote:\n> Question: I applied the patch for the drive_popcount* functions and\n> rebuilt. The resultant server complains that the function is missing.\n> What is the trick to make this work?\n\nYou probably need to install the test_popcount extension and run \"CREATE\nEXTENION test_popcount;\".\n\n> Another Question: Is there a reason \"time psql\" is used over the Postgres\n> \"\\timing\" command?\n\nI don't think there's any strong reason. I've used both.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 12:32:53 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 12:30:04PM -0500, Nathan Bossart wrote:\n> Here is a more fleshed-out version of what I believe David is proposing.\n> On my machine, the gains aren't quite as impressive (~8.8s to ~5.2s for the\n> test_popcount benchmark). I assume this is because this patch turns\n> pg_popcount() into a function pointer, which is what the AVX512 patches do,\n> too. I left out the 32-bit section from pg_popcount_fast(), but I'll admit\n> that I'm not yet 100% sure that we can assume we're on a 64-bit system\n> there.\n> \n> IMHO this work is arguably a prerequisite for the AVX512 work, as turning\n> pg_popcount() into a function pointer will likely regress performance for\n> folks on systems without AVX512 otherwise.\n\nApologies for the noise. I noticed that we could (and probably should)\ninline the pg_popcount32/64 calls in the \"slow\" version, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Mar 2024 12:53:50 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, 19 Mar 2024 at 06:30, Nathan Bossart <[email protected]> wrote:\n> Here is a more fleshed-out version of what I believe David is proposing.\n> On my machine, the gains aren't quite as impressive (~8.8s to ~5.2s for the\n> test_popcount benchmark). I assume this is because this patch turns\n> pg_popcount() into a function pointer, which is what the AVX512 patches do,\n> too. I left out the 32-bit section from pg_popcount_fast(), but I'll admit\n> that I'm not yet 100% sure that we can assume we're on a 64-bit system\n> there.\n\nI looked at your latest patch and tried out the performance on a Zen4\nrunning windows and a Zen2 running on Linux. As follows:\n\nAMD 3990x:\n\nmaster:\npostgres=# select drive_popcount(10000000, 1024);\nTime: 11904.078 ms (00:11.904)\nTime: 11907.176 ms (00:11.907)\nTime: 11927.983 ms (00:11.928)\n\npatched:\npostgres=# select drive_popcount(10000000, 1024);\nTime: 3641.271 ms (00:03.641)\nTime: 3610.934 ms (00:03.611)\nTime: 3663.423 ms (00:03.663)\n\n\nAMD 7945HX Windows\n\nmaster:\npostgres=# select drive_popcount(10000000, 1024);\nTime: 9832.845 ms (00:09.833)\nTime: 9844.460 ms (00:09.844)\nTime: 9858.608 ms (00:09.859)\n\npatched:\npostgres=# select drive_popcount(10000000, 1024);\nTime: 3427.942 ms (00:03.428)\nTime: 3364.262 ms (00:03.364)\nTime: 3413.407 ms (00:03.413)\n\nThe only thing I'd question in the patch is in pg_popcount_fast(). It\nlooks like you've opted to not do the 32-bit processing on 32-bit\nmachines. I think that's likely still worth coding in a similar way to\nhow pg_popcount_slow() works. i.e. use \"#if SIZEOF_VOID_P >= 8\".\nProbably one day we'll remove that code, but it seems strange to have\npg_popcount_slow() do it and not pg_popcount_fast().\n\n> IMHO this work is arguably a prerequisite for the AVX512 work, as turning\n> pg_popcount() into a function pointer will likely regress performance for\n> folks on systems without AVX512 otherwise.\n\nI think so too.\n\nDavid\n\n\n",
"msg_date": "Tue, 19 Mar 2024 10:02:18 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 10:02:18AM +1300, David Rowley wrote:\n> I looked at your latest patch and tried out the performance on a Zen4\n> running windows and a Zen2 running on Linux. As follows:\n\nThanks for taking a look.\n\n> The only thing I'd question in the patch is in pg_popcount_fast(). It\n> looks like you've opted to not do the 32-bit processing on 32-bit\n> machines. I think that's likely still worth coding in a similar way to\n> how pg_popcount_slow() works. i.e. use \"#if SIZEOF_VOID_P >= 8\".\n> Probably one day we'll remove that code, but it seems strange to have\n> pg_popcount_slow() do it and not pg_popcount_fast().\n\nThe only reason I left it out was because I couldn't convince myself that\nit wasn't dead code, given we assume that popcntq is available in\npg_popcount64_fast() today. But I don't see any harm in adding that just\nin case.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:08:10 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Nathan Bossart <[email protected]>\n> Sent: Monday, March 18, 2024 2:08 PM\n> To: David Rowley <[email protected]>\n> Cc: Amonson, Paul D <[email protected]>; Andres Freund\n>...\n> \n> The only reason I left it out was because I couldn't convince myself that it\n> wasn't dead code, given we assume that popcntq is available in\n> pg_popcount64_fast() today. But I don't see any harm in adding that just in\n> case.\n\nI am not sure how to read this. Does this mean that for popcount32_fast and popcount64_fast I can assume that the x86(_64) instructions exists and stop doing the runtime checks for instruction availability?\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Mon, 18 Mar 2024 21:22:43 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 09:22:43PM +0000, Amonson, Paul D wrote:\n>> The only reason I left it out was because I couldn't convince myself that it\n>> wasn't dead code, given we assume that popcntq is available in\n>> pg_popcount64_fast() today. But I don't see any harm in adding that just in\n>> case.\n> \n> I am not sure how to read this. Does this mean that for popcount32_fast\n> and popcount64_fast I can assume that the x86(_64) instructions exists\n> and stop doing the runtime checks for instruction availability?\n\nI think my question boils down to \"if pg_popcount_available() returns true,\ncan I safely assume I'm on a 64-bit machine?\"\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:26:00 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, 19 Mar 2024 at 10:08, Nathan Bossart <[email protected]> wrote:\n>\n> On Tue, Mar 19, 2024 at 10:02:18AM +1300, David Rowley wrote:\n> > The only thing I'd question in the patch is in pg_popcount_fast(). It\n> > looks like you've opted to not do the 32-bit processing on 32-bit\n> > machines. I think that's likely still worth coding in a similar way to\n> > how pg_popcount_slow() works. i.e. use \"#if SIZEOF_VOID_P >= 8\".\n> > Probably one day we'll remove that code, but it seems strange to have\n> > pg_popcount_slow() do it and not pg_popcount_fast().\n>\n> The only reason I left it out was because I couldn't convince myself that\n> it wasn't dead code, given we assume that popcntq is available in\n> pg_popcount64_fast() today. But I don't see any harm in adding that just\n> in case.\n\nIt's probably more of a case of using native instructions rather than\nones that might be implemented only via microcode. For the record, I\ndon't know if that would be the case for popcntq on x86 32-bit and I\ndon't have the hardware to test it. It just seems less risky just to\ndo it.\n\nDavid\n\n\n",
"msg_date": "Tue, 19 Mar 2024 10:27:58 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 10:27:58AM +1300, David Rowley wrote:\n> On Tue, 19 Mar 2024 at 10:08, Nathan Bossart <[email protected]> wrote:\n>> On Tue, Mar 19, 2024 at 10:02:18AM +1300, David Rowley wrote:\n>> > The only thing I'd question in the patch is in pg_popcount_fast(). It\n>> > looks like you've opted to not do the 32-bit processing on 32-bit\n>> > machines. I think that's likely still worth coding in a similar way to\n>> > how pg_popcount_slow() works. i.e. use \"#if SIZEOF_VOID_P >= 8\".\n>> > Probably one day we'll remove that code, but it seems strange to have\n>> > pg_popcount_slow() do it and not pg_popcount_fast().\n>>\n>> The only reason I left it out was because I couldn't convince myself that\n>> it wasn't dead code, given we assume that popcntq is available in\n>> pg_popcount64_fast() today. But I don't see any harm in adding that just\n>> in case.\n> \n> It's probably more of a case of using native instructions rather than\n> ones that might be implemented only via microcode. For the record, I\n> don't know if that would be the case for popcntq on x86 32-bit and I\n> don't have the hardware to test it. It just seems less risky just to\n> do it.\n\nAgreed. Will send an updated patch shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:29:19 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 04:29:19PM -0500, Nathan Bossart wrote:\n> Agreed. Will send an updated patch shortly.\n\nAs promised...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Mar 2024 17:08:45 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, 19 Mar 2024 at 11:08, Nathan Bossart <[email protected]> wrote:\n>\n> On Mon, Mar 18, 2024 at 04:29:19PM -0500, Nathan Bossart wrote:\n> > Agreed. Will send an updated patch shortly.\n>\n> As promised...\n\nLooks good.\n\nDavid\n\n\n",
"msg_date": "Tue, 19 Mar 2024 12:30:50 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 12:30:50PM +1300, David Rowley wrote:\n> Looks good.\n\nCommitted. Thanks for the suggestion and for reviewing!\n\nPaul, I suspect your patches will need to be rebased after commit cc4826d.\nWould you mind doing so?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Mar 2024 14:59:18 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Nathan Bossart <[email protected]>\n>\n> Committed. Thanks for the suggestion and for reviewing!\n> \n> Paul, I suspect your patches will need to be rebased after commit cc4826d.\n> Would you mind doing so?\n\nChanged in this patch set.\n\n* Rebased.\n* Direct *slow* calls via macros as shown in example patch.\n* Changed the choose filename to be platform specific as suggested.\n* Falls back to intermediate \"Fast\" methods if AVX512 is not available at runtime.\n* inline used where is makes sense, remember using \"extern\" negates \"inline\".\n* Fixed comment issues pointed out in review.\n\nI tested building with and without TRY_POPCOUNT_FAST, for both configure and meson build systems, and ran in CI.\n\nThanks,\nPaul",
"msg_date": "Tue, 19 Mar 2024 22:56:01 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Wed, 20 Mar 2024 at 11:56, Amonson, Paul D <[email protected]> wrote:\n> Changed in this patch set.\n\nThanks for rebasing.\n\nI don't think there's any need to mention Intel in each of the\nfollowing comments:\n\n+# Check for Intel AVX512 intrinsics to do POPCNT calculations.\n\n+# Newer Intel processors can use AVX-512 POPCNT Capabilities (01/30/2024)\n\nAMD's Zen4 also has AVX512, so it's misleading to indicate it's an\nIntel only instruction. Also, writing the date isn't necessary as we\nhave \"git blame\"\n\nDavid\n\n\n",
"msg_date": "Wed, 20 Mar 2024 17:26:07 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\r\n> From: David Rowley <[email protected]>\r\n> Sent: Tuesday, March 19, 2024 9:26 PM\r\n> To: Amonson, Paul D <[email protected]>\r\n> \r\n> AMD's Zen4 also has AVX512, so it's misleading to indicate it's an Intel only\r\n> instruction. Also, writing the date isn't necessary as we have \"git blame\"\r\n\r\nFixed.\r\n\r\nThanks,\r\nPaul",
"msg_date": "Wed, 20 Mar 2024 14:23:55 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Wed, 20 Mar 2024 at 11:56, Amonson, Paul D <[email protected]> wrote:\n> Changed in this patch set.\n>\n> * Rebased.\n> * Direct *slow* calls via macros as shown in example patch.\n> * Changed the choose filename to be platform specific as suggested.\n> * Falls back to intermediate \"Fast\" methods if AVX512 is not available at runtime.\n> * inline used where is makes sense, remember using \"extern\" negates \"inline\".\n\nI'm not sure about this \"extern negates inline\" comment. It seems to\nme the compiler is perfectly free to inline a static function into an\nexternal function and it's free to inline the static function\nelsewhere within the same .c file.\n\nThe final sentence of the following comment that the 0001 patch\nremoves explains this:\n\n/*\n * When the POPCNT instruction is not available, there's no point in using\n * function pointers to vary the implementation between the fast and slow\n * method. We instead just make these actual external functions when\n * TRY_POPCNT_FAST is not defined. The compiler should be able to inline\n * the slow versions here.\n */\n\nAlso, have a look at [1]. You'll see f_slow() wasn't even compiled\nand the code was just inlined into f(). I just added the\n__attribute__((noinline)) so that usage() wouldn't just perform\nconstant folding and just return 6.\n\nI think, unless you have evidence that some common compiler isn't\ninlining the static into the extern then we shouldn't add the macros.\nIt adds quite a bit of churn to the patch and will break out of core\ncode as you no longer have functions named pg_popcount32(),\npg_popcount64() and pg_popcount().\n\nDavid\n\n[1] https://godbolt.org/z/6joExb79d\n\n\n",
"msg_date": "Thu, 21 Mar 2024 13:28:16 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
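{
"msg_contents": "David's claim can be reproduced with a small standalone sketch (the names below are illustrative only, not PostgreSQL's actual symbols): a static helper remains eligible for inlining into an extern function in the same translation unit, so no macro indirection is needed to keep the public function names stable.\n\n```c\n#include <stdint.h>\n\n/*\n * Hypothetical sketch: a static inline slow implementation that the\n * compiler is free to fold into the extern wrapper below, which is the\n * point being made about dropping the macros when TRY_POPCNT_FAST is\n * not defined.\n */\nstatic inline int\npopcount64_slow(uint64_t word)\n{\n    int cnt = 0;\n\n    while (word)\n    {\n        cnt += word & 1;\n        word >>= 1;\n    }\n    return cnt;\n}\n\n/*\n * Extern entry point: the public symbol is still emitted, while the\n * static helper above can be inlined into it by the optimizer.\n */\nint\ndemo_popcount64(uint64_t word)\n{\n    return popcount64_slow(word);\n}\n```\n\nAt -O1 and above, compilers typically emit only demo_popcount64 here, with the helper's body inlined, matching the godbolt example referenced above.",
"msg_from_op": false
},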
{
"msg_contents": "> -----Original Message-----\r\n> From: David Rowley <[email protected]>\r\n> Sent: Wednesday, March 20, 2024 5:28 PM\r\n> To: Amonson, Paul D <[email protected]>\r\n> Cc: Nathan Bossart <[email protected]>; Andres Freund\r\n>\r\n> I'm not sure about this \"extern negates inline\" comment. It seems to me the\r\n> compiler is perfectly free to inline a static function into an external function\r\n> and it's free to inline the static function elsewhere within the same .c file.\r\n> \r\n> The final sentence of the following comment that the 0001 patch removes\r\n> explains this:\r\n> \r\n> /*\r\n> * When the POPCNT instruction is not available, there's no point in using\r\n> * function pointers to vary the implementation between the fast and slow\r\n> * method. We instead just make these actual external functions when\r\n> * TRY_POPCNT_FAST is not defined. The compiler should be able to inline\r\n> * the slow versions here.\r\n> */\r\n> \r\n> Also, have a look at [1]. You'll see f_slow() wasn't even compiled and the code\r\n> was just inlined into f(). I just added the\r\n> __attribute__((noinline)) so that usage() wouldn't just perform constant\r\n> folding and just return 6.\r\n> \r\n> I think, unless you have evidence that some common compiler isn't inlining the\r\n> static into the extern then we shouldn't add the macros.\r\n> It adds quite a bit of churn to the patch and will break out of core code as you\r\n> no longer have functions named pg_popcount32(),\r\n> pg_popcount64() and pg_popcount().\r\n\r\nThis may be a simple misunderstanding: extern != static. If I use the \"extern\" keyword then a symbol *will* be generated and inline will be ignored. This is NOT true of \"static inline\", where the compiler will try to inline the method. :)\r\n\r\nIn this patch set:\r\n* I removed the macro implementation.\r\n* Marked everything that could possibly be inlined with the \"static inline\" keyword.\r\n* Conditionally made the *_slow() functions \"static inline\" when TRY_POPCNT_FAST is not set.\r\n* Found and fixed some whitespace errors in the AVX code implementation.\r\n\r\nThanks,\r\nPaul",
"msg_date": "Thu, 21 Mar 2024 19:17:54 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\r\n> From: Amonson, Paul D <[email protected]>\r\n> Sent: Thursday, March 21, 2024 12:18 PM\r\n> To: David Rowley <[email protected]>\r\n> Cc: Nathan Bossart <[email protected]>; Andres Freund\r\n\r\nI am re-posting the patches as CI for Mac failed (a CI error, not a code/test error). The patches are the same as last time.\r\n\r\nThanks,\r\nPaul",
"msg_date": "Mon, 25 Mar 2024 15:06:16 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "\"Amonson, Paul D\" <[email protected]> writes:\n> I am re-posting the patches as CI for Mac failed (CI error not code/test error). The patches are the same as last time.\n\nJust for a note --- the cfbot will re-test existing patches every\nso often without needing a bump. The current cycle period seems to\nbe about two days.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Mar 2024 11:12:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane <[email protected]>\n> Sent: Monday, March 25, 2024 8:12 AM\n> To: Amonson, Paul D <[email protected]>\n> Cc: David Rowley <[email protected]>; Nathan Bossart\n> Subject: Re: Popcount optimization using AVX512\n>...\n> Just for a note --- the cfbot will re-test existing patches every so often without\n> needing a bump. The current cycle period seems to be about two days.\n> \n> \t\t\tregards, tom lane\n\nGood to know! Maybe this is why I thought it originally passed CI, yet suddenly this morning there was a failure. I noticed at least 2 other patch runs also failed in the same way.\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Mon, 25 Mar 2024 15:20:19 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On 3/25/24 11:12, Tom Lane wrote:\n> \"Amonson, Paul D\" <[email protected]> writes:\n>> I am re-posting the patches as CI for Mac failed (CI error not code/test error). The patches are the same as last time.\n> \n> Just for a note --- the cfbot will re-test existing patches every\n> so often without needing a bump. The current cycle period seems to\n> be about two days.\n\n\nJust an FYI -- there seems to be an issue with all three of the macos \ncfbot runners (mine included). I spent time over the weekend working \nwith Thomas Munro (added to CC list) trying different fixes to no avail. \nHelp from macos CI wizards would be gratefully accepted...\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 25 Mar 2024 11:44:34 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Amonson, Paul D <[email protected]>\n> Sent: Monday, March 25, 2024 8:20 AM\n> To: Tom Lane <[email protected]>\n> Cc: David Rowley <[email protected]>; Nathan Bossart\n> <[email protected]>; Andres Freund <[email protected]>; Alvaro\n> Herrera <[email protected]>; Shankaran, Akash\n> <[email protected]>; Noah Misch <[email protected]>; Matthias\n> van de Meent <[email protected]>; pgsql-\n> [email protected]\n> Subject: RE: Popcount optimization using AVX512\n>\n\nOk, CI turned green after my re-post of the patches. Can this please get merged?\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Mon, 25 Mar 2024 18:42:36 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 06:42:36PM +0000, Amonson, Paul D wrote:\n> Ok, CI turned green after my re-post of the patches. Can this please get\n> merged?\n\nThanks for the new patches. I intend to take another look soon.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Mar 2024 15:05:51 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 03:05:51PM -0500, Nathan Bossart wrote:\n> On Mon, Mar 25, 2024 at 06:42:36PM +0000, Amonson, Paul D wrote:\n>> Ok, CI turned green after my re-post of the patches. Can this please get\n>> merged?\n> \n> Thanks for the new patches. I intend to take another look soon.\n\nThanks for your patience. I spent most of my afternoon looking into the\nlatest patch set, but I needed to do a CHECKPOINT and take a break. I am\nin the middle of doing some rather heavy editorialization, but the core of\nyour changes will remain the same (and so I still intend to give you\nauthorship credit). I've attached what I have so far, which is still\nmissing the configuration checks and the changes to make sure the extra\ncompiler flags make it to the right places.\n\nUnless something pops up while I work on the remainder of this patch, I\nthink we'll end up going with a simpler approach. I originally set out to\nmake this look like the CRC32C stuff (e.g., a file per implementation), but\nthat seemed primarily useful if we can choose which files need to be\ncompiled at configure-time. However, the TRY_POPCNT_FAST macro is defined\nat compile-time (AFAICT for good reason [0]), so we end up having to\ncompile all the files in many cases anyway, and we continue to need to\nsurround lots of code with \"#ifdef TRY_POPCNT_FAST\" or similar. So, my\ncurrent thinking is that we should only move the AVX512 stuff to its own\nfile for the purposes of compiling it with special flags when possible. (I\nrealize that I'm essentially recanting much of my previous feedback, which\nI apologize for.)\n\n[0] https://postgr.es/m/CAApHDvrONNcYxGV6C0O3ZmaL0BvXBWY%2BrBOCBuYcQVUOURwhkA%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 27 Mar 2024 17:00:10 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Nathan Bossart <[email protected]>\n> Sent: Wednesday, March 27, 2024 3:00 PM\n> To: Amonson, Paul D <[email protected]>\n> \n> ... (I realize that I'm essentially\n> recanting much of my previous feedback, which I apologize for.)\n\nIt happens. LOL As long as the algorithm for AVX-512 is not altered I am confident that your new refactor will be fine. :)\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Wed, 27 Mar 2024 22:32:24 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "Here is a v14 of the patch that I think is beginning to approach something\ncommittable. Besides general review and testing, there are two things that\nI'd like to bring up:\n\n* The latest patch set from Paul Amonson appeared to support MSVC in the\n meson build, but not the autoconf one. I don't have much expertise here,\n so the v14 patch doesn't have any autoconf/meson support for MSVC, which\n I thought might be okay for now. IIUC we assume that 64-bit/MSVC builds\n can always compile the x86_64 popcount code, but I don't know whether\n that's safe for AVX512.\n\n* I think we need to verify there isn't a huge performance regression for\n smaller arrays. IIUC those will still require an AVX512 instruction or\n two as well as a function call, which might add some noticeable overhead.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 28 Mar 2024 16:38:54 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 04:38:54PM -0500, Nathan Bossart wrote:\n> Here is a v14 of the patch that I think is beginning to approach something\n> committable. Besides general review and testing, there are two things that\n> I'd like to bring up:\n> \n> * The latest patch set from Paul Amonson appeared to support MSVC in the\n> meson build, but not the autoconf one. I don't have much expertise here,\n> so the v14 patch doesn't have any autoconf/meson support for MSVC, which\n> I thought might be okay for now. IIUC we assume that 64-bit/MSVC builds\n> can always compile the x86_64 popcount code, but I don't know whether\n> that's safe for AVX512.\n> \n> * I think we need to verify there isn't a huge performance regression for\n> smaller arrays. IIUC those will still require an AVX512 instruction or\n> two as well as a function call, which might add some noticeable overhead.\n\nI forgot to mention that I also want to understand whether we can actually\nassume availability of XGETBV when CPUID says we support AVX512:\n\n> +\t\t/*\n> +\t\t * We also need to check that the OS has enabled support for the ZMM\n> +\t\t * registers.\n> +\t\t */\n> +#ifdef _MSC_VER\n> +\t\treturn (_xgetbv(0) & 0xe0) != 0;\n> +#else\n> +\t\tuint64\t\txcr = 0;\n> +\t\tuint32\t\thigh;\n> +\t\tuint32\t\tlow;\n> +\n> +__asm__ __volatile__(\" xgetbv\\n\":\"=a\"(low), \"=d\"(high):\"c\"(xcr));\n> +\t\treturn (low & 0xe0) != 0;\n> +#endif\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 28 Mar 2024 16:51:36 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
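{
"msg_contents": "For illustration, the full runtime gate under discussion can be sketched as below. This is a hedged, assumption-laden sketch based on the quoted patch hunk, not the committed code: it checks CPUID for OSXSAVE before ever executing XGETBV (the extra check being debated, since XGETBV faults with #UD otherwise), then the ZMM state bits, then the AVX-512 VPOPCNTDQ feature bit. Bit positions follow the Intel SDM.\n\n```c\n#include <stdbool.h>\n#include <stdint.h>\n\n#if defined(__x86_64__) && defined(__GNUC__)\n#include <cpuid.h>\n\nstatic bool\navx512_popcnt_available(void)\n{\n    uint32_t eax, ebx, ecx, edx;\n    uint32_t low, high;\n\n    /* leaf 1, ECX bit 27: OSXSAVE, i.e. XGETBV may be executed safely */\n    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx) || !(ecx & (1u << 27)))\n        return false;\n\n    /* XCR0 bits 5-7: opmask, ZMM0-15, ZMM16-31 state enabled by the OS */\n    __asm__ __volatile__(\"xgetbv\" : \"=a\"(low), \"=d\"(high) : \"c\"(0));\n    (void) high;\n    if ((low & 0xe0) != 0xe0)\n        return false;\n\n    /* leaf 7 subleaf 0, ECX bit 14: AVX512VPOPCNTDQ */\n    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))\n        return false;\n    return (ecx & (1u << 14)) != 0;\n}\n#else\nstatic bool\navx512_popcnt_available(void)\n{\n    return false;\n}\n#endif\n```\n\nNote this sketch tests (low & 0xe0) == 0xe0 (all three ZMM-related state bits), which is slightly stricter than the != 0 test in the quoted hunk.",
"msg_from_op": false
},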
{
"msg_contents": "> -----Original Message-----\n> From: Nathan Bossart <[email protected]>\n> Sent: Thursday, March 28, 2024 2:39 PM\n> To: Amonson, Paul D <[email protected]>\n> \n> * The latest patch set from Paul Amonson appeared to support MSVC in the\n> meson build, but not the autoconf one. I don't have much expertise here,\n> so the v14 patch doesn't have any autoconf/meson support for MSVC, which\n> I thought might be okay for now. IIUC we assume that 64-bit/MSVC builds\n> can always compile the x86_64 popcount code, but I don't know whether\n> that's safe for AVX512.\n\nI also do not know how to integrate MSVC+Autoconf; the CI uses MSVC+Meson+Ninja, so I stuck with that.\n \n> * I think we need to verify there isn't a huge performance regression for\n> smaller arrays. IIUC those will still require an AVX512 instruction or\n> two as well as a function call, which might add some noticeable overhead.\n\nNot considering your changes, I had already tested small buffers. At less than 512 bytes there was no measurable regression (there was one extra condition check), and for 512+ bytes it moved from no regression to some gains between 512 and 4096 bytes. Assuming you introduced no extra function calls, it should be the same.\n\n> I forgot to mention that I also want to understand whether we can actually assume availability of XGETBV when CPUID says we support AVX512:\n\nYou cannot assume so, as there are edge cases where AVX-512 was found on system one during compile but it's not actually available in the kernel on a second system at runtime, despite the CPU actually having the hardware feature.\n\nI will review the new patch to see if there is anything that jumps out at me.\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Thu, 28 Mar 2024 22:03:04 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
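{
"msg_contents": "The small-buffer behavior described above can be sketched as a dispatch wrapper. This is a hypothetical illustration only: the 512-byte threshold comes from the measurements quoted in this thread, and the function names are made up rather than taken from the patch.\n\n```c\n#include <stddef.h>\n#include <stdint.h>\n\n/* Portable fallback: count set bits one byte at a time. */\nstatic uint64_t\npopcount_bytes_slow(const char *buf, size_t bytes)\n{\n    uint64_t cnt = 0;\n\n    for (size_t i = 0; i < bytes; i++)\n    {\n        unsigned char b = (unsigned char) buf[i];\n\n        while (b)\n        {\n            cnt += b & 1;\n            b >>= 1;\n        }\n    }\n    return cnt;\n}\n\n/*\n * Hypothetical dispatch: below the ~512-byte threshold the only added\n * cost is one condition check and the scalar path wins or ties, per\n * the measurements above.  A real build would take an AVX-512 path in\n * the second branch when the runtime check passes; this sketch just\n * falls back.\n */\nstatic uint64_t\npopcount_dispatch(const char *buf, size_t bytes)\n{\n    if (bytes < 512)\n        return popcount_bytes_slow(buf, bytes);\n\n    /* AVX-512 path would go here; fall back for this sketch. */\n    return popcount_bytes_slow(buf, bytes);\n}\n```",
"msg_from_op": false
},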
{
"msg_contents": "On 2024-Mar-28, Amonson, Paul D wrote:\n\n> > -----Original Message-----\n> > From: Nathan Bossart <[email protected]>\n> > Sent: Thursday, March 28, 2024 2:39 PM\n> > To: Amonson, Paul D <[email protected]>\n> > \n> > * The latest patch set from Paul Amonson appeared to support MSVC in the\n> > meson build, but not the autoconf one. I don't have much expertise here,\n> > so the v14 patch doesn't have any autoconf/meson support for MSVC, which\n> > I thought might be okay for now. IIUC we assume that 64-bit/MSVC builds\n> > can always compile the x86_64 popcount code, but I don't know whether\n> > that's safe for AVX512.\n> \n> I also do not know how to integrate MSVC+Autoconf, the CI uses\n> MSVC+Meson+Ninja so I stuck with that.\n\nWe don't do MSVC via autoconf/Make. We used to have a special build\nframework for MSVC which parsed Makefiles to produce \"solution\" files,\nbut it was removed as soon as Meson was mature enough to build. See\ncommit 1301c80b2167. If it builds with Meson, you're good.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"[PostgreSQL] is a great group; in my opinion it is THE best open source\ndevelopment communities in existence anywhere.\" (Lamar Owen)\n\n\n",
"msg_date": "Thu, 28 Mar 2024 23:10:33 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n> From: Amonson, Paul D <[email protected]>\n> Sent: Thursday, March 28, 2024 3:03 PM\n> To: Nathan Bossart <[email protected]>\n> ...\n> I will review the new patch to see if there are anything that jumps out at me.\n\nI see in the meson.build you added the new file twice?\n\n@@ -7,6 +7,7 @@ pgport_sources = [\n 'noblock.c',\n 'path.c',\n 'pg_bitutils.c',\n+ 'pg_popcount_avx512.c',\n 'pg_strong_random.c',\n 'pgcheckdir.c',\n 'pgmkdirp.c',\n@@ -84,6 +85,7 @@ replace_funcs_pos = [\n ['pg_crc32c_sse42', 'USE_SSE42_CRC32C_WITH_RUNTIME_CHECK', 'crc'],\n ['pg_crc32c_sse42_choose', 'USE_SSE42_CRC32C_WITH_RUNTIME_CHECK'],\n ['pg_crc32c_sb8', 'USE_SSE42_CRC32C_WITH_RUNTIME_CHECK'],\n+ ['pg_popcount_avx512', 'USE_AVX512_POPCNT_WITH_RUNTIME_CHECK', 'avx512_popcnt'],\n\nI was putting the file with special flags ONLY in the second section and all seemed to work. :)\n\nEverything else seems good to me.\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Thu, 28 Mar 2024 22:29:47 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 10:03:04PM +0000, Amonson, Paul D wrote:\n>> * I think we need to verify there isn't a huge performance regression for\n>> smaller arrays. IIUC those will still require an AVX512 instruction or\n>> two as well as a function call, which might add some noticeable overhead.\n> \n> Not considering your changes, I had already tested small buffers. At less\n> than 512 bytes there was no measurable regression (there was one extra\n> condition check) and for 512+ bytes it moved from no regression to some\n> gains between 512 and 4096 bytes. Assuming you introduced no extra\n> function calls, it should be the same.\n\nCool. I think we should run the benchmarks again to be safe, though.\n\n>> I forgot to mention that I also want to understand whether we can\n>> actually assume availability of XGETBV when CPUID says we support\n>> AVX512:\n> \n> You cannot assume as there are edge cases where AVX-512 was found on\n> system one during compile but it's not actually available in a kernel on\n> a second system at runtime despite the CPU actually having the hardware\n> feature.\n\nYeah, I understand that much, but I want to know how portable the XGETBV\ninstruction is. Unless I can assume that all x86_64 systems and compilers\nsupport that instruction, we might need an additional configure check\nand/or CPUID check. It looks like MSVC has had support for the _xgetbv\nintrinsic for quite a while, but I'm still researching the other cases.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 10:35:15 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 11:10:33PM +0100, Alvaro Herrera wrote:\n> We don't do MSVC via autoconf/Make. We used to have a special build\n> framework for MSVC which parsed Makefiles to produce \"solution\" files,\n> but it was removed as soon as Meson was mature enough to build. See\n> commit 1301c80b2167. If it builds with Meson, you're good.\n\nThe latest cfbot build for this seems to indicate that at least newer MSVC\nknows AVX512 intrinsics without any special compiler flags [0], so maybe\nwhat I had in v14 is good enough. A previous version of the patch set [1]\nhad the following lines:\n\n+ if host_system == 'windows'\n+ test_flags = ['/arch:AVX512']\n+ endif\n\nI'm not sure if this is needed for older MSVC or something else. IIRC I\ncouldn't find any other examples of this sort of thing in the meson\nscripts, either. Paul, do you recall why you added this?\n\n[0] https://cirrus-ci.com/task/5787206636273664?logs=configure#L159\n[1] https://postgr.es/m/attachment/158206/v12-0002-Feature-Added-AVX-512-acceleration-to-the-pg_popcoun.patch\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 10:42:41 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 10:29:47PM +0000, Amonson, Paul D wrote:\n> I see in the meson.build you added the new file twice?\n> \n> @@ -7,6 +7,7 @@ pgport_sources = [\n> 'noblock.c',\n> 'path.c',\n> 'pg_bitutils.c',\n> + 'pg_popcount_avx512.c',\n> 'pg_strong_random.c',\n> 'pgcheckdir.c',\n> 'pgmkdirp.c',\n> @@ -84,6 +85,7 @@ replace_funcs_pos = [\n> ['pg_crc32c_sse42', 'USE_SSE42_CRC32C_WITH_RUNTIME_CHECK', 'crc'],\n> ['pg_crc32c_sse42_choose', 'USE_SSE42_CRC32C_WITH_RUNTIME_CHECK'],\n> ['pg_crc32c_sb8', 'USE_SSE42_CRC32C_WITH_RUNTIME_CHECK'],\n> + ['pg_popcount_avx512', 'USE_AVX512_POPCNT_WITH_RUNTIME_CHECK', 'avx512_popcnt'],\n> \n> I was putting the file with special flags ONLY in the second section and all seemed to work. :)\n\nAh, yes, I think that's a mistake, and without looking closely, might\nexplain the MSVC warnings [0]:\n\n\t[22:05:47.444] pg_popcount_avx512.c.obj : warning LNK4006: pg_popcount_avx512_available already defined in pg_popcount_a...\n\nIt might be nice if we conditionally built pg_popcount_avx512.o in autoconf\nbuilds, too, but AFAICT we still need to wrap most of that code with\nmacros, so I'm not sure it's worth the trouble. I'll take another look at\nthis...\n\n[0] http://commitfest.cputube.org/highlights/all.html#4883\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 29 Mar 2024 10:59:40 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> -----Original Message-----\n>\n> Cool. I think we should run the benchmarks again to be safe, though.\n\nOk, sure go ahead. :)\n\n> >> I forgot to mention that I also want to understand whether we can\n> >> actually assume availability of XGETBV when CPUID says we support\n> >> AVX512:\n> >\n> > You cannot assume as there are edge cases where AVX-512 was found on\n> > system one during compile but it's not actually available in a kernel\n> > on a second system at runtime despite the CPU actually having the\n> > hardware feature.\n> \n> Yeah, I understand that much, but I want to know how portable the XGETBV\n> instruction is. Unless I can assume that all x86_64 systems and compilers\n> support that instruction, we might need an additional configure check and/or\n> CPUID check. It looks like MSVC has had support for the _xgetbv intrinsic for\n> quite a while, but I'm still researching the other cases.\n\nI see Google web references to the xgetbv instruction as far back as 2009 for Intel 64-bit HW and 2010 for AMD 64-bit HW; maybe you could test for the _xgetbv() MSVC built-in. How far back do you need to go?\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Fri, 29 Mar 2024 16:06:17 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 04:06:17PM +0000, Amonson, Paul D wrote:\n>> Yeah, I understand that much, but I want to know how portable the XGETBV\n>> instruction is. Unless I can assume that all x86_64 systems and compilers\n>> support that instruction, we might need an additional configure check and/or\n>> CPUID check. It looks like MSVC has had support for the _xgetbv intrinsic for\n>> quite a while, but I'm still researching the other cases.\n> \n> I see google web references to the xgetbv instruction as far back as 2009\n> for Intel 64 bit HW and 2010 for AMD 64bit HW, maybe you could test for\n> _xgetbv() MSVC built-in. How far back do you need to go?\n\nHm. It seems unlikely that a compiler would understand AVX512 intrinsics\nand not XGETBV then. I guess the other question is whether CPUID\nindicating AVX512 is enabled implies the availability of XGETBV on the CPU.\nIf that's not safe, we might need to add another CPUID test.\n\nIt would probably be easy enough to add a couple of tests for this, but if\nwe don't have reason to believe there's any practical case to do so, I\ndon't know why we would. I'm curious what others think about this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 11:16:42 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 10:59:40AM -0500, Nathan Bossart wrote:\n> It might be nice if we conditionally built pg_popcount_avx512.o in autoconf\n> builds, too, but AFAICT we still need to wrap most of that code with\n> macros, so I'm not sure it's worth the trouble. I'll take another look at\n> this...\n\nIf we assumed that TRY_POPCNT_FAST would be set and either\nHAVE__GET_CPUID_COUNT or HAVE__CPUIDEX would be set whenever\nUSE_AVX512_POPCNT_WITH_RUNTIME_CHECK is set, we could probably remove the\nsurrounding macros and just compile pg_popcount_avx512.c conditionally\nbased on USE_AVX512_POPCNT_WITH_RUNTIME_CHECK. However, the surrounding\ncode seems to be pretty cautious about these assumptions (e.g., the CPUID\nmacros are checked before setting TRY_POPCNT_FAST), so this would stray\nfrom the nearby precedent a bit.\n\nA counterexample is the CRC32C code. AFAICT we assume the presence of\nCPUID in that code (and #error otherwise). I imagine it's probably safe to\nassume the compiler understands CPUID if it understands AVX512 intrinsics,\nbut that is still mostly a guess.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 11:22:11 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n>> I see google web references to the xgetbv instruction as far back as 2009\n>> for Intel 64 bit HW and 2010 for AMD 64bit HW, maybe you could test for\n>> _xgetbv() MSVC built-in. How far back do you need to go?\n\n> Hm. It seems unlikely that a compiler would understand AVX512 intrinsics\n> and not XGETBV then. I guess the other question is whether CPUID\n> indicating AVX512 is enabled implies the availability of XGETBV on the CPU.\n> If that's not safe, we might need to add another CPUID test.\n\nSome quick googling says that (1) XGETBV predates AVX and (2) if you\nare worried about old CPUs, you should check CPUID to verify whether\nXGETBV exists before trying to use it. I did not look for the\nbit-level details on how to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Mar 2024 12:30:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> From: Nathan Bossart <[email protected]> \n> Sent: Friday, March 29, 2024 9:17 AM\n> To: Amonson, Paul D <[email protected]>\n\n> On Fri, Mar 29, 2024 at 04:06:17PM +0000, Amonson, Paul D wrote:\n>> Yeah, I understand that much, but I want to know how portable the \n>> XGETBV instruction is. Unless I can assume that all x86_64 systems \n>> and compilers support that instruction, we might need an additional \n>> configure check and/or CPUID check. It looks like MSVC has had \n>> support for the _xgetbv intrinsic for quite a while, but I'm still researching the other cases.\n> \n> I see google web references to the xgetbv instruction as far back as \n> 2009 for Intel 64 bit HW and 2010 for AMD 64bit HW, maybe you could \n> test for\n> _xgetbv() MSVC built-in. How far back do you need to go?\n\n> Hm. It seems unlikely that a compiler would understand AVX512 intrinsics and not XGETBV then. I guess the other question is whether CPUID indicating AVX512 is enabled implies the availability of XGETBV on the CPU.\n> If that's not safe, we might need to add another CPUID test.\n\n> It would probably be easy enough to add a couple of tests for this, but if we don't have reason to believe there's any practical case to do so, I don't know why we would. I'm curious what others think about this.\n\nThis seems unlikely. Machines supporting XGETBV would support AVX512 intrinsics. The XGETBV instruction seems to be part of the XSAVE feature set, as per the Intel developer manual [2]. XGETBV/XSAVE came first and seems to be available in all x86 systems shipped since 2011, starting with the Intel Sandy Bridge architecture and the AMD Opteron Gen4 [0].\nAVX512 first came into a product in 2016 [1].\n[0]: https://kb.vmware.com/s/article/1005764\n[1]: https://en.wikipedia.org/wiki/AVX-512\n[2]: https://cdrdv2-public.intel.com/774475/252046-sdm-change-document.pdf\n\n- Akash Shankaran\n\n\n\n",
"msg_date": "Fri, 29 Mar 2024 16:31:09 +0000",
"msg_from": "\"Shankaran, Akash\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 12:30:14PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>>> I see google web references to the xgetbv instruction as far back as 2009\n>>> for Intel 64 bit HW and 2010 for AMD 64bit HW, maybe you could test for\n>>> _xgetbv() MSVC built-in. How far back do you need to go?\n> \n>> Hm. It seems unlikely that a compiler would understand AVX512 intrinsics\n>> and not XGETBV then. I guess the other question is whether CPUID\n>> indicating AVX512 is enabled implies the availability of XGETBV on the CPU.\n>> If that's not safe, we might need to add another CPUID test.\n> \n> Some quick googling says that (1) XGETBV predates AVX and (2) if you\n> are worried about old CPUs, you should check CPUID to verify whether\n> XGETBV exists before trying to use it. I did not look for the\n> bit-level details on how to do that.\n\nThat extra CPUID check should translate to exactly one additional line of\ncode, so I think I'm inclined to just add it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 11:41:53 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> On Thu, Mar 28, 2024 at 11:10:33PM +0100, Alvaro Herrera wrote:\n> > We don't do MSVC via autoconf/Make. We used to have a special build\n> > framework for MSVC which parsed Makefiles to produce \"solution\" files,\n> > but it was removed as soon as Meson was mature enough to build. See\n> > commit 1301c80b2167. If it builds with Meson, you're good.\n> \n> The latest cfbot build for this seems to indicate that at least newer MSVC\n> knows AVX512 intrinsics without any special compiler flags [0], so maybe\n> what I had in v14 is good enough. A previous version of the patch set [1] had\n> the following lines:\n> \n> + if host_system == 'windows'\n> + test_flags = ['/arch:AVX512']\n> + endif\n> \n> I'm not sure if this is needed for older MSVC or something else. IIRC I couldn't\n> find any other examples of this sort of thing in the meson scripts, either. Paul,\n> do you recall why you added this?\n\nI asked internal folks here who are in the know, and they suggested I add it. I personally am not a Windows guy. If it works without it and you are comfortable not including the lines, I am fine with it.\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Fri, 29 Mar 2024 17:11:18 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "> A counterexample is the CRC32C code. AFAICT we assume the presence of\n> CPUID in that code (and #error otherwise). I imagine its probably safe to\n> assume the compiler understands CPUID if it understands AVX512 intrinsics,\n> but that is still mostly a guess.\n\nIf AVX-512 intrinsics are available, then yes you will have CPUID. CPUID is much older in the hardware/software timeline than AVX-512.\n\nThanks,\nPaul\n\n\n\n",
"msg_date": "Fri, 29 Mar 2024 17:25:14 +0000",
"msg_from": "\"Amonson, Paul D\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "Okay, here is a slightly different approach that I've dubbed the \"maximum\nassumption\" approach. In short, I wanted to see how much we could simplify\nthe patch by making all possibly-reasonable assumptions about the compiler\nand CPU. These include:\n\n* If the compiler understands AVX512 intrinsics, we assume that it also\n knows about the required CPUID and XGETBV intrinsics, and we assume that\n the conditions for TRY_POPCNT_FAST are true.\n* If this is x86_64, CPUID will be supported by the CPU.\n* If CPUID indicates AVX512 POPCNT support, the CPU also supports XGETBV.\n\nDo any of these assumptions seem unreasonable or unlikely to be true for\nall practical purposes? I don't mind adding back some or all of the\nconfigure/runtime checks if they seem necessary. I guess the real test\nwill be the buildfarm...\n\nAnother big change in this version is that I've moved\npg_popcount_avx512_available() to its own file so that we only compile\npg_popcount_avx512() with the special compiler flags. This is just an\noversight in previous versions.\n\nFinally, I've modified the build scripts so that the AVX512 popcount stuff\nis conditionally built based on the configure checks for both\nautoconf/meson.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 29 Mar 2024 14:13:12 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 02:13:12PM -0500, Nathan Bossart wrote:\n> * If the compiler understands AVX512 intrinsics, we assume that it also\n> knows about the required CPUID and XGETBV intrinsics, and we assume that\n> the conditions for TRY_POPCNT_FAST are true.\n\nBleh, cfbot's 32-bit build is unhappy with this [0]. It looks like it's\ntrying to build the AVX512 stuff, but TRY_POPCNT_FAST isn't set.\n\n[19:39:11.306] ../src/port/pg_popcount_avx512.c:39:18: warning: implicit declaration of function ‘pg_popcount_fast’; did you mean ‘pg_popcount’? [-Wimplicit-function-declaration]\n[19:39:11.306] 39 | return popcnt + pg_popcount_fast(buf, bytes);\n[19:39:11.306] | ^~~~~~~~~~~~~~~~\n[19:39:11.306] | pg_popcount\n\nThere's also a complaint about the inline assembly:\n\n[19:39:11.443] ../src/port/pg_popcount_avx512_choose.c:55:1: error: inconsistent operand constraints in an ‘asm’\n[19:39:11.443] 55 | __asm__ __volatile__(\" xgetbv\\n\":\"=a\"(low), \"=d\"(high):\"c\"(xcr));\n[19:39:11.443] | ^~~~~~~\n\nI'm looking into this...\n\n> +#if defined(HAVE__GET_CPUID)\n> +\t__get_cpuid_count(7, 0, &exx[0], &exx[1], &exx[2], &exx[3]);\n> +#elif defined(HAVE__CPUID)\n> +\t__cpuidex(exx, 7, 0);\n\nIs there any reason we can't use __get_cpuid() and __cpuid() here, given\nthe sub-leaf is 0?\n\n[0] https://cirrus-ci.com/task/5475113447981056\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 15:08:28 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 03:08:28PM -0500, Nathan Bossart wrote:\n>> +#if defined(HAVE__GET_CPUID)\n>> +\t__get_cpuid_count(7, 0, &exx[0], &exx[1], &exx[2], &exx[3]);\n>> +#elif defined(HAVE__CPUID)\n>> +\t__cpuidex(exx, 7, 0);\n> \n> Is there any reason we can't use __get_cpuid() and __cpuid() here, given\n> the sub-leaf is 0?\n\nThe answer to this seems to be \"no.\" After additional research,\n__get_cpuid_count/__cpuidex seem new enough that we probably want configure\nchecks for them, so I'll add those back in the next version of the patch.\n\nApologies for the stream of consciousness today...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 15:57:41 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Here's a v17 of the patch. This one has configure checks for everything\n(i.e., CPUID, XGETBV, and the AVX512 intrinsics) as well as the relevant\nruntime checks (i.e., we call CPUID to check for XGETBV and AVX512 POPCNT\navailability, and we call XGETBV to ensure the ZMM registers are enabled).\nI restricted the AVX512 configure checks to x86_64 since we know we won't\nhave TRY_POPCNT_FAST on 32-bit, and we rely on pg_popcount_fast() as our\nfallback implementation in the AVX512 version. Finally, I removed the\ninline assembly in favor of using the _xgetbv() intrinsic on all systems.\nIt looks like that's available on gcc, clang, and msvc, although it\nsometimes requires -mxsave, so that's applied to\npg_popcount_avx512_choose.o as needed. I doubt this will lead to SIGILLs,\nbut it's admittedly a little shaky.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 29 Mar 2024 22:22:09 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "I used John Naylor's test_popcount module [0] to put together the attached\ngraphs (note that the \"small arrays\" one is semi-logarithmic). For both\ngraphs, the X-axis is the number of 64-bit words in the array, and Y-axis\nis the amount of time in milliseconds to run pg_popcount() on it 100,000\ntimes (along with a bit of overhead). This test didn't show any\nregressions with a relatively small number of bytes, and it showed the\nexpected improvements with many bytes.\n\nThere isn't a ton of use of pg_popcount() in Postgres, but I do see a few\nplaces that call it with enough bytes for the AVX512 optimization to take\neffect. There may be more callers in the future, though, and it seems\ngenerally useful to have some of the foundational work for using AVX512\ninstructions in place. My current plan is to add some new tests for\npg_popcount() with many bytes, and then I'll give it a few more days for\nany additional feedback before committing.\n\n[0] https://postgr.es/m/CAFBsxsE7otwnfA36Ly44zZO+b7AEWHRFANxR1h1kxveEV=ghLQ@mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 30 Mar 2024 15:03:29 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Sat, Mar 30, 2024 at 03:03:29PM -0500, Nathan Bossart wrote:\n> My current plan is to add some new tests for\n> pg_popcount() with many bytes, and then I'll give it a few more days for\n> any additional feedback before committing.\n\nHere is a v18 with a couple of new tests. Otherwise, it is the same as\nv17.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 31 Mar 2024 20:17:08 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On 2024-Mar-31, Nathan Bossart wrote:\n\n> +uint64\n> +pg_popcount_avx512(const char *buf, int bytes)\n> +{\n> +\tuint64\t\tpopcnt;\n> +\t__m512i\t\taccum = _mm512_setzero_si512();\n> +\n> +\tfor (; bytes >= sizeof(__m512i); bytes -= sizeof(__m512i))\n> +\t{\n> +\t\tconst\t\t__m512i val = _mm512_loadu_si512((const __m512i *) buf);\n> +\t\tconst\t\t__m512i cnt = _mm512_popcnt_epi64(val);\n> +\n> +\t\taccum = _mm512_add_epi64(accum, cnt);\n> +\t\tbuf += sizeof(__m512i);\n> +\t}\n> +\n> +\tpopcnt = _mm512_reduce_add_epi64(accum);\n> +\treturn popcnt + pg_popcount_fast(buf, bytes);\n> +}\n\nHmm, doesn't this arrangement cause an extra function call to\npg_popcount_fast to be used here? Given the level of micro-optimization\nbeing used by this code, I would have thought that you'd have tried to\navoid that. (At least, maybe avoid the call if bytes is 0, no?)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)\n\n\n",
"msg_date": "Mon, 1 Apr 2024 13:06:12 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Mon, Apr 01, 2024 at 01:06:12PM +0200, Alvaro Herrera wrote:\n> On 2024-Mar-31, Nathan Bossart wrote:\n>> +\tpopcnt = _mm512_reduce_add_epi64(accum);\n>> +\treturn popcnt + pg_popcount_fast(buf, bytes);\n> \n> Hmm, doesn't this arrangement cause an extra function call to\n> pg_popcount_fast to be used here? Given the level of micro-optimization\n> being used by this code, I would have thought that you'd have tried to\n> avoid that. (At least, maybe avoid the call if bytes is 0, no?)\n\nYes, it does. I did another benchmark on very small arrays and can see the\noverhead. This is the time in milliseconds to run pg_popcount() on an\narray 1 billion times:\n\n size (bytes) HEAD AVX512-POPCNT\n 1 1707.685 3480.424\n 2 1926.694 4606.182\n 4 3210.412 5284.506\n 8 1920.703 3640.968\n 16 2936.91 4045.586\n 32 3627.956 5538.418\n 64 5347.213 3748.212\n\nI suspect that anything below 64 bytes will see this regression, as that is\nthe earliest point where there are enough bytes for ZMM registers.\n\nWe could avoid the call if there are no remaining bytes, but the numbers\nfor the smallest arrays probably wouldn't improve much, and that might\nactually add some overhead due to branching. The other option to avoid\nthis overhead is to put most of pg_bitutils.c into its header file so that\nwe can inline the call.\n\nReviewing the current callers of pg_popcount(), IIUC the only ones that are\npassing very small arrays are the bit_count() implementations and a call in\nthe syslogger for a single byte. I don't know how much to worry about the\noverhead for bit_count() since there's presumably a bunch of other\noverhead, and the syslogger one could probably be fixed via an inline\nfunction that pulled the value from pg_number_of_ones (which would probably\nbe an improvement over the status quo, anyway). But this is all to save a\ncouple of nanoseconds...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 1 Apr 2024 10:53:38 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
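[Editor's aside: for readers unfamiliar with the `pg_number_of_ones` array mentioned above — it is a 256-entry table giving the popcount of every byte value. A self-contained stand-in, with the table built lazily at first call purely for brevity:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for Postgres' pg_number_of_ones[] lookup table. */
static uint64_t
byte_table_popcount(const char *buf, size_t bytes)
{
	static uint8_t ones[256];	/* zero-initialized at program start */
	static int	inited = 0;
	uint64_t	popcnt = 0;

	if (!inited)
	{
		/* Fill in the popcount of every possible byte value once. */
		for (int v = 0; v < 256; v++)
			for (int b = 0; b < 8; b++)
				ones[v] += (uint8_t) ((v >> b) & 1);
		inited = 1;
	}

	while (bytes--)
		popcnt += ones[(unsigned char) *buf++];
	return popcnt;
}
```

This per-byte loop is the shape of the small-input path being discussed: no function-pointer dispatch, just a table lookup per byte.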
{
"msg_contents": "On Mon, 1 Apr 2024 at 18:53, Nathan Bossart <[email protected]> wrote:\n>\n> On Mon, Apr 01, 2024 at 01:06:12PM +0200, Alvaro Herrera wrote:\n> > On 2024-Mar-31, Nathan Bossart wrote:\n> >> + popcnt = _mm512_reduce_add_epi64(accum);\n> >> + return popcnt + pg_popcount_fast(buf, bytes);\n> >\n> > Hmm, doesn't this arrangement cause an extra function call to\n> > pg_popcount_fast to be used here? Given the level of micro-optimization\n> > being used by this code, I would have thought that you'd have tried to\n> > avoid that. (At least, maybe avoid the call if bytes is 0, no?)\n>\n> Yes, it does. I did another benchmark on very small arrays and can see the\n> overhead. This is the time in milliseconds to run pg_popcount() on an\n> array 1 billion times:\n>\n> size (bytes) HEAD AVX512-POPCNT\n> 1 1707.685 3480.424\n> 2 1926.694 4606.182\n> 4 3210.412 5284.506\n> 8 1920.703 3640.968\n> 16 2936.91 4045.586\n> 32 3627.956 5538.418\n> 64 5347.213 3748.212\n>\n> I suspect that anything below 64 bytes will see this regression, as that is\n> the earliest point where there are enough bytes for ZMM registers.\n\nWhat about using the masking capabilities of AVX-512 to handle the\ntail in the same code path? Masked out portions of a load instruction\nwill not generate an exception. To allow byte level granularity\nmasking, -mavx512bw is needed. Based on wikipedia this will only\ndisable this fast path on Knights Mill (Xeon Phi), in all other cases\nVPOPCNTQ implies availability of BW.\n\nAttached is an example of what I mean. I did not have a machine to\ntest it with, but the code generated looks sane. I added the clang\npragma because it insisted on unrolling otherwise and based on how the\ninstruction dependencies look that is probably not too helpful even\nfor large cases (needs to be tested). The configure check and compile\nflags of course need to be amended for BW.\n\nRegards,\nAnts Aasma",
"msg_date": "Tue, 2 Apr 2024 00:11:59 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Apr 02, 2024 at 12:11:59AM +0300, Ants Aasma wrote:\n> What about using the masking capabilities of AVX-512 to handle the\n> tail in the same code path? Masked out portions of a load instruction\n> will not generate an exception. To allow byte level granularity\n> masking, -mavx512bw is needed. Based on wikipedia this will only\n> disable this fast path on Knights Mill (Xeon Phi), in all other cases\n> VPOPCNTQ implies availability of BW.\n\nSounds promising. IMHO we should really be sure that these kinds of loads\nwon't generate segfaults and the like due to the masked-out portions. I\nsearched around a little bit but haven't found anything that seemed\ndefinitive.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 1 Apr 2024 16:31:40 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, 2 Apr 2024 at 00:31, Nathan Bossart <[email protected]> wrote:\n>\n> On Tue, Apr 02, 2024 at 12:11:59AM +0300, Ants Aasma wrote:\n> > What about using the masking capabilities of AVX-512 to handle the\n> > tail in the same code path? Masked out portions of a load instruction\n> > will not generate an exception. To allow byte level granularity\n> > masking, -mavx512bw is needed. Based on wikipedia this will only\n> > disable this fast path on Knights Mill (Xeon Phi), in all other cases\n> > VPOPCNTQ implies availability of BW.\n>\n> Sounds promising. IMHO we should really be sure that these kinds of loads\n> won't generate segfaults and the like due to the masked-out portions. I\n> searched around a little bit but haven't found anything that seemed\n> definitive.\n\nInterestingly the Intel software developer manual is not exactly\ncrystal clear on how memory faults with masks work, but volume 2A\nchapter 2.8 [1] does specify that MOVDQU8 is of exception class E4.nb\nthat supports memory fault suppression on page fault.\n\nRegards,\nAnts Aasma\n\n[1] https://cdrdv2-public.intel.com/819712/253666-sdm-vol-2a.pdf\n\n\n",
"msg_date": "Tue, 2 Apr 2024 01:09:57 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Here is a v19 of the patch set. I moved out the refactoring of the\nfunction pointer selection code to 0001. I think this is a good change\nindependent of $SUBJECT, and I plan to commit this soon. In 0002, I\nchanged the syslogger.c usage of pg_popcount() to use pg_number_of_ones\ninstead. This is standard practice elsewhere where the popcount functions\nare unlikely to win. I'll probably commit this one soon, too, as it's even\nmore trivial than 0001.\n\n0003 is the AVX512 POPCNT patch. Besides refactoring out 0001, there are\nno changes from v18. 0004 is an early proof-of-concept for using AVX512\nfor the visibility map code. The code is missing comments, and I haven't\nperformed any benchmarking yet, but I figured I'd post it because it\ndemonstrates how it's possible to build upon 0003 in other areas.\n\nAFAICT the main open question is the function call overhead in 0003 that\nAlvaro brought up earlier. After 0002 is committed, I believe the only\nin-tree caller of pg_popcount() with very few bytes is bit_count(), and I'm\nnot sure it's worth expending too much energy to make sure there are\nabsolutely no regressions there. However, I'm happy to do so if folks feel\nthat it is necessary, and I'd be grateful for thoughts on how to proceed on\nthis one.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 1 Apr 2024 17:11:17 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Apr 02, 2024 at 01:09:57AM +0300, Ants Aasma wrote:\n> On Tue, 2 Apr 2024 at 00:31, Nathan Bossart <[email protected]> wrote:\n>> On Tue, Apr 02, 2024 at 12:11:59AM +0300, Ants Aasma wrote:\n>> > What about using the masking capabilities of AVX-512 to handle the\n>> > tail in the same code path? Masked out portions of a load instruction\n>> > will not generate an exception. To allow byte level granularity\n>> > masking, -mavx512bw is needed. Based on wikipedia this will only\n>> > disable this fast path on Knights Mill (Xeon Phi), in all other cases\n>> > VPOPCNTQ implies availability of BW.\n>>\n>> Sounds promising. IMHO we should really be sure that these kinds of loads\n>> won't generate segfaults and the like due to the masked-out portions. I\n>> searched around a little bit but haven't found anything that seemed\n>> definitive.\n> \n> Interestingly the Intel software developer manual is not exactly\n> crystal clear on how memory faults with masks work, but volume 2A\n> chapter 2.8 [1] does specify that MOVDQU8 is of exception class E4.nb\n> that supports memory fault suppression on page fault.\n\nPerhaps Paul or Akash could chime in here...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 1 Apr 2024 17:15:12 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
    "msg_contents": "On Mon, Apr 01, 2024 at 05:11:17PM -0500, Nathan Bossart wrote:\n> Here is a v19 of the patch set.  I moved out the refactoring of the\n> function pointer selection code to 0001.  I think this is a good change\n> independent of $SUBJECT, and I plan to commit this soon.  In 0002, I\n> changed the syslogger.c usage of pg_popcount() to use pg_number_of_ones\n> instead.  This is standard practice elsewhere where the popcount functions\n> are unlikely to win.  I'll probably commit this one soon, too, as it's even\n> more trivial than 0001.\n>\n> 0003 is the AVX512 POPCNT patch.  Besides refactoring out 0001, there are\n> no changes from v18.  0004 is an early proof-of-concept for using AVX512\n> for the visibility map code.  The code is missing comments, and I haven't\n> performed any benchmarking yet, but I figured I'd post it because it\n> demonstrates how it's possible to build upon 0003 in other areas.\n\nI've committed the first two patches, and I've attached a rebased version\nof the latter two.\n\n> AFAICT the main open question is the function call overhead in 0003 that\n> Alvaro brought up earlier.  After 0002 is committed, I believe the only\n> in-tree caller of pg_popcount() with very few bytes is bit_count(), and I'm\n> not sure it's worth expending too much energy to make sure there are\n> absolutely no regressions there.  However, I'm happy to do so if folks feel\n> that it is necessary, and I'd be grateful for thoughts on how to proceed on\n> this one.\n\nAnother idea I had is to turn pg_popcount() into a macro that just uses the\npg_number_of_ones array when called for few bytes:\n\n\tstatic inline uint64\n\tpg_popcount_inline(const char *buf, int bytes)\n\t{\n\t\tuint64\t\tpopcnt = 0;\n\n\t\twhile (bytes--)\n\t\t\tpopcnt += pg_number_of_ones[(unsigned char) *buf++];\n\n\t\treturn popcnt;\n\t}\n\n\t#define pg_popcount(buf, bytes) \\\n\t\t((bytes < 64) ? \\\n\t\t pg_popcount_inline(buf, bytes) : \\\n\t\t pg_popcount_optimized(buf, bytes))\n\nBut again, I'm not sure this is really worth it for the current use-cases.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 2 Apr 2024 10:53:01 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On 2024-Apr-02, Nathan Bossart wrote:\n\n> Another idea I had is to turn pg_popcount() into a macro that just uses the\n> pg_number_of_ones array when called for few bytes:\n> \n> \tstatic inline uint64\n> \tpg_popcount_inline(const char *buf, int bytes)\n> \t{\n> \t\tuint64\t\tpopcnt = 0;\n> \n> \t\twhile (bytes--)\n> \t\t\tpopcnt += pg_number_of_ones[(unsigned char) *buf++];\n> \n> \t\treturn popcnt;\n> \t}\n> \n> \t#define pg_popcount(buf, bytes) \\\n> \t\t((bytes < 64) ? \\\n> \t\t pg_popcount_inline(buf, bytes) : \\\n> \t\t pg_popcount_optimized(buf, bytes))\n> \n> But again, I'm not sure this is really worth it for the current use-cases.\n\nEh, that seems simple enough, and then you can forget about that case.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No hay hombre que no aspire a la plenitud, es decir,\nla suma de experiencias de que un hombre es capaz\"\n\n\n",
"msg_date": "Tue, 2 Apr 2024 19:34:08 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2024-Apr-02, Nathan Bossart wrote:\n>> Another idea I had is to turn pg_popcount() into a macro that just uses the\n>> pg_number_of_ones array when called for few bytes:\n>> \n>> \tstatic inline uint64\n>> \tpg_popcount_inline(const char *buf, int bytes)\n>> \t{\n>> \t\tuint64\t\tpopcnt = 0;\n>> \n>> \t\twhile (bytes--)\n>> \t\t\tpopcnt += pg_number_of_ones[(unsigned char) *buf++];\n>> \n>> \t\treturn popcnt;\n>> \t}\n>> \n>> \t#define pg_popcount(buf, bytes) \\\n>> \t\t((bytes < 64) ? \\\n>> \t\t pg_popcount_inline(buf, bytes) : \\\n>> \t\t pg_popcount_optimized(buf, bytes))\n>> \n>> But again, I'm not sure this is really worth it for the current use-cases.\n\n> Eh, that seems simple enough, and then you can forget about that case.\n\nI don't like the double evaluation of the macro argument. Seems like\nyou could get the same results more safely with\n\n\tstatic inline uint64\n\tpg_popcount(const char *buf, int bytes)\n\t{\n\t\tif (bytes < 64)\n\t\t{\n\t\t\tuint64\t\tpopcnt = 0;\n\n\t\t\twhile (bytes--)\n\t\t\t\tpopcnt += pg_number_of_ones[(unsigned char) *buf++];\n\n\t\t\treturn popcnt;\n\t\t}\n\t\treturn pg_popcount_optimized(buf, bytes);\n\t}\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Apr 2024 13:43:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
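[Editor's aside: to make the double-evaluation hazard raised above concrete, here is a toy sketch — `stub_small`, `stub_large`, and `next_len` are illustrative stand-ins, not Postgres functions. The macro form evaluates its `bytes` argument once in the condition and again in the selected branch, so any side effects run twice; the static inline function evaluates it exactly once.]

```c
#include <assert.h>

static int	calls = 0;

/* A side-effecting length source, standing in for any non-trivial argument. */
static int
next_len(void)
{
	calls++;
	return 8;
}

static int
stub_small(const char *buf, int bytes)
{
	(void) buf;
	return bytes;
}

static int
stub_large(const char *buf, int bytes)
{
	(void) buf;
	return bytes;
}

/* Macro form: 'bytes' appears twice in the expansion. */
#define PG_POPCOUNT_MACRO(buf, bytes) \
	((bytes) < 64 ? stub_small(buf, bytes) : stub_large(buf, bytes))

/* Inline-function form: the argument is evaluated exactly once. */
static inline int
pg_popcount_fn(const char *buf, int bytes)
{
	return bytes < 64 ? stub_small(buf, bytes) : stub_large(buf, bytes);
}
```

Calling the macro with `next_len()` as the length bumps the counter twice; the inline function bumps it once, which is the safety property being argued for.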
{
"msg_contents": "On Tue, Apr 02, 2024 at 01:43:48PM -0400, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> On 2024-Apr-02, Nathan Bossart wrote:\n>>> Another idea I had is to turn pg_popcount() into a macro that just uses the\n>>> pg_number_of_ones array when called for few bytes:\n>>> \n>>> \tstatic inline uint64\n>>> \tpg_popcount_inline(const char *buf, int bytes)\n>>> \t{\n>>> \t\tuint64\t\tpopcnt = 0;\n>>> \n>>> \t\twhile (bytes--)\n>>> \t\t\tpopcnt += pg_number_of_ones[(unsigned char) *buf++];\n>>> \n>>> \t\treturn popcnt;\n>>> \t}\n>>> \n>>> \t#define pg_popcount(buf, bytes) \\\n>>> \t\t((bytes < 64) ? \\\n>>> \t\t pg_popcount_inline(buf, bytes) : \\\n>>> \t\t pg_popcount_optimized(buf, bytes))\n>>> \n>>> But again, I'm not sure this is really worth it for the current use-cases.\n> \n>> Eh, that seems simple enough, and then you can forget about that case.\n> \n> I don't like the double evaluation of the macro argument. Seems like\n> you could get the same results more safely with\n> \n> \tstatic inline uint64\n> \tpg_popcount(const char *buf, int bytes)\n> \t{\n> \t\tif (bytes < 64)\n> \t\t{\n> \t\t\tuint64\t\tpopcnt = 0;\n> \n> \t\t\twhile (bytes--)\n> \t\t\t\tpopcnt += pg_number_of_ones[(unsigned char) *buf++];\n> \n> \t\t\treturn popcnt;\n> \t\t}\n> \t\treturn pg_popcount_optimized(buf, bytes);\n> \t}\n\nYeah, I like that better. I'll do some testing to see what the threshold\nreally should be before posting an actual patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 2 Apr 2024 13:40:21 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, 2 Apr 2024 at 00:31, Nathan Bossart <[email protected]> wrote:\n> On Tue, Apr 02, 2024 at 12:11:59AM +0300, Ants Aasma wrote:\n> > What about using the masking capabilities of AVX-512 to handle the\n> > tail in the same code path? Masked out portions of a load instruction\n> > will not generate an exception. To allow byte level granularity\n> > masking, -mavx512bw is needed. Based on wikipedia this will only\n> > disable this fast path on Knights Mill (Xeon Phi), in all other cases\n> > VPOPCNTQ implies availability of BW.\n>\n> Sounds promising. IMHO we should really be sure that these kinds of loads\n> won't generate segfaults and the like due to the masked-out portions. I\n> searched around a little bit but haven't found anything that seemed\n> definitive.\n\nAfter sleeping on the problem, I think we can avoid this question\naltogether while making the code faster by using aligned accesses.\nLoads that straddle cache line boundaries run internally as 2 load\noperations. Gut feel says that there are enough out-of-order resources\navailable to make it not matter in most cases. But even so, not doing\nthe extra work is surely better. Attached is another approach that\ndoes aligned accesses, and thereby avoids going outside bounds.\n\nWould be interesting to see how well that fares in the small use case.\nAnything that fits into one aligned cache line should be constant\nspeed, and there is only one branch, but the mask setup and folding\nthe separate popcounts together should add up to about 20-ish cycles\nof overhead.\n\nRegards,\nAnts Aasma",
"msg_date": "Tue, 2 Apr 2024 23:30:39 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Apr 02, 2024 at 01:40:21PM -0500, Nathan Bossart wrote:\n> On Tue, Apr 02, 2024 at 01:43:48PM -0400, Tom Lane wrote:\n>> I don't like the double evaluation of the macro argument. Seems like\n>> you could get the same results more safely with\n>> \n>> \tstatic inline uint64\n>> \tpg_popcount(const char *buf, int bytes)\n>> \t{\n>> \t\tif (bytes < 64)\n>> \t\t{\n>> \t\t\tuint64\t\tpopcnt = 0;\n>> \n>> \t\t\twhile (bytes--)\n>> \t\t\t\tpopcnt += pg_number_of_ones[(unsigned char) *buf++];\n>> \n>> \t\t\treturn popcnt;\n>> \t\t}\n>> \t\treturn pg_popcount_optimized(buf, bytes);\n>> \t}\n> \n> Yeah, I like that better. I'll do some testing to see what the threshold\n> really should be before posting an actual patch.\n\nMy testing shows that inlining wins with fewer than 8 bytes for the current\n\"fast\" implementation. The \"fast\" implementation wins with fewer than 64\nbytes compared to the AVX-512 implementation. These results are pretty\nintuitive because those are the points at which the optimizations kick in.\n\nIn v21, 0001 is just the above inlining idea, which seems worth doing\nindependent of $SUBJECT. 0002 and 0003 are the AVX-512 patches, which I've\nmodified similarly to 0001, i.e., I've inlined the \"fast\" version in the\nfunction pointer to avoid the function call overhead when there are fewer\nthan 64 bytes. All of this overhead juggling should result in choosing the\noptimal popcount implementation depending on how many bytes there are to\nprocess, roughly speaking.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 2 Apr 2024 17:01:32 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Apr 02, 2024 at 05:01:32PM -0500, Nathan Bossart wrote:\n> In v21, 0001 is just the above inlining idea, which seems worth doing\n> independent of $SUBJECT. 0002 and 0003 are the AVX-512 patches, which I've\n> modified similarly to 0001, i.e., I've inlined the \"fast\" version in the\n> function pointer to avoid the function call overhead when there are fewer\n> than 64 bytes. All of this overhead juggling should result in choosing the\n> optimal popcount implementation depending on how many bytes there are to\n> process, roughly speaking.\n\nSorry for the noise. I noticed a couple of silly mistakes immediately\nafter sending v21.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 2 Apr 2024 17:20:20 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Apr 02, 2024 at 05:20:20PM -0500, Nathan Bossart wrote:\n> Sorry for the noise. I noticed a couple of silly mistakes immediately\n> after sending v21.\n\nSigh... I missed a line while rebasing these patches, which seems to have\ngrossly offended cfbot. Apologies again for the noise.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 2 Apr 2024 21:09:14 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "I committed v23-0001. Here is a rebased version of the remaining patches.\nI intend to test the masking idea from Ants next.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 3 Apr 2024 12:41:27 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Wed, Apr 03, 2024 at 12:41:27PM -0500, Nathan Bossart wrote:\n> I committed v23-0001. Here is a rebased version of the remaining patches.\n> I intend to test the masking idea from Ants next.\n\n0002 was missing a cast that is needed for the 32-bit builds. I've fixed\nthat in v25.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 3 Apr 2024 15:12:58 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
    "msg_contents": "On Tue, Apr 02, 2024 at 11:30:39PM +0300, Ants Aasma wrote:\n> On Tue, 2 Apr 2024 at 00:31, Nathan Bossart <[email protected]> wrote:\n>> On Tue, Apr 02, 2024 at 12:11:59AM +0300, Ants Aasma wrote:\n>> > What about using the masking capabilities of AVX-512 to handle the\n>> > tail in the same code path? Masked out portions of a load instruction\n>> > will not generate an exception. To allow byte level granularity\n>> > masking, -mavx512bw is needed. Based on wikipedia this will only\n>> > disable this fast path on Knights Mill (Xeon Phi), in all other cases\n>> > VPOPCNTQ implies availability of BW.\n>>\n>> Sounds promising.  IMHO we should really be sure that these kinds of loads\n>> won't generate segfaults and the like due to the masked-out portions.  I\n>> searched around a little bit but haven't found anything that seemed\n>> definitive.\n> \n> After sleeping on the problem, I think we can avoid this question\n> altogether while making the code faster by using aligned accesses.\n> Loads that straddle cache line boundaries run internally as 2 load\n> operations. Gut feel says that there are enough out-of-order resources\n> available to make it not matter in most cases. But even so, not doing\n> the extra work is surely better. Attached is another approach that\n> does aligned accesses, and thereby avoids going outside bounds.\n> \n> Would be interesting to see how well that fares in the small use case.\n> Anything that fits into one aligned cache line should be constant\n> speed, and there is only one branch, but the mask setup and folding\n> the separate popcounts together should add up to about 20-ish cycles\n> of overhead.\n\nI tested your patch in comparison to v25 and saw the following:\n\n  bytes    v25    v25+ants\n    2     1108.205     1033.132\n    4     1311.227     1289.373\n    8     1927.954     2360.113\n   16     2281.091     2365.408\n   32     3856.992     2390.688\n   64     3648.72      3242.498\n  128     4108.549     3607.148\n  256     4910.076     4496.852\n\nFor 2 bytes and 4 bytes, the inlining should take effect, so any difference\nthere is likely just noise.  At 8 bytes, we are calling the function\npointer, and there is a small regression with the masking approach.\nHowever, by 16 bytes, the masking approach is on par with v25, and it wins\nfor all larger buffers, although the gains seem to taper off a bit.\n\nIf we can verify this approach won't cause segfaults and can stomach the\nregression between 8 and 16 bytes, I'd happily pivot to this approach so\nthat we can avoid the function call dance that I have in v25.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 3 Apr 2024 17:50:29 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 11:50, Nathan Bossart <[email protected]> wrote:\n> If we can verify this approach won't cause segfaults and can stomach the\n> regression between 8 and 16 bytes, I'd happily pivot to this approach so\n> that we can avoid the function call dance that I have in v25.\n>\n> Thoughts?\n\nIf we're worried about regressions with some narrow range of byte\nvalues, wouldn't it make more sense to compare that to cc4826dd5~1 at\nthe latest rather than to some version that's already probably faster\nthan PG16?\n\nDavid\n\n\n",
"msg_date": "Thu, 4 Apr 2024 16:28:58 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 01:50, Nathan Bossart <[email protected]> wrote:\n> If we can verify this approach won't cause segfaults and can stomach the\n> regression between 8 and 16 bytes, I'd happily pivot to this approach so\n> that we can avoid the function call dance that I have in v25.\n\nThe approach I posted does not rely on masking performing page fault\nsuppression. All loads are 64 byte aligned and always contain at least\none byte of the buffer and therefore are guaranteed to be within a\nvalid page.\n\nI personally don't mind it being slower for the very small cases,\nbecause when performance on those sizes really matters it makes much\nmore sense to shoot for an inlined version instead.\n\nSpeaking of which, what does bumping up the inlined version threshold\nto 16 do with and without AVX-512 available? Linearly extrapolating\nthe 2 and 4 byte numbers it might just come ahead in both cases,\nmaking the choice easy.\n\nRegards,\nAnts Aasma\n\n\n",
"msg_date": "Thu, 4 Apr 2024 16:02:53 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Apr 04, 2024 at 04:28:58PM +1300, David Rowley wrote:\n> On Thu, 4 Apr 2024 at 11:50, Nathan Bossart <[email protected]> wrote:\n>> If we can verify this approach won't cause segfaults and can stomach the\n>> regression between 8 and 16 bytes, I'd happily pivot to this approach so\n>> that we can avoid the function call dance that I have in v25.\n> \n> If we're worried about regressions with some narrow range of byte\n> values, wouldn't it make more sense to compare that to cc4826dd5~1 at\n> the latest rather than to some version that's already probably faster\n> than PG16?\n\nGood point. When compared with REL_16_STABLE, Ants's idea still wins:\n\n bytes v25 v25+ants REL_16_STABLE\n 2 1108.205 1033.132 2039.342\n 4 1311.227 1289.373 3207.217\n 8 1927.954 2360.113 3200.238\n 16 2281.091 2365.408 4457.769\n 32 3856.992 2390.688 6206.689\n 64 3648.72 3242.498 9619.403\n 128 4108.549 3607.148 17912.081\n 256 4910.076 4496.852 33591.385\n\nAs before, with 2 and 4 bytes, HEAD is using the inlined approach, but\nREL_16_STABLE is doing a function call. For 8 bytes, REL_16_STABLE is\ndoing a function call as well as a call to a function pointer. At 16\nbytes, it's doing a function call and two calls to a function pointer.\nWith Ant's approach, both 8 and 16 bytes require a single call to a\nfunction pointer, and of course we are using the AVX-512 implementation for\nboth.\n\nI think this is sufficient to justify switching approaches.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 Apr 2024 12:18:28 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Apr 04, 2024 at 04:02:53PM +0300, Ants Aasma wrote:\n> Speaking of which, what does bumping up the inlined version threshold\n> to 16 do with and without AVX-512 available? Linearly extrapolating\n> the 2 and 4 byte numbers it might just come ahead in both cases,\n> making the choice easy.\n\nIIRC the inlined version starts losing pretty quickly after 8 bytes. As I\nnoted in my previous message, I think we have enough data to switch to your\napproach already, so I think it's a moot point.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 Apr 2024 12:28:40 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Here is an updated patch set. IMHO this is in decent shape and is\napproaching committable.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 4 Apr 2024 23:15:43 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, 5 Apr 2024 at 07:15, Nathan Bossart <[email protected]> wrote:\n> Here is an updated patch set. IMHO this is in decent shape and is\n> approaching committable.\n\nI checked the code generation on various gcc and clang versions. It\nlooks mostly fine starting from versions where avx512 is supported,\ngcc-7.1 and clang-5.\n\nThe main issue I saw was that clang was able to peel off the first\niteration of the loop and then eliminate the mask assignment and\nreplace masked load with a memory operand for vpopcnt. I was not able\nto convince gcc to do that regardless of optimization options.\nGenerated code for the inner loop:\n\nclang:\n<L2>:\n 50: add rdx, 64\n 54: cmp rdx, rdi\n 57: jae <L1>\n 59: vpopcntq zmm1, zmmword ptr [rdx]\n 5f: vpaddq zmm0, zmm1, zmm0\n 65: jmp <L2>\n\ngcc:\n<L1>:\n 38: kmovq k1, rdx\n 3d: vmovdqu8 zmm0 {k1} {z}, zmmword ptr [rax]\n 43: add rax, 64\n 47: mov rdx, -1\n 4e: vpopcntq zmm0, zmm0\n 54: vpaddq zmm0, zmm0, zmm1\n 5a: vmovdqa64 zmm1, zmm0\n 60: cmp rax, rsi\n 63: jb <L1>\n\nI'm not sure how much that matters in practice. Attached is a patch to\ndo this manually giving essentially the same result in gcc. As most\ndistro packages are built using gcc I think it would make sense to\nhave the extra code if it gives a noticeable benefit for large cases.\n\nThe visibility map patch has the same issue, otherwise looks good.\n\nRegards,\nAnts Aasma",
"msg_date": "Fri, 5 Apr 2024 10:33:27 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, Apr 05, 2024 at 10:33:27AM +0300, Ants Aasma wrote:\n> The main issue I saw was that clang was able to peel off the first\n> iteration of the loop and then eliminate the mask assignment and\n> replace masked load with a memory operand for vpopcnt. I was not able\n> to convince gcc to do that regardless of optimization options.\n> Generated code for the inner loop:\n> \n> clang:\n> <L2>:\n> 50: add rdx, 64\n> 54: cmp rdx, rdi\n> 57: jae <L1>\n> 59: vpopcntq zmm1, zmmword ptr [rdx]\n> 5f: vpaddq zmm0, zmm1, zmm0\n> 65: jmp <L2>\n> \n> gcc:\n> <L1>:\n> 38: kmovq k1, rdx\n> 3d: vmovdqu8 zmm0 {k1} {z}, zmmword ptr [rax]\n> 43: add rax, 64\n> 47: mov rdx, -1\n> 4e: vpopcntq zmm0, zmm0\n> 54: vpaddq zmm0, zmm0, zmm1\n> 5a: vmovdqa64 zmm1, zmm0\n> 60: cmp rax, rsi\n> 63: jb <L1>\n> \n> I'm not sure how much that matters in practice. Attached is a patch to\n> do this manually giving essentially the same result in gcc. As most\n> distro packages are built using gcc I think it would make sense to\n> have the extra code if it gives a noticeable benefit for large cases.\n\nYeah, I did see this, but I also wasn't sure if it was worth further\ncomplicating the code. I can test with and without your fix and see if it\nmakes any difference in the benchmarks.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 Apr 2024 07:58:44 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Fri, Apr 05, 2024 at 07:58:44AM -0500, Nathan Bossart wrote:\n> On Fri, Apr 05, 2024 at 10:33:27AM +0300, Ants Aasma wrote:\n>> The main issue I saw was that clang was able to peel off the first\n>> iteration of the loop and then eliminate the mask assignment and\n>> replace masked load with a memory operand for vpopcnt. I was not able\n>> to convince gcc to do that regardless of optimization options.\n>> Generated code for the inner loop:\n>> \n>> clang:\n>> <L2>:\n>> 50: add rdx, 64\n>> 54: cmp rdx, rdi\n>> 57: jae <L1>\n>> 59: vpopcntq zmm1, zmmword ptr [rdx]\n>> 5f: vpaddq zmm0, zmm1, zmm0\n>> 65: jmp <L2>\n>> \n>> gcc:\n>> <L1>:\n>> 38: kmovq k1, rdx\n>> 3d: vmovdqu8 zmm0 {k1} {z}, zmmword ptr [rax]\n>> 43: add rax, 64\n>> 47: mov rdx, -1\n>> 4e: vpopcntq zmm0, zmm0\n>> 54: vpaddq zmm0, zmm0, zmm1\n>> 5a: vmovdqa64 zmm1, zmm0\n>> 60: cmp rax, rsi\n>> 63: jb <L1>\n>> \n>> I'm not sure how much that matters in practice. Attached is a patch to\n>> do this manually giving essentially the same result in gcc. As most\n>> distro packages are built using gcc I think it would make sense to\n>> have the extra code if it gives a noticeable benefit for large cases.\n> \n> Yeah, I did see this, but I also wasn't sure if it was worth further\n> complicating the code. I can test with and without your fix and see if it\n> makes any difference in the benchmarks.\n\nThis seems to provide a small performance boost, so I've incorporated it\ninto v27.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 5 Apr 2024 10:38:11 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Sat, 6 Apr 2024 at 04:38, Nathan Bossart <[email protected]> wrote:\n> This seems to provide a small performance boost, so I've incorporated it\n> into v27.\n\nWon't Valgrind complain about this?\n\n+pg_popcount_avx512(const char *buf, int bytes)\n\n+ buf = (const char *) TYPEALIGN_DOWN(sizeof(__m512i), buf);\n\n+ val = _mm512_maskz_loadu_epi8(mask, (const __m512i *) buf);\n\nDavid\n\n\n",
"msg_date": "Sat, 6 Apr 2024 12:08:14 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Sat, Apr 06, 2024 at 12:08:14PM +1300, David Rowley wrote:\n> Won't Valgrind complain about this?\n> \n> +pg_popcount_avx512(const char *buf, int bytes)\n> \n> + buf = (const char *) TYPEALIGN_DOWN(sizeof(__m512i), buf);\n> \n> + val = _mm512_maskz_loadu_epi8(mask, (const __m512i *) buf);\n\nI haven't been able to generate any complaints, at least with some simple\ntests. But I see your point. If this did cause such complaints, ISTM we'd\njust want to add it to the suppression file. Otherwise, I think we'd have\nto go back to the non-maskz approach (which I really wanted to avoid\nbecause of the weird function overhead juggling) or find another way to do\na partial load into an __m512i.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 Apr 2024 20:17:04 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Sat, 6 Apr 2024 at 14:17, Nathan Bossart <[email protected]> wrote:\n>\n> On Sat, Apr 06, 2024 at 12:08:14PM +1300, David Rowley wrote:\n> > Won't Valgrind complain about this?\n> >\n> > +pg_popcount_avx512(const char *buf, int bytes)\n> >\n> > + buf = (const char *) TYPEALIGN_DOWN(sizeof(__m512i), buf);\n> >\n> > + val = _mm512_maskz_loadu_epi8(mask, (const __m512i *) buf);\n>\n> I haven't been able to generate any complaints, at least with some simple\n> tests. But I see your point. If this did cause such complaints, ISTM we'd\n> just want to add it to the suppression file. Otherwise, I think we'd have\n> to go back to the non-maskz approach (which I really wanted to avoid\n> because of the weird function overhead juggling) or find another way to do\n> a partial load into an __m512i.\n\n[1] seems to think it's ok. If this is true then the following\nshouldn't segfault:\n\nThe following seems to run without any issue and if I change the mask\nto 1 it crashes, as you'd expect.\n\n#include <immintrin.h>\n#include <stdio.h>\nint main(void)\n{\n __m512i val;\n val = _mm512_maskz_loadu_epi8((__mmask64) 0, NULL);\n printf(\"%llu\\n\", _mm512_reduce_add_epi64(val));\n return 0;\n}\n\ngcc avx512.c -o avx512 -O0 -mavx512f -march=native\n\nDavid\n\n[1] https://stackoverflow.com/questions/54497141/when-using-a-mask-register-with-avx-512-load-and-stores-is-a-fault-raised-for-i\n\n\n",
"msg_date": "Sat, 6 Apr 2024 14:51:39 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Sat, Apr 06, 2024 at 02:51:39PM +1300, David Rowley wrote:\n> On Sat, 6 Apr 2024 at 14:17, Nathan Bossart <[email protected]> wrote:\n>> On Sat, Apr 06, 2024 at 12:08:14PM +1300, David Rowley wrote:\n>> > Won't Valgrind complain about this?\n>> >\n>> > +pg_popcount_avx512(const char *buf, int bytes)\n>> >\n>> > + buf = (const char *) TYPEALIGN_DOWN(sizeof(__m512i), buf);\n>> >\n>> > + val = _mm512_maskz_loadu_epi8(mask, (const __m512i *) buf);\n>>\n>> I haven't been able to generate any complaints, at least with some simple\n>> tests. But I see your point. If this did cause such complaints, ISTM we'd\n>> just want to add it to the suppression file. Otherwise, I think we'd have\n>> to go back to the non-maskz approach (which I really wanted to avoid\n>> because of the weird function overhead juggling) or find another way to do\n>> a partial load into an __m512i.\n> \n> [1] seems to think it's ok. If this is true then the following\n> shouldn't segfault:\n> \n> The following seems to run without any issue and if I change the mask\n> to 1 it crashes, as you'd expect.\n\nCool.\n\nHere is what I have staged for commit, which I intend to do shortly. At\nsome point, I'd like to revisit converting TRY_POPCNT_FAST to a\nconfigure-time check and maybe even moving the \"fast\" and \"slow\"\nimplementations to their own files, but since that's mostly for code\nneatness and we are rapidly approaching the v17 deadline, I'm content to\nleave that for v18.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 6 Apr 2024 14:41:01 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Sat, Apr 06, 2024 at 02:41:01PM -0500, Nathan Bossart wrote:\n> Here is what I have staged for commit, which I intend to do shortly.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 6 Apr 2024 23:05:31 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> Here is what I have staged for commit, which I intend to do shortly.\n\nToday's Coverity run produced this warning, which seemingly was\ntriggered by one of these commits, but I can't make much sense\nof it:\n\n*** CID 1596255: Uninitialized variables (UNINIT)\n/usr/lib/gcc/x86_64-linux-gnu/10/include/avxintrin.h: 1218 in _mm256_undefined_si256()\n1214 extern __inline __m256i __attribute__((__gnu_inline__, __always_inline__, __artificial__))\n1215 _mm256_undefined_si256 (void)\n1216 {\n1217 __m256i __Y = __Y;\n>>> CID 1596255: Uninitialized variables (UNINIT)\n>>> Using uninitialized value \"__Y\".\n1218 return __Y;\n1219 }\n\nI see the same code in my local copy of avxintrin.h,\nand I quite agree that it looks like either an undefined\nvalue or something that properly ought to be an error.\nIf we are calling this, why (and from where)?\n\nAnyway, we can certainly just dismiss this warning if it\ndoesn't correspond to any real problem in our code.\nBut I thought I'd raise the question.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2024 20:42:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Sun, Apr 07, 2024 at 08:42:12PM -0400, Tom Lane wrote:\n> Today's Coverity run produced this warning, which seemingly was\n> triggered by one of these commits, but I can't make much sense\n> of it:\n> \n> *** CID 1596255: Uninitialized variables (UNINIT)\n> /usr/lib/gcc/x86_64-linux-gnu/10/include/avxintrin.h: 1218 in _mm256_undefined_si256()\n> 1214 extern __inline __m256i __attribute__((__gnu_inline__, __always_inline__, __artificial__))\n> 1215 _mm256_undefined_si256 (void)\n> 1216 {\n> 1217 __m256i __Y = __Y;\n>>>> CID 1596255: Uninitialized variables (UNINIT)\n>>>> Using uninitialized value \"__Y\".\n> 1218 return __Y;\n> 1219 }\n> \n> I see the same code in my local copy of avxintrin.h,\n> and I quite agree that it looks like either an undefined\n> value or something that properly ought to be an error.\n> If we are calling this, why (and from where)?\n\nNothing in these commits uses this, or even uses the 256-bit registers.\navxintrin.h is included by immintrin.h, which is probably why this is\nshowing up. I believe you're supposed to use immintrin.h for the\nintrinsics used in these commits, so I don't immediately see a great way to\navoid this. The Intel documentation for _mm256_undefined_si256() [0]\nindicates that it is intended to return \"undefined elements,\" so it seems\nlike the use of an uninitialized variable might be intentional.\n\n> Anyway, we can certainly just dismiss this warning if it\n> doesn't correspond to any real problem in our code.\n> But I thought I'd raise the question.\n\nThat's probably the right thing to do, unless there's some action we can\ntake to suppress this warning.\n\n[0] https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm256_undefined_si256&ig_expand=6943\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 7 Apr 2024 20:23:32 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Sun, Apr 07, 2024 at 08:23:32PM -0500, Nathan Bossart wrote:\n> The Intel documentation for _mm256_undefined_si256() [0]\n> indicates that it is intended to return \"undefined elements,\" so it seems\n> like the use of an uninitialized variable might be intentional.\n\nSee also https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=72af61b122.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 7 Apr 2024 20:30:02 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Sun, Apr 07, 2024 at 08:23:32PM -0500, Nathan Bossart wrote:\n>> The Intel documentation for _mm256_undefined_si256() [0]\n>> indicates that it is intended to return \"undefined elements,\" so it seems\n>> like the use of an uninitialized variable might be intentional.\n\n> See also https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=72af61b122.\n\nAh, interesting. That hasn't propagated to stable distros yet,\nevidently (and even when it does, I wonder how soon Coverity\nwill understand it). Anyway, that does establish that it's\ngcc's problem not ours. Thanks for digging!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2024 21:35:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "It was brought to my attention [0] that we probably should be checking for\nthe OSXSAVE bit instead of the XSAVE bit when determining whether there's\nsupport for the XGETBV instruction. IIUC that should indicate that both\nthe OS and the processor have XGETBV support (not just the processor).\nI've attached a one-line patch to fix this.\n\n[0] https://github.com/pgvector/pgvector/pull/519#issuecomment-2062804463\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 17 Apr 2024 21:44:59 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> It was brought to my attention [0] that we probably should be checking for the OSXSAVE bit instead of the XSAVE bit when determining whether there's support for the XGETBV instruction. IIUC that should indicate that both the OS and the processor have XGETBV support (not just the processor).\n> I've attached a one-line patch to fix this.\n\n> [0] https://github.com/pgvector/pgvector/pull/519#issuecomment-2062804463\n\nGood find. I confirmed after speaking with an intel expert, and from the intel AVX-512 manual [0] section 14.3, which recommends to check bit27. From the manual:\n\n\"Prior to using Intel AVX, the application must identify that the operating system supports the XGETBV instruction,\nthe YMM register state, in addition to processor's support for YMM state management using XSAVE/XRSTOR and\nAVX instructions. The following simplified sequence accomplishes both and is strongly recommended.\n1) Detect CPUID.1:ECX.OSXSAVE[bit 27] = 1 (XGETBV enabled for application use1).\n2) Issue XGETBV and verify that XCR0[2:1] = '11b' (XMM state and YMM state are enabled by OS).\n3) detect CPUID.1:ECX.AVX[bit 28] = 1 (AVX instructions supported).\n(Step 3 can be done in any order relative to 1 and 2.)\"\n\nIt also seems that step 1 and step 2 need to be done prior to the CPUID OSXSAVE check in the popcount code.\n\n[0]: https://cdrdv2.intel.com/v1/dl/getContent/671200\n\n- Akash Shankaran\n\n\n\n",
"msg_date": "Thu, 18 Apr 2024 18:12:22 +0000",
"msg_from": "\"Shankaran, Akash\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 06:12:22PM +0000, Shankaran, Akash wrote:\n> Good find. I confirmed after speaking with an intel expert, and from the intel AVX-512 manual [0] section 14.3, which recommends to check bit27. From the manual:\n> \n> \"Prior to using Intel AVX, the application must identify that the operating system supports the XGETBV instruction,\n> the YMM register state, in addition to processor's support for YMM state management using XSAVE/XRSTOR and\n> AVX instructions. The following simplified sequence accomplishes both and is strongly recommended.\n> 1) Detect CPUID.1:ECX.OSXSAVE[bit 27] = 1 (XGETBV enabled for application use1).\n> 2) Issue XGETBV and verify that XCR0[2:1] = '11b' (XMM state and YMM state are enabled by OS).\n> 3) detect CPUID.1:ECX.AVX[bit 28] = 1 (AVX instructions supported).\n> (Step 3 can be done in any order relative to 1 and 2.)\"\n\nThanks for confirming. IIUC my patch should be sufficient, then.\n\n> It also seems that step 1 and step 2 need to be done prior to the CPUID OSXSAVE check in the popcount code.\n\nThis seems to contradict the note about doing step 3 at any point, and\ngiven step 1 is the OSXSAVE check, I'm not following what this means,\nanyway.\n\nI'm also wondering if we need to check that (_xgetbv(0) & 0xe6) == 0xe6\ninstead of just (_xgetbv(0) & 0xe0) != 0, as the status of the lower half\nof some of the ZMM registers is stored in the SSE and AVX state [0]. I\ndon't know how likely it is that 0xe0 would succeed but 0xe6 wouldn't, but\nwe might as well make it correct.\n\n[0] https://en.wikipedia.org/wiki/Control_register#cite_ref-23\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 18 Apr 2024 14:53:46 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 08:24:03PM +0000, Devulapalli, Raghuveer wrote:\n>> This seems to contradict the note about doing step 3 at any point, and\n>> given step 1 is the OSXSAVE check, I'm not following what this means,\n>> anyway.\n> \n> It is recommended that you run the xgetbv code before you check for cpu\n> features avx512-popcnt and avx512-bw. The way it is written now is the\n> opposite order. I would also recommend splitting the cpuid feature check\n> for avx512popcnt/avx512bw and xgetbv section into separate functions to\n> make them modular. Something like:\n> \n> static inline\n> int check_os_avx512_support(void)\n> {\n> // (1) run cpuid leaf 1 to check for xgetbv instruction support:\n> unsigned int exx[4] = {0, 0, 0, 0};\n> __get_cpuid(1, &exx[0], &exx[1], &exx[2], &exx[3]);\n> if ((exx[2] & (1 << 27)) == 0) /* xsave */\n> return false;\n> \n> /* Does XGETBV say the ZMM/YMM/XMM registers are enabled? */\n> return (_xgetbv(0) & 0xe0) == 0xe0;\n> }\n> \n>> I'm also wondering if we need to check that (_xgetbv(0) & 0xe6) == 0xe6\n>> instead of just (_xgetbv(0) & 0xe0) != 0, as the status of the lower\n>> half of some of the ZMM registers is stored in the SSE and AVX state\n>> [0]. I don't know how likely it is that 0xe0 would succeed but 0xe6\n>> wouldn't, but we might as well make it correct.\n> \n> This is correct. It needs to check all the 3 bits (XMM/YMM and ZMM). The\n> way it is written is now is in-correct. \n\nThanks for the feedback. I've attached an updated patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 18 Apr 2024 16:01:58 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> Thanks for the feedback. I've attached an updated patch.\n\n(1) Shouldn't it be: return (_xgetbv(0) & 0xe6) == 0xe6; ? Otherwise zmm_regs_available() will return false. \n(2) Nitpick: avx512_popcnt_available and avx512_bw_available() run the same cpuid leaf. You could combine them into one to avoid running cpuid twice. My apologies, I should have mentioned this before. \n\n\n",
"msg_date": "Thu, 18 Apr 2024 21:29:55 +0000",
"msg_from": "\"Devulapalli, Raghuveer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 09:29:55PM +0000, Devulapalli, Raghuveer wrote:\n> (1) Shouldn't it be: return (_xgetbv(0) & 0xe6) == 0xe6; ? Otherwise\n> zmm_regs_available() will return false..\n\nYes, that's a mistake. I fixed that in v3.\n\n> (2) Nitpick: avx512_popcnt_available and avx512_bw_available() run the\n> same cpuid leaf. You could combine them into one to avoid running cpuid\n> twice. My apologies, I should have mentioned this before..\n\nGood call. The byte-and-word instructions were a late addition to the\npatch, so I missed this originally.\n\nOn that note, is it necessary to also check for avx512f? At the moment, we\nare assuming that's supported if the other AVX-512 instructions are\navailable.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 18 Apr 2024 16:59:02 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "> On that note, is it necessary to also check for avx512f? At the moment, we are assuming that's supported if the other AVX-512 instructions are available.\n\nNo, it's not needed. There are no CPU's with avx512bw/avx512popcnt without avx512f. Unfortunately though, avx512popcnt does not mean avx512bw (I think the deprecated Xeon Phi processors falls in this category) which is why we need both. \n\n\n",
"msg_date": "Thu, 18 Apr 2024 22:11:08 +0000",
"msg_from": "\"Devulapalli, Raghuveer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 10:11:08PM +0000, Devulapalli, Raghuveer wrote:\n>> On that note, is it necessary to also check for avx512f? At the moment,\n>> we are assuming that's supported if the other AVX-512 instructions are\n>> available.\n> \n> No, it's not needed. There are no CPU's with avx512bw/avx512popcnt\n> without avx512f. Unfortunately though, avx512popcnt does not mean\n> avx512bw (I think the deprecated Xeon Phi processors falls in this\n> category) which is why we need both.\n\nMakes sense, thanks. I'm planning to commit this fix sometime early next\nweek.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 18 Apr 2024 17:13:58 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 05:13:58PM -0500, Nathan Bossart wrote:\n> Makes sense, thanks. I'm planning to commit this fix sometime early next\n> week.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 23 Apr 2024 11:02:07 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-23 11:02:07 -0500, Nathan Bossart wrote:\n> On Thu, Apr 18, 2024 at 05:13:58PM -0500, Nathan Bossart wrote:\n> > Makes sense, thanks. I'm planning to commit this fix sometime early next\n> > week.\n>\n> Committed.\n\nI've noticed that the configure probes for this are quite slow - pretty much\nthe slowest step in a meson setup (and autoconf is similar). While looking\ninto this, I also noticed that afaict the tests don't do the right thing for\nmsvc.\n\n...\n[6.825] Checking if \"__sync_val_compare_and_swap(int64)\" : links: YES\n[6.883] Checking if \" __atomic_compare_exchange_n(int32)\" : links: YES\n[6.940] Checking if \" __atomic_compare_exchange_n(int64)\" : links: YES\n[7.481] Checking if \"XSAVE intrinsics without -mxsave\" : links: NO\n[8.097] Checking if \"XSAVE intrinsics with -mxsave\" : links: YES\n[8.641] Checking if \"AVX-512 popcount without -mavx512vpopcntdq -mavx512bw\" : links: NO\n[9.183] Checking if \"AVX-512 popcount with -mavx512vpopcntdq -mavx512bw\" : links: YES\n[9.242] Checking if \"_mm_crc32_u8 and _mm_crc32_u32 without -msse4.2\" : links: NO\n[9.333] Checking if \"_mm_crc32_u8 and _mm_crc32_u32 with -msse4.2\" : links: YES\n[9.367] Checking if \"x86_64: popcntq instruction\" compiles: YES\n[9.382] Has header \"atomic.h\" : NO\n...\n\n(the times here are a bit exaggerated, enabling them in meson also turns on\npython profiling, which makes everything a bit slower)\n\n\nLooks like this is largely the fault of including immintrin.h:\n\necho -e '#include <immintrin.h>\\nint main(){return _xgetbv(0) & 0xe0;}'|time gcc -mxsave -xc - -o /dev/null\n0.45user 0.04system 0:00.50elapsed 99%CPU (0avgtext+0avgdata 94184maxresident)k\n\necho -e '#include <immintrin.h>\\n'|time gcc -c -mxsave -xc - -o /dev/null\n0.43user 0.03system 0:00.46elapsed 99%CPU (0avgtext+0avgdata 86004maxresident)k\n\n\nDo we really need to link the generated programs? If we instead were able to\njust rely on the preprocessor, it'd be vastly faster.\n\nThe __sync* and __atomic* checks actually need to link, as the compiler ends\nup generating calls to unimplemented functions if the compilation target\ndoesn't support some operation natively - but I don't think that's true for\nthe xsave/avx512 stuff\n\nAfaict we could just check for predefined preprocessor macros:\n\necho|time gcc -c -mxsave -mavx512vpopcntdq -mavx512bw -xc -dM -E - -o -|grep -E '__XSAVE__|__AVX512BW__|__AVX512VPOPCNTDQ__'\n#define __AVX512BW__ 1\n#define __AVX512VPOPCNTDQ__ 1\n#define __XSAVE__ 1\n0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 13292maxresident)k\n\necho|time gcc -c -march=nehalem -xc -dM -E - -o -|grep -E '__XSAVE__|__AVX512BW__|__AVX512VPOPCNTDQ__'\n0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 10972maxresident)k\n\n\n\nNow, a reasonable counter-argument would be that only some of these macros are\ndefined for msvc ([1]). However, as it turns out, the test is broken\ntoday, as msvc doesn't error out when using an intrinsic that's not\n\"available\" by the target architecture, it seems to assume that the caller did\na cpuid check ahead of time.\n\n\nCheck out [2], it shows the various predefined macros for gcc, clang and msvc.\n\n\nISTM that the msvc checks for xsave/avx512 being broken should be an open\nitem?\n\nGreetings,\n\nAndres\n\n\n[1] https://learn.microsoft.com/en-us/cpp/preprocessor/predefined-macros?view=msvc-170\n[2] https://godbolt.org/z/c8Kj8r3PK\n\n\n",
"msg_date": "Tue, 30 Jul 2024 14:07:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
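The preprocessor-only probe Andres describes above can be sketched as a small shell helper. The `probe_macros` function name and the surrounding harness are assumptions for illustration; the compiler flags and macro names are taken directly from the message:

```shell
# Minimal sketch of the preprocessor-only probe: rather than compiling
# and linking a test program, ask the compiler which predefined macros a
# given set of flags turns on.  No object file is produced, so the probe
# avoids the expensive immintrin.h compile entirely.
probe_macros() {
    echo | gcc -c "$@" -xc -dM -E - -o - 2>/dev/null |
        grep -E '__XSAVE__|__AVX512BW__|__AVX512VPOPCNTDQ__' |
        sort -u
}

# With the AVX-512 flags the macros should appear; with a plain
# baseline target (or a toolchain lacking the flags) nothing prints.
probe_macros -mxsave -mavx512vpopcntdq -mavx512bw || true
probe_macros -march=x86-64 || true
```

On a toolchain that rejects the flags the helper simply prints nothing, which is exactly the property that makes it cheap and safe to use as a configure-time probe.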
{
"msg_contents": "On Tue, Jul 30, 2024 at 02:07:01PM -0700, Andres Freund wrote:\n> I've noticed that the configure probes for this are quite slow - pretty much\n> the slowest step in a meson setup (and autoconf is similar). While looking\n> into this, I also noticed that afaict the tests don't do the right thing for\n> msvc.\n> \n> ...\n> [6.825] Checking if \"__sync_val_compare_and_swap(int64)\" : links: YES\n> [6.883] Checking if \" __atomic_compare_exchange_n(int32)\" : links: YES\n> [6.940] Checking if \" __atomic_compare_exchange_n(int64)\" : links: YES\n> [7.481] Checking if \"XSAVE intrinsics without -mxsave\" : links: NO\n> [8.097] Checking if \"XSAVE intrinsics with -mxsave\" : links: YES\n> [8.641] Checking if \"AVX-512 popcount without -mavx512vpopcntdq -mavx512bw\" : links: NO\n> [9.183] Checking if \"AVX-512 popcount with -mavx512vpopcntdq -mavx512bw\" : links: YES\n> [9.242] Checking if \"_mm_crc32_u8 and _mm_crc32_u32 without -msse4.2\" : links: NO\n> [9.333] Checking if \"_mm_crc32_u8 and _mm_crc32_u32 with -msse4.2\" : links: YES\n> [9.367] Checking if \"x86_64: popcntq instruction\" compiles: YES\n> [9.382] Has header \"atomic.h\" : NO\n> ...\n> \n> (the times here are a bit exaggerated, enabling them in meson also turns on\n> python profiling, which makes everything a bit slower)\n> \n> \n> Looks like this is largely the fault of including immintrin.h:\n> \n> echo -e '#include <immintrin.h>\\nint main(){return _xgetbv(0) & 0xe0;}'|time gcc -mxsave -xc - -o /dev/null\n> 0.45user 0.04system 0:00.50elapsed 99%CPU (0avgtext+0avgdata 94184maxresident)k\n> \n> echo -e '#include <immintrin.h>\\n'|time gcc -c -mxsave -xc - -o /dev/null\n> 0.43user 0.03system 0:00.46elapsed 99%CPU (0avgtext+0avgdata 86004maxresident)k\n\nInteresting. Thanks for bringing this to my attention.\n\n> Do we really need to link the generated programs? If we instead were able to\n> just rely on the preprocessor, it'd be vastly faster.\n> \n> The __sync* and __atomic* checks actually need to link, as the compiler ends\n> up generating calls to unimplemented functions if the compilation target\n> doesn't support some operation natively - but I don't think that's true for\n> the xsave/avx512 stuff\n> \n> Afaict we could just check for predefined preprocessor macros:\n> \n> echo|time gcc -c -mxsave -mavx512vpopcntdq -mavx512bw -xc -dM -E - -o -|grep -E '__XSAVE__|__AVX512BW__|__AVX512VPOPCNTDQ__'\n> #define __AVX512BW__ 1\n> #define __AVX512VPOPCNTDQ__ 1\n> #define __XSAVE__ 1\n> 0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 13292maxresident)k\n> \n> echo|time gcc -c -march=nehalem -xc -dM -E - -o -|grep -E '__XSAVE__|__AVX512BW__|__AVX512VPOPCNTDQ__'\n> 0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 10972maxresident)k\n\nSeems promising. I can't think of a reason that wouldn't work.\n\n> Now, a reasonable counter-argument would be that only some of these macros are\n> defined for msvc ([1]). However, as it turns out, the test is broken\n> today, as msvc doesn't error out when using an intrinsic that's not\n> \"available\" by the target architecture, it seems to assume that the caller did\n> a cpuid check ahead of time.\n> \n> \n> Check out [2], it shows the various predefined macros for gcc, clang and msvc.\n> \n> \n> ISTM that the msvc checks for xsave/avx512 being broken should be an open\n> item?\n\nI'm not following this one. At the moment, we always do a runtime check\nfor the AVX-512 stuff, so in the worst case we'd check CPUID at startup and\nset the function pointers appropriately, right? We could, of course, still\nfix it, though.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 30 Jul 2024 16:32:07 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 04:32:07PM -0500, Nathan Bossart wrote:\n> On Tue, Jul 30, 2024 at 02:07:01PM -0700, Andres Freund wrote:\n>> Afaict we could just check for predefined preprocessor macros:\n>> \n>> echo|time gcc -c -mxsave -mavx512vpopcntdq -mavx512bw -xc -dM -E - -o -|grep -E '__XSAVE__|__AVX512BW__|__AVX512VPOPCNTDQ__'\n>> #define __AVX512BW__ 1\n>> #define __AVX512VPOPCNTDQ__ 1\n>> #define __XSAVE__ 1\n>> 0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 13292maxresident)k\n>> \n>> echo|time gcc -c -march=nehalem -xc -dM -E - -o -|grep -E '__XSAVE__|__AVX512BW__|__AVX512VPOPCNTDQ__'\n>> 0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 10972maxresident)k\n> \n> Seems promising. I can't think of a reason that wouldn't work.\n> \n>> Now, a reasonable counter-argument would be that only some of these macros are\n>> defined for msvc ([1]). However, as it turns out, the test is broken\n>> today, as msvc doesn't error out when using an intrinsic that's not\n>> \"available\" by the target architecture, it seems to assume that the caller did\n>> a cpuid check ahead of time.\n\nHm. Upon further inspection, I see that MSVC appears to be missing\n__XSAVE__ and __AVX512VPOPCNTDQ__, which is unfortunate. Still, I think\nthe worst case scenario is that the CPUID check fails and we don't use\nAVX-512 instructions. AFAICT we aren't adding new function pointers in any\nbuilds that don't already have them, just compiling some extra unused code.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 30 Jul 2024 16:54:54 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-30 16:32:07 -0500, Nathan Bossart wrote:\n> On Tue, Jul 30, 2024 at 02:07:01PM -0700, Andres Freund wrote:\n> > Now, a reasonable counter-argument would be that only some of these macros are\n> > defined for msvc ([1]). However, as it turns out, the test is broken\n> > today, as msvc doesn't error out when using an intrinsic that's not\n> > \"available\" by the target architecture, it seems to assume that the caller did\n> > a cpuid check ahead of time.\n> > \n> > \n> > Check out [2], it shows the various predefined macros for gcc, clang and msvc.\n> > \n> > \n> > ISTM that the msvc checks for xsave/avx512 being broken should be an open\n> > item?\n> \n> I'm not following this one. At the moment, we always do a runtime check\n> for the AVX-512 stuff, so in the worst case we'd check CPUID at startup and\n> set the function pointers appropriately, right? We could, of course, still\n> fix it, though.\n\nAh, I somehow thought we'd avoid the runtime check in case we determine at\ncompile time we don't need any extra flags to enable the AVX512 stuff (similar\nto how we deal with crc32). But it looks like that's not the case - which\nseems pretty odd to me:\n\nThis turns something that can be a single instruction into an indirect\nfunction call, even if we could know that it's guaranteed to be available for\nthe compilation target, due to -march=....\n\nIt's one thing for the avx512 path to have that overhead, but it's\nparticularly absurd for pg_popcount32/pg_popcount64, where\n\na) The function call overhead is a larger proportion of the cost.\nb) the instruction is almost universally available, including in the\n architecture baseline x86-64-v2, which several distros are using as the\n x86-64 baseline.\n\n\nWhy are we actually checking for xsave? We're not using xsave itself and I\ncouldn't find a comment in 792752af4eb5 explaining what we're using it as a\nproxy for? Is that just to know if _xgetbv() exists? Is it actually possible\nthat xsave isn't available when avx512 is?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Jul 2024 17:49:59 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 12:50 PM Andres Freund <[email protected]> wrote:\n> It's one thing for the avx512 path to have that overhead, but it's\n> particularly absurd for pg_popcount32/pg_popcount64, where\n>\n> a) The function call overhead is a larger proportion of the cost.\n> b) the instruction is almost universally available, including in the\n> architecture baseline x86-64-v2, which several distros are using as the\n> x86-64 baseline.\n\nFWIW, another recent thread about that:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGKS64zJezV9y9mPcB-J0i%2BfLGiv3FAdwSH_3SCaVdrjyQ%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 13:05:18 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 05:49:59PM -0700, Andres Freund wrote:\n> Ah, I somehow thought we'd avoid the runtime check in case we determine at\n> compile time we don't need any extra flags to enable the AVX512 stuff (similar\n> to how we deal with crc32). But it looks like that's not the case - which\n> seems pretty odd to me:\n> \n> This turns something that can be a single instruction into an indirect\n> function call, even if we could know that it's guaranteed to be available for\n> the compilation target, due to -march=....\n> \n> It's one thing for the avx512 path to have that overhead, but it's\n> particularly absurd for pg_popcount32/pg_popcount64, where\n> \n> a) The function call overhead is a larger proportion of the cost.\n> b) the instruction is almost universally available, including in the\n> architecture baseline x86-64-v2, which several distros are using as the\n> x86-64 baseline.\n\nYeah, pg_popcount32/64 have been doing this since v12 (02a6a54). Until v17\n(cc4826d), pg_popcount() repeatedly calls these function pointers, too. I\nthink it'd be awesome if we could start requiring some of these \"almost\nuniversally available\" instructions, but AFAICT that brings its own\ncomplexity [0].\n\n> Why are we actually checking for xsave? We're not using xsave itself and I\n> couldn't find a comment in 792752af4eb5 explaining what we're using it as a\n> proxy for? Is that just to know if _xgetbv() exists? Is it actually possible\n> that xsave isn't available when avx512 is?\n\nYes, it's to verify we have XGETBV, which IIUC requires support from both\nthe processor and the OS (see 598e011 and upthread discussion). AFAIK the\nway we are detecting AVX-512 support is quite literally by-the-book unless\nI've gotten something wrong.\n\n[0] https://postgr.es/m/ZmpG2ZzT30Q75BZO%40nathan\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 30 Jul 2024 20:20:34 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-30 20:20:34 -0500, Nathan Bossart wrote:\n> On Tue, Jul 30, 2024 at 05:49:59PM -0700, Andres Freund wrote:\n> > Ah, I somehow thought we'd avoid the runtime check in case we determine at\n> > compile time we don't need any extra flags to enable the AVX512 stuff (similar\n> > to how we deal with crc32). But it looks like that's not the case - which\n> > seems pretty odd to me:\n> > \n> > This turns something that can be a single instruction into an indirect\n> > function call, even if we could know that it's guaranteed to be available for\n> > the compilation target, due to -march=....\n> > \n> > It's one thing for the avx512 path to have that overhead, but it's\n> > particularly absurd for pg_popcount32/pg_popcount64, where\n> > \n> > a) The function call overhead is a larger proportion of the cost.\n> > b) the instruction is almost universally available, including in the\n> > architecture baseline x86-64-v2, which several distros are using as the\n> > x86-64 baseline.\n> \n> Yeah, pg_popcount32/64 have been doing this since v12 (02a6a54). Until v17\n> (cc4826d), pg_popcount() repeatedly calls these function pointers, too. I\n> think it'd be awesome if we could start requiring some of these \"almost\n> universally available\" instructions, but AFAICT that brings its own\n> complexity [0].\n\nI'll respond there...\n\n\n> > Why are we actually checking for xsave? We're not using xsave itself and I\n> > couldn't find a comment in 792752af4eb5 explaining what we're using it as a\n> > proxy for? Is that just to know if _xgetbv() exists? Is it actually possible\n> > that xsave isn't available when avx512 is?\n> \n> Yes, it's to verify we have XGETBV, which IIUC requires support from both\n> the processor and the OS (see 598e011 and upthread discussion). AFAIK the\n> way we are detecting AVX-512 support is quite literally by-the-book unless\n> I've gotten something wrong.\n\nI'm basically wondering whether we need to check for compiler (not OS support)\nsupport for xsave if we also check for -mavx512vpopcntdq -mavx512bw\nsupport. Afaict the latter implies support for xsave.\n\nandres@alap6:~$ echo|gcc -c - -march=x86-64 -xc -dM -E - -o -|grep '__XSAVE__'\nandres@alap6:~$ echo|gcc -c - -march=x86-64 -mavx512vpopcntdq -mavx512bw -xc -dM -E - -o -|grep '__XSAVE__'\n#define __XSAVE__ 1\n#define __XSAVE__ 1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Jul 2024 18:46:51 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 06:46:51PM -0700, Andres Freund wrote:\n> On 2024-07-30 20:20:34 -0500, Nathan Bossart wrote:\n>> On Tue, Jul 30, 2024 at 05:49:59PM -0700, Andres Freund wrote:\n>> > Why are we actually checking for xsave? We're not using xsave itself and I\n>> > couldn't find a comment in 792752af4eb5 explaining what we're using it as a\n>> > proxy for? Is that just to know if _xgetbv() exists? Is it actually possible\n>> > that xsave isn't available when avx512 is?\n>> \n>> Yes, it's to verify we have XGETBV, which IIUC requires support from both\n>> the processor and the OS (see 598e011 and upthread discussion). AFAIK the\n>> way we are detecting AVX-512 support is quite literally by-the-book unless\n>> I've gotten something wrong.\n> \n> I'm basically wondering whether we need to check for compiler (not OS support)\n> support for xsave if we also check for -mavx512vpopcntdq -mavx512bw\n> support. Afaict the latter implies support for xsave.\n\nThe main purpose of the XSAVE compiler check is to determine whether we\nneed to add -mxsave in order to use _xgetbv() [0]. If that wasn't a\nfactor, we could probably skip it. Earlier versions of the patch used\ninline assembly in the non-MSVC path to call XGETBV, which I was trying to\navoid.\n\n[0] https://postgr.es/m/20240330032209.GA2018686%40nathanxps13\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 30 Jul 2024 21:01:31 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-30 21:01:31 -0500, Nathan Bossart wrote:\n> On Tue, Jul 30, 2024 at 06:46:51PM -0700, Andres Freund wrote:\n> > On 2024-07-30 20:20:34 -0500, Nathan Bossart wrote:\n> >> On Tue, Jul 30, 2024 at 05:49:59PM -0700, Andres Freund wrote:\n> >> > Why are we actually checking for xsave? We're not using xsave itself and I\n> >> > couldn't find a comment in 792752af4eb5 explaining what we're using it as a\n> >> > proxy for? Is that just to know if _xgetbv() exists? Is it actually possible\n> >> > that xsave isn't available when avx512 is?\n> >> \n> >> Yes, it's to verify we have XGETBV, which IIUC requires support from both\n> >> the processor and the OS (see 598e011 and upthread discussion). AFAIK the\n> >> way we are detecting AVX-512 support is quite literally by-the-book unless\n> >> I've gotten something wrong.\n> > \n> > I'm basically wondering whether we need to check for compiler (not OS support)\n> > support for xsave if we also check for -mavx512vpopcntdq -mavx512bw\n> > support. Afaict the latter implies support for xsave.\n> \n> The main purpose of the XSAVE compiler check is to determine whether we\n> need to add -mxsave in order to use _xgetbv() [0]. If that wasn't a\n> factor, we could probably skip it. Earlier versions of the patch used\n> inline assembly in the non-MSVC path to call XGETBV, which I was trying to\n> avoid.\n\nMy point is that _xgetbv() is made available by -mavx512vpopcntdq -mavx512bw\nalone, without needing -mxsave:\n\necho -e '#include <immintrin.h>\\nint main() { return _xgetbv(0) & 0xe0; }'|time gcc -march=x86-64 -c -xc - -o /dev/null\n-> fails\n\necho -e '#include <immintrin.h>\\nint main() { return _xgetbv(0) & 0xe0;}'|time gcc -march=x86-64 -mavx512vpopcntdq -mavx512bw -c -xc - -o /dev/null\n-> succeeds\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Jul 2024 19:43:08 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 07:43:08PM -0700, Andres Freund wrote:\n> On 2024-07-30 21:01:31 -0500, Nathan Bossart wrote:\n>> The main purpose of the XSAVE compiler check is to determine whether we\n>> need to add -mxsave in order to use _xgetbv() [0]. If that wasn't a\n>> factor, we could probably skip it. Earlier versions of the patch used\n>> inline assembly in the non-MSVC path to call XGETBV, which I was trying to\n>> avoid.\n> \n> My point is that _xgetbv() is made available by -mavx512vpopcntdq -mavx512bw\n> alone, without needing -mxsave:\n\nOh, I see. I'll work on a patch to remove that compiler check, then...\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 30 Jul 2024 22:01:50 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 10:01:50PM -0500, Nathan Bossart wrote:\n> On Tue, Jul 30, 2024 at 07:43:08PM -0700, Andres Freund wrote:\n>> My point is that _xgetbv() is made available by -mavx512vpopcntdq -mavx512bw\n>> alone, without needing -mxsave:\n> \n> Oh, I see. I'll work on a patch to remove that compiler check, then...\n\nAs I started on this, I remembered why I needed it. The file\npg_popcount_avx512_choose.c is compiled without the AVX-512 flags in order\nto avoid inadvertently issuing any AVX-512 instructions before determining\nwe have support. If that's not a concern, we could still probably remove\nthe XSAVE check.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 30 Jul 2024 22:12:18 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-30 22:12:18 -0500, Nathan Bossart wrote:\n> On Tue, Jul 30, 2024 at 10:01:50PM -0500, Nathan Bossart wrote:\n> > On Tue, Jul 30, 2024 at 07:43:08PM -0700, Andres Freund wrote:\n> >> My point is that _xgetbv() is made available by -mavx512vpopcntdq -mavx512bw\n> >> alone, without needing -mxsave:\n> >\n> > Oh, I see. I'll work on a patch to remove that compiler check, then...\n>\n> As I started on this, I remembered why I needed it. The file\n> pg_popcount_avx512_choose.c is compiled without the AVX-512 flags in order\n> to avoid inadvertently issuing any AVX-512 instructions before determining\n> we have support. If that's not a concern, we could still probably remove\n> the XSAVE check.\n\nI think it's a valid concern - but isn't that theoretically also an issue with\nxsave itself? I guess practically the compiler won't do that, because there's\nno practical reason to emit any instructions enabled by -mxsave (in contrast\nto e.g. -mavx, which does trigger gcc to emit different instructions even for\nbasic math).\n\nI think this is one of the few instances where msvc has the right approach -\nif I use intrinsics to emit a specific instruction, the intrinsic should do\nso, regardless of whether the compiler is allowed to do so on its own.\n\n\nI think enabling options like these on a per-translation-unit basis isn't\nreally a scalable approach. To actually be safe there could only be a single\nfunction in each TU and that function could only be called after a cpuid check\nperformed in a separate TU. That a) ends up pretty unreadable b) requires\nfunctions to be implemented in .c files, which we really don't want for some\nof this.\n\nI think we'd be better off enabling architectural features on a per-function\nbasis, roughly like this:\nhttps://godbolt.org/z/a4q9Gc6Ez\n\n\nFor posterity, in the unlikely case anybody reads this after godbolt shuts\ndown:\n\nI'm thinking we'd have an attribute like this:\n\n/*\n * GCC like compilers don't support intrinsics without those intrinsics explicitly\n * having been enabled. We can't just add these options more widely, as that allows the\n * compiler to emit such instructions more widely, even if we gate reaching the code using\n * intrinsics. So we just enable the relevant support for individual functions.\n *\n * In contrast to this, msvc allows use of intrinsics independent of what the compiler\n * otherwise is allowed to emit.\n */\n#ifdef __GNUC__\n#define pg_enable_target(foo) __attribute__ ((__target__ (foo)))\n#else\n#define pg_enable_target(foo)\n#endif\n\nand then use that selectively for some functions:\n\n/* FIXME: Should be gated by configure check of -mavx512vpopcntdq -mavx512bw support */\npg_enable_target(\"avx512vpopcntdq,avx512bw\")\nuint64_t\npg_popcount_avx512(const char *buf, int bytes)\n...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2024 13:52:54 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 01:52:54PM -0700, Andres Freund wrote:\n> On 2024-07-30 22:12:18 -0500, Nathan Bossart wrote:\n>> As I started on this, I remembered why I needed it. The file\n>> pg_popcount_avx512_choose.c is compiled without the AVX-512 flags in order\n>> to avoid inadvertently issuing any AVX-512 instructions before determining\n>> we have support. If that's not a concern, we could still probably remove\n>> the XSAVE check.\n> \n> I think it's a valid concern - but isn't that theoretically also an issue with\n> xsave itself? I guess practically the compiler won't do that, because there's\n> no practical reason to emit any instructions enabled by -mxsave (in contrast\n> to e.g. -mavx, which does trigger gcc to emit different instructions even for\n> basic math).\n\nYeah, this crossed my mind. It's certainly not the sturdiest of\nassumptions...\n\n> I think enabling options like these on a per-translation-unit basis isn't\n> really a scalable approach. To actually be safe there could only be a single\n> function in each TU and that function could only be called after a cpuid check\n> performed in a separate TU. That a) ends up pretty unreadable b) requires\n> functions to be implemented in .c files, which we really don't want for some\n> of this.\n\nAgreed.\n\n> I think we'd be better off enabling architectural features on a per-function\n> basis, roughly like this:\n>\n> [...]\n> \n> /* FIXME: Should be gated by configure check of -mavx512vpopcntdq -mavx512bw support */\n> pg_enable_target(\"avx512vpopcntdq,avx512bw\")\n> uint64_t\n> pg_popcount_avx512(const char *buf, int bytes)\n> ...\n\nI remember wondering why the CRC-32C code wasn't already doing something\nlike this (old compiler versions? non-gcc-like compilers?), and I'm not\nsure I ever discovered the reason, so out of an abundance of caution I used\nthe same approach for AVX-512. If we can convince ourselves that\n__attribute__((target(\"...\"))) is standard enough at this point, +1 for\nmoving to that.\n\n-- \nnathan\n\n\n",
"msg_date": "Wed, 31 Jul 2024 16:43:02 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Popcount optimization using AVX512"
}
]
[
{
"msg_contents": "I have three suggestions on committing that I thought would be helpful\nto the hacker audience.\n\nFirst, I have been hesitant to ascribe others as patch authors if I\nheavily modified a doc patch because I didn't want them blamed for any\nmistakes I made. However, I also want to give them credit, so I decided\nI would annotate commits with \"partial\", e.g.:\n\n\tAuthor: Andy Jackson (partial)\n\nSecond, I have found the git diff option --word-diff=color to be very\nhelpful and I have started using it, especially for doc patches where\nthe old text is in red and the new text is in green. Posting patches in\nthat format is probably not helpful though.\n\nThird, I have come up with the following shell script to test for proper\npgindentation, which I run automatically before commit:\n\n\t# https://www.postgresql.org/message-id/CAGECzQQL-Dbb%2BYkid9Dhq-491MawHvi6hR_NGkhiDE%2B5zRZ6vQ%40mail.gmail.com\n\tsrc/tools/pgindent/pgindent $(git diff --name-only --diff-filter=ACMR) > /tmp/$$\n\n\tif [ \\( \"$(wc -l < /tmp/$$)\" -eq 1 -a \"$(expr match \"$(cat /tmp/$$)\" \"No files to process\\>\")\" -eq 0 \\) -o \\\n\t \"$(wc -l < /tmp/$$)\" -gt 1 ]\n\tthen\techo \"pgindent failure in master branch, exiting.\" 1>&2\n\t\tcat /tmp/$$\n\t\texit 1\n\tfi\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 2 Nov 2023 11:22:38 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Three commit tips"
},
{
"msg_contents": "On Thu, Nov 02, 2023 at 11:22:38AM -0400, Bruce Momjian wrote:\n> First, I have been hesitant to ascribe others as patch authors if I\n> heavily modified a doc patch because I didn't want them blamed for any\n> mistakes I made. However, I also want to give them credit, so I decided\n> I would annotate commits with \"partial\", e.g.:\n> \n> \tAuthor: Andy Jackson (partial)\n\nI tend to use \"Co-authored-by\" for this purpose.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 2 Nov 2023 11:07:19 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Three commit tips"
},
{
"msg_contents": "On Thu, Nov 2, 2023 at 11:07:19AM -0500, Nathan Bossart wrote:\n> On Thu, Nov 02, 2023 at 11:22:38AM -0400, Bruce Momjian wrote:\n> > First, I have been hesitant to ascribe others as patch authors if I\n> > heavily modified a doc patch because I didn't want them blamed for any\n> > mistakes I made. However, I also want to give them credit, so I decided\n> > I would annotate commits with \"partial\", e.g.:\n> > \n> > \tAuthor: Andy Jackson (partial)\n> \n> I tend to use \"Co-authored-by\" for this purpose.\n\nVery good idea. I will use that instead.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 2 Nov 2023 21:18:02 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Three commit tips"
},
{
"msg_contents": "On Thu, Nov 2, 2023 at 8:52 PM Bruce Momjian <[email protected]> wrote:\n>\n> Third, I have come up with the following shell script to test for proper\n> pgindentation, which I run automatically before commit:\n>\n> # https://www.postgresql.org/message-id/CAGECzQQL-Dbb%2BYkid9Dhq-491MawHvi6hR_NGkhiDE%2B5zRZ6vQ%40mail.gmail.com\n> src/tools/pgindent/pgindent $(git diff --name-only --diff-filter=ACMR) > /tmp/$$\n>\n> if [ \\( \"$(wc -l < /tmp/$$)\" -eq 1 -a \"$(expr match \"$(cat /tmp/$$)\" \"No files to process\\>\")\" -eq 0 \\) -o \\\n> \"$(wc -l < /tmp/$$)\" -gt 1 ]\n> then echo \"pgindent failure in master branch, exiting.\" 1>&2\n> cat /tmp/$$\n> exit 1\n> fi\n>\n\nLooks useful. Git supports pre-push hook:\nhttps://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks, a sample\nscript showing how to access the commits being pushed and other\narguments at https://github.com/git/git/blob/87c86dd14abe8db7d00b0df5661ef8cf147a72a3/templates/hooks--pre-push.sample.\nI have not used it. But it seems that your script can be used to\nimplement the pre-push hook.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 3 Nov 2023 10:40:32 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Three commit tips"
},
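Ashutosh's suggestion can be wired up by dropping a script into `.git/hooks/pre-push`. The sketch below simply installs a simplified variant of Bruce's pgindent check as that hook; the pgindent path and the diff range are assumptions about a particular checkout:

```shell
# Hypothetical pre-push hook running a pgindent check before any
# commits leave the local repository.  Adapt paths for your tree.
mkdir -p .git/hooks
cat > .git/hooks/pre-push <<'EOF'
#!/bin/sh
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT
src/tools/pgindent/pgindent $(git diff --name-only --diff-filter=ACMR @{push}..HEAD) > "$tmp" 2>&1
# pgindent prints nothing (or "No files to process") when the tree is clean
if [ -s "$tmp" ] && ! grep -q "No files to process" "$tmp"
then
    echo "pgindent failure, aborting push." 1>&2
    cat "$tmp"
    exit 1
fi
EOF
chmod +x .git/hooks/pre-push
```

git runs this hook before transferring anything, so an unindented commit never reaches the remote; `git push --no-verify` bypasses it when needed.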
{
"msg_contents": "02.11.2023 18:22, Bruce Momjian wrote:\n> Third, I have come up with the following shell script to test for proper\n> pgindentation, which I run automatically before commit:\n\nI would also suggest using a script like attached to check a patch for\nnew unicums (usually some of them are typos or inconsistencies introduced\nby the patch).\nThe script itself can be improved, without a doubt, but it still can be\nuseful as-is. For example, for a couple of arbitrarily chosen today's\npatches [1] it shows:\n\n.../postgresql.git$ check-patch-for-unicums.sh .../v4-0001-Make-all-SLRU-buffer-sizes-configurable.patch\nNew unicums:\ncotents: ./doc/src/sgml/config.sgml: Specifies the amount of memory to use to cache the cotents of\n====\n\n.../postgresql.git$ check-patch-for-unicums.sh .../v4-0003-Partition-wise-slru-locks.patch\nNew unicums:\nbank_tranche_id: ./src/include/access/slru.h: int bank_tranche_id, SyncRequestHandler sync_handler);\nCommitTSSLRU: ./src/backend/storage/lmgr/lwlock.c: \"CommitTSSLRU\",\nCommitTsSLRULock: ./src/backend/storage/lmgr/lwlocknames.txt:#38 was CommitTsSLRULock\nControlLock: ./src/backend/replication/slot.c: * flag while holding the ControlLock as otherwise a concurrent\nctllock: ./src/backend/access/transam/slru.c: * ctllock: LWLock to use to control access to the shared control \nstructure.\ncur_lru_count: ./src/backend/access/transam/slru.c: * Notice that this next line forcibly advances \ncur_lru_count to a\n...\n\n[1] https://www.postgresql.org/message-id/CAFiTN-uyiUXU__VwJAimZ%2B6jQbm1s4sYi6u4fXBD%3D47xVd%3Dthg%40mail.gmail.com\n\nBest regards,\nAlexander",
"msg_date": "Fri, 3 Nov 2023 09:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Three commit tips"
}
]
[
{
"msg_contents": "Hi,\n\n\nLook like the tab completion for CREATE TABLE ... AS is not proposed.\n\n\ngilles=# CREATE TABLE test\n( OF PARTITION OF\n\n The attached patch fix that and also propose the further completion \nafter the AS keyword.\n\n\ngilles=# CREATE TABLE test\n( AS OF PARTITION OF\ngilles=# CREATE TABLE test AS\nSELECT WITH\n\nAdding the patch to current commitfest.\n\n\nBest regards,\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Thu, 2 Nov 2023 19:27:02 +0300",
"msg_from": "Gilles Darold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tab completion for CREATE TABLE ... AS"
},
{
"msg_contents": "Hi\n\nOn 02.11.23 17:27, Gilles Darold wrote:\n> Hi,\n>\n>\n> Look like the tab completion for CREATE TABLE ... AS is not proposed.\n>\n>\n> gilles=# CREATE TABLE test\n> ( OF PARTITION OF\n>\n> The attached patch fix that and also propose the further completion\n> after the AS keyword.\n>\n>\n> gilles=# CREATE TABLE test\n> ( AS OF PARTITION OF\n> gilles=# CREATE TABLE test AS\n> SELECT WITH\n>\n> Adding the patch to current commitfest.\n>\n>\n> Best regards,\n>\n\nThanks for the patch!\nIt applies and builds cleanly, and it works as expected\n\n\"AS\" is suggested after \"CREATE TABLE t\":\n\npostgres=# CREATE TABLE t <TAB><TAB>\n( AS OF PARTITION OF\n\n\n-- \nJim\n\n\n\n",
"msg_date": "Fri, 10 Nov 2023 08:53:31 +0100",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE TABLE ... AS"
},
{
"msg_contents": "On Thu, Nov 02, 2023 at 07:27:02PM +0300, Gilles Darold wrote:\n> Look like the tab completion for CREATE TABLE ... AS is not\n> proposed.\n>\n> +\t/* Complete CREATE TABLE <name> AS with list of keywords */\n> +\telse if (TailMatches(\"CREATE\", \"TABLE\", MatchAny, \"AS\") ||\n> +\t\t\t TailMatches(\"CREATE\", \"TEMP|TEMPORARY|UNLOGGED\", \"TABLE\", MatchAny, \"AS\"))\n> +\t\tCOMPLETE_WITH(\"SELECT\", \"WITH\");\n\nThere is a bit more than SELECT and WITH as possible query for a CTAS.\nHow about VALUES, TABLE or even EXECUTE (itself able to handle a\nSELECT, TABLE or VALUES)?\n--\nMichael",
"msg_date": "Wed, 15 Nov 2023 09:58:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE TABLE ... AS"
},
{
"msg_contents": "On 15/11/2023 at 03:58, Michael Paquier wrote:\n> On Thu, Nov 02, 2023 at 07:27:02PM +0300, Gilles Darold wrote:\n>> Look like the tab completion for CREATE TABLE ... AS is not\n>> proposed.\n>>\n>> +\t/* Complete CREATE TABLE <name> AS with list of keywords */\n>> +\telse if (TailMatches(\"CREATE\", \"TABLE\", MatchAny, \"AS\") ||\n>> +\t\t\t TailMatches(\"CREATE\", \"TEMP|TEMPORARY|UNLOGGED\", \"TABLE\", MatchAny, \"AS\"))\n>> +\t\tCOMPLETE_WITH(\"SELECT\", \"WITH\");\n> There is a bit more than SELECT and WITH as possible query for a CTAS.\n> How about VALUES, TABLE or even EXECUTE (itself able to handle a\n> SELECT, TABLE or VALUES)?\n> --\n> Michael\n\nRight, I don't know how I have missed the sql-createtableas page in the \ndocumentation.\n\nPatch v2 fixes the keyword list, I have also sorted by alphabetical \norder the CREATE TABLE completion (AS was at the end of the list).\n\nIt has also been re-based on current master.\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Wed, 15 Nov 2023 17:26:58 +0300",
"msg_from": "Gilles Darold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for CREATE TABLE ... AS"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 05:26:58PM +0300, Gilles Darold wrote:\n> Right, I don't know how I have missed the sql-createtableas page in the\n> documentation.\n> \n> Patched v2 fixes the keyword list, I have also sorted by alphabetical order\n> the CREATE TABLE completion (AS was at the end of the list).\n> \n> It has also been re-based on current master.\n\nFun. It has failed to apply here.\n\nAnyway, I can see that a comment update has been forgotten. A second\nthing is that it requires two more lines to add the query keywords for\nthe case where a CTAS has a list of column names. I've added both\nchanges, and applied the patch on HEAD. That's not all the patterns\npossible, but this covers the most useful ones.\n--\nMichael",
"msg_date": "Thu, 16 Nov 2023 09:46:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE TABLE ... AS"
}
] |
[
{
"msg_contents": "Hello!\n\nI was looking into table access methods recently and found the\nexisting page a bit sparse. Here's a small patch adding a little more\nexample code to the table access methods page.\n\nLet me know if there's anything I can do to fix my patch up!\n\nCheers,\nPhil",
"msg_date": "Thu, 2 Nov 2023 13:58:42 -0400",
"msg_from": "Phil Eaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "On Sat, Nov 11, 2023 at 1:00 PM Phil Eaton <[email protected]> wrote:\n> I was looking into table access methods recently and found the\n> existing page a bit sparse. Here's a small patch adding a little more\n> example code to the table access methods page.\n\nI agree this is a helpful addition for people exploring table AMs. The\npatch applies and builds. No spelling/grammar errors. Integrates\nsmoothly with the surrounding text.\n\nI didn't write a full table AM (maybe someday! :-) but I put the\nexample code into a new extension and made sure it builds. I don't\nthink this snippet is likely to go out of date, but is there anything\nwe do in doc examples to test that? (I'm not aware of anything.)\n\nThere is no commitfest entry yet. Phil, would you mind adding one? (It\nwill need to be against the Jan 2024 commitfest.) I started one myself\nbut I don't think you're registered yet so I couldn't enter you as the\npatch author. Let me know if you have any trouble.\n\n+1 from me!\n\nRegards,\nPaul\n\n\n",
"msg_date": "Sat, 11 Nov 2023 13:17:26 -0800",
"msg_from": "Paul A Jungwirth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "Suggestion:\r\n\r\nIn the C example you added you mention in the comment:\r\n\r\n+ /* Methods from TableAmRoutine omitted from example, but all\r\n+ non-optional ones must be provided here. */\r\n\r\nPerhaps you could provide a \"see <xyz>\" to point the reader finding your example where he could find these non-optional methods he must provide?\r\n\r\nNitpicking a little: your patch appears to change more lines than it does, because it added line breaks earlier in the lines. I would generally avoid that unless there's good reason to do so.",
"msg_date": "Wed, 15 Nov 2023 23:28:24 +0000",
"msg_from": "Roberto Mello <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\nHello,\r\n\r\nI've reviewed your patch and it applies correctly and the documentation builds without any error. The built documentation also looks good with no formatting errors. It's always great to see more examples when reading through documentation so I think this patch is a good addition.\r\n\r\nthanks,\r\n\r\n-----------------------\r\nTristen Raab\r\nHighgo Software Canada\r\nwww.highgo.ca",
"msg_date": "Fri, 26 Jan 2024 19:56:37 +0000",
"msg_from": "Tristen Raab <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 8:29 PM Roberto Mello <[email protected]>\nwrote:\n>\n> Suggestion:\n>\n> In the C example you added you mention in the comment:\n>\n> + /* Methods from TableAmRoutine omitted from example, but all\n> + non-optional ones must be provided here. */\n>\n> Perhaps you could provide a \"see <xyz>\" to point the reader finding your\nexample where he could find these non-optional methods he must provide?\n>\n> Nitpicking a little: your patch appears to change more lines than it\ndoes, because it added line breaks earlier in the lines. I would generally\navoid that unless there's good reason to do so.\n\nHey folks,\n\nThere is a previous patch [1] around the same topic. What about joining\nefforts on pointing these documentation changes to the proposed test module?\n\n[1] https://commitfest.postgresql.org/46/4588/\n\n-- \nFabrízio de Royes Mello",
"msg_date": "Fri, 26 Jan 2024 17:02:59 -0300",
"msg_from": "Fabrízio de Royes Mello <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 3:03 PM Fabrízio de Royes Mello\n<[email protected]> wrote:\n> On Wed, Nov 15, 2023 at 8:29 PM Roberto Mello <[email protected]> wrote:\n> > Suggestion:\n> >\n> > In the C example you added you mention in the comment:\n> >\n> > + /* Methods from TableAmRoutine omitted from example, but all\n> > + non-optional ones must be provided here. */\n> >\n> > Perhaps you could provide a \"see <xyz>\" to point the reader finding your example where he could find these non-optional methods he must provide?\n> >\n> > Nitpicking a little: your patch appears to change more lines than it does, because it added line breaks earlier in the lines. I would generally avoid that unless there's good reason to do so.\n>\n> Hey folks,\n>\n> There is a previous patch [1] around the same topic. What about joining efforts on pointing these documentation changes to the proposed test module?\n>\n> [1] https://commitfest.postgresql.org/46/4588/\n\nLooking over this thread, I see that it was moved from pgsql-docs to\npgsql-hackers while at the same time dropping the original poster from\nthe Cc list. That seems rather unfortunate. I suspect there's a pretty\ngood chance that Phil Eaton hasn't seen any of the replies other than\nthe first one from Paul Jungwirth, which is also the only one that\ndidn't ask for anything to be changed.\n\nRe-adding Phil. Phil, you should have a look over\nhttps://www.postgresql.org/message-id/flat/CAByiw%2Br%2BCS-ojBDP7Dm%3D9YeOLkZTXVnBmOe_ajK%3Den8C_zB3_g%40mail.gmail.com\nand respond to the various emails and probably update the patch\nsomehow. Note that feature freeze is in 2 weeks, so if we can't reach\nagreement on what is to be done here soon, this will have to wait for\nthe next cycle, or later.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 13:40:03 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "Thanks Robert for mentioning this! I indeed did not notice the switch.\n\n> Nitpicking a little: your patch appears to change more lines than it does, because it added line breaks earlier in the lines. I would generally avoid that unless there's good reason to do so.\n\nThanks! I'm not sure why that happened since I normally run\nfill-region in emacs and when I re-ran it now, it looked as it used\nto. I've fixed it up in this patch.\n\n> Perhaps you could provide a \"see <xyz>\" to point the reader finding your example where he could find these non-optional methods he must provide?\n\nSince the responses were positive, I've taken the liberty to extend\nthe sample code by simply including all the stub methods and the full\nstruct. Marking which methods are optional and not.\n\nIf that looks like too much, I can revert back. Perhaps only\nmentioning the struct like we do for the index AM here:\nhttps://www.postgresql.org/docs/current/index-api.html. However, as a\nreader, I feel like the full stubs are a bit more useful.\n\nHappy for feedback. Updated patch is attached.\n\nCheers,\nPhil\n\n\nOn Fri, Mar 22, 2024 at 1:40 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Jan 26, 2024 at 3:03 PM Fabrízio de Royes Mello\n> <[email protected]> wrote:\n> > On Wed, Nov 15, 2023 at 8:29 PM Roberto Mello <[email protected]> wrote:\n> > > Suggestion:\n> > >\n> > > In the C example you added you mention in the comment:\n> > >\n> > > + /* Methods from TableAmRoutine omitted from example, but all\n> > > + non-optional ones must be provided here. */\n> > >\n> > > Perhaps you could provide a \"see <xyz>\" to point the reader finding your example where he could find these non-optional methods he must provide?\n> > >\n> > > Nitpicking a little: your patch appears to change more lines than it does, because it added line breaks earlier in the lines. 
I would generally avoid that unless there's good reason to do so.\n> >\n> > Hey folks,\n> >\n> > There is a previous patch [1] around the same topic. What about joining efforts on pointing these documentation changes to the proposed test module?\n> >\n> > [1] https://commitfest.postgresql.org/46/4588/\n>\n> Looking over this thread, I see that it was moved from pgsql-docs to\n> pgsql-hackers while at the same time dropping the original poster from\n> the Cc list. That seems rather unfortunate. I suspect there's a pretty\n> good chance that Phil Eaton hasn't seen any of the replies other than\n> the first one from Paul Jungwirth, which is also the only one that\n> didn't ask for anything to be changed.\n>\n> Re-adding Phil. Phil, you should have a look over\n> https://www.postgresql.org/message-id/flat/CAByiw%2Br%2BCS-ojBDP7Dm%3D9YeOLkZTXVnBmOe_ajK%3Den8C_zB3_g%40mail.gmail.com\n> and respond to the various emails and probably update the patch\n> somehow. Note that feature freeze is in 2 weeks, so if we can't reach\n> agreement on what is to be done here soon, this will have to wait for\n> the next cycle, or later.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com",
"msg_date": "Fri, 3 May 2024 13:35:31 -0400",
"msg_from": "Phil Eaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "On Fri, May 3, 2024 at 1:35 PM Phil Eaton <[email protected]> wrote:\n> Happy for feedback. Updated patch is attached.\n\nI took a look at this patch and I don't think this is a very good\nidea, for two reasons:\n\n1. We change the table access method interface definitions not all\nthat infrequently, so I think this will become out of date, and fail\nto get updated.\n\n2. Writing a table access method is really hard, and if you need this\nin order to be able to attempt it, you probably shouldn't be\nattempting it.\n\nI wouldn't mind patching the documentation to add the SQL part of\nthis; that seems short enough, non-obvious enough, and sufficiently\nunlikely to change that I can believe it would be a worthwhile\naddition. But there have been 21 commits to tableam.h in the last 6\nmonths and most of those would have needed to update this example, and\nI think it's very likely that some of them would have forgotten it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2024 14:46:01 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "> I took a look at this patch and I don't think this is a very good\n> idea,\n\nNo problem! I've dropped the v2 code additions and stuck with the v1\nattempt plus feedback.\n\nThank you!\n\nPhil",
"msg_date": "Tue, 14 May 2024 15:02:03 -0400",
"msg_from": "Phil Eaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "On Tue, May 14, 2024 at 3:02 PM Phil Eaton <[email protected]> wrote:\n> > I took a look at this patch and I don't think this is a very good\n> > idea,\n>\n> No problem! I've dropped the v2 code additions and stuck with the v1\n> attempt plus feedback.\n\nThat looks more reasonable. I'd like to quibble with this text:\n\n+. Here is an example of how to register an extension that provides a\n+ table access method handler:\n\nI think this should say something more like \"Here is how an extension\nSQL script might create a table access method handler\". I'm not sure\nif we have a standard term in our documentation that should be used\ninstead of \"extension SQL script\"; perhaps look for similar examples,\nor the documentation of extensions themselves, and copy the wording.\n\nShouldn't \"mem_tableam_handler\" be \"my_tableam_handler\"?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2024 16:07:09 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "> I think this should say something more like \"Here is how an extension\n> SQL script might create a table access method handler\".\n\nFair point. It is referred to elsewhere [0] in docs as a \"script\nfile\", so I've done that.\n\n> Shouldn't \"mem_tableam_handler\" be \"my_tableam_handler\"?\n\nSorry about that, fixed.\n\n[0] https://www.postgresql.org/docs/current/extend-extensions.html\n\nPhil",
"msg_date": "Fri, 24 May 2024 14:10:53 -0400",
"msg_from": "Phil Eaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom\n table access methods."
},
{
"msg_contents": "On Fri, May 24, 2024 at 3:11 PM Phil Eaton <[email protected]> wrote:\n\n> > I think this should say something more like \"Here is how an extension\n> > SQL script might create a table access method handler\".\n>\n> Fair point. It is referred to elsewhere [0] in docs as a \"script\n> file\", so I've done that.\n>\n> > Shouldn't \"mem_tableam_handler\" be \"my_tableam_handler\"?\n>\n> Sorry about that, fixed.\n>\n> [0] https://www.postgresql.org/docs/current/extend-extensions.html\n>\n> Phil\n>\n\nNice... LGTM!\n\n-- \nFabrízio de Royes Mello",
"msg_date": "Fri, 24 May 2024 15:59:08 -0300",
"msg_from": "Fabrízio de Royes Mello <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
},
{
"msg_contents": "I noticed that there were two CF entries pointing at this thread:\n\nhttps://commitfest.postgresql.org/48/4655/\nhttps://commitfest.postgresql.org/48/4973/\n\nThat doesn't seem helpful, so I've marked the second one \"Withdrawn\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jul 2024 16:00:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add minimal C example and SQL registration example for custom table access methods."
}
] |
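For reference, the SQL registration half discussed in the thread above follows the shape documented for `CREATE ACCESS METHOD`. In this sketch, `my_extension`, `my_tableam_handler`, `myam`, and `mytest` are placeholder names, and the column list stands in for a real schema:

```sql
-- In the extension's script file: register the C handler function,
-- then the access method that uses it.
CREATE FUNCTION my_tableam_handler(internal)
RETURNS table_am_handler
AS 'my_extension', 'my_tableam_handler'
LANGUAGE C STRICT;

CREATE ACCESS METHOD myam TYPE TABLE HANDLER my_tableam_handler;

-- A table can then opt into the new access method:
CREATE TABLE mytest (id int) USING myam;
```

The handler function must return a pointer to a `TableAmRoutine` struct with the non-optional callbacks filled in; that C side is what the rest of the thread debates documenting.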
[
{
"msg_contents": "\nI noticed that we occasionally use \"volatile\" to access shared memory,\nbut usually not; and I'm not clear on the rules for doing so. For\ninstance, AdvanceXLInsertBuffer() writes to XLogCtl->xlblocks[nextidx]\nthrough a volatile pointer; but then immediately writes to XLogCtl-\n>InitializedUpTo with a non-volatile pointer. There are also places in\nprocarray.c that make use of volatile through UINT32_ACCESS_ONCE(), and\nof course there are atomics (which use volatile as well as guaranteeing\natomicity).\n\nIn theory, I think we're always supposed to access shared memory\nthrough a volatile pointer, right? Otherwise a sufficiently smart (and\ncruel) compiler could theoretically optimize away the load/store in\nsome surprising cases, or hold a value in a register longer than we\nexpect, and then any memory barriers would be useless.\n\nBut in practice we don't do that even for sensitive structures like the\none referenced by XLogCtl. My intuition up until now was that if we\naccess through a global pointer, then the compiler wouldn't completely\noptimize away the store/load. I ran through some tests and that\nassumption seems to hold up, at least in a few simple examples with gcc\nat -O2, which seem to emit the loads/stores where expected.\n\nWhat is the guidance here? Is the volatile pointer use in\nAdvanceXLInsertBuffer() required, and if so, why not other places? \n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 02 Nov 2023 23:19:03 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inconsistent use of \"volatile\" when accessing shared memory?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-02 23:19:03 -0700, Jeff Davis wrote:\n> I noticed that we occasionally use \"volatile\" to access shared memory,\n> but usually not; and I'm not clear on the rules for doing so. For\n> instance, AdvanceXLInsertBuffer() writes to XLogCtl->xlblocks[nextidx]\n> through a volatile pointer; but then immediately writes to XLogCtl-\n> >InitializedUpTo with a non-volatile pointer. There are also places in\n> procarray.c that make use of volatile through UINT32_ACCESS_ONCE(), and\n> of course there are atomics (which use volatile as well as guaranteeing\n> atomicity).\n\n> In theory, I think we're always supposed to access shared memory\n> through a volatile pointer, right? Otherwise a sufficiently smart (and\n> cruel) compiler could theoretically optimize away the load/store in\n> some surprising cases, or hold a value in a register longer than we\n> expect, and then any memory barriers would be useless.\n\nI don't think so. We used to use volatile for most shared memory accesses, but\nvolatile doesn't provide particularly useful semantics - and generates\n*vastly* slower code in a lot of circumstances. Most of that usage predates\nspinlocks being proper compiler barriers, which was rectified in:\n\ncommit 0709b7ee72e\nAuthor: Robert Haas <[email protected]>\nDate: 2014-09-09 17:45:20 -0400\n\n Change the spinlock primitives to function as compiler barriers.\n\nor the introduction of compiler/memory barriers in\n\ncommit 0c8eda62588\nAuthor: Robert Haas <[email protected]>\nDate: 2011-09-23 17:52:43 -0400\n\n Memory barrier support for PostgreSQL.\n\n\nMost instances of volatile used for shared memory access should be replaced\nwith explicit compiler/memory barriers, as appropriate.\n\nNote that use of volatile does *NOT* guarantee anything about memory ordering!\n\n\n> But in practice we don't do that even for sensitive structures like the\n> one referenced by XLogCtl. 
My intuition up until now was that if we\n> access through a global pointer, then the compiler wouldn't completely\n> optimize away the store/load.\n\nThat's not guaranteed at all - luckily, as it'd lead to code being more\nbulky. It's not that the global variable will be optimized away entirely in\nmany situations, but repeated accesses can sometimes be merged and the access\ncan be moved around.\n\n\n> What is the guidance here? Is the volatile pointer use in\n> AdvanceXLInsertBuffer() required, and if so, why not other places?\n\nI don't think it's required. The crucial part is to avoid memory reordering\nbetween zeroing the block / initializing fields and changing that field - the\nrelevant part for that is the pg_write_barrier(); *not* the volatile.\n\nThe volatile does prevent the compiler from deferring the update of\nxlblocks[idx] to the next loop iteration. Which I guess isn't a bad idea - but\nit's not required for correctness.\n\n\nWhen an individual value is read/written from memory, and all that's desired\nis to prevent the compiler for eliding/moving that operation, it can lead to\nbetter code to use volatile *on the individual access* compared to using\npg_compiler_barrier(), because it allows the compiler to keep other variables\nin registers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Nov 2023 15:59:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent use of \"volatile\" when accessing shared memory?"
},
{
"msg_contents": "On Fri, 2023-11-03 at 15:59 -0700, Andres Freund wrote:\n> I don't think so. We used to use volatile for most shared memory\n> accesses, but\n> volatile doesn't provide particularly useful semantics - and\n> generates\n> *vastly* slower code in a lot of circumstances. Most of that usage\n> predates\n> spinlocks being proper compiler barriers, \n\nA compiler barrier doesn't always force the compiler to generate loads\nand stores, though.\n\nFor instance (example code I placed at the bottom of xlog.c):\n\n typedef struct DummyStruct {\n XLogRecPtr recptr;\n } DummyStruct;\n extern void DummyFunction(void);\n static DummyStruct Dummy = { 5 };\n static DummyStruct *pDummy = &Dummy;\n void\n DummyFunction(void)\n {\n while(true)\n {\n pg_compiler_barrier();\n pg_memory_barrier();\n if (pDummy->recptr == 0)\n break;\n pg_compiler_barrier();\n pg_memory_barrier();\n }\n }\n\n\nGenerates the following code (clang -O2):\n\n 000000000016ed10 <DummyFunction>:\n 16ed10: f0 83 04 24 00 lock addl $0x0,(%rsp)\n 16ed15: f0 83 04 24 00 lock addl $0x0,(%rsp)\n 16ed1a: eb f4 jmp 16ed10 <DummyFunction>\n 16ed1c: 0f 1f 40 00 nopl 0x0(%rax)\n\nObviously this is an oversimplified example and if I complicate it in\nany number of ways then it will start generating actual loads and\nstores, and then the compiler and memory barriers should do their job.\n\n\n> Note that use of volatile does *NOT* guarantee anything about memory\n> ordering!\n\nRight, but it does force loads/stores to be emitted by the compiler;\nand without loads/stores a memory barrier is useless.\n\nI understand that my example is too simple and I'm not claiming that\nthere's a problem. I'd just like to understand the key difference\nbetween my example and what we do with XLogCtl.\n\nAnother way to phrase my question: under what specific circumstances\nmust we use something like UINT32_ACCESS_ONCE()? 
That seems to be used\nfor local pointers, but it's not clear to me exactly why that matters.\nIntuitively, access through a local pointer seems much more likely to\nbe optimized and therefore more dangerous, but that doesn't imply that\naccess through global variables is not dangerous.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 03 Nov 2023 17:44:44 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent use of \"volatile\" when accessing shared memory?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-03 17:44:44 -0700, Jeff Davis wrote:\n> On Fri, 2023-11-03 at 15:59 -0700, Andres Freund wrote:\n> > I don't think so. We used to use volatile for most shared memory\n> > accesses, but\n> > volatile doesn't provide particularly useful semantics - and\n> > generates\n> > *vastly* slower code in a lot of circumstances. Most of that usage\n> > predates\n> > spinlocks being proper compiler barriers,\n>\n> A compiler barrier doesn't always force the compiler to generate loads\n> and stores, though.\n\nI don't think volatile does so entirely either, but I don't think it's\nrelevant anyway.\n\n\n> For instance (example code I placed at the bottom of xlog.c):\n>\n> typedef struct DummyStruct {\n> XLogRecPtr recptr;\n> } DummyStruct;\n> extern void DummyFunction(void);\n> static DummyStruct Dummy = { 5 };\n> static DummyStruct *pDummy = &Dummy;\n> void\n> DummyFunction(void)\n> {\n> while(true)\n> {\n> pg_compiler_barrier();\n> pg_memory_barrier();\n> if (pDummy->recptr == 0)\n> break;\n> pg_compiler_barrier();\n> pg_memory_barrier();\n> }\n> }\n>\n>\n> Generates the following code (clang -O2):\n>\n> 000000000016ed10 <DummyFunction>:\n> 16ed10: f0 83 04 24 00 lock addl $0x0,(%rsp)\n> 16ed15: f0 83 04 24 00 lock addl $0x0,(%rsp)\n> 16ed1a: eb f4 jmp 16ed10 <DummyFunction>\n> 16ed1c: 0f 1f 40 00 nopl 0x0(%rax)\n>\n> Obviously this is an oversimplified example and if I complicate it in\n> any number of ways then it will start generating actual loads and\n> stores, and then the compiler and memory barriers should do their job.\n\nAll that is happening here is that clang can prove that nothing ever could\nchange the variable (making this actually undefined behaviour, IIRC). 
If you\nadd another function like:\n\nextern void\nSetDummy(uint64_t lsn)\n{\n pDummy->recptr = lsn;\n}\n\nclang does generate the code you'd expect - there's a possibility that it\ncould be true.\n\nSee https://godbolt.org/z/EaM77E8jK\n\nWe don't gain anything from preventing the compiler from making such\nconclusions afaict.\n\n\n> > Note that use of volatile does *NOT* guarantee anything about memory\n> > ordering!\n>\n> Right, but it does force loads/stores to be emitted by the compiler;\n> and without loads/stores a memory barrier is useless.\n\nThere would not have been the point in generating that load...\n\n\n> I understand that my example is too simple and I'm not claiming that\n> there's a problem. I'd just like to understand the key difference\n> between my example and what we do with XLogCtl.\n\nThe key difference is that there's code changing relevant variables :)\n\n\n> Another way to phrase my question: under what specific circumstances\n> must we use something like UINT32_ACCESS_ONCE()? That seems to be used\n> for local pointers\n\nI guess I don't really know what you mean with global or local pointers?\nThere's no point in ever using something like it if the memory is local to a\nfunction.\n\nThe procarray uses all are for shared memory. Sure, we save a pointer to a\nsubset of shared memory in a local variable, but that doesn't change very much\n(it does tell the compiler that it doesnt' need to reload ProcGlobal->xid from\nmemory after calling an external function (which could change it), so it's a\nminor efficiency improvement).\n\nThe reason for\n\t\t/* Fetch xid just once - see GetNewTransactionId */\n\t\tpxid = UINT32_ACCESS_ONCE(other_xids[pgxactoff]);\nis explained in a comment in GetNewTransactionId():\n\t *\n\t * Note that readers of ProcGlobal->xids/PGPROC->xid should be careful to\n\t * fetch the value for each proc only once, rather than assume they can\n\t * read a value multiple times and get the same answer each time. 
Note we\n\t * are assuming that TransactionId and int fetch/store are atomic.\n\nwithout the READ_ONCE(), the compiler could decide to not actually load the\nmemory contents into a register through the loop body, but to just refetch it\nevery time.\n\nWe could also implement this with a compiler barrier between fetching pxid and\nusing it - but it'd potentially lead to worse code, because instead of just\nforcing one load to come from memory, it'd also force reloading *other*\nvariables from memory.\n\n\n> but it's not clear to me exactly why that matters. Intuitively, access\n> through a local pointer seems much more likely to be optimized and therefore\n> more dangerous, but that doesn't imply that access through global variables\n> is not dangerous.\n\nI really don't think there's a meaningful difference between the two. What is\ninteresting is where the memory points to and whether the compiler can reason\nabout other code potentially having had access to the memory.\n\nE.g. with code like this (overly simple example)\n\n int foo = 0;\n int *foop = &foo;\n\n pg_memory_barrier();\n\n if (*foop != 0)\n return false;\n\nthe compiler can easily prove that the variable could not have changed - there\nis no legal way for foo to have changed, regardless of the memory barrier.\n\nHowever, if the code looked like this:\n\n int foo = 0;\n int *foop = &foo;\n\n SomethingTheCompilerCantSee(foop);\n\n if (*foop != 0)\n return false;\n\nor\n\nextern int *extern_foop;\n\n int foo = 0;\n int *foop = &foo;\n\n extern_foop = foop;\n\n if (*foop != 0)\n return false;\n\n\nthe compiler couldn't prove that anymore.\n\n\nOf course the case of a pointer to a stack variable is on the very trivial\nside. But this holds true for other cases, e.g. 
the one you noticed above, a\nstatic variable (guaranteeing it's not accessed in other translation units)\nonly accessed in one function - there couldn't have been anything modifying\nit.\n\nThis even holds true for memory allocated by malloc() - the language\nguarantees that there shall be no other pointers to that memory, so the\ncompiler knows that just after returning there can't be nobody but the caller\naccessing the memory. Until the pointer escapes in some form.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Nov 2023 18:36:39 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent use of \"volatile\" when accessing shared memory?"
},
{
"msg_contents": "On Fri, 2023-11-03 at 18:36 -0700, Andres Freund wrote:\n> All that is happening here is that clang can prove that nothing ever\n> could\n> change the variable \n\n...\n\n> See https://godbolt.org/z/EaM77E8jK\n> \n> We don't gain anything from preventing the compiler from making such\n> conclusions afaict.\n\nThank you. I modified this a bit to use a global variable and then\nperform a write outside the loop: https://godbolt.org/z/EfvGx64eE\n\n extern void DummyFunction(void);\n extern DummyStruct Dummy;\n DummyStruct Dummy = { 5 };\n static DummyStruct *pDummy = &Dummy;\n\n void\n DummyFunction(void)\n {\n pDummy->recptr = 3;\n while(true)\n {\n if (pDummy->recptr == 0)\n break;\n pg_compiler_barrier();\n }\n }\n\nAnd I think that mostly answers my question: if the compiler barrier is\nthere, it loads each time; and if I take the barrier away, it optimizes\ninto an infinite loop.\n\nI suppose what's happening without the compiler barrier is that the\nload is being moved up right below the store, which allows it to be\noptimized away entirely. The compiler barrier prevents moving the load,\nand therefore prevents that optimization.\n\nI'm still somewhat hazy on exactly what a compiler barrier prevents --\nit seems like it depends on the internal details of how the compiler\noptimizes in stages. 
But I see that it's doing what we need it to do\nnow.\n\n> \n> The key difference is that there's code changing relevant variables\n> :)\n\nYeah.\n\n> I guess I don't really know what you mean with global or local\n> pointers?\n\nI meant \"global pointers to shared memory\" (like XLogCtl) vs \"local\npointers to shared memory\" (like other_xids in\nTransactionIdIsActive()).\n\n> We could also implement this with a compiler barrier between fetching\n> pxid and\n> using it - but it'd potentially lead to worse code, because instead\n> of just\n> forcing one load to come from memory, it'd also force reloading\n> *other*\n> variables from memory.\n\nI see -- that's a way to be more selective about what gets reloaded\nsince the last compiler barrier, rather than inserting a new compiler\nbarrier which would cause everything to be reloaded.\n\nThis was helpful, thank you again. I wanted to be more clear on these\nnuances while reviewing Bharath's patch, because I'm suggesting that he\ncan avoid the WALBufMappingLock to reduce the risk of a regression. In\nthe process, we'll probably get rid of that unnecessary \"volatile\" in\nAdvanceXLInsertBuffer().\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 04 Nov 2023 13:57:48 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent use of \"volatile\" when accessing shared memory?"
}
]
[
{
"msg_contents": "Hi\n\nWe have one such problem. A table field has skewed data. Statistics:\nn_distinct | -0.4481973\nmost_common_vals | {5f006ca25b52ed78e457b150ee95a30c}\nmost_common_freqs | {0.5518474}\n\n\nData generation:\n\nCREATE TABLE s_user (\n user_id varchar(32) NOT NULL,\n corp_id varchar(32), \n\n status int NOT NULL\n );\n\ninsert into s_user\nselect md5('user_id ' || a), md5('corp_id ' || a),\n case random()<0.877675 when true then 1 else -1 end\n FROM generate_series(1,10031) a;\n\ninsert into s_user\nselect md5('user_id ' || a), md5('corp_id 10032'),\n case random()<0.877675 when true then 1 else -1 end\n FROM generate_series(10031,22383) a;\n\nCREATE INDEX s_user_corp_id_idx ON s_user USING btree (corp_id);\n\nanalyze s_user;\n\n\n1. First, define a PREPARE statement\nprepare stmt as select count(*) from s_user where status=1 and corp_id = $1;\n\n2. Run it five times. Choose the custom plan.\nexplain (analyze,buffers) execute stmt('5f006ca25b52ed78e457b150ee95a30c');\n\nHere's the plan:\n Aggregate (cost=639.84..639.85 rows=1 width=8) (actual \ntime=4.653..4.654 rows=1 loops=1)\n Buffers: shared hit=277\n -> Seq Scan on s_user (cost=0.00..612.76 rows=10830 width=0) \n(actual time=1.402..3.747 rows=10836 loops=1)\n Filter: ((status = 1) AND ((corp_id)::text = \n'5f006ca25b52ed78e457b150ee95a30c'::text))\n Rows Removed by Filter: 11548\n Buffers: shared hit=277\n Planning Time: 0.100 ms\n Execution Time: 4.674 ms\n(8 rows)\n\n3.From the sixth time. 
Choose generic plan.\nWe can see that there is a huge deviation between the estimate and the \nactual value:\n Aggregate (cost=11.83..11.84 rows=1 width=8) (actual \ntime=4.424..4.425 rows=1 loops=1)\n Buffers: shared hit=154 read=13\n -> Bitmap Heap Scan on s_user (cost=4.30..11.82 rows=2 width=0) \n(actual time=0.664..3.371 rows=10836 loops=1)\n Recheck Cond: ((corp_id)::text = $1)\n Filter: (status = 1)\n Rows Removed by Filter: 1517\n Heap Blocks: exact=154\n Buffers: shared hit=154 read=13\n -> Bitmap Index Scan on s_user_corp_id_idx (cost=0.00..4.30 \nrows=2 width=0) (actual time=0.635..0.635 rows=12353 loops=1)\n Index Cond: ((corp_id)::text = $1)\n Buffers: shared read=13\n Planning Time: 0.246 ms\n Execution Time: 4.490 ms\n(13 rows)\n\nThis is because in the choose_custom_plan function, the generic plan is \nattempted after executing the custom plan five times.\n\n if (plansource->num_custom_plans < 5)\n return true;\n\nThe generic plan uses var_eq_non_const to estimate the average selectivity.\n\nThese are facts that many people already know. So a brief introduction.\n\n\nOur users actually use such parameter conditions in very complex PREPARE \nstatements. Once they use the generic plan for the sixth time. The \nexecution time will change from 5 milliseconds to 5 minutes.\n\n\nTo improve this problem. The following approaches can be considered:\n\n1. Determine whether data skew exists in the PREPARE statement parameter \nconditions based on the statistics.\nHowever, there is no way to know if the user will use the skewed parameter.\n\n2.When comparing the cost of the generic plan with the average cost of \nthe custom plan(function choose_custom_plan). Consider whether the \nmaximum cost of a custom plan executed is an order of magnitude \ndifferent from the cost of a generic plan.\nIf the first five use a small selectivity condition. And after the sixth \nuse a high selectivity condition. 
Problems will still arise.\n\n3.Trace the execution time of the PREPARE statement. When an execution \ntime is found to be much longer than the average execution time, the \ncustom plan is forced to run.\n\n\nIs there any better idea?\n\n--\nQuan Zongliang\n\n\n\n",
"msg_date": "Fri, 3 Nov 2023 15:27:16 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improvement discussion of custom and generic plans"
},
{
"msg_contents": "On 2023/11/3 15:27, Quan Zongliang wrote:\n> Hi\n> \n> We have one such problem. A table field has skewed data. Statistics:\n> n_distinct | -0.4481973\n> most_common_vals | {5f006ca25b52ed78e457b150ee95a30c}\n> most_common_freqs | {0.5518474}\n> \n> \n> Data generation:\n> \n> CREATE TABLE s_user (\n> user_id varchar(32) NOT NULL,\n> corp_id varchar(32),\n> status int NOT NULL\n> );\n> \n> insert into s_user\n> select md5('user_id ' || a), md5('corp_id ' || a),\n> case random()<0.877675 when true then 1 else -1 end\n> FROM generate_series(1,10031) a;\n> \n> insert into s_user\n> select md5('user_id ' || a), md5('corp_id 10032'),\n> case random()<0.877675 when true then 1 else -1 end\n> FROM generate_series(10031,22383) a;\n> \n> CREATE INDEX s_user_corp_id_idx ON s_user USING btree (corp_id);\n> \n> analyze s_user;\n> \n> \n> 1. First, define a PREPARE statement\n> prepare stmt as select count(*) from s_user where status=1 and corp_id = \n> $1;\n> \n> 2. Run it five times. Choose the custom plan.\n> explain (analyze,buffers) execute stmt('5f006ca25b52ed78e457b150ee95a30c');\n> \n> Here's the plan:\n> Aggregate (cost=639.84..639.85 rows=1 width=8) (actual \n> time=4.653..4.654 rows=1 loops=1)\n> Buffers: shared hit=277\n> -> Seq Scan on s_user (cost=0.00..612.76 rows=10830 width=0) \n> (actual time=1.402..3.747 rows=10836 loops=1)\n> Filter: ((status = 1) AND ((corp_id)::text = \n> '5f006ca25b52ed78e457b150ee95a30c'::text))\n> Rows Removed by Filter: 11548\n> Buffers: shared hit=277\n> Planning Time: 0.100 ms\n> Execution Time: 4.674 ms\n> (8 rows)\n> \n> 3.From the sixth time. 
Choose generic plan.\n> We can see that there is a huge deviation between the estimate and the \n> actual value:\n> Aggregate (cost=11.83..11.84 rows=1 width=8) (actual \n> time=4.424..4.425 rows=1 loops=1)\n> Buffers: shared hit=154 read=13\n> -> Bitmap Heap Scan on s_user (cost=4.30..11.82 rows=2 width=0) \n> (actual time=0.664..3.371 rows=10836 loops=1)\n> Recheck Cond: ((corp_id)::text = $1)\n> Filter: (status = 1)\n> Rows Removed by Filter: 1517\n> Heap Blocks: exact=154\n> Buffers: shared hit=154 read=13\n> -> Bitmap Index Scan on s_user_corp_id_idx (cost=0.00..4.30 \n> rows=2 width=0) (actual time=0.635..0.635 rows=12353 loops=1)\n> Index Cond: ((corp_id)::text = $1)\n> Buffers: shared read=13\n> Planning Time: 0.246 ms\n> Execution Time: 4.490 ms\n> (13 rows)\n> \n> This is because in the choose_custom_plan function, the generic plan is \n> attempted after executing the custom plan five times.\n> \n> if (plansource->num_custom_plans < 5)\n> return true;\n> \n> The generic plan uses var_eq_non_const to estimate the average selectivity.\n> \n> These are facts that many people already know. So a brief introduction.\n> \n> \n> Our users actually use such parameter conditions in very complex PREPARE \n> statements. Once they use the generic plan for the sixth time. The \n> execution time will change from 5 milliseconds to 5 minutes.\n> \n> \n> To improve this problem. The following approaches can be considered:\n> \n> 1. Determine whether data skew exists in the PREPARE statement parameter \n> conditions based on the statistics.\n> However, there is no way to know if the user will use the skewed parameter.\n> \n> 2.When comparing the cost of the generic plan with the average cost of \n> the custom plan(function choose_custom_plan). Consider whether the \n> maximum cost of a custom plan executed is an order of magnitude \n> different from the cost of a generic plan.\n> If the first five use a small selectivity condition. 
And after the sixth \n> use a high selectivity condition. Problems will still arise.\n> \n> 3.Trace the execution time of the PREPARE statement. When an execution \n> time is found to be much longer than the average execution time, the \n> custom plan is forced to run.\n> \n> \n> Is there any better idea?\n> \nI tried to do a demo. Add a member paramid to Const. When Const is \ngenerated by Param, the Const is identified as coming from Param. Then \ncheck in var_eq_const to see if the field in the condition using this \nparameter is skewed. If so, choose_custom_plan returns true every time, \nforcing custom_plan to be used.\nOnly conditional expressions such as var eq param or param eq var can be \nsupported.\nIf it makes sense. Continue to improve this patch.\n\n> -- \n> Quan Zongliang\n> \n>",
"msg_date": "Tue, 30 Jan 2024 21:25:35 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improvement discussion of custom and generic plans"
},
{
"msg_contents": "Add the GUC parameter.\n\nOn 2024/1/30 21:25, Quan Zongliang wrote:\n> \n> \n> On 2023/11/3 15:27, Quan Zongliang wrote:\n>> Hi\n>>\n>> We have one such problem. A table field has skewed data. Statistics:\n>> n_distinct | -0.4481973\n>> most_common_vals | {5f006ca25b52ed78e457b150ee95a30c}\n>> most_common_freqs | {0.5518474}\n>>\n>>\n>> Data generation:\n>>\n>> CREATE TABLE s_user (\n>> user_id varchar(32) NOT NULL,\n>> corp_id varchar(32),\n>> status int NOT NULL\n>> );\n>>\n>> insert into s_user\n>> select md5('user_id ' || a), md5('corp_id ' || a),\n>> case random()<0.877675 when true then 1 else -1 end\n>> FROM generate_series(1,10031) a;\n>>\n>> insert into s_user\n>> select md5('user_id ' || a), md5('corp_id 10032'),\n>> case random()<0.877675 when true then 1 else -1 end\n>> FROM generate_series(10031,22383) a;\n>>\n>> CREATE INDEX s_user_corp_id_idx ON s_user USING btree (corp_id);\n>>\n>> analyze s_user;\n>>\n>>\n>> 1. First, define a PREPARE statement\n>> prepare stmt as select count(*) from s_user where status=1 and corp_id \n>> = $1;\n>>\n>> 2. Run it five times. Choose the custom plan.\n>> explain (analyze,buffers) execute \n>> stmt('5f006ca25b52ed78e457b150ee95a30c');\n>>\n>> Here's the plan:\n>> Aggregate (cost=639.84..639.85 rows=1 width=8) (actual \n>> time=4.653..4.654 rows=1 loops=1)\n>> Buffers: shared hit=277\n>> -> Seq Scan on s_user (cost=0.00..612.76 rows=10830 width=0) \n>> (actual time=1.402..3.747 rows=10836 loops=1)\n>> Filter: ((status = 1) AND ((corp_id)::text = \n>> '5f006ca25b52ed78e457b150ee95a30c'::text))\n>> Rows Removed by Filter: 11548\n>> Buffers: shared hit=277\n>> Planning Time: 0.100 ms\n>> Execution Time: 4.674 ms\n>> (8 rows)\n>>\n>> 3.From the sixth time. 
Choose generic plan.\n>> We can see that there is a huge deviation between the estimate and the \n>> actual value:\n>> Aggregate (cost=11.83..11.84 rows=1 width=8) (actual \n>> time=4.424..4.425 rows=1 loops=1)\n>> Buffers: shared hit=154 read=13\n>> -> Bitmap Heap Scan on s_user (cost=4.30..11.82 rows=2 width=0) \n>> (actual time=0.664..3.371 rows=10836 loops=1)\n>> Recheck Cond: ((corp_id)::text = $1)\n>> Filter: (status = 1)\n>> Rows Removed by Filter: 1517\n>> Heap Blocks: exact=154\n>> Buffers: shared hit=154 read=13\n>> -> Bitmap Index Scan on s_user_corp_id_idx \n>> (cost=0.00..4.30 rows=2 width=0) (actual time=0.635..0.635 rows=12353 \n>> loops=1)\n>> Index Cond: ((corp_id)::text = $1)\n>> Buffers: shared read=13\n>> Planning Time: 0.246 ms\n>> Execution Time: 4.490 ms\n>> (13 rows)\n>>\n>> This is because in the choose_custom_plan function, the generic plan \n>> is attempted after executing the custom plan five times.\n>>\n>> if (plansource->num_custom_plans < 5)\n>> return true;\n>>\n>> The generic plan uses var_eq_non_const to estimate the average \n>> selectivity.\n>>\n>> These are facts that many people already know. So a brief introduction.\n>>\n>>\n>> Our users actually use such parameter conditions in very complex \n>> PREPARE statements. Once they use the generic plan for the sixth time. \n>> The execution time will change from 5 milliseconds to 5 minutes.\n>>\n>>\n>> To improve this problem. The following approaches can be considered:\n>>\n>> 1. Determine whether data skew exists in the PREPARE statement \n>> parameter conditions based on the statistics.\n>> However, there is no way to know if the user will use the skewed \n>> parameter.\n>>\n>> 2.When comparing the cost of the generic plan with the average cost of \n>> the custom plan(function choose_custom_plan). 
Consider whether the \n>> maximum cost of a custom plan executed is an order of magnitude \n>> different from the cost of a generic plan.\n>> If the first five use a small selectivity condition. And after the \n>> sixth use a high selectivity condition. Problems will still arise.\n>>\n>> 3.Trace the execution time of the PREPARE statement. When an execution \n>> time is found to be much longer than the average execution time, the \n>> custom plan is forced to run.\n>>\n>>\n>> Is there any better idea?\n>>\n> I tried to do a demo. Add a member paramid to Const. When Const is \n> generated by Param, the Const is identified as coming from Param. Then \n> check in var_eq_const to see if the field in the condition using this \n> parameter is skewed. If so, choose_custom_plan returns true every time, \n> forcing custom_plan to be used.\n> Only conditional expressions such as var eq param or param eq var can be \n> supported.\n> If it makes sense. Continue to improve this patch.\n> \n>> -- \n>> Quan Zongliang\n>>\n>>",
"msg_date": "Mon, 19 Feb 2024 16:05:03 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improvement discussion of custom and generic plans"
}
]
[
{
"msg_contents": "I'm having errors restoring with pg_restore to v16.0, it appears to be a \nregression or bug. The same file restored to v15.4 without problem.\n\nDuring the restore:\n\n pg_restore: error: could not execute query: ERROR: type \"hash\" does not exist\n LINE 7: )::hash;\n [...]\n CONTEXT: SQL function \"gen_hash\" during inlining\n\nIt prompted me to separate the restore into steps:\n\n* An initial \"--schema-only\" completes\n* The \"--data-only\" when the error takes place\n\nI also double-checked for no mismatch of client/server etc.\n\nFor now, I can use 15.4 for this one-off task so will have to kick this \ncan down the road.\n\nBut I think it worth reporting that something in 16.0 appears to be \nfailing on valid data (or maybe there is an incompatibility with a dump \nfrom 13.5?)\n\nThanks\n\n-- \nMark\n\n\n\n$ export DUMP=\"$HOME/tmp/production.pgdump\"\n\n\n$ pg_restore --dbname=stattrx --no-owner --no-privileges --schema-only --verbose --exit-on-error $DUMP\n[succeeds, no errors]\n\n\n$ pg_restore --dbname=stattrx --no-owner --no-privileges --data-only --verbose --exit-on-error $DUMP\npg_restore: connecting to database for restore\npg_restore: processing data for table \"public.authentic\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 4183; 0 58291 TABLE DATA authentic postgres\npg_restore: error: could not execute query: ERROR: type \"hash\" does not exist\nLINE 7: )::hash;\n ^\nQUERY:\n SELECT\n substring(\n regexp_replace(\n encode(gen_random_bytes(1024), 'base64'),\n '[^a-zA-Z0-9]', '', 'g') for $1\n )::hash;\n\nCONTEXT: SQL function \"gen_hash\" during inlining\nCommand was: COPY public.authentic (key, generated, peer, expires, studio) FROM stdin;\n\n\n$ pg_restore --version\npg_restore (PostgreSQL) 16.0\n\n\n$ pg_restore --list $DUMP\n;\n; Archive created at 2023-10-30 06:47:01 GMT\n; dbname: production\n; TOC Entries: 227\n; Compression: gzip\n; Dump Version: 1.14-0\n; Format: CUSTOM\n; Integer: 4 bytes\n; Offset: 8 
bytes\n; Dumped from database version: 13.5\n; Dumped by pg_dump version: 13.5\n;\n;\n; Selected TOC Entries:\n;\n4; 3079 57533 EXTENSION - btree_gist\n4212; 0 0 COMMENT - EXTENSION btree_gist\n2; 3079 492253 EXTENSION - ltree\n4213; 0 0 COMMENT - EXTENSION ltree\n3; 3079 58156 EXTENSION - pgcrypto\n4214; 0 0 COMMENT - EXTENSION pgcrypto\n1022; 1247 58194 DOMAIN public handle postgres\n1026; 1247 58197 DOMAIN public hash postgres\n[...]\n504; 1255 58233 FUNCTION public gen_hash(integer) postgres\n[...]\n\n\n--\n-- Relevant SQL declarations\n--\n\nCREATE DOMAIN hash AS text\n CHECK (VALUE ~ E'^[a-zA-Z0-9]{8,32}$');\n\nCREATE OR REPLACE FUNCTION gen_hash(int)\nRETURNS hash AS\n$$\n SELECT\n substring(\n regexp_replace(\n encode(gen_random_bytes(1024), 'base64'),\n '[^a-zA-Z0-9]', '', 'g') for $1\n )::hash;\n$$ LANGUAGE SQL;\n\n\n",
"msg_date": "Fri, 3 Nov 2023 10:17:48 +0000 (GMT)",
"msg_from": "Mark Hills <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regression on pg_restore to 16.0: DOMAIN not available to SQL\n function"
},
{
"msg_contents": "\nOn 2023-11-03 Fr 06:17, Mark Hills wrote:\n> I'm having errors restoring with pg_restore to v16.0, it appears to be a\n> regression or bug. The same file restored to v15.4 without problem.\n>\n> During the restore:\n>\n> pg_restore: error: could not execute query: ERROR: type \"hash\" does not exist\n> LINE 7: )::hash;\n> [...]\n> CONTEXT: SQL function \"gen_hash\" during inlining\n>\n> It prompted me to separate the restore into steps:\n>\n> * An initial \"--schema-only\" completes\n> * The \"--data-only\" when the error takes place\n>\n> I also double-checked for no mismatch of client/server etc.\n>\n> For now, I can use 15.4 for this one-off task so will have to kick this\n> can down the road.\n>\n> But I think it worth reporting that something in 16.0 appears to be\n> failing on valid data (or maybe there is an incompatibility with a dump\n> from 13.5?)\n\n\nIn general you should use pg_dump from the version you want to restore \ninto. Dumps from earlier versions might work in some cases, but there is \nno guarantee.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 3 Nov 2023 07:39:57 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression on pg_restore to 16.0: DOMAIN not available to SQL\n function"
},
{
"msg_contents": "On Friday, November 3, 2023, Mark Hills <[email protected]> wrote:\n\n>\n> pg_restore: error: could not execute query: ERROR: type \"hash\" does not\n> exist\n> LINE 7: )::hash;\n> [...]\n> CONTEXT: SQL function \"gen_hash\" during inlining\n>\n> --\n> -- Relevant SQL declarations\n> --\n>\n\nThose were not all of the relevant SQL declarations. In particular you\nhaven’t shown where in your schema the gen_hash gets called.\n\nOdds are you’ve violated a “cannot execute queries in …” rule in something\nlike a generated column or a check expression. That it didn’t fail before\nnow is just a fluke.\n\nI seem to recall another recent report of this for v16 that goes into more\ndetail.\n\nDavid J.\n\nOn Friday, November 3, 2023, Mark Hills <[email protected]> wrote:\n pg_restore: error: could not execute query: ERROR: type \"hash\" does not exist\n LINE 7: )::hash;\n [...]\n CONTEXT: SQL function \"gen_hash\" during inlining\n--\n-- Relevant SQL declarations\n--\nThose were not all of the relevant SQL declarations. In particular you haven’t shown where in your schema the gen_hash gets called.Odds are you’ve violated a “cannot execute queries in …” rule in something like a generated column or a check expression. That it didn’t fail before now is just a fluke.I seem to recall another recent report of this for v16 that goes into more detail.David J.",
"msg_date": "Fri, 3 Nov 2023 07:19:26 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression on pg_restore to 16.0: DOMAIN not available to SQL\n function"
},
{
"msg_contents": "Mark Hills <[email protected]> writes:\n> I'm having errors restoring with pg_restore to v16.0, it appears to be a \n> regression or bug. The same file restored to v15.4 without problem.\n\n> During the restore:\n\n> pg_restore: error: could not execute query: ERROR: type \"hash\" does not exist\n> LINE 7: )::hash;\n> [...]\n> CONTEXT: SQL function \"gen_hash\" during inlining\n\nIt looks like your gen_hash() function is not proof against being\nrun with a minimal search_path, which is how the restore script\nwould call it. However, then it's not clear why it would've worked\nin 15.4 which does the same thing. I wonder whether you are\nusing this function in a column default for the troublesome\ntable. If so, the discrepancy might be explained by this\nfix that I just got done writing a 16.1 release note for:\n\n <listitem>\n<!--\nAuthor: Andrew Dunstan <[email protected]>\nBranch: master [276393f53] 2023-10-01 10:18:41 -0400\nBranch: REL_16_STABLE [910eb61b2] 2023-10-01 10:25:33 -0400\n-->\n <para>\n In <command>COPY FROM</command>, avoid evaluating column default\n values that will not be needed by the command (Laurenz Albe)\n </para>\n\n <para>\n This avoids a possible error if the default value isn't actually\n valid for the column. Previous releases did not fail in this edge\n case, so prevent v16 from doing so.\n </para>\n </listitem>\n\n\n> It prompted me to separate the restore into steps:\n> * An initial \"--schema-only\" completes\n> * The \"--data-only\" when the error takes place\n\nUh, *what* prompted you to do that? By and large, separating a\nrestore into two steps creates more problems than it solves.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Nov 2023 10:35:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression on pg_restore to 16.0: DOMAIN not available to SQL\n function"
},
{
"msg_contents": "On Fri, 3 Nov 2023, Tom Lane wrote:\n\n> Mark Hills <[email protected]> writes:\n> > I'm having errors restoring with pg_restore to v16.0, it appears to be a \n> > regression or bug. The same file restored to v15.4 without problem.\n> \n> > During the restore:\n> \n> > pg_restore: error: could not execute query: ERROR: type \"hash\" does not exist\n> > LINE 7: )::hash;\n> > [...]\n> > CONTEXT: SQL function \"gen_hash\" during inlining\n> \n> It looks like your gen_hash() function is not proof against being\n> run with a minimal search_path, which is how the restore script\n> would call it.\n\nYes, that makes sense.\n\nI suppose I didn't expect these functions to be invoked at all on \npg_restore, as seems to have been the case before, because...\n\n> However, then it's not clear why it would've worked\n> in 15.4 which does the same thing. I wonder whether you are\n> using this function in a column default for the troublesome\n> table.\n\nYes, it's just a simple DEFAULT:\n\n CREATE TABLE authentic (\n key hash NOT NULL UNIQUE DEFAULT gen_hash(32),\n\nand so every row would have a value.\n\n> If so, the discrepancy might be explained by this fix that I just got \n> done writing a 16.1 release note for:\n> \n> <listitem>\n> <!--\n> Author: Andrew Dunstan <[email protected]>\n> Branch: master [276393f53] 2023-10-01 10:18:41 -0400\n> Branch: REL_16_STABLE [910eb61b2] 2023-10-01 10:25:33 -0400\n> -->\n> <para>\n> In <command>COPY FROM</command>, avoid evaluating column default\n> values that will not be needed by the command (Laurenz Albe)\n> </para>\n> \n> <para>\n> This avoids a possible error if the default value isn't actually\n> valid for the column. Previous releases did not fail in this edge\n> case, so prevent v16 from doing so.\n> </para>\n> </listitem>\n\nIndeed, that like a good match to this issue.\n\nIs there a thread or link for this? 
Interested in the positive change that \nhad this side effect.\n\nAnd I think this could imply that v16 can pg_dump a data set which itself \ncannot restore? Imagining that might be considered a more serious bug. \nOnly needs a column default that invokes another function or type, and \nthat would seem relatively common.\n\n> > It prompted me to separate the restore into steps:\n> > * An initial \"--schema-only\" completes\n> > * The \"--data-only\" when the error takes place\n> \n> Uh, *what* prompted you to do that? By and large, separating a\n> restore into two steps creates more problems than it solves.\n\nOnly to try and bisect the problem in some way to try and make a \nreasonable bug report :)\n\nThanks\n\n-- \nMark\n\n\n",
"msg_date": "Fri, 3 Nov 2023 15:43:43 +0000 (GMT)",
"msg_from": "Mark Hills <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression on pg_restore to 16.0: DOMAIN not available to SQL\n function"
},
{
"msg_contents": "Mark Hills <[email protected]> writes:\n> On Fri, 3 Nov 2023, Tom Lane wrote:\n>> However, then it's not clear why it would've worked\n>> in 15.4 which does the same thing. I wonder whether you are\n>> using this function in a column default for the troublesome\n>> table.\n\n> Yes, it's just a simple DEFAULT:\n\n> CREATE TABLE authentic (\n> key hash NOT NULL UNIQUE DEFAULT gen_hash(32),\n\n> and so every row would have a value.\n\nRight, so the 910eb61b2 fix explains it. I guess I'd better\nexpand the release note entry, because we'd not foreseen this\nparticular failure mode.\n\n> Is there a thread or link for this? Interested in the positive change that \n> had this side effect.\n\nYou could look at the commit:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=910eb61b2\n\nOur modern practice is for the commit log message to link to the mailing\nlist discussion that led up to the commit, and this one does so:\n\n> Discussion: https://postgr.es/m/[email protected]\n\nThat message says\n\n>> Bisecting shows that the regression was introduced by commit 9f8377f7a2,\n>> which introduced DEFAULT values for COPY FROM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Nov 2023 12:01:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression on pg_restore to 16.0: DOMAIN not available to SQL\n function"
},
{
"msg_contents": "On Fri, 3 Nov 2023, Tom Lane wrote:\n\n> Mark Hills <[email protected]> writes:\n> > On Fri, 3 Nov 2023, Tom Lane wrote:\n> >> However, then it's not clear why it would've worked\n> >> in 15.4 which does the same thing. I wonder whether you are\n> >> using this function in a column default for the troublesome\n> >> table.\n> \n> > Yes, it's just a simple DEFAULT:\n> \n> > CREATE TABLE authentic (\n> > key hash NOT NULL UNIQUE DEFAULT gen_hash(32),\n> \n> > and so every row would have a value.\n> \n> Right, so the 910eb61b2 fix explains it. I guess I'd better\n> expand the release note entry, because we'd not foreseen this\n> particular failure mode.\n\nIndeed, and curiosity got the better of me so I constructed a minimal test \ncase (see below)\n\nThis minimal test demonstrates a database which will pg_dump but cannot \nrestore (custom with pg_restore, or plain SQL with psql.)\n\nI assumed I'd need at least one row of data to trigger the bug (to call on \na default), but that's not the case and here it is with an empty table.\n\nI then tested REL_16_STABLE branch (e24daa94b) the problem does not occur, \nas expected.\n\nAlso, the stable branch version was able to restore the pg_dump from 16.0 \nrelease, which is as expected and is probably important (and helpful)\n\nThanks\n\n-- \nMark\n\n\n==> test.sql <==\nCREATE FUNCTION inner()\nRETURNS integer AS\n$$\n SELECT 1;\n$$ LANGUAGE SQL;\n\nCREATE FUNCTION outer()\nRETURNS integer AS\n$$\n SELECT inner();\n$$ LANGUAGE SQL;\n\nCREATE TABLE test (\n v integer NOT NULL DEFAULT outer()\n);\n\n\n$ createdb test\n$ psql test < test.sql\n\n\n$ pg_dump --format custom --file test.pgdump test\n$ createdb restore\n$ pg_restore --dbname restore test.pgdump\npg_restore: error: could not execute query: ERROR: function inner() does not exist\nLINE 2: SELECT inner();\n ^\nHINT: No function matches the given name and argument types. 
You might \nneed to add explicit type casts.\nQUERY:\n SELECT inner();\n\nCONTEXT: SQL function \"outer\" during inlining\nCommand was: COPY public.test (v) FROM stdin;\npg_restore: warning: errors ignored on restore: 1\n\n\n$ pg_dump --format plain --file test.pgdump test\n$ createdb restore\n$ psql restore < test.pgdump\nSET\nSET\nSET\nSET\nSET\n set_config\n------------\n\n(1 row)\n\nSET\nSET\nSET\nSET\nCREATE FUNCTION\nALTER FUNCTION\nCREATE FUNCTION\nALTER FUNCTION\nSET\nSET\nCREATE TABLE\nALTER TABLE\nERROR: function inner() does not exist\nLINE 2: SELECT inner();\n ^\nHINT: No function matches the given name and argument types. You might \nneed to add explicit type casts.\nQUERY:\n SELECT inner();\n\nCONTEXT: SQL function \"outer\" during inlining\ninvalid command \\.\n\n\n",
"msg_date": "Mon, 6 Nov 2023 16:46:36 +0000 (GMT)",
"msg_from": "Mark Hills <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression on pg_restore to 16.0: DOMAIN not available to SQL\n function"
},
{
"msg_contents": "Mark Hills <[email protected]> writes:\n> On Fri, 3 Nov 2023, Tom Lane wrote:\n>> Right, so the 910eb61b2 fix explains it. I guess I'd better\n>> expand the release note entry, because we'd not foreseen this\n>> particular failure mode.\n\n> Indeed, and curiosity got the better of me so I constructed a minimal test \n> case (see below)\n\nI checked this against 16 branch tip (about to become 16.1),\nand it works, as expected.\n\n> I assumed I'd need at least one row of data to trigger the bug (to call on \n> a default), but that's not the case and here it is with an empty table.\n\nRight. The step 16.0 is taking that fails is not evaluating the\ndefault expression, but merely prepping it for execution during COPY\nstartup. This error occurs while trying to inline the inline-able\nouter() function.\n\nWe worked around this by skipping the expression prep step when it's\nclear from the COPY arguments that we won't need it, which should\nbe true for all of pg_dump's uses of COPY.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Nov 2023 12:01:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression on pg_restore to 16.0: DOMAIN not available to SQL\n function"
}
] |
[
{
"msg_contents": "I've finished the first draft of 16.1 release notes; see\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=1e774846b870a858f8eb88b3f512562009177f45\n\nPlease send comments/corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Nov 2023 17:56:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Draft back-branch release notes are up"
}
] |
[
{
"msg_contents": "Hello PostgreSQL Developers,\n\nI hope this message finds you well. My name is Atharva, and I'm a\nsecond-year student majoring in computer application with a strong interest\nin contributing to the PostgreSQL project. I have experience in C\nprogramming and a solid understanding of PostgreSQL.\n\nI'm reaching out to request your guidance in identifying beginner-friendly\nissues that I can start working on. I'm eager to become an active\ncontributor to the PostgreSQL community and help improve the project.\nWhether it's bug fixes, documentation updates, or other tasks, I'm open to\ngetting involved in various areas.\n\nI would greatly appreciate any recommendations or pointers to where I can\nfind suitable issues or specific resources that can aid my journey as a\ncontributor.\n\nThank you for your time, and I look forward to being a part of this\nfantastic community.\n\nBest regards,\nAtharva",
"msg_date": "Sat, 4 Nov 2023 22:17:13 +0530",
"msg_from": "Atharva Bhise <[email protected]>",
"msg_from_op": true,
"msg_subject": "Introduction and Inquiry on Beginner-Friendly Issues"
},
{
"msg_contents": "On Sat, Nov 4, 2023 at 10:17:13PM +0530, Atharva Bhise wrote:\n> Hello PostgreSQL Developers,\n> \n> I hope this message finds you well. My name is Atharva, and I'm a second-year\n> student majoring in computer application with a strong interest in contributing\n> to the PostgreSQL project. I have experience in C programming and a solid\n> understanding of PostgreSQL.\n> \n> I'm reaching out to request your guidance in identifying beginner-friendly\n> issues that I can start working on. I'm eager to become an active contributor\n> to the PostgreSQL community and help improve the project. Whether it's bug\n> fixes, documentation updates, or other tasks, I'm open to getting involved in\n> various areas.\n> \n> I would greatly appreciate any recommendations or pointers to where I can find\n> suitable issues or specific resources that can aid my journey as a contributor.\n> \n> Thank you for your time, and I look forward to being a part of this fantastic\n> community.\n\nPeople usually start by reading this:\n\n\thttps://wiki.postgresql.org/wiki/Developer_FAQ\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Sat, 4 Nov 2023 13:17:20 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introduction and Inquiry on Beginner-Friendly Issues"
},
{
"msg_contents": "Thanks a lot for sharing.\n\nOn Sat, Nov 4, 2023 at 10:47 PM Bruce Momjian <[email protected]> wrote:\n\n> On Sat, Nov 4, 2023 at 10:17:13PM +0530, Atharva Bhise wrote:\n> > Hello PostgreSQL Developers,\n> >\n> > I hope this message finds you well. My name is Atharva, and I'm a\n> second-year\n> > student majoring in computer application with a strong interest in\n> contributing\n> > to the PostgreSQL project. I have experience in C programming and a solid\n> > understanding of PostgreSQL.\n> >\n> > I'm reaching out to request your guidance in identifying\n> beginner-friendly\n> > issues that I can start working on. I'm eager to become an active\n> contributor\n> > to the PostgreSQL community and help improve the project. Whether it's\n> bug\n> > fixes, documentation updates, or other tasks, I'm open to getting\n> involved in\n> > various areas.\n> >\n> > I would greatly appreciate any recommendations or pointers to where I\n> can find\n> > suitable issues or specific resources that can aid my journey as a\n> contributor.\n> >\n> > Thank you for your time, and I look forward to being a part of this\n> fantastic\n> > community.\n>\n> People usually start by reading this:\n>\n> https://wiki.postgresql.org/wiki/Developer_FAQ\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n>",
"msg_date": "Sat, 4 Nov 2023 23:01:30 +0530",
"msg_from": "Atharva Bhise <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Introduction and Inquiry on Beginner-Friendly Issues"
},
{
"msg_contents": "Hi,\n\n> I'm reaching out to request your guidance in identifying beginner-friendly issues that I can start working on. I'm eager to become an active contributor to the PostgreSQL community and help improve the project. Whether it's bug fixes, documentation updates, or other tasks, I'm open to getting involved in various areas.\n>\n> I would greatly appreciate any recommendations or pointers to where I can find suitable issues or specific resources that can aid my journey as a contributor.\n\nYou can find my thoughts on the subject here [1].\n\nI wish you all the best on the journey and look forward to your contributions!\n\n[1]: https://www.timescale.com/blog/how-and-why-to-become-a-postgresql-contributor/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 6 Nov 2023 13:31:10 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introduction and Inquiry on Beginner-Friendly Issues"
}
] |
[
{
"msg_contents": "hi\n\nhttps://www.postgresql.org/docs/current/datatype-json.html\nTable 8.23. JSON Primitive Types and Corresponding PostgreSQL Types\n\n\"SQL NULL is a different concept\"\ncan we change to\n\"Only accept lowercase null. SQL NULL is a different concept\"\nI saw people ask similar questions on stackoverflow. maybe worth mentioning.\n\n-----\nhttps://www.postgresql.org/docs/16/view-pg-backend-memory-contexts.html\nTable 54.4. pg_backend_memory_contexts Columns\nshould be\nTable 54.4. pg_backend_memory_contexts View.\n\n\n",
"msg_date": "Mon, 6 Nov 2023 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "minor doc issues."
}
] |
[
{
"msg_contents": "I came across the following compiling warnings on GCC (Red Hat 4.8.5-44)\n4.8.5 with 'CFLAGS=-Og'\n\nbe-fsstubs.c: In function ‘be_lo_export’:\nbe-fsstubs.c:537:24: warning: ‘fd’ may be used uninitialized in this\nfunction [-Wmaybe-uninitialized]\n if (CloseTransientFile(fd) != 0)\n ^\nIn file included from trigger.c:14:0:\ntrigger.c: In function ‘ExecCallTriggerFunc’:\n../../../src/include/postgres.h:314:2: warning: ‘result’ may be used\nuninitialized in this function [-Wmaybe-uninitialized]\n return (Pointer) X;\n ^\ntrigger.c:2316:9: note: ‘result’ was declared here\n Datum result;\n ^\n\nI wonder if this is worth fixing, maybe by a trivial patch like\nattached.\n\nThanks\nRichard",
"msg_date": "Mon, 6 Nov 2023 14:14:00 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compiling warnings on old GCC"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 19:14, Richard Guo <[email protected]> wrote:\n> I came across the following compiling warnings on GCC (Red Hat 4.8.5-44)\n> 4.8.5 with 'CFLAGS=-Og'\n\n> I wonder if this is worth fixing, maybe by a trivial patch like\n> attached.\n\nThere's some relevant discussion in\nhttps://postgr.es/m/flat/20220602024243.GJ29853%40telsasoft.com\n\nDavid\n\n\n",
"msg_date": "Mon, 6 Nov 2023 19:51:02 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compiling warnings on old GCC"
},
{
"msg_contents": "On Mon, Nov 6, 2023 at 2:51 PM David Rowley <[email protected]> wrote:\n\n> On Mon, 6 Nov 2023 at 19:14, Richard Guo <[email protected]> wrote:\n> > I came across the following compiling warnings on GCC (Red Hat 4.8.5-44)\n> > 4.8.5 with 'CFLAGS=-Og'\n>\n> > I wonder if this is worth fixing, maybe by a trivial patch like\n> > attached.\n>\n> There's some relevant discussion in\n> https://postgr.es/m/flat/20220602024243.GJ29853%40telsasoft.com\n\n\nAh, thanks for pointing that out. Somehow I failed to follow that\nthread.\n\nIt seems that the controversial '-Og' coupled with the old GCC version\n(4.8) makes it not worth fixing. So please ignore this thread.\n\nThanks\nRichard",
"msg_date": "Mon, 6 Nov 2023 15:46:01 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compiling warnings on old GCC"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> On Mon, Nov 6, 2023 at 2:51 PM David Rowley <[email protected]> wrote:\n>> There's some relevant discussion in\n>> https://postgr.es/m/flat/20220602024243.GJ29853%40telsasoft.com\n\n> It seems that the controversial '-Og' coupled with the old GCC version\n> (4.8) makes it not worth fixing. So please ignore this thread.\n\nAlthough nobody tried to enunciate a formal policy in the other thread,\nI think where we ended up is that we'd consider suppressing warnings\nseen with -Og, but only with reasonably-modern compiler versions.\nFor LTS compilers we are only going to care about warnings seen with\nproduction flags (i.e., -O2 or better).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Nov 2023 09:47:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compiling warnings on old GCC"
}
] |
[
{
"msg_contents": "Analogous to 388e80132c (which was for Perl) but for Python, I propose \nadding #pragma GCC system_header to plpython.h. Without it, you get \ntons of warnings about -Wdeclaration-after-statement, starting with \nPython 3.12. (In the past, I have regularly sent feedback to Python to \nfix their header files, but this is getting old, and we have an easier \nsolution now.)",
"msg_date": "Mon, 6 Nov 2023 13:02:15 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "apply pragma system_header to python headers"
},
{
"msg_contents": "\nOn 2023-11-06 Mo 07:02, Peter Eisentraut wrote:\n> Analogous to 388e80132c (which was for Perl) but for Python, I propose \n> adding #pragma GCC system_header to plpython.h. Without it, you get \n> tons of warnings about -Wdeclaration-after-statement, starting with \n> Python 3.12. (In the past, I have regularly sent feedback to Python \n> to fix their header files, but this is getting old, and we have an \n> easier solution now.)\n\n\nWFM\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 6 Nov 2023 08:34:41 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: apply pragma system_header to python headers"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Analogous to 388e80132c (which was for Perl) but for Python, I propose \n> adding #pragma GCC system_header to plpython.h. Without it, you get \n> tons of warnings about -Wdeclaration-after-statement, starting with \n> Python 3.12. (In the past, I have regularly sent feedback to Python to \n> fix their header files, but this is getting old, and we have an easier \n> solution now.)\n\n+1 for the concept --- I was just noticing yesterday that my buildfarm\nwarning scraping script is turning up some of these. However, we ought\nto try to minimize the amount of our own code that is subject to the\npragma. So I think a prerequisite ought to be to get this out of\nplpython.h:\n\n/*\n * Used throughout, so it's easier to just include it everywhere.\n */\n#include \"plpy_util.h\"\n\nAlternatively, is there a way to reverse the effect of the\npragma after we've included what we need?\n\n(I'm not too happy about the state of plperl.h on this point, either.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Nov 2023 09:57:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: apply pragma system_header to python headers"
},
{
"msg_contents": "\nOn 2023-11-06 Mo 09:57, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> Analogous to 388e80132c (which was for Perl) but for Python, I propose\n>> adding #pragma GCC system_header to plpython.h. Without it, you get\n>> tons of warnings about -Wdeclaration-after-statement, starting with\n>> Python 3.12. (In the past, I have regularly sent feedback to Python to\n>> fix their header files, but this is getting old, and we have an easier\n>> solution now.)\n> +1 for the concept --- I was just noticing yesterday that my buildfarm\n> warning scraping script is turning up some of these. However, we ought\n> to try to minimize the amount of our own code that is subject to the\n> pragma. So I think a prerequisite ought to be to get this out of\n> plpython.h:\n>\n> /*\n> * Used throughout, so it's easier to just include it everywhere.\n> */\n> #include \"plpy_util.h\"\n>\n> Alternatively, is there a way to reverse the effect of the\n> pragma after we've included what we need?\n\n\nThere's \"GCC diagnostic push\" and \"GCC diagnostic pop\" but I don't know \nif they apply to \"GCC system_header\". Instead of using \"GCC \nsystem_header\" we could just ignore the warnings we're seeing. e.g. \"GCC \ndiagnostic ignored \\\"-Wdeclaration-after-statement\\\"\"\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 6 Nov 2023 10:30:28 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: apply pragma system_header to python headers"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2023-11-06 Mo 09:57, Tom Lane wrote:\n>> +1 for the concept --- I was just noticing yesterday that my buildfarm\n>> warning scraping script is turning up some of these. However, we ought\n>> to try to minimize the amount of our own code that is subject to the\n>> pragma. So I think a prerequisite ought to be to get this out of\n>> plpython.h:\n>> \n>> /*\n>> * Used throughout, so it's easier to just include it everywhere.\n>> */\n>> #include \"plpy_util.h\"\n>> \n>> Alternatively, is there a way to reverse the effect of the\n>> pragma after we've included what we need?\n\n> There's \"GCC diagnostic push\" and \"GCC diagnostic pop\" but I don't know \n> if they apply to \"GCC system_header\". Instead of using \"GCC \n> system_header\" we could just ignore the warnings we're seeing. e.g. \"GCC \n> diagnostic ignored \\\"-Wdeclaration-after-statement\\\"\"\n\nProbably a better way is to invent a separate header \"plpython_system.h\"\nthat just includes the Python headers, to scope the pragma precisely.\n(I guess it could have the fixup #defines we're wrapping those headers\nin, too.)\n\nThe same idea would work in plperl.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Nov 2023 11:06:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: apply pragma system_header to python headers"
},
{
"msg_contents": "I wrote:\n> Probably a better way is to invent a separate header \"plpython_system.h\"\n> that just includes the Python headers, to scope the pragma precisely.\n> (I guess it could have the fixup #defines we're wrapping those headers\n> in, too.)\n\n> The same idea would work in plperl.\n\nAfter updating one of my test machines to Fedora 39, I'm seeing these\nPython warnings too. So here's a fleshed-out patch proposal for doing\nit like that.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 25 Dec 2023 12:36:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: apply pragma system_header to python headers"
}
] |
[
{
"msg_contents": "Create a table and a deferrable constraint trigger:\n\n CREATE TABLE tab (i integer);\n\n CREATE FUNCTION trig() RETURNS trigger\n LANGUAGE plpgsql AS\n $$BEGIN\n RAISE NOTICE 'current_user = %', current_user;\n RETURN NEW;\n END;$$;\n\n CREATE CONSTRAINT TRIGGER trig AFTER INSERT ON tab\n DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW EXECUTE FUNCTION trig();\n\nCreate a role and allow it INSERT on the table:\n\n CREATE ROLE duff;\n\n GRANT INSERT ON tab TO duff;\n\nNow become that role and try some inserts:\n\n SET ROLE duff;\n\n BEGIN;\n\n INSERT INTO tab VALUES (1);\n NOTICE: current_user = duff\n\nThat looks ok; the current user is \"duff\".\n\n SET CONSTRAINTS ALL DEFERRED;\n\n INSERT INTO tab VALUES (2);\n\nBecome a superuser again and commit:\n\n RESET ROLE;\n\n COMMIT;\n NOTICE: current_user = postgres\n\n\nSo a deferred constraint trigger does not run with the same security context\nas an immediate trigger. This is somewhat nasty in combination with\nSECURITY DEFINER functions: if that function performs an operation, and that\noperation triggers a deferred trigger, that trigger will run in the wrong\nsecurity context.\n\nThis behavior looks buggy to me. What do you think?\nI cannot imagine that it is a security problem, though.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 06 Nov 2023 14:23:04 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Mon, 2023-11-06 at 14:23 +0100, Laurenz Albe wrote:\n> CREATE FUNCTION trig() RETURNS trigger\n> LANGUAGE plpgsql AS\n> $$BEGIN\n> RAISE NOTICE 'current_user = %', current_user;\n> RETURN NEW;\n> END;$$;\n> \n> CREATE CONSTRAINT TRIGGER trig AFTER INSERT ON tab\n> DEFERRABLE INITIALLY IMMEDIATE\n> FOR EACH ROW EXECUTE FUNCTION trig();\n> \n> SET ROLE duff;\n> \n> BEGIN;\n> \n> INSERT INTO tab VALUES (1);\n> NOTICE: current_user = duff\n> \n> That looks ok; the current user is \"duff\".\n> \n> SET CONSTRAINTS ALL DEFERRED;\n> \n> INSERT INTO tab VALUES (2);\n> \n> Become a superuser again and commit:\n> \n> RESET ROLE;\n> \n> COMMIT;\n> NOTICE: current_user = postgres\n> \n> \n> So a deferred constraint trigger does not run with the same security context\n> as an immediate trigger. This is somewhat nasty in combination with\n> SECURITY DEFINER functions: if that function performs an operation, and that\n> operation triggers a deferred trigger, that trigger will run in the wrong\n> security context.\n> \n> This behavior looks buggy to me. What do you think?\n> I cannot imagine that it is a security problem, though.\n\nI just realized one problem with running a deferred constraint trigger as\nthe triggering role: that role might have been dropped by the time the trigger\nexecutes. But then we could still error out.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 06 Nov 2023 17:58:01 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 11:58, Laurenz Albe <[email protected]> wrote:\n\n> Become a superuser again and commit:\n> >\n> > RESET ROLE;\n> >\n> > COMMIT;\n> > NOTICE: current_user = postgres\n> >\n> >\n> > So a deferred constraint trigger does not run with the same security\n> context\n> > as an immediate trigger. This is somewhat nasty in combination with\n> > SECURITY DEFINER functions: if that function performs an operation, and\n> that\n> > operation triggers a deferred trigger, that trigger will run in the wrong\n> > security context.\n> >\n> > This behavior looks buggy to me. What do you think?\n> > I cannot imagine that it is a security problem, though.\n>\n\nThis looks to me like another reason that triggers should run as the\ntrigger owner. Which role owns the trigger won’t change as a result of\nconstraints being deferred or not, or any role setting done during the\ntransaction, including that relating to security definer functions.\n\nRight now triggers can’t do anything that those who can\nINSERT/UPDATE/DELETE (i.e., cause the trigger to fire) can’t do, which in\nparticular breaks the almost canonical example of using triggers to log\nchanges — I can’t do it without also allowing users to make spurious log\nentries.\n\nAlso if I cause a trigger to fire, I’ve just given the trigger owner the\nopportunity to run arbitrary code as me.\n\n\n> I just realized one problem with running a deferred constraint trigger as\n> the triggering role: that role might have been dropped by the time the\n> trigger\n> executes. But then we could still error out.\n>\n\nThis problem is also fixed by running triggers as their owners: there\nshould be a dependency between an object and its owner. 
So the\ntrigger-executing role can’t be dropped without dropping the trigger.",
"msg_date": "Mon, 6 Nov 2023 12:28:52 -0500",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On 11/6/23 14:23, Laurenz Albe wrote:\n> ...\n> \n> This behavior looks buggy to me. What do you think?\n> I cannot imagine that it is a security problem, though.\n> \n\nHow could code getting executed under the wrong role not be a security\nissue? Also, does this affect just the role, or are there some other\nsettings that may unexpectedly change (e.g. search_path)?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 Nov 2023 18:29:39 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Mon, 2023-11-06 at 18:29 +0100, Tomas Vondra wrote:\n> On 11/6/23 14:23, Laurenz Albe wrote:\n> > This behavior looks buggy to me. What do you think?\n> > I cannot imagine that it is a security problem, though.\n> \n> How could code getting executed under the wrong role not be a security\n> issue? Also, does this affect just the role, or are there some other\n> settings that may unexpectedly change (e.g. search_path)?\n\nPerhaps it is a security issue, and I am just lacking imagination.\n\nYes, changes to \"search_path\" should also have an effect.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 06 Nov 2023 21:00:43 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Mon, 2023-11-06 at 18:29 +0100, Tomas Vondra wrote:\n> On 11/6/23 14:23, Laurenz Albe wrote:\n> > This behavior looks buggy to me. What do you think?\n> > I cannot imagine that it is a security problem, though.\n> \n> How could code getting executed under the wrong role not be a security\n> issue? Also, does this affect just the role, or are there some other\n> settings that may unexpectedly change (e.g. search_path)?\n\nHere is a patch that fixes this problem by keeping track of the\ncurrent role in the AfterTriggerSharedData.\n\nI have thought some more about the security aspects:\n\n1. With the new code, you could call a SECURITY DEFINER function\n that modifies data on a table with a deferred trigger, then\n modify the trigger function before you commit and have your\n code run with elevated privileges.\n But I think that we need not worry about that. If a\n superuser performs DML on a table that an untrusted user\n controls, all bets are off anyway. The attacker might as\n well put the bad code into the trigger *before* calling the\n SECURITY DEFINER function.\n\n2. The more serious concern is that the old code constitutes\n a narrow security hazard: a superuser could temporarily\n assume an unprivileged role to avoid risks while performing\n DML on a table controlled by an untrusted user, but for\n some reason resume being a superuser *before* COMMIT.\n Then a deferred trigger would inadvertently run with\n superuser privileges.\n\n That seems like a very unlikely scenario (who would RESET ROLE\n before committing in that situation?). Moreover, it seems\n like the current buggy behavior has been in place for decades\n without anybody noticing.\n\n I am not against backpatching this (the fix is simple enough),\n but I am not convinced that it is necessary.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 06 Mar 2024 14:32:06 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "Hi,\n\nI see that this patch is marked as ready for review, so I thought I\nwould attempt to review it. This is my first review, so please take it\nwith a grain of salt.\n\n> So a deferred constraint trigger does not run with the same security\ncontext\n> as an immediate trigger.\n\nIt sounds like the crux of your argument is that the current behavior\nis that triggers are executed with the role and security context of the\nsession at the time of execution. Instead, the trigger should be\nexecuted with the role and security context of the session at the time\ntime of queuing (i.e. the same context as the action that triggered the\ntrigger). While I understand that the current behavior can be\nsurprising in some scenarios, it's not clear to me why this behavior is\nwrong. It seems that the whole point of deferring a trigger to commit\ntime is that the context that the trigger is executed in is different\nthan the context that it was triggered in. Tables may have changed,\npermissions may have changed, session configuration variables may have\nchanged, roles may have changed, etc. So why should the executing role\nbe treated differently and restored to the value at the time of\ntriggering. 
Perhaps you can expand on why you feel that the current\nbehavior is wrong?\n\n> This is somewhat nasty in combination with\n> SECURITY DEFINER functions: if that function performs an operation, and\nthat\n> operation triggers a deferred trigger, that trigger will run in the wrong\n> security context.\n...\n> The more serious concern is that the old code constitutes\n> a narrow security hazard: a superuser could temporarily\n> assume an unprivileged role to avoid risks while performing\n> DML on a table controlled by an untrusted user, but for\n> some reason resume being a superuser *before* COMMIT.\n> Then a deferred trigger would inadvertently run with\n> superuser privileges.\n\nI find these examples to be surprising, but not necessarily wrong\n(as per my comment above). If someone wants the triggers to be executed\nas the triggering role, then they can run `SET CONSTRAINTS ALL\nIMMEDIATE`. If deferring a trigger to commit time and executing it as\nthe triggering role is desirable, then maybe we should add a modifier\nto triggers that can control this behavior. Something like\n`SECURITY INVOKER | SECURITY TRIGGERER` (modeled after the modifiers in\n`CREATE FUNCTION`) that control which role is used.\n\n> This looks to me like another reason that triggers should run as the\n> trigger owner. 
Which role owns the trigger won’t change as a result of\n> constraints being deferred or not, or any role setting done during the\n> transaction, including that relating to security definer functions.\n>\n> Right now triggers can’t do anything that those who can\n> INSERT/UPDATE/DELETE (i.e., cause the trigger to fire) can’t do, which in\n>particular breaks the almost canonical example of using triggers to log\n> changes — I can’t do it without also allowing users to make spurious log\n> entries.\n>\n> Also if I cause a trigger to fire, I’ve just given the trigger owner the\n> opportunity to run arbitrary code as me.\n>\n>> I just realized one problem with running a deferred constraint trigger as\n>> the triggering role: that role might have been dropped by the time the\n>> trigger\n>> executes. But then we could still error out.\n>\n> This problem is also fixed by running triggers as their owners: there\n> should be a dependency between an object and its owner. So the\n> trigger-executing role can’t be dropped without dropping the trigger.\n\n+1, this approach would remove all of the surprising/wrong behavior and\nin my opinion is more obvious. I'd like to add some more reasons why\nthis behavior makes sense:\n\n - The documentation [0] indicates that to create a trigger, the\n creating role must have the `EXECUTE` privilege on the trigger\n function. In fact this check is skipped for the role that triggers\n trigger.\n\n -- Create trig_owner role and function. 
Grant execute on function\n -- to role.\n test=# CREATE ROLE trig_owner;\n CREATE ROLE\n test=# GRANT CREATE ON SCHEMA public TO trig_owner;\n GRANT\n test=# CREATE OR REPLACE FUNCTION f() RETURNS trigger\n LANGUAGE plpgsql AS\n $$BEGIN\n RAISE NOTICE 'current_user = %', current_user;\n RETURN NEW;\n END;$$;\n CREATE FUNCTION\n test=# REVOKE EXECUTE ON FUNCTION f FROM PUBLIC;\n REVOKE\n test=# GRANT EXECUTE ON FUNCTION f TO trig_owner;\n GRANT\n\n -- Create the trigger as trig_owner.\n test=# SET ROLE trig_owner;\n SET\n test=> CREATE TABLE t (a INT);\n CREATE TABLE\n test=> CREATE CONSTRAINT TRIGGER trig AFTER INSERT ON t\n DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW EXECUTE FUNCTION f();\n CREATE TRIGGER\n\n -- Trigger the trigger with a role that doesn't have execute\n -- privileges on the trigger function and also call the function\n -- directly. The trigger succeeds but the function call fails.\n test=> RESET ROLE;\n RESET\n test=# CREATE ROLE r1;\n CREATE ROLE\n test=# GRANT INSERT ON t TO r1;\n GRANT\n test=# SET ROLE r1;\n SET\n test=> INSERT INTO t VALUES (1);\n NOTICE: current_user = r1\n INSERT 0 1\n test=> SELECT f();\n ERROR: permission denied for function f\n\n So the execute privilege check on the trigger function is done using\n the trigger owner role; why shouldn't all privilege checks use the\n trigger owner?\n\n - The set of triggers that will execute as a result of any statement\n is not obvious. Therefore it is non-trivial to figure out if your\n role will have the privileges necessary to execute a given statement.\n Switching to the trigger owner role makes this check trivial.\n\n - This is consistent with how view privileges work. When querying a\n view, the privileges of the view owner are checked against the view\n definition. 
Similarly when executing a trigger, the trigger owner's\n privileges should be checked against the trigger definition.\n\nHowever, I do worry that this is too much of a breaking change and\nfundamentally changes how triggers work. Another drawback is that if\nthe trigger owner loses the required privileges, then no one can modify\nthe table.\n\n> Here is a patch that fixes this problem by keeping track of the\ncurrent role in the AfterTriggerSharedData.\n\nI skimmed the code and haven't looked at it in depth. Whichever direction\nwe go, I think it's worth updating the documentation to make the\nbehavior around triggers and roles clear.\n\nAdditionally, I applied your patch to master and re-ran the example and\ndidn't notice any behavior change.\n\n test=# CREATE TABLE tab (i integer);\n CREATE TABLE\n test=# CREATE FUNCTION trig() RETURNS trigger\n LANGUAGE plpgsql AS\n $$BEGIN\n RAISE NOTICE 'current_user = %', current_user;\n RETURN NEW;\n END;$$;\n CREATE FUNCTION\n test=# CREATE CONSTRAINT TRIGGER trig AFTER INSERT ON tab\n DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW EXECUTE FUNCTION trig();\n CREATE TRIGGER\n test=# CREATE ROLE duff;\n CREATE ROLE\n test=# GRANT INSERT ON tab TO duff;\n GRANT\n test=# SET ROLE duff;\n SET\n test=> BEGIN;\n BEGIN\n test=*> INSERT INTO tab VALUES (1);\n NOTICE: current_user = duff\n INSERT 0 1\n test=*> SET CONSTRAINTS ALL DEFERRED;\n SET CONSTRAINTS\n test=*> INSERT INTO tab VALUES (2);\n INSERT 0 1\n test=*> RESET ROLE;\n RESET\n test=*# COMMIT;\n NOTICE: current_user = joe\n COMMIT\n\nThough maybe I'm just doing something wrong.\n\nThanks,\nJoe Koshakow\n\nP.S. Since this is my first review, feel free to give me meta-comments\ncritiquing the review.\n\n[0] https://www.postgresql.org/docs/16/sql-createtrigger.html",
"msg_date": "Sat, 8 Jun 2024 17:36:54 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Sat, Jun 8, 2024 at 5:36 PM Joseph Koshakow <[email protected]> wrote:\n\n> Additionally, I applied your patch to master and re-ran the example and\n> didn't notice any behavior change.\n>\n> test=# CREATE TABLE tab (i integer);\n> CREATE TABLE\n> test=# CREATE FUNCTION trig() RETURNS trigger\n> LANGUAGE plpgsql AS\n> $$BEGIN\n> RAISE NOTICE 'current_user = %', current_user;\n> RETURN NEW;\n> END;$$;\n> CREATE FUNCTION\n> test=# CREATE CONSTRAINT TRIGGER trig AFTER INSERT ON tab\n> DEFERRABLE INITIALLY IMMEDIATE\n> FOR EACH ROW EXECUTE FUNCTION trig();\n> CREATE TRIGGER\n> test=# CREATE ROLE duff;\n> CREATE ROLE\n> test=# GRANT INSERT ON tab TO duff;\n> GRANT\n> test=# SET ROLE duff;\n> SET\n> test=> BEGIN;\n> BEGIN\n> test=*> INSERT INTO tab VALUES (1);\n> NOTICE: current_user = duff\n> INSERT 0 1\n> test=*> SET CONSTRAINTS ALL DEFERRED;\n> SET CONSTRAINTS\n> test=*> INSERT INTO tab VALUES (2);\n> INSERT 0 1\n> test=*> RESET ROLE;\n> RESET\n> test=*# COMMIT;\n> NOTICE: current_user = joe\n> COMMIT\n>\n> Though maybe I'm just doing something wrong.\n\nSorry, there's definitely something wrong with my environment. 
You can\nignore this.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sat, 8 Jun 2024 21:13:07 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Sat, 8 Jun 2024 at 17:37, Joseph Koshakow <[email protected]> wrote:\n\n\n> However, I do worry that this is too much of a breaking change and\n> fundamentally changes how triggers work. Another draw back is that if\n> the trigger owner loses the required privileges, then no one can modify\n> to the table.\n>\n\nI worry about breaking changes too. The more I think about it, though, the\nmore I think the existing behaviour doesn’t make sense.\n\nSpeaking as a table owner, when I set a trigger on it, I expect that when\nthe specified actions occur my trigger will fire and will do what I\nspecify, without regard to the execution environment of the caller\n(search_path in particular); and my trigger should be able to do anything\nthat I can do. For the canonical case of a logging table the trigger has to\nbe able to do stuff the caller can't do. I don't expect to be able to do\nstuff that the caller can do.\n\nSpeaking as someone making an update on a table, I don't expect to have it\nfail because my execution environment (search_path in particular) is wrong\nfor the trigger implementation, and I consider it a security violation if\nthe table owner is able to do stuff as me as a result, especially if I am\nan administrator making an update as superuser.\n\n In effect, I want the action which fires the trigger to be like a system\ncall, and the trigger, plus the underlying action, to be like what the OS\ndoes in response to the system call.\n\nI'm not sure how to evaluate what problems with existing implementations\nwould be caused by changing what role executes triggers, but I think it's\npretty clear the existing behaviour is the wrong choice in every other way\nthan backward compatibility. 
I welcome examples to the contrary, where the\nexisting behaviour is not just OK but actually wanted.",
"msg_date": "Sat, 8 Jun 2024 22:13:27 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Sat, Jun 8, 2024 at 10:13 PM Isaac Morland <[email protected]>\nwrote:\n\n> Speaking as a table owner, when I set a trigger on it, I expect that when\nthe specified actions occur my trigger will fire and will do what I\nspecify, without regard to the execution environment of the caller\n(search_path in particular); and my trigger should be able to do anything\nthat I can do. For the canonical case of a logging table the trigger has to\nbe able to do stuff the caller can't do. I don't expect to be able to do\nstuff that the caller can do.\n>\n> Speaking as someone making an update on a table, I don't expect to have\nit fail because my execution environment (search_path in particular) is\nwrong for the trigger implementation, and I consider it a security\nviolation if the table owner is able to do stuff as me as a result,\nespecially if I am an administrator making an update as superuser.\n\nCan you expand on this a bit? When a trigger executes, should the\nexecution environment match:\n\n - The execution environment of the trigger owner at the time of\n trigger creation?\n - The execution environment of the function owner at the time of\n function creation?\n - An execution environment built from the trigger owner's default\n configuration parameters?\n - Something else?\n\nWhile I am convinced that privileges should be checked using the\ntrigger owner's role, I'm less convinced of other configuration\nparameters. For the search_path example, that can be resolved by\neither fully qualifying object names or setting the search_path in the\nfunction itself. Similar approaches can be taken with other\nconfiguration parameters.\n\nI also worry that it would be a source of confusion that the execution\nenvironment of triggers comes from the trigger/function owner, but the\nexecution environment of function calls comes from the caller.\n\n> I think it's pretty clear the existing behaviour is the wrong choice in\nevery other way than backward compatibility. 
I welcome examples to the\ncontrary, where the existing behaviour is not just OK but actually wanted.\n\nThis is perhaps a contrived example, but here's one. Suppose I create a\ntrigger that raises a notice that includes the current timestamp. I\nwould probably want to use the timezone of the caller, not the\ntrigger owner.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sun, 9 Jun 2024 16:08:37 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Sat, 2024-06-08 at 17:36 -0400, Joseph Koshakow wrote:\n> I see that this patch is marked as ready for review, so I thought I\n> would attempt to review it. This is my first review, so please take it\n> with a grain of salt.\n\nThank you. Your review is valuable and much to the point.\n\n> \n> It sounds like the crux of your argument is that the current behavior\n> is that triggers are executed with the role and security context of the\n> session at the time of execution. Instead, the trigger should be\n> executed with the role and security context of the session at the time\n> time of queuing (i.e. the same context as the action that triggered the\n> trigger). While I understand that the current behavior can be\n> surprising in some scenarios, it's not clear to me why this behavior is\n> wrong.\n\nSince neither the documentation nor any source comment covers this case,\nit is to some extent a matter of taste whether the current behavior is\ncorrect or not. An alternative to my patch would be to document the\ncurrent behavior rather than change it.\n\nLike you, I was surprised by the current behavior. There is a design\nprinciple that PostgreSQL tries to follow, called the \"Principle of\nleast astonishment\". Things should behave like a moderately skilled\nuser would expect them to. In my opinion, the current behavior violates\nthat principle. Tomas seems to agree with that point of view.\n\nOn the other hand, changing current behavior carries the risk of\nbackward compatibility problems. 
I don't know how much of a problem\nthat would be, but I have the impression that few people use deferred\ntriggers in combination with SET ROLE or SECURITY DEFINER functions,\nwhich makes the problem rather exotic, so hopefully the pain caused by\nthe compatibility break will be moderate.\n\nI didn't find this strange behavior myself: it was one of our customers\nwho uses security definer functions for data modifications and has\nproblems with the current behavior, and I am trying to improve the\nsituation on their behalf.\n\n> It seems that the whole point of deferring a trigger to commit\n> time is that the context that the trigger is executed in is different\n> than the context that it was triggered in. Tables may have changed,\n> permissions may have changed, session configuration variables may have\n> changed, roles may have changed, etc. So why should the executing role\n> be treated differently and restored to the value at the time of\n> triggering. Perhaps you can expand on why you feel that the current\n> behavior is wrong?\n\nTrue, somebody could change permissions or parameters between the\nDML statement and COMMIT, but that feels like external influences to me.\nThose changes would be explicit.\n\nBut I feel that the database user that runs the trigger should be the\nsame user that ran the triggering SQL statement. Even though I cannot\nput my hand on a case where changing this user would constitute a real\nsecurity problem, it feels wrong.\n\nI am aware that that is rather squishy argumentation, but I have no\nbetter one. Both my and Thomas' gut reaction seems to have been \"the\ncurrent behavior is wrong\".\n\n> \n> I find these examples to be surprising, but not necessarily wrong\n> (as per my comment above). If someone wants the triggers to be executed\n> as the triggering role, then they can run `SET CONSTRAINTS ALL\n> IMMEDIATE`. 
If deferring a trigger to commit time and executing it as\n> the triggering role is desirable, then maybe we should add a modifier\n> to triggers that can control this behavior. Something like\n> `SECURITY INVOKER | SECURITY TRIGGERER` (modeled after the modifiers in\n> `CREATE FUNCTION`) that control which role is used.\n\nSECURITY INVOKER and SECURITY TRIGGERER seem too confusing. I would say\nthat the triggerer is the one who invokes the trigger...\n\nIt would have to be a switch like EXECUTE DEFERRED TRIGGER AS INVOKER|COMMITTER\nor so, but I think that special SQL syntax for this exotic corner case\nis a little too much. And then: is there anybody who feels that the current\nbehavior is desirable?\n\n> Isaac Morland wrote:\n> > This looks to me like another reason that triggers should run as the\n> > trigger owner. Which role owns the trigger won’t change as a result of\n> > constraints being deferred or not, or any role setting done during the\n> > transaction, including that relating to security definer functions.\n> \n> +1, this approach would remove all of the surprising/wrong behavior and\n> in my opinion is more obvious. I'd like to add some more reasons why\n> this behavior makes sense: [...]\n> \n> However, I do worry that this is too much of a breaking change and\n> fundamentally changes how triggers work.\n\nYes. This might be the right thing to do if we designed triggers as a\nnew feature, but changing the behavior like that would certainly break\na lot of cases.\n\nPeople who want that behavior use a SECURITY DEFINER trigger function.\n\n> \n> I skimmed the code and haven't looked at in depth. Whichever direction\n> we go, I think it's worth updating the documentation to make the\n> behavior around triggers and roles clear.\n\nI agree: adding a sentence somewhere won't hurt.\nI'll do that once the feedback has given me the feeling that I am on\nthe right track.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 10 Jun 2024 19:00:20 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Mon, Jun 10, 2024 at 1:00 PM Laurenz Albe <[email protected]>\nwrote:\n\n> Like you, I was surprised by the current behavior. There is a design\n> principle that PostgreSQL tries to follow, called the \"Principle of\n> least astonishment\". Things should behave like a moderately skilled\n> user would expect them to. In my opinion, the current behavior\nviolates\n> that principle. Tomas seems to agree with that point of view.\n\nI worry that both approaches violate this principle in different ways.\nFor example, consider the following sequence of events:\n\n SET ROLE r1;\n BEGIN;\n SET CONSTRAINTS ALL DEFERRED;\n INSERT INTO ...;\n SET ROLE r2;\n SET search_path = '...';\n COMMIT;\n\nI think that it would be reasonable to expect that the triggers execute\nwith r2 and not r1, since the triggers were explicitly deferred and the\nrole was explicitly set. It would likely be surprising that the search\npath was updated for the trigger but not the role. With your proposed\napproach it would be impossible for someone to trigger a trigger with\none role and execute it with another, if that's a desirable feature.\n\n> I didn't find this strange behavior myself: it was one of our customers\n> who uses security definer functions for data modifications and has\n> problems with the current behavior, and I am trying to improve the\n> situation on their behalf.\n\nWould it be possible to share more details about this use case? For\nexample, what are their current problems? Are they not able to set\nconstraints to immediate? Or can they update the trigger function\nitself to be a security definer function? That might help illuminate why\nthe current behavior is wrong.\n\n> But I feel that the database user that runs the trigger should be the\n> same user that ran the triggering SQL statement. 
Even though I cannot\n> put my hand on a case where changing this user would constitute a real\n> security problem, it feels wrong.\n>\n> I am aware that that is rather squishy argumentation, but I have no\n> better one. Both my and Thomas' gut reaction seems to have been \"the\n> current behavior is wrong\".\n\nI understand the gut reaction, and I even have the same gut reaction,\nbut since we would be treating roles exceptionally compared to the rest\nof the execution context, I would feel better if we had a more concrete\nreason.\n\nI also took a look at the code. It doesn't apply cleanly to master, so\nI took the liberty of rebasing and attaching it.\n\n> + /*\n> + * The role could have been dropped since the trigger was queued.\n> + * In that case, give up and error out.\n> + */\n> + pfree(GetUserNameFromId(evtshared->ats_rolid, false));\n\nIt feels a bit wasteful to allocate and copy the role name when we\nnever actually use it. Is it possible to check that the role exists\nwithout copying the name?\n\nEverything else looked good, and the code does what it says it will.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sat, 22 Jun 2024 17:50:14 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Sat, Jun 8, 2024 at 2:37 PM Joseph Koshakow <[email protected]> wrote:\n\n>\n>\nSomething like\n> `SECURITY INVOKER | SECURITY TRIGGERER` (modeled after the modifiers in\n> `CREATE FUNCTION`) that control which role is used.\n>\n\nI'm inclined toward this option (except invoker and triggerer are the same\nentity, we need owner|definer). I'm having trouble accepting changing the\nexisting behavior here, but I agree it would be worth having a mode whereby the owner of\nthe trigger/table executes the trigger function in an initially clean\nenvironment (server/database defaults; the owner role isn't considered as\nhaving logged in, so their personalized configurations do not take effect)\n(maybe add a SET clause to CREATE TRIGGER too). Security invoker would be\nthe default, retaining current behavior for upgrade/dump+restore.\n\nSecurity definer on the function would take precedence as would its set\nclause.\n\nDavid J.",
"msg_date": "Sat, 22 Jun 2024 15:22:25 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Sat, Jun 22, 2024 at 6:23 PM David G. Johnston <\[email protected]> wrote:\n\n> except invoker and triggerer are the same entity\n\nMaybe \"executor\" would have been a better term than \"invoker\". In this\nspecific example they are not the same entity. The trigger is\ntriggered and queued by one role and executed by a different role,\nhence the confusion. Though I agree with Laurenz, special SQL syntax\nfor this exotic corner case is a little too much.\n\n> Security definer on the function would take precedence as would its set\nclause.\n\nThese trigger options seem a bit redundant with the equivalent options\non the function that is executed by the trigger. What would be the\nadvantages or differences of setting these options on the trigger\nversus the function?\n\nThanks,\nJoe Koshakow",
"msg_date": "Sat, 22 Jun 2024 22:21:20 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Sat, Jun 22, 2024 at 7:21 PM Joseph Koshakow <[email protected]> wrote:\n\n> On Sat, Jun 22, 2024 at 6:23 PM David G. Johnston <\n> [email protected]> wrote:\n>\n> > except invoker and triggerer are the same entity\n>\n> Maybe \"executor\" would have been a better term than 'invoker\". In this\n> specific example they are not the same entity. The trigger is\n> triggered and queued by one role and executed by a different role,\n> hence the confusion.\n>\n\nNo matter what we label the keyword it would represent the default and\nexisting behavior whereby the environment at trigger resolution time, not\ntrigger enqueue time, is used.\n\nI suppose there is an argument for capturing and reapplying the trigger\nenqueue time environment and giving that a keyword as well. But fewer\noptions has value and security definer seems like the strictly superior\noption.\n\n\n> Though I agree with Laurenz, special SQL syntax\n> for this exotic corner case is a little too much.\n>\n\nIt doesn't seem like a corner case if we want to institute a new\nrecommended practice that all triggers should be created with security\ndefiner. We seem to be discussing that without giving the dba a choice in\nthe matter - but IMO we do want to provide the choice and leave the default.\n\n\n> > Security definer on the function would take precedence as would its set\n> clause.\n>\n> These trigger options seem a bit redundant with the equivalent options\n> on the function that is executed by the trigger. What would be the\n> advantages or differences of setting these options on the trigger\n> versus the function?\n>\n>\nAt least security definer needs to take precedence as the function owner is\nfully expecting their role to be the one executing the function, not\nwhomever the trigger owner might be.\n\nIf putting a set clause on the trigger is a thing then the same thing goes\n- the function author, if they also did that, expects their settings to be\nin place. Whether it really makes sense to have trigger owner set\nconfiguration when they attach the function is arguable but also the most\nflexible option.\n\nDavid J.",
"msg_date": "Sat, 22 Jun 2024 20:13:09 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Sat, 2024-06-22 at 20:13 -0700, David G. Johnston wrote:\n> [bikeshedding discussion about SQL syntax]\n\nSure, something like CREATE TRIGGER ... USING {INVOKER|CURRENT} ROLE\nor similar would work, but I think that this discussion is premature\nat this point. If we have syntax to specify the behavior\nof deferred triggers, that needs a new column in \"pg_trigger\", support\nin pg_get_triggerdef(), pg_dump, pg_upgrade etc.\n\nAll that is possible, but keep in mind that we are talking about corner\ncase behavior. To the best of my knowledge, nobody has even noticed the\ndifference in behavior up to now.\n\nI think that we should have some consensus about the following before\nwe discuss syntax:\n\n- Does anybody depend on the current behavior and would be hurt if\n my current patch went in as it is?\n\n- Is this worth changing at all or had we better document the current\n behavior and leave it as it is?\n\nConcerning the latter, I am hoping for a detailed description of our\ncustomer's use case some time soon.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 26 Jun 2024 11:02:30 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Wed, Jun 26, 2024 at 2:02 AM Laurenz Albe <[email protected]>\nwrote:\n\n>\n> I think that we should have some consensus about the following before\n> we discuss syntax:\n>\n> - Does anybody depend on the current behavior and would be hurt if\n> my current patch went in as it is?\n>\n> - Is this worth changing at all or had we better document the current\n> behavior and leave it as it is?\n>\n> Concerning the latter, I am hoping for a detailed description of our\n> customer's use case some time soon.\n>\n>\nWe have a few choices then:\n1. Status quo + documentation backpatch\n2. Change v18 narrowly + documentation backpatch\n3. Backpatch narrowly (one infers the new behavior after reading the\nexisting documentation)\n4. Option 1, plus a new v18 owner-execution mode in lieu of the narrow\nchange to fix the POLA violation\n\nI've been presenting option 4.\n\nPondering further, I see now that having the owner-execution mode be the\nonly way to avoid the POLA violation in deferred triggers isn't great since\nmany triggers benefit from the implied security of being able to run in the\ninvoker's execution context - especially if the trigger doesn't do anything\nthat PUBLIC cannot already do.\n\nSo, I'm on board with option 2 at this point.\n\nDavid J.",
"msg_date": "Wed, 26 Jun 2024 07:38:08 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Wed, 2024-06-26 at 07:38 -0700, David G. Johnston wrote:\n> We have a few choices then:\n> 1. Status quo + documentation backpatch\n> 2. Change v18 narrowly + documentation backpatch\n> 3. Backpatch narrowly (one infers the new behavior after reading the existing documentation)\n> 4. Option 1, plus a new v18 owner-execution mode in lieu of the narrow change to fix the POLA violation\n> \n> I've been presenting option 4.\n> \n> Pondering further, I see now that having the owner-execution mode be the only way to avoid\n> the POLA violation in deferred triggers isn't great since many triggers benefit from the\n> implied security of being able to run in the invoker's execution context - especially if\n> the trigger doesn't do anything that PUBLIC cannot already do.\n> \n> So, I'm on board with option 2 at this point.\n\nNice.\n\nI think we can safely rule out option 3.\nEven if it is a bug, it is not one that has bothered anybody so far that a backpatch\nis indicated.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 26 Jun 2024 17:53:02 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "Hi,\n\nAllow me to provide some background on how we came across this.\n\n(This is my first time posting to a pgsql list so hopefully I've got \neverything set up correctly.)\n\nWe have a db with a big legacy section that we're in the process of \nmodernizing. To compensate for some of the shortcomings we have designed \na layer of writable views to better represent the underlying data and \nmake working with it more convenient. The following should illustrate \nwhat we're doing:\n\n -- this is the schema containing the view layer.\n create schema api;\n -- and this user is granted access to the api, but not the rest of \nthe legacy db.\n create role apiuser;\n grant usage on schema api to apiuser;\n\n -- some dummy objects in the legacy db - poorly laid out and poorly \nnamed.\n create schema legacy;\n create table legacy.stock_base (\n id serial primary key\n , lbl text not null unique\n , num int not null\n -- etc\n );\n create table legacy.stock_extra (\n id int not null unique references legacy.stock_base (id)\n , man text\n -- etc\n );\n\n -- create the stock view which better names and logically groups \nthe data.\n create view api.stock as\n select sb.id\n , sb.lbl as description\n , sb.num as quantity\n , se.man as manufacturer\n from legacy.stock_base sb\n left join legacy.stock_extra se using (id);\n -- make it writable so it is easier to work with. 
use security \ndefiner to allow access to legacy sections.\n create function api.stock_cud() returns trigger language plpgsql \nsecurity definer as $$\n begin\n -- insert/update legacy.stock_base and legacy.stock_extra \ndepending on trigger action, modified fields, etc.\n assert tg_op = 'INSERT'; -- assume insert for example's sake.\n insert into legacy.stock_base (lbl, num) values \n(new.description, new.quantity) returning id into new.id;\n insert into legacy.stock_extra (id, man) values (new.id, \nnew.manufacturer);\n return new;\n end;\n $$;\n create trigger stock_cud\n instead of insert or update or delete on api.stock\n for each row execute function api.stock_cud();\n\n -- grant the apiuser permission to work with the view.\n grant insert, update, delete on api.stock to apiuser;\n\n -- insert as superuser - works as expected.\n insert into api.stock (description, quantity, manufacturer) values \n('item1', 10, 'manufacturer1');\n -- insert as apiuser - works as expected.\n set role apiuser;\n insert into api.stock (description, quantity, manufacturer) values \n('item2', 10, 'manufacturer2');\n\nIn some cases there are constraint triggers on the underlying tables to \nvalidate certain states. It is, however, possible for a state to be \ntemporarily invalid between statements, so long as it is valid at tx \ncommit. For this reason the triggers are deferred by default. 
Consider \nthe following example:\n\n reset role;\n create function legacy.stock_check_state() returns trigger language \nplpgsql as $$\n begin\n -- do some queries to check the state of stock based on \nmodified rows and error if invalid.\n raise notice 'current_user %', current_user;\n -- dummy validation code.\n perform * from legacy.stock_base sb left join \nlegacy.stock_extra se using (id) where sb.id = new.id;\n return new;\n end;\n $$;\n create constraint trigger stock_check_state\n after insert or update or delete on legacy.stock_base\n deferrable initially deferred\n for each row execute function legacy.stock_check_state();\n\nThen repeat the inserts:\n\n -- insert as superuser - works as expected.\n reset role;\n insert into api.stock (description, quantity, manufacturer) values \n('item3', 10, 'manufacturer3'); -- NOTICE: current_user postgres\n -- insert as apiuser - fails.\n set role apiuser;\n insert into api.stock (description, quantity, manufacturer) values \n('item4', 10, 'manufacturer4'); -- NOTICE: current_user apiuser\n\n -- insert as apiuser, not deferred - works as expected.\n begin;\n set constraints all immediate;\n insert into api.stock (description, quantity, manufacturer) values \n('item4', 10, 'manufacturer4'); -- NOTICE: current_user postgres\n commit;\n\nThe insert as apiuser fails when the constraint trigger is deferred, but \nworks as expected when it is immediate.\n\nHopefully this helps to better paint the picture. Our workaround \ncurrently is to just make `legacy.stock_check_state()` security definer \nas well. As I told Laurenz, we're not looking to advocate for any \nspecific outcome here. We noticed this strange behaviour and thought it \nto be a bug that should be fixed - whatever \"fixed\" ends up meaning.\n\nRegards,\n\nBennie Swart\n\n\nOn 2024/06/26 17:53, Laurenz Albe wrote:\n> On Wed, 2024-06-26 at 07:38 -0700, David G. Johnston wrote:\n>> We have a few choices then:\n>> 1. Status quo + documentation backpatch\n>> 2. 
Change v18 narrowly + documentation backpatch\n>> 3. Backpatch narrowly (one infers the new behavior after reading the existing documentation)\n>> 4. Option 1, plus a new v18 owner-execution mode in lieu of the narrow change to fix the POLA violation\n>>\n>> I've been presenting option 4.\n>>\n>> Pondering further, I see now that having the owner-execution mode be the only way to avoid\n>> the POLA violation in deferred triggers isn't great since many triggers benefit from the\n>> implied security of being able to run in the invoker's execution context - especially if\n>> the trigger doesn't do anything that PUBLIC cannot already do.\n>>\n>> So, I'm on board with option 2 at this point.\n> Nice.\n>\n> I think we can safely rule out option 3.\n> Even if it is a bug, it is not one that has bothered anybody so far that a backpatch\n> is indicated.\n>\n> Yours,\n> Laurenz Albe\n>\n>\n>\n>\n\n\n\n",
"msg_date": "Mon, 1 Jul 2024 15:39:39 +0200",
"msg_from": "Bennie Swart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Sat, 2024-06-22 at 17:50 -0400, Joseph Koshakow wrote:\n> On Mon, Jun 10, 2024 at 1:00 PM Laurenz Albe <[email protected]> wrote:\n> > Like you, I was surprised by the current behavior. There is a design\n> > principle that PostgreSQL tries to follow, called the \"Principle of\n> > least astonishment\". Things should behave like a moderately skilled\n> > user would expect them to. In my opinion, the current behavior violates\n> > that principle. Tomas seems to agree with that point of view.\n> \n> I worry that both approaches violate this principle in different ways.\n> For example consider the following sequence of events:\n> \n> SET ROLE r1;\n> BEGIN;\n> SET CONSTRAINTS ALL DEFERRED;\n> INSERT INTO ...;\n> SET ROLE r2;\n> SET search_path = '...';\n> COMMIT;\n> \n> I think that it would be reasonable to expect that the triggers execute\n> with r2 and not r1, since the triggers were explicitly deferred and the\n> role was explicitly set. It would likely be surprising that the search\n> path was updated for the trigger but not the role. With your proposed\n> approach it would be impossible for someone to trigger a trigger with\n> one role and execute it with another, if that's a desirable feature.\n\nI definitely see your point, although GUC settings and the current\nsecurity context are something different.\n\nIt would definitely not be viable to put all GUC values in the trigger\nstate.\n\nSo if you say \"all or nothing\", it would be nothing, and the patch should\nbe rejected.\n\n> > I didn't find this strange behavior myself: it was one of our customers\n> > who uses security definer functions for data modifications and has\n> > problems with the current behavior, and I am trying to improve the\n> > situation on their behalf.\n> \n> Would it be possible to share more details about this use case? For\n> example, What are their current problems? Are they not able to set\n> constraints to immediate? 
Or can they update the trigger function\n> itself be a security definer function? That might help illuminate why\n> the current behavior is wrong.\n\nI asked them for a statement, and they were nice enough to write up\nhttps://postgr.es/m/e89e8dd9-7143-4db8-ac19-b2951cb0c0da%40gmail.com\n\nThey have a workaround, so the patch is not absolutely necessary for them.\n\n> \n> I also took a look at the code. It doesn't apply cleanly to master, so\n> I took the liberty of rebasing and attaching it.\n> \n> > + /*\n> > + * The role could have been dropped since the trigger was queued.\n> > + * In that case, give up and error out.\n> > + */\n> > + pfree(GetUserNameFromId(evtshared->ats_rolid, false));\n> \n> It feels a bit wasteful to allocate and copy the role name when we\n> never actually use it. Is it possible to check that the role exists\n> without copying the name?\n\nIf that is a concern (and I can envision it to be), I can improve that.\nOne option is to copy the guts of GetUserNameFromId(), and another\nis to factor out the common parts into a new function.\n\nI'd wait until we have a decision whether we want the patch or not\nbefore I make the effort, if that's ok.\n\n> Everything else looked good, and the code does what it says it will.\n\nThanks for the review!\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 01 Jul 2024 17:45:17 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Mon, Jul 1, 2024 at 11:45 AM Laurenz Albe <[email protected]>\nwrote:\n\n> I asked them for a statement, and they were nice enough to write up\n> https://postgr.es/m/e89e8dd9-7143-4db8-ac19-b2951cb0c0da%40gmail.com\n\n> They have a workaround, so the patch is not absolutely necessary for them.\n\nIt sounds like the issue is that there is a constraint\ntrigger to check a table constraint that must be executed at commit\ntime, and we'd like to guarantee that if the triggering action was\nsuccessful, then the constraint check is also successful. This is an\neven bigger issue for transactions that have multiple of these\nconstraint checks where there may be no single role that has the\nprivileges required to execute all checks.\n\nYour patch would fix the issue in a majority of cases, but not all.\nSince INSERT, UPDATE, DELETE privileges don't necessarily imply SELECT\nprivileges, the role that modifies a table doesn't necessarily have the\nprivileges required to check the constraints. It sounds like creating\nthe constraint check triggers as a security definer function, with a\nrole that has SELECT privileges, is the more complete solution rather\nthan a workaround.\n\nGiven the above and the fact that the patch is a breaking change, my\nvote would still be to keep the current behavior and update the\ndocumentation. Though I'd be happy to be overruled by someone with more\nknowledge of triggers.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sun, 7 Jul 2024 22:14:57 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On 7/8/24 04:14, Joseph Koshakow wrote:\n> Given the above and the fact that the patch is a breaking change, my\n> vote would still be to keep the current behavior and update the\n> documentation. Though I'd be happy to be overruled by someone with more\n> knowledge of triggers.\n\nThanks for that feedback.\nBased on that, the patch should be rejected.\n\nSince there were a couple of other opinions early in the thread, I'll let\nit sit like that for now, and judgement can be passed at the end of the\ncommitfest. Perhaps somebody else wants to chime in.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 8 Jul 2024 14:36:45 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Mon, Jul 8, 2024 at 14:36 Laurenz Albe <[email protected]>\nwrote:\n\n> On 7/8/24 04:14, Joseph Koshakow wrote:\n> > Given the above and the fact that the patch is a breaking change, my\n> > vote would still be to keep the current behavior and update the\n> > documentation. Though I'd be happy to be overruled by someone with more\n> > knowledge of triggers.\n>\n> Thanks for that feedback.\n> Based on that, the patch should be rejected.\n>\n> Since there were a couple of other opinions early in the thread, I'll let\n> it sit like that for now, and judgement can be passed at the end of the\n> commitfest. Perhaps somebody else wants to chime in.\n>\n\nIt is hard to say what should be expected behaviour in this case. I think\nthe best is just to document this issue, and change nothing.\n\nRegards\n\nPavel\n\n\n>\n>\n> Yours,\n> Laurenz Albe\n>\n>\n>",
"msg_date": "Wed, 24 Jul 2024 17:52:49 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong security context for deferred triggers?"
},
{
"msg_contents": "On Wed, 2024-03-06 at 14:32 +0100, Laurenz Albe wrote:\n> On Mon, 2023-11-06 at 18:29 +0100, Tomas Vondra wrote:\n> > On 11/6/23 14:23, Laurenz Albe wrote:\n> > > This behavior looks buggy to me. What do you think?\n> > > I cannot imagine that it is a security problem, though.\n> > \n> > How could code getting executed under the wrong role not be a security\n> > issue? Also, does this affect just the role, or are there some other\n> > settings that may unexpectedly change (e.g. search_path)?\n> \n> Here is a patch that fixes this problem by keeping track of the\n> current role in the AfterTriggerSharedData.\n\nFunny enough, this problem has just surfaced on pgsql-general:\nhttps://postgr.es/m/[email protected]\n\nI take this as one more vote for this patch...\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 09 Sep 2024 23:14:30 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong security context for deferred triggers?"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nAttached is the release announcement draft for the 2023-11-09 release \r\n(16.1 et al.).\r\n\r\nPlease review for accuracy and notable omissions. Please have all \r\nfeedback in by 2023-11-09 08:00 UTC at the latest (albeit the sooner the \r\nbetter).\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 6 Nov 2023 17:04:25 -0500",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "2023-11-09 release announcement draft"
},
{
"msg_contents": "Hi,\n\nOn 11/6/23 17:04, Jonathan S. Katz wrote:\n> Attached is the release announcement draft for the 2023-11-09 release \n> (16.1 et al.).\n>\n> Please review for accuracy and notable omissions. Please have all \n> feedback in by 2023-11-09 08:00 UTC at the latest (albeit the sooner \n> the better).\n\n\ns/PostgreSQL 10/PostgreSQL 11/g\n\n\nBest regards,\n\n Jesper\n\n\n\n\n",
"msg_date": "Mon, 6 Nov 2023 18:07:03 -0500",
"msg_from": "Jesper Pedersen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 2023-11-09 release announcement draft"
},
{
"msg_contents": "On Mon, Nov 06, 2023 at 05:04:25PM -0500, Jonathan S. Katz wrote:\n> The PostgreSQL Global Development Group has released an update to all supported\n> versions of PostgreSQL, including 16.1, 15.5, 14.10, 13.13, 12.17, and 11.22\n> This release fixes over 55 bugs reported over the last several months.\n> \n> This release includes fixes for indexes where in certain cases, we advise\n> reindexing. Please see the \"Update\" section for more details.\n\ns/\"Update\" section/\"Updating\" section/ or change section title below.\n\nDelete lines starting here ...\n\n> This is the **final release of PostgreSQL 11**. PostgreSQL 10 will no longer\n> receive\n> [security and bug fixes](https://www.postgresql.org/support/versioning/).\n> If you are running PostgreSQL 10 in a production environment, we suggest that\n> you make plans to upgrade.\n\n... to here. They're redundant with \"PostgreSQL 11 EOL Notice\" below:\n\n> For the full list of changes, please review the\n> [release notes](https://www.postgresql.org/docs/release/).\n> \n> PostgreSQL 11 EOL Notice\n> ------------------------\n> \n> **This is the final release of PostgreSQL 11**. PostgreSQL 11 is now end-of-life\n> and will no longer receive security and bug fixes. If you are\n> running PostgreSQL 11 in a production environment, we suggest that you make\n> plans to upgrade to a newer, supported version of PostgreSQL. Please see our\n> [versioning policy](https://www.postgresql.org/support/versioning/) for more\n> information.\n\n\n",
"msg_date": "Mon, 6 Nov 2023 18:52:30 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 2023-11-09 release announcement draft"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 23:04, Jonathan S. Katz <[email protected]> wrote:\n>\n> Hi,\n>\n> Attached is the release announcement draft for the 2023-11-09 release\n> (16.1 et al.).\n>\n> Please review for accuracy and notable omissions. Please have all\n> feedback in by 2023-11-09 08:00 UTC at the latest (albeit the sooner the\n> better).\n\n> 20231109updaterelease.md\n> [...]\n> * Provide more efficient indexing of `date`, `timestamptz`, and `timestamp`\n> values in BRIN indexes. While not required, we recommend\n> [reindexing](https://www.postgresql.org/docs/current/sql-reindex.html) BRIN\n> indexes that include these data types after installing this update.\n\nAs the type's minmax_multi opclasses are marked as default, I believe\nit makes sense to explicitly mention that only indexes that use the\ntype's minmax_multi opclasses would need to be reindexed for them to\nsee improved performance. The types' *_bloom and *_minmax opclasses\nwere not affected and therefore do not need to be reindexed.\n\nKind regards,\n\nMatthias van de meent.\n\n\n",
"msg_date": "Tue, 7 Nov 2023 14:14:10 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 2023-11-09 release announcement draft"
},
{
"msg_contents": "On 11/6/23 9:52 PM, Noah Misch wrote:\r\n> On Mon, Nov 06, 2023 at 05:04:25PM -0500, Jonathan S. Katz wrote:\r\n>> The PostgreSQL Global Development Group has released an update to all supported\r\n>> versions of PostgreSQL, including 16.1, 15.5, 14.10, 13.13, 12.17, and 11.22\r\n>> This release fixes over 55 bugs reported over the last several months.\r\n>>\r\n>> This release includes fixes for indexes where in certain cases, we advise\r\n>> reindexing. Please see the \"Update\" section for more details.\r\n> \r\n> s/\"Update\" section/\"Updating\" section/ or change section title below.\r\n\r\nFixed.\r\n\r\n> Delete lines starting here ...\r\n> \r\n>> This is the **final release of PostgreSQL 11**. PostgreSQL 10 will no longer\r\n>> receive\r\n>> [security and bug fixes](https://www.postgresql.org/support/versioning/).\r\n>> If you are running PostgreSQL 10 in a production environment, we suggest that\r\n>> you make plans to upgrade.\r\n> \r\n> ... to here. They're redundant with \"PostgreSQL 11 EOL Notice\" below:\r\n\r\nInitially, I strongly disagreed with this recommendation, as I've seen \r\nenough people say that they were unaware that a community version is \r\nEOL. We can't say this enough.\r\n\r\nHowever, I did decide to clip it out because the notice is just below.\r\n\r\nThat said, perhaps we should put out a separate announcement that states \r\nPostgreSQL 11 is EOL. We may want to consider doing standalone EOL \r\nannouncement -- perhaps 6 months out, and then day of, to make it \r\nabundantly clear that a version is deprecating.\r\n\r\nFinally, I included Matthias' downthread recommendation in this version.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 7 Nov 2023 09:02:03 -0500",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 2023-11-09 release announcement draft"
},
{
"msg_contents": "On Tue, Nov 07, 2023 at 09:02:03AM -0500, Jonathan S. Katz wrote:\n> On 11/6/23 9:52 PM, Noah Misch wrote:\n> > On Mon, Nov 06, 2023 at 05:04:25PM -0500, Jonathan S. Katz wrote:\n\n> > Delete lines starting here ...\n> > \n> > > This is the **final release of PostgreSQL 11**. PostgreSQL 10 will no longer\n> > > receive\n> > > [security and bug fixes](https://www.postgresql.org/support/versioning/).\n> > > If you are running PostgreSQL 10 in a production environment, we suggest that\n> > > you make plans to upgrade.\n> > \n> > ... to here. They're redundant with \"PostgreSQL 11 EOL Notice\" below:\n> \n> Initially, I strongly disagreed with this recommendation, as I've seen\n> enough people say that they were unaware that a community version is EOL. We\n> can't say this enough.\n> \n> However, I did decide to clip it out because the notice is just below.\n\nI just figured it was a copy-paste error, given the similarity of nearby\nsentences. I have no concern with a general goal of saying more about the end\nof v11.\n\n\n",
"msg_date": "Tue, 7 Nov 2023 09:41:33 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 2023-11-09 release announcement draft"
}
] |
[
{
"msg_contents": "(Unfortunately, I'm posting this too late for the November commitfest, but\nI'm hoping this will be the first in a series of proposed improvements\ninvolving SIMD instructions for v17.)\n\nPresently, we ask compilers to autovectorize checksum.c and numeric.c. The\npage checksum code actually lives in checksum_impl.h, and checksum.c just\nincludes it. But checksum_impl.h is also used in pg_upgrade/file.c and\npg_checksums.c, and since we don't ask compilers to autovectorize those\nfiles, the page checksum code may remain un-vectorized.\n\nThe attached patch is a quick attempt at adding CFLAGS_UNROLL_LOOPS and\nCFLAGS_VECTORIZE to the CFLAGS for the aforementioned objects. The gains\nare modest (i.e., some system CPU and/or a few percentage points on the\ntotal time), but it seemed like a no-brainer.\n\nSeparately, I'm wondering whether we should consider using CFLAGS_VECTORIZE\non the whole tree. Commit fdea253 seems to be responsible for introducing\nthis targeted autovectorization strategy, and AFAICT this was just done to\nminimize the impact elsewhere while optimizing page checksums. Are there\nfundamental problems with adding CFLAGS_VECTORIZE everywhere? Or is it\njust waiting on someone to do the analysis/benchmarking?\n\n[0] https://postgr.es/m/1367013190.11576.249.camel%40sussancws0025\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 6 Nov 2023 20:47:34 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Tue, Nov 7, 2023 at 9:47 AM Nathan Bossart <[email protected]> wrote:\n> Separately, I'm wondering whether we should consider using CFLAGS_VECTORIZE\n> on the whole tree. Commit fdea253 seems to be responsible for introducing\n> this targeted autovectorization strategy, and AFAICT this was just done to\n> minimize the impact elsewhere while optimizing page checksums. Are there\n> fundamental problems with adding CFLAGS_VECTORIZE everywhere? Or is it\n> just waiting on someone to do the analysis/benchmarking?\n\nIt's already the default for gcc 12 with -O2 (looking further in the\ndocs, it uses the \"very-cheap\" vectorization cost model), so it may be\nworth investigating what the effect of that was. I can't quickly find\nthe equivalent info for clang.\n\nThat being the case, if the difference you found was real, it must\nhave been due to unrolling loops. What changed in the binary?\n\nhttps://gcc.gnu.org/gcc-12/changes.html\nhttps://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html\n\n\n",
"msg_date": "Sat, 11 Nov 2023 19:38:59 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Sat, Nov 11, 2023 at 07:38:59PM +0700, John Naylor wrote:\n> On Tue, Nov 7, 2023 at 9:47 AM Nathan Bossart <[email protected]> wrote:\n>> Separately, I'm wondering whether we should consider using CFLAGS_VECTORIZE\n>> on the whole tree. Commit fdea253 seems to be responsible for introducing\n>> this targeted autovectorization strategy, and AFAICT this was just done to\n>> minimize the impact elsewhere while optimizing page checksums. Are there\n>> fundamental problems with adding CFLAGS_VECTORIZE everywhere? Or is it\n>> just waiting on someone to do the analysis/benchmarking?\n> \n> It's already the default for gcc 12 with -O2 (looking further in the\n> docs, it uses the \"very-cheap\" vectorization cost model), so it may be\n> worth investigating what the effect of that was. I can't quickly find\n> the equivalent info for clang.\n\nMy x86 machine is using gcc 9.4.0, which isn't even aware of \"very-cheap\".\nI don't see any difference with any of the cost models, though. It isn't\nuntil I add -O3 that I see things like inlining pg_checksum_block into\npg_checksum_page. -O3 is generating far more SSE2 instructions, too.\n\nI'll have to check whether gcc 12 is finding anything else within Postgres\nto autovectorize with it's \"very-cheap\" cost model...\n\n> That being the case, if the difference you found was real, it must\n> have been due to unrolling loops. What changed in the binary?\n\nFor gcc 9.4.0 on x86, the autovectorization flag alone indeed makes no\ndifference, while the loop unrolling one does. For Apple clang 14.0.0 on\nan M2, both flags seem to generate very different machine code.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 11 Nov 2023 15:49:43 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-06 20:47:34 -0600, Nathan Bossart wrote:\n> Separately, I'm wondering whether we should consider using CFLAGS_VECTORIZE\n> on the whole tree. Commit fdea253 seems to be responsible for introducing\n> this targeted autovectorization strategy, and AFAICT this was just done to\n> minimize the impact elsewhere while optimizing page checksums. Are there\n> fundamental problems with adding CFLAGS_VECTORIZE everywhere? Or is it\n> just waiting on someone to do the analysis/benchmarking?\n\nHistorically sometimes vectorization ended up hurting in a bunch of\nplaces. But I think that was in the gcc 4 era, which long has\npassed.\n\nIME these days using -O3 yields decent improvements over -O2 when used tree\nwide - even if there are perhaps a few isolated cases where the code is a bit\nworse, they're far outweighed by the improved code.\n\nCompile time wise it's noticeably slower, but not catastrophically so. On an\nolder but decent laptop, while on battery:\n\nO2:\n800.29user 41.99system 0:59.17elapsed 1423%CPU (0avgtext+0avgdata 282324maxresident)k\n152inputs+4408176outputs (95major+13359282minor)pagefaults 0swaps\n\nO3:\n911.80user 44.71system 1:06.79elapsed 1431%CPU (0avgtext+0avgdata 278660maxresident)k\n82624inputs+4571480outputs (571major+14004898minor)pagefaults 0swaps\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Nov 2023 17:00:14 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Tue, Nov 7, 2023 at 9:47 AM Nathan Bossart <[email protected]> wrote:\n>\n> Presently, we ask compilers to autovectorize checksum.c and numeric.c. The\n> page checksum code actually lives in checksum_impl.h, and checksum.c just\n> includes it. But checksum_impl.h is also used in pg_upgrade/file.c and\n> pg_checksums.c, and since we don't ask compilers to autovectorize those\n> files, the page checksum code may remain un-vectorized.\n\nPoking in those files a bit, I also see references to building with\nSSE 4.1. Maybe that's an avenue that we should pursue? (an indirect\nfunction call is surely worth it for page-sized data)\n\n\n",
"msg_date": "Wed, 22 Nov 2023 16:44:03 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Wed, 22 Nov 2023 at 11:44, John Naylor <[email protected]> wrote:\n>\n> On Tue, Nov 7, 2023 at 9:47 AM Nathan Bossart <[email protected]> wrote:\n> >\n> > Presently, we ask compilers to autovectorize checksum.c and numeric.c. The\n> > page checksum code actually lives in checksum_impl.h, and checksum.c just\n> > includes it. But checksum_impl.h is also used in pg_upgrade/file.c and\n> > pg_checksums.c, and since we don't ask compilers to autovectorize those\n> > files, the page checksum code may remain un-vectorized.\n>\n> Poking in those files a bit, I also see references to building with\n> SSE 4.1. Maybe that's an avenue that we should pursue? (an indirect\n> function call is surely worth it for page-sized data)\n\nFor reference, executing the page checksum 10M times on a AMD 3900X CPU:\n\nclang-14 -O2 4.292s (17.8 GiB/s)\nclang-14 -O2 -msse4.1 2.859s (26.7 GiB/s)\nclang-14 -O2 -msse4.1 -mavx2 1.378s (55.4 GiB/s)\n\n--\nAnts Aasma\nSenior Database Engineer\nwww.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:54:13 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 02:54:13PM +0200, Ants Aasma wrote:\n> On Wed, 22 Nov 2023 at 11:44, John Naylor <[email protected]> wrote:\n>> Poking in those files a bit, I also see references to building with\n>> SSE 4.1. Maybe that's an avenue that we should pursue? (an indirect\n>> function call is surely worth it for page-sized data)\n\nYes, I think we should, but we also need to be careful not to hurt\nperformance on platforms that aren't able to benefit [0] [1].\n\nThere are a couple of other threads about adding support for newer\ninstructions [2] [3], and properly detecting the availability of these\ninstructions seems to be a common obstacle. We have a path forward for\nstuff that's already using a runtime check (e.g., CRC32C), but I think\nwe're still trying to figure out what to do for things that must be inlined\n(e.g., simd.h).\n\nOne half-formed idea I have is to introduce some sort of ./configure flag\nthat enables all the newer instructions that your CPU understands. It\nwould also remove any existing runtime checks. This option would make it\neasy to take advantage of the newer instructions if you are building\nPostgres for only your machine (or others just like it).\n\n> For reference, executing the page checksum 10M times on a AMD 3900X CPU:\n> \n> clang-14 -O2 4.292s (17.8 GiB/s)\n> clang-14 -O2 -msse4.1 2.859s (26.7 GiB/s)\n> clang-14 -O2 -msse4.1 -mavx2 1.378s (55.4 GiB/s)\n\nNice. I've noticed similar improvements with AVX2 intrinsics in simd.h.\n\n[0] https://postgr.es/m/2613682.1698779776%40sss.pgh.pa.us\n[1] https://postgr.es/m/36329.1699325578%40sss.pgh.pa.us\n[2] https://postgr.es/m/BL1PR11MB5304097DF7EA81D04C33F3D1DCA6A%40BL1PR11MB5304.namprd11.prod.outlook.com\n[3] https://postgr.es/m/DB9PR08MB6991329A73923BF8ED4B3422F5DBA@DB9PR08MB6991.eurprd08.prod.outlook.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 22 Nov 2023 12:49:35 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 1:49 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Wed, Nov 22, 2023 at 02:54:13PM +0200, Ants Aasma wrote:\n> > On Wed, 22 Nov 2023 at 11:44, John Naylor <[email protected]> wrote:\n> >> Poking in those files a bit, I also see references to building with\n> >> SSE 4.1. Maybe that's an avenue that we should pursue? (an indirect\n> >> function call is surely worth it for page-sized data)\n>\n> Yes, I think we should, but we also need to be careful not to hurt\n> performance on platforms that aren't able to benefit [0] [1].\n\nWell, yes (see popcount using a direct function call on non-x86), but\nI don't think it's as important for page-sized data. Also, sse4.1 is\n~10 years old, I think.\n\n> There are a couple of other threads about adding support for newer\n> instructions [2] [3], and properly detecting the availability of these\n> instructions seems to be a common obstacle. We have a path forward for\n> stuff that's already using a runtime check (e.g., CRC32C), but I think\n> we're still trying to figure out what to do for things that must be inlined\n> (e.g., simd.h).\n>\n> One half-formed idea I have is to introduce some sort of ./configure flag\n> that enables all the newer instructions that your CPU understands.\n\nThat's not doable, but we should consider taking advantage of\nx86-64-v2, which RedHat 9 builds with. That would allow inlining CRC\nand popcount there. Not sure how to detect that easily.\n\n\n",
"msg_date": "Thu, 23 Nov 2023 17:50:48 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 05:50:48PM +0700, John Naylor wrote:\n> On Thu, Nov 23, 2023 at 1:49 AM Nathan Bossart <[email protected]> wrote:\n>> One half-formed idea I have is to introduce some sort of ./configure flag\n>> that enables all the newer instructions that your CPU understands.\n> \n> That's not doable,\n\nIt's not?\n\n> but we should consider taking advantage of\n> x86-64-v2, which RedHat 9 builds with. That would allow inlining CRC\n> and popcount there.\n\nMaybe we have something like --with-isa-level where you can specify\nx86-64-v[1-4] or \"auto\" to mean \"build whatever the current machine can\nhandle.\" I can imagine packagers requiring v2 these days (it's probably\nworth asking them), and that would not only allow compiling in SSE 4.2 on\nmany more machines, but it would also open up a path to supporting\nAVX2/AVX512 and beyond.\n\n> Not sure how to detect that easily.\n\nI briefly looked around and didn't see a portable way to do so. We might\nhave to exhaustively check the features, which doesn't seem like it'd be\ntoo bad for x86_64, but I haven't looked too closely at other\narchitectures.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 23 Nov 2023 10:51:09 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 11:51 PM Nathan Bossart\n<[email protected]> wrote:\n>\n> On Thu, Nov 23, 2023 at 05:50:48PM +0700, John Naylor wrote:\n> > On Thu, Nov 23, 2023 at 1:49 AM Nathan Bossart <[email protected]> wrote:\n> >> One half-formed idea I have is to introduce some sort of ./configure flag\n> >> that enables all the newer instructions that your CPU understands.\n> >\n> > That's not doable,\n>\n> It's not?\n\nWhat exactly would our build system do differently with e.g.\n\"-march=native\" (which is already possible for people who compile\ntheir own binaries, via CFLAGS), if I understand you correctly?\n\n> > but we should consider taking advantage of\n> > x86-64-v2, which RedHat 9 builds with. That would allow inlining CRC\n> > and popcount there.\n>\n> Maybe we have something like --with-isa-level where you can specify\n> x86-64-v[1-4] or \"auto\" to mean \"build whatever the current machine can\n> handle.\"\n\nThat could work, but with the below OS's, it should work\nautomatically. Also, we may not be many years off from the day we can\nmove our baseline as well, such that older buildfarm members (if we\nhave any) would need to pass --disable-isa-extensions, but that may be\npushing things too much for too little benefit. *\n\n> I can imagine packagers requiring v2 these days (it's probably\n> worth asking them), and that would not only allow compiling in SSE 4.2 on\n> many more machines, but it would also open up a path to supporting\n> AVX2/AVX512 and beyond.\n\nA brief look found these OS's are moving / have moved to x86-64-v2:\n\nRedhat 9 [1][2]\nOpenSuse ALP [3]\nOpenSuse Tumbleweed [4]\n\nDebian considers it a bug if the package fails to build with\nx86-64-v2, but they haven't changed their baseline requirement. [5]\n\n> > Not sure how to detect that easily.\n>\n> I briefly looked around and didn't see a portable way to do so. 
We might\n> have to exhaustively check the features, which doesn't seem like it'd be\n> too bad for x86_64, but I haven't looked too closely at other\n> architectures.\n\nSorry, I wasn't clear, I mean: detect that a packager passed\n\"-march=x86_64-v2\" in the CFLAGS, so that a symbol in header files\nwould cause inlining of functions containing certain intrinsics.\nExhaustively checking features defeats the purpose of having an\nindustry-standard shorthand, and says nothing about what the package\ndoes or does not require of the target machine.\n\n* Note: I have seen the threads with the idea of compiling multiple\nentire binaries, and switching at postmaster start. I think it's a\ngood idea, but I also suspect detecting flags from the packager is an\neasier intermediate step.\n\n[1] https://developers.redhat.com/blog/2021/01/05/building-red-hat-enterprise-linux-9-for-the-x86-64-v2-microarchitecture-level\n[2] https://access.redhat.com/solutions/6833751\n[3] https://news.opensuse.org/2022/09/26/alp-architecture-baselevel-x86_64-v2/\n[4] https://news.opensuse.org/2022/11/28/tw-to-roll-out-mitigation-plan-advance-microarchitecture/\n[5] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983926\n\n\n",
"msg_date": "Sat, 25 Nov 2023 14:09:14 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 1:49 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Wed, Nov 22, 2023 at 02:54:13PM +0200, Ants Aasma wrote:\n> > For reference, executing the page checksum 10M times on a AMD 3900X CPU:\n> >\n> > clang-14 -O2 4.292s (17.8 GiB/s)\n> > clang-14 -O2 -msse4.1 2.859s (26.7 GiB/s)\n> > clang-14 -O2 -msse4.1 -mavx2 1.378s (55.4 GiB/s)\n>\n> Nice. I've noticed similar improvements with AVX2 intrinsics in simd.h.\n\nIf you're thinking to support AVX2 anywhere, I'd start with checksum\nfirst. Much less code to review, and less risk.\n\n\n",
"msg_date": "Sat, 25 Nov 2023 14:24:11 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Sat, Nov 25, 2023 at 02:09:14PM +0700, John Naylor wrote:\n> On Thu, Nov 23, 2023 at 11:51 PM Nathan Bossart\n> <[email protected]> wrote:\n>>\n>> On Thu, Nov 23, 2023 at 05:50:48PM +0700, John Naylor wrote:\n>> > On Thu, Nov 23, 2023 at 1:49 AM Nathan Bossart <[email protected]> wrote:\n>> >> One half-formed idea I have is to introduce some sort of ./configure flag\n>> >> that enables all the newer instructions that your CPU understands.\n>> >\n>> > That's not doable,\n>>\n>> It's not?\n> \n> What exactly would our build system do differently with e.g.\n> \"-march=native\" (which is already possible for people who compile\n> their own binaries, via CFLAGS), if I understand you correctly?\n\nIt would probably just be an easier way of doing that than adjusting COPT\nin src/Makefile.custom.\n\n>> Maybe we have something like --with-isa-level where you can specify\n>> x86-64-v[1-4] or \"auto\" to mean \"build whatever the current machine can\n>> handle.\"\n> \n> That could work, but with the below OS's, it should work\n> automatically. Also, we may not be many years off from the day we can\n> move our baseline as well, such that older buildfarm members (if we\n> have any) would need to pass --disable-isa-extensions, but that may be\n> pushing things too much for too little benefit. *\n\nYou are probably right. I guess I'm wondering whether we need to make all\nthis configurable. Maybe we could get away with moving our baseline to v2\nsoon, but if we'd also like to start adding AVX2 enhancements (and I think\nwe will), I'm assuming we'll want to provide an easy way for users to\ndeclare that they are building for v3+ CPUs.\n\n>> > Not sure how to detect that easily.\n>>\n>> I briefly looked around and didn't see a portable way to do so. 
We might\n>> have to exhaustively check the features, which doesn't seem like it'd be\n>> too bad for x86_64, but I haven't looked too closely at other\n>> architectures.\n> \n> Sorry, I wasn't clear, I mean: detect that a packager passed\n> \"-march=x86_64-v2\" in the CFLAGS, so that a symbol in header files\n> would cause inlining of functions containing certain intrinsics.\n> Exhaustively checking features defeats the purpose of having an\n> industry-standard shorthand, and says nothing about what the package\n> does or does not require of the target machine.\n\nI'm not sure why I thought checking each feature might be necessary.\n--with-isa-level could essentially just be an alias for adding all the\nCFLAGS for the extensions provided at that level, and --with-isa-level=auto\nwould just mean -march=native. With those flags set, the ./configure\nchecks would succeed with the base set of compiler flags passed in, which\ncould be used as a heuristic for inlining (like CRC32C does today).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Nov 2023 15:21:53 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 03:21:53PM -0600, Nathan Bossart wrote:\n> On Sat, Nov 25, 2023 at 02:09:14PM +0700, John Naylor wrote:\n>> Sorry, I wasn't clear, I mean: detect that a packager passed\n>> \"-march=x86_64-v2\" in the CFLAGS, so that a symbol in header files\n>> would cause inlining of functions containing certain intrinsics.\n>> Exhaustively checking features defeats the purpose of having an\n>> industry-standard shorthand, and says nothing about what the package\n>> does or does not require of the target machine.\n> \n> I'm not sure why I thought checking each feature might be necessary.\n> --with-isa-level could essentially just be an alias for adding all the\n> CFLAGS for the extensions provided at that level, and --with-isa-level=auto\n> would just mean -march=native. With those flags set, the ./configure\n> checks would succeed with the base set of compiler flags passed in, which\n> could be used as a heuristic for inlining (like CRC32C does today).\n\nOr, perhaps you mean removing those ./configure checks completely and\nassuming that the compiler knows about the intrinsics required for the\nspecified ISA level...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Nov 2023 16:26:19 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-25 14:09:14 +0700, John Naylor wrote:\n> * Note: I have seen the threads with the idea of compiling multiple\n> entire binaries, and switching at postmaster start. I think it's a\n> good idea, but I also suspect detecting flags from the packager is an\n> easier intermediate step.\n\nIt's certainly an easier incremental step - but will it get us all that far?\nOther architectures have similar issues, e.g. ARM with ARMv8.1-A having much\nmore scalable atomic instructions than plain ARMv8.0. And even on x86-64,\nrelying on distros to roll out new minimum x86_64-v2 doesn't get us that far -\nwe'd be up to a uarch from ~2010. And I doubt that they'll raise the bar to v4\nin the next few years, so we'll leave years of improvements on the table. And\neven PG specific repos can't just roll out a hard requirement for a new, more\nmodern, uarch that will crash some time after startup on some hardware.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 Nov 2023 16:51:25 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 7:51 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-11-25 14:09:14 +0700, John Naylor wrote:\n> > * Note: I have seen the threads with the idea of compiling multiple\n> > entire binaries, and switching at postmaster start. I think it's a\n> > good idea, but I also suspect detecting flags from the packager is an\n> > easier intermediate step.\n>\n> It's certainly an easier incremental step - but will it get us all that far?\n\nGranted, not much.\n\n(TBH, I'm not sure how to design the multiple-binaries idea, but\nsurely we're not the first project to think of this...)\n\nOn Tue, Nov 28, 2023 at 4:21 AM Nathan Bossart <[email protected]> wrote:\n> soon, but if we'd also like to start adding AVX2 enhancements (and I think\n> we will), I'm assuming we'll want to provide an easy way for users to\n> declare that they are building for v3+ CPUs.\n\nYeah, I remember now I saw instruction selection change a single shift\nwith -v3 which made the UTF-8 DFA significantly faster:\n\nhttps://www.postgresql.org/message-id/CAFBsxsHR08mHEf06PvrMRstfcyPJLwF69g0r1pvRrxWD4GEVoQ%40mail.gmail.com\n\nI imagine a number of places would get automatic improvements, and I\nthink others have said the same thing.\n\nOn Tue, Nov 28, 2023 at 5:26 AM Nathan Bossart <[email protected]> wrote:\n> > I'm not sure why I thought checking each feature might be necessary.\n> > --with-isa-level could essentially just be an alias for adding all the\n> > CFLAGS for the extensions provided at that level, and --with-isa-level=auto\n> > would just mean -march=native. 
With those flags set, the ./configure\n> > checks would succeed with the base set of compiler flags passed in, which\n> > could be used as a heuristic for inlining (like CRC32C does today).\n>\n> Or, perhaps you mean removing those ./configure checks completely and\n> assuming that the compiler knows about the intrinsics required for the\n> specified ISA level...\n\nWith the multiple-binaries, we might be able to assume, since it'll be\nopt-in (default is just use the baseline), but I'm not sure. And to\navoid needing a newish compiler, than we could probably just use the\nequivalent, e.g. for -v2:\n\n-march=x86-64 -mmmx -msse -msse2 -mfxsr -msahf -mcx16 -mpopcnt -msse3\n-msse4.1 -msse4.2 -mssse3\n\n...and it seems we'll want to make something up for Arm.\n\n\n",
"msg_date": "Wed, 29 Nov 2023 14:07:15 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
},
{
"msg_contents": "I don't think anything discussed in this thread is ready for v17, so I am\ngoing to punt it to v18.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Mar 2024 11:01:01 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovectorize page checksum code included elsewhere"
}
] |
[
{
"msg_contents": "The following entries are WoA but have not had an email update since\nJuly. If there is no actionable update soon, I plan to mark these\nReturned with Feedback before end of CF:\n\npg_usleep for multisecond delays\nvector search support\nImprove \"user mapping not found\" error message\nParallel CREATE INDEX for BRIN indexes\nDirect SSL Connections\nBRIN - SK_SEARCHARRAY and scan key preprocessing\nltree hash functions\npg_regress.c: Fix \"make check\" on Mac OS X: Pass DYLD_LIBRARY_PATH\n\n--\nJohn Naylor\n\n\n",
"msg_date": "Tue, 7 Nov 2023 15:27:53 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest: older Waiting on Author entries"
},
{
"msg_contents": "On Tue, Nov 07, 2023 at 03:27:53PM +0700, John Naylor wrote:\n> The following entries are WoA but have not had an email update since\n> July. If there is no actionable update soon, I plan to mark these\n> Returned with Feedback before end of CF:\n> \n> pg_usleep for multisecond delays\n> vector search support\n\nI haven't had a chance to follow up on these in some time, so I went ahead\nand marked them returned-with-feedback.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 7 Nov 2023 09:04:46 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest: older Waiting on Author entries"
},
{
"msg_contents": "On Tue, Nov 7, 2023 at 8:28 AM John Naylor <[email protected]> wrote:\n>\n> The following entries are WoA but have not had an email update since\n> July. If there is no actionable update soon, I plan to mark these\n> Returned with Feedback before end of CF:\n>\n> ltree hash functions\n\nI should be able to update this with a fully working patch by the end\nof this month. This was blocked by my other patch, which has now been\ncommitted. There's not much left to do, but I'm a bit busy for the\nnext few weeks.\n\nTommy\n\n\n",
"msg_date": "Wed, 8 Nov 2023 11:47:34 +0000",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest: older Waiting on Author entries"
}
] |
[
{
"msg_contents": "In pipeline mode after queuing a message to be sent we would flush the\nbuffer if the size of the buffer passed some threshold. The only message\ntype that we didn't do that for was the Flush message. This addresses\nthat oversight.\n\nI noticed this discrepancy while reviewing the\nPQsendSyncMessage/PQpipelinePutSync patchset.",
"msg_date": "Tue, 7 Nov 2023 10:38:04 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Call pqPipelineFlush from PQsendFlushRequest"
},
{
"msg_contents": "On Tue, Nov 07, 2023 at 10:38:04AM +0100, Jelte Fennema-Nio wrote:\n> In pipeline mode after queuing a message to be sent we would flush the\n> buffer if the size of the buffer passed some threshold. The only message\n> type that we didn't do that for was the Flush message. This addresses\n> that oversight.\n> \n> I noticed this discrepancy while reviewing the\n> PQsendSyncMessage/PQpipelinePutSync patchset.\n\nIndeed, it looks a bit strange that there is no flush if the buffer\nthreshold is reached once the message is sent, so your suggestion\nsounds right. Alvaro?\n--\nMichael",
"msg_date": "Tue, 7 Nov 2023 20:24:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Call pqPipelineFlush from PQsendFlushRequest"
},
{
"msg_contents": "On 2023-Nov-07, Michael Paquier wrote:\n\n> On Tue, Nov 07, 2023 at 10:38:04AM +0100, Jelte Fennema-Nio wrote:\n> > In pipeline mode after queuing a message to be sent we would flush the\n> > buffer if the size of the buffer passed some threshold. The only message\n> > type that we didn't do that for was the Flush message. This addresses\n> > that oversight.\n> > \n> > I noticed this discrepancy while reviewing the\n> > PQsendSyncMessage/PQpipelinePutSync patchset.\n> \n> Indeed, it looks a bit strange that there is no flush if the buffer\n> threshold is reached once the message is sent, so your suggestion\n> sounds right. Alvaro?\n\nI agree, and I intend to get this patch pushed once the release freeze\nis lifted.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n\n",
"msg_date": "Tue, 7 Nov 2023 12:43:18 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Call pqPipelineFlush from PQsendFlushRequest"
},
{
"msg_contents": "On Tue, Nov 07, 2023 at 12:43:18PM +0100, Alvaro Herrera wrote:\n> I agree, and I intend to get this patch pushed once the release freeze\n> is lifted.\n\nThanks!\n--\nMichael",
"msg_date": "Wed, 8 Nov 2023 09:35:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Call pqPipelineFlush from PQsendFlushRequest"
},
{
"msg_contents": "On 2023-Nov-07, Michael Paquier wrote:\n\n> On Tue, Nov 07, 2023 at 10:38:04AM +0100, Jelte Fennema-Nio wrote:\n> > In pipeline mode after queuing a message to be sent we would flush the\n> > buffer if the size of the buffer passed some threshold. The only message\n> > type that we didn't do that for was the Flush message. This addresses\n> > that oversight.\n> > \n> > I noticed this discrepancy while reviewing the\n> > PQsendSyncMessage/PQpipelinePutSync patchset.\n> \n> Indeed, it looks a bit strange that there is no flush if the buffer\n> threshold is reached once the message is sent, so your suggestion\n> sounds right. Alvaro?\n\nPushed, thanks.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Porque Kim no hacía nada, pero, eso sí,\ncon extraordinario éxito\" (\"Kim\", Kipling)\n\n\n",
"msg_date": "Wed, 8 Nov 2023 17:10:27 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Call pqPipelineFlush from PQsendFlushRequest"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2023-Nov-07, Michael Paquier wrote:\n>> Indeed, it looks a bit strange that there is no flush if the buffer\n>> threshold is reached once the message is sent, so your suggestion\n>> sounds right. Alvaro?\n\n> Pushed, thanks.\n\nI observe that this patch did not touch libpq.sgml, which still says\n\n Note that the request is not itself flushed to the server automatically;\n use <function>PQflush</function> if necessary.\n\nDoesn't that require some rewording?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Feb 2024 15:00:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Call pqPipelineFlush from PQsendFlushRequest"
},
{
"msg_contents": "On Thu, 1 Feb 2024 at 21:00, Tom Lane <[email protected]> wrote:\n> Note that the request is not itself flushed to the server automatically;\n> use <function>PQflush</function> if necessary.\n>\n> Doesn't that require some rewording?\n\nI agree that the current wording is slightly incorrect, but I think I\nprefer we keep it this way. The fact that we actually DO flush when\nsome internal buffer is filled up seems more like an implementation\ndetail, than behavior that people should actually be depending upon.\nAnd even knowing the actual behavior, still the only way to know that\nyour data is flushed is by calling PQflush (since a user has no way of\nchecking if we automatically flushed the internal buffer).\n\n\n",
"msg_date": "Thu, 1 Feb 2024 23:33:49 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Call pqPipelineFlush from PQsendFlushRequest"
}
] |
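The fix discussed in the thread above is small: libpq already flushed its output buffer whenever queuing a message pushed it past a size threshold, and the Flush request was the one message type that skipped that check. A rough, self-contained C sketch of the queue-then-maybe-flush pattern — the buffer struct, threshold value, and function names here are illustrative, not the real libpq internals (which also frame each message with a length word):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for libpq's output buffer. */
#define OUT_BUFFER_THRESHOLD 8192

typedef struct
{
    char   data[65536];
    size_t len;        /* bytes queued but not yet sent */
    size_t flushed;    /* total bytes handed to the kernel so far */
} OutBuffer;

/* Pretend to hand the queued bytes to the kernel. */
static void
flush_buffer(OutBuffer *buf)
{
    buf->flushed += buf->len;
    buf->len = 0;
}

/* Queue one protocol message, then flush if the threshold was crossed.
 * The point of the patch: this check must run for *every* queued message
 * type, including the one-byte Flush ('H') request. */
static void
put_message(OutBuffer *buf, char type, const char *payload, size_t n)
{
    buf->data[buf->len++] = type;
    memcpy(buf->data + buf->len, payload, n);
    buf->len += n;

    if (buf->len >= OUT_BUFFER_THRESHOLD)
        flush_buffer(buf);
}
```

As Jelte notes downthread, callers still cannot rely on this internal flush; PQflush remains the only way to guarantee the data has been sent.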
[
{
"msg_contents": "Greetings,\n\nIf we use a Portal it is possible to open the portal and do a describe and\nthen Fetch N records.\n\nUsing a Cursor we open the cursor. Is there a corresponding describe and a\nway to fetch N records without getting the fields each time. Currently we\nhave to send the SQL \"fetch <direction> N\" and we get the fields and the\nrows. This seems overly verbose.\n\nDave Cramer",
"msg_date": "Tue, 7 Nov 2023 06:38:18 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "Dave Cramer <[email protected]> writes:\n> If we use a Portal it is possible to open the portal and do a describe and\n> then Fetch N records.\n\n> Using a Cursor we open the cursor. Is there a corresponding describe and a\n> way to fetch N records without getting the fields each time. Currently we\n> have to send the SQL \"fetch <direction> N\" and we get the fields and the\n> rows. This seems overly verbose.\n\nPortals and cursors are pretty much the same thing, so why not use\nthe API that suits you better?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Nov 2023 10:26:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Tue, 7 Nov 2023 at 10:26, Tom Lane <[email protected]> wrote:\n\n> Dave Cramer <[email protected]> writes:\n> > If we use a Portal it is possible to open the portal and do a describe\n> and\n> > then Fetch N records.\n>\n> > Using a Cursor we open the cursor. Is there a corresponding describe and\n> a\n> > way to fetch N records without getting the fields each time. Currently we\n> > have to send the SQL \"fetch <direction> N\" and we get the fields and the\n> > rows. This seems overly verbose.\n>\n> Portals and cursors are pretty much the same thing, so why not use\n> the API that suits you better?\n>\n\nSo in this case this is a refcursor. Based on above then I should be able\nto do a describe on the refcursor and fetch using the extended query\nprotocol\n\nCool!\n\nDave",
"msg_date": "Wed, 8 Nov 2023 06:02:56 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "Hi Tom,\n\n\n\n\n\nOn Wed, 8 Nov 2023 at 06:02, Dave Cramer <[email protected]> wrote:\n\n>\n> Dave Cramer\n>\n>\n> On Tue, 7 Nov 2023 at 10:26, Tom Lane <[email protected]> wrote:\n>\n>> Dave Cramer <[email protected]> writes:\n>> > If we use a Portal it is possible to open the portal and do a describe\n>> and\n>> > then Fetch N records.\n>>\n>> > Using a Cursor we open the cursor. Is there a corresponding describe\n>> and a\n>> > way to fetch N records without getting the fields each time. Currently\n>> we\n>> > have to send the SQL \"fetch <direction> N\" and we get the fields and\n>> the\n>> > rows. This seems overly verbose.\n>>\n>> Portals and cursors are pretty much the same thing, so why not use\n>> the API that suits you better?\n>>\n>\n> So in this case this is a refcursor. Based on above then I should be able\n> to do a describe on the refcursor and fetch using the extended query\n> protocol\n>\n\nIs it possible to describe a CURSOR\n\nTesting out the above hypothesis\n\n2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\nsendSimpleQuery FE=> SimpleQuery(query=\"declare C_3 CURSOR WITHOUT HOLD\nFOR SELECT * FROM testsps WHERE id = 2\")\n2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\nsendDescribePortal FE=> Describe(portal=C_3)\n2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\nsendExecute FE=> Execute(portal=C_3,limit=10)\n2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\nsendSync FE=> Sync\n\ngives me the following results\n\n2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\nreceiveErrorResponse <=BE ErrorMessage(ERROR: portal \"C_3\" does not exist\n Location: File: postgres.c, Routine: exec_describe_portal_message, Line:\n2708\n Server SQLState: 34000)\n\nNote Describe portal is really just a DESCRIBE message, the log messages\nare misleading\n\nDave\n\n>",
"msg_date": "Thu, 25 Jul 2024 16:14:24 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "On Thursday, July 25, 2024, Dave Cramer <[email protected]> wrote:\n\nMay not make a difference but…\n\n\n> 2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\n> sendSimpleQuery FE=> SimpleQuery(query=\"declare C_3 CURSOR WITHOUT HOLD\n> FOR SELECT * FROM testsps WHERE id = 2\")\n>\n\nYou named the cursor c_3 (lowercase due to SQL case folding)\n\n\n> 2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\n> sendDescribePortal FE=> Describe(portal=C_3)\n>\n\nThe protocol doesn’t do case folding\n\n\n>\n> 2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\n> receiveErrorResponse <=BE ErrorMessage(ERROR: portal \"C_3\" does not exist\n>\n\nAs evidenced by this error message.\n\n Location: File: postgres.c, Routine: exec_describe_portal_message, Line:\n> 2708\n>\n>\n\nDavid J.",
"msg_date": "Thu, 25 Jul 2024 13:19:03 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "On Thu, 25 Jul 2024 at 16:19, David G. Johnston <[email protected]>\nwrote:\n\n> On Thursday, July 25, 2024, Dave Cramer <[email protected]> wrote:\n>\n> May not make a difference but…\n>\n>\n>> 2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\n>> sendSimpleQuery FE=> SimpleQuery(query=\"declare C_3 CURSOR WITHOUT HOLD\n>> FOR SELECT * FROM testsps WHERE id = 2\")\n>>\n>\n> You named the cursor c_3 (lowercase due to SQL case folding)\n>\n>\n>> 2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\n>> sendDescribePortal FE=> Describe(portal=C_3)\n>>\n>\n> The protocol doesn’t do case folding\n>\n>\n>>\n>> 2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\n>> receiveErrorResponse <=BE ErrorMessage(ERROR: portal \"C_3\" does not exist\n>>\n>\n> As evidenced by this error message.\n>\n> Location: File: postgres.c, Routine: exec_describe_portal_message, Line:\n>> 2708\n>>\n>>\n>\n> You would be absolutely correct! Thanks for the quick response\n\nDave\n\n>",
"msg_date": "Thu, 25 Jul 2024 17:52:41 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "On Thu, 25 Jul 2024 at 17:52, Dave Cramer <[email protected]> wrote:\n\n>\n>\n> On Thu, 25 Jul 2024 at 16:19, David G. Johnston <\n> [email protected]> wrote:\n>\n>> On Thursday, July 25, 2024, Dave Cramer <[email protected]> wrote:\n>>\n>> May not make a difference but…\n>>\n>>\n>>> 2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\n>>> sendSimpleQuery FE=> SimpleQuery(query=\"declare C_3 CURSOR WITHOUT HOLD\n>>> FOR SELECT * FROM testsps WHERE id = 2\")\n>>>\n>>\n>> You named the cursor c_3 (lowercase due to SQL case folding)\n>>\n>>\n>>> 2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\n>>> sendDescribePortal FE=> Describe(portal=C_3)\n>>>\n>>\n>> The protocol doesn’t do case folding\n>>\n>>\n>>>\n>>> 2024-07-25 15:55:39 FINEST org.postgresql.core.v3.QueryExecutorImpl\n>>> receiveErrorResponse <=BE ErrorMessage(ERROR: portal \"C_3\" does not exist\n>>>\n>>\n>> As evidenced by this error message.\n>>\n>> Location: File: postgres.c, Routine: exec_describe_portal_message,\n>>> Line: 2708\n>>>\n>>>\n>>\n>> You would be absolutely correct! Thanks for the quick response\n>\n>\nSo while the API's are \"virtually\" identical AFAICT there is no way to\ncreate a \"WITH HOLD\" portal ?\n\nDave\n\n>",
"msg_date": "Fri, 26 Jul 2024 08:22:50 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "> So while the API's are \"virtually\" identical AFAICT there is no way to\n> create a \"WITH HOLD\" portal ?\n\nI am not sure if I fully understand your question but I think you can\ncreate a portal with \"WITH HOLD\" option.\n\nBEGIN;\nDECLARE c CURSOR WITH HOLD FOR SELECT * FROM generate_series(1,10);\n\n(of course you could use extended query protocol instead of simple\nquery protocol here)\n\nAfter this there's portal named \"c\" in the backend with WITH HOLD\nattribute. And you could issue a Describe message against the portal.\nAlso you could issue an Execute messages to fetch N rows (N can be\nspecified in the Execute message) with or without in a transaction\nbecause WITH HOLD is specified.\n\nHere is a sample session. The generate_series() generates 10 rows. You\ncan fetch 5 rows from portal \"c\" inside the transaction. After the\ntransaction closed, you can fetch remaining 5 rows as expected.\n\nFE=> Query (query=\"BEGIN\")\n<= BE CommandComplete(BEGIN)\n<= BE ReadyForQuery(T)\nFE=> Query (query=\"DECLARE c CURSOR WITH HOLD FOR SELECT * FROM generate_series(1,10)\")\n<= BE CommandComplete(DECLARE CURSOR)\n<= BE ReadyForQuery(T)\nFE=> Describe(portal=\"c\")\nFE=> Execute(portal=\"c\")\nFE=> Sync\n<= BE RowDescription\n<= BE DataRow\n<= BE DataRow\n<= BE DataRow\n<= BE DataRow\n<= BE DataRow\n<= BE PortalSuspended\n<= BE ReadyForQuery(T)\nFE=> Query (query=\"END\")\n<= BE CommandComplete(COMMIT)\n<= BE ReadyForQuery(I)\nFE=> Execute(portal=\"c\")\nFE=> Sync\n<= BE DataRow\n<= BE DataRow\n<= BE DataRow\n<= BE DataRow\n<= BE DataRow\n<= BE PortalSuspended\n<= BE ReadyForQuery(I)\nFE=> Terminate\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sat, 27 Jul 2024 14:55:04 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Sat, 27 Jul 2024 at 01:55, Tatsuo Ishii <[email protected]> wrote:\n\n> > So while the API's are \"virtually\" identical AFAICT there is no way to\n> > create a \"WITH HOLD\" portal ?\n>\n> I am not sure if I fully understand your question but I think you can\n> create a portal with \"WITH HOLD\" option.\n>\n> BEGIN;\n> DECLARE c CURSOR WITH HOLD FOR SELECT * FROM generate_series(1,10);\n>\n> (of course you could use extended query protocol instead of simple\n> query protocol here)\n>\n> After this there's portal named \"c\" in the backend with WITH HOLD\n> attribute. And you could issue a Describe message against the portal.\n> Also you could issue an Execute messages to fetch N rows (N can be\n> specified in the Execute message) with or without in a transaction\n> because WITH HOLD is specified.\n>\n> Here is a sample session. The generate_series() generates 10 rows. You\n> can fetch 5 rows from portal \"c\" inside the transaction. After the\n> transaction closed, you can fetch remaining 5 rows as expected.\n>\n> FE=> Query (query=\"BEGIN\")\n> <= BE CommandComplete(BEGIN)\n> <= BE ReadyForQuery(T)\n> FE=> Query (query=\"DECLARE c CURSOR WITH HOLD FOR SELECT * FROM\n> generate_series(1,10)\")\n> <= BE CommandComplete(DECLARE CURSOR)\n> <= BE ReadyForQuery(T)\n> FE=> Describe(portal=\"c\")\n> FE=> Execute(portal=\"c\")\n> FE=> Sync\n> <= BE RowDescription\n> <= BE DataRow\n> <= BE DataRow\n> <= BE DataRow\n> <= BE DataRow\n> <= BE DataRow\n> <= BE PortalSuspended\n> <= BE ReadyForQuery(T)\n> FE=> Query (query=\"END\")\n> <= BE CommandComplete(COMMIT)\n> <= BE ReadyForQuery(I)\n> FE=> Execute(portal=\"c\")\n> FE=> Sync\n> <= BE DataRow\n> <= BE DataRow\n> <= BE DataRow\n> <= BE DataRow\n> <= BE DataRow\n> <= BE PortalSuspended\n> <= BE ReadyForQuery(I)\n> FE=> Terminate\n>\n> Best reagards,\n>\n\n\nYes, sorry, I should have said one can not create a with hold portal using\nthe BIND command\n\nDave",
"msg_date": "Sat, 27 Jul 2024 12:05:00 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "Dave Cramer <[email protected]> writes:\n> On Sat, 27 Jul 2024 at 01:55, Tatsuo Ishii <[email protected]> wrote:\n>>> So while the API's are \"virtually\" identical AFAICT there is no way to\n>>> create a \"WITH HOLD\" portal ?\n\n> Yes, sorry, I should have said one can not create a with hold portal using\n> the BIND command\n\nYeah. The two APIs (cursors and extended query protocol) manipulate\nthe same underlying Portal objects, but the features exposed by the\nAPIs aren't all identical. We've felt that this isn't high priority\nto sync up, since you can create a Portal with one API then manipulate\nit through the other if need be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2024 15:18:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "> Yes, sorry, I should have said one can not create a with hold portal using\n> the BIND command\n\nOk.\n\nIt would be possible to add a new parameter to the BIND command to\ncreate such a portal. But it needs some changes to the existing\nprotocol definition and requires protocol version up, which is a major\npain.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 28 Jul 2024 08:06:00 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
},
{
"msg_contents": "On Sat, 27 Jul 2024 at 19:06, Tatsuo Ishii <[email protected]> wrote:\n\n> > Yes, sorry, I should have said one can not create a with hold portal\n> using\n> > the BIND command\n>\n> Ok.\n>\n> It would be possible to add a new parameter to the BIND command to\n> create such a portal. But it needs some changes to the existing\n> protocol definition and requires protocol version up, which is a major\n> pain.\n>\n\nI'm trying to add WITH HOLD to the JDBC driver and currently I would have\n1) rewrite the query\n2) issue a new query ... declare .. and bind variables to that statement\n3) execute fetch\n\nvs\n\n1) bind variables to the statement\n2) execute fetch\n\nThe second can be done much lower in the code.\n\nHowever as you mentioned this would require a new protocol version which is\nunlikely to happen.\n\nDave",
"msg_date": "Sun, 28 Jul 2024 06:30:02 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Protocol question regarding Portal vs Cursor"
}
] |
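The failure Dave hit in the thread above can be demonstrated without a server: the SQL parser folds unquoted identifiers such as `C_3` to lower case, while protocol-level portal lookups compare names byte-for-byte, so `Describe(portal=C_3)` cannot find the portal created by `declare C_3 CURSOR ...`. A hypothetical C sketch of that mismatch — both helpers are simplified stand-ins for the server's behavior, not PostgreSQL code:

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Stand-in for the parser's treatment of an *unquoted* SQL identifier:
 * fold every character to lower case. (The real rules also cover quoted
 * identifiers, truncation, and encoding; only the folding is shown.) */
static void
fold_identifier(const char *in, char *out, size_t outlen)
{
    size_t i;

    for (i = 0; in[i] != '\0' && i + 1 < outlen; i++)
        out[i] = (char) tolower((unsigned char) in[i]);
    out[i] = '\0';
}

/* Stand-in for the backend's portal lookup: names arriving in protocol
 * messages (Describe/Execute) are matched byte-for-byte, no folding. */
static bool
portal_exists(const char *declared, const char *requested)
{
    return strcmp(declared, requested) == 0;
}
```

Writing the cursor name as `"C_3"` (double-quoted) in the DECLARE, or sending the folded name `c_3` in the Describe message, would make the two sides agree.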
[
{
"msg_contents": "Hi, postgres hackers, I’m studying postgres buffer cache part. So I open this thread to communicate some buffer cache codes design and try to improve some tricky codes.\n\nFor Buffer Cache, we know it’s a buffer array, every bucket of this array is consist of a data page and its header which is used to describe the state of the buffer. \n\nThis is the origin code of buffer header:\ntypedef struct BufferDesc\n{\n\tBufferTag\ttag;\t\t\t/* ID of page contained in buffer */\n\tint\t\t\tbuf_id;\t\t\t/* buffer's index number (from 0) */\n\n\t/* state of the tag, containing flags, refcount and usagecount */\n\tpg_atomic_uint32 state;\n\n\tint\t\t\twait_backend_pgprocno;\t/* backend of pin-count waiter */\n\tint\t\t\tfreeNext;\t\t/* link in freelist chain */\n\tLWLock\t\tcontent_lock;\t/* to lock access to buffer contents */\n} BufferDesc;\n\nFor field wait_backend_pgprocno, the comment is \"backend of pin-count waiter”, I have problems below:\n1. it means which processId is waiting this buffer, right? \n2. and if wait_backend_pgprocno is valid, so it says this buffer is in use by one process, right?\n3. if one buffer is wait by another process, it means all buffers are out of use, right? So let’s try this: we have 5 buffers with ids (1,2,3,4,5), and they are all in use, now another process with processId 8017 is coming, and it choose buffer id 1, so buffer1’s wait_backend_pgprocno is 8017, but later\nbuffer4 is released, can process 8017 change to get buffer4? how?\n4. wait_backend_pgprocno is a “integer” type, not an array, why can one buffer be wait by only one process?\n\nHope your reply, thanks!! I’m willing to do contributions after I study buffer cache implementations.",
"msg_date": "Tue, 7 Nov 2023 21:28:12 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Buffer Cache Problem"
},
{
"msg_contents": "On Tue, 7 Nov 2023 at 14:28, jacktby jacktby <[email protected]> wrote:\n>\n> Hi, postgres hackers, I’m studying postgres buffer cache part. So I open this thread to communicate some buffer cache codes design and try to improve some tricky codes.\n>\n> For Buffer Cache, we know it’s a buffer array, every bucket of this array is consist of a data page and its header which is used to describe the state of the buffer.\n>\n> For field wait_backend_pgprocno, the comment is \"backend of pin-count waiter”, I have problems below:\n\nDid you read the README at src/backend/storage/buffer/README, as well\nas the comments and documentation in and around the buffer-locking\nfunctions?\n\n> 1. it means which processId is waiting this buffer, right?\n> 2. and if wait_backend_pgprocno is valid, so it says this buffer is in use by one process, right?\n> 3. if one buffer is wait by another process, it means all buffers are out of use, right? So let’s try this: we have 5 buffers with ids (1,2,3,4,5), and they are all in use, now another process with processId 8017 is coming, and it choose buffer id 1, so buffer1’s wait_backend_pgprocno is 8017, but later\n> buffer4 is released, can process 8017 change to get buffer4? how?\n\nI believe these questions are generally answered by the README and the\ncomments in bufmgr.c/buf_internal.h for the functions that try to lock\nbuffers.\n\n> 4. wait_backend_pgprocno is a “integer” type, not an array, why can one buffer be wait by only one process?\n\nYes, that is correct. 
It seems like PostgreSQL has yet to find a\nworkload requires more than one backend to wait for super exclusive\naccess to a buffer at the same time.\nVACUUM seems to be the only workload that currently can wait and sleep\nfor this exclusive buffer access, and that is already limited to one\nprocess per relation, so there are no explicit concurrent\nsuper-exclusive waits in the system right now.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 7 Nov 2023 16:45:19 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Buffer Cache Problem"
},
{
"msg_contents": "In the bus_internal.h,I see\n====================================================\n Note: Buffer header lock (BM_LOCKED flag) must be held to examine or change tag, state or wait_backend_pgprocno fields.\n====================================================\nAs we all know, this buffer header lock is implemented by a bit in state filed, and this state field is a atomic_u32 type, so in fact we don’t need to \nhold buffer lock when we update state, this comment has error,right?\nIn the bus_internal.h,I see==================================================== Note: Buffer header lock (BM_LOCKED flag) must be held to examine or change tag, state or wait_backend_pgprocno fields.====================================================As we all know, this buffer header lock is implemented by a bit in state filed, and this state field is a atomic_u32 type, so in fact we don’t need to hold buffer lock when we update state, this comment has error,right?",
"msg_date": "Fri, 10 Nov 2023 22:31:40 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Buffer Cache Problem"
},
{
"msg_contents": "> 2023年11月10日 22:31,jacktby jacktby <[email protected]> 写道:\n> \n> In the bus_internal.h,I see\n> ====================================================\n> Note: Buffer header lock (BM_LOCKED flag) must be held to examine or change tag, state or wait_backend_pgprocno fields.\n> ====================================================\n> As we all know, this buffer header lock is implemented by a bit in state filed, and this state field is a atomic_u32 type, so in fact we don’t need to \n> hold buffer lock when we update state, this comment has error,right?\nOh, sorry this is true, in fact we never acquire a spin lock when update the state.\n2023年11月10日 22:31,jacktby jacktby <[email protected]> 写道:In the bus_internal.h,I see==================================================== Note: Buffer header lock (BM_LOCKED flag) must be held to examine or change tag, state or wait_backend_pgprocno fields.====================================================As we all know, this buffer header lock is implemented by a bit in state filed, and this state field is a atomic_u32 type, so in fact we don’t need to hold buffer lock when we update state, this comment has error,right?Oh, sorry this is true, in fact we never acquire a spin lock when update the state.",
"msg_date": "Fri, 10 Nov 2023 22:38:57 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Buffer Cache Problem"
},
{
"msg_contents": "Hi, I have 3 questions here:\n1. I see comments in but_internals.h below:\n========================================\n * Also, in places we do one-time reads of the flags without bothering to\n * lock the buffer header; this is generally for situations where we don't\n * expect the flag bit being tested to be changing.\n========================================\nIn fact, the flag is in state filed which is an atomic_u32, so we don’t need to acquire buffer header lock in any case, but for this comment, seems it’s saying we need to hold a buffer header lock when read flag in general.\n\n2. Another question:\n========================================\n * We can't physically remove items from a disk page if another backend has\n * the buffer pinned. Hence, a backend may need to wait for all other pins\n * to go away. This is signaled by storing its own pgprocno into\n * wait_backend_pgprocno and setting flag bit BM_PIN_COUNT_WAITER. At present,\n * there can be only one such waiter per buffer.\n========================================\nThe comments above, in fact for now, if a backend plan to remove items from a disk page, this is a mutation operation, so this backend must hold a exclusive lock for this buffer page, then in this case, there are no other backends pinning this buffer, so the pin refcount must be 1 (it’s by this backend), then this backend can remove the items safely and no need to wait other backends (because there are no other backends pinning this buffer). So my question is below:\n The operation “storing its own pgprocno into\n * wait_backend_pgprocno and setting flag bit BM_PIN_COUNT_WAITER” is whether too expensive, we should not do like this, right?\n\n3. Where is the array?\n========================================\n * Per-buffer I/O condition variables are currently kept outside this struct in\n * a separate array. 
They could be moved in here and still fit within that\n * limit on common systems, but for now that is not done.\n========================================",
"msg_date": "Sat, 11 Nov 2023 00:46:13 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Buffer Cache Problem"
}
] |
[
{
"msg_contents": "While working on the MERGE RETURNING patch, I noticed a pre-existing\nMERGE bug --- ExecMergeMatched() should not call ExecUpdateEpilogue()\nif ExecUpdateAct() indicates that it did a cross-partition update.\n\nThe immediate consequence is that it incorrectly tries (and fails) to\nfire AFTER UPDATE ROW triggers, which it should not do if the UPDATE\nhas been turned into a DELETE and an INSERT:\n\nDROP TABLE IF EXISTS foo CASCADE;\n\nCREATE TABLE foo (a int) PARTITION BY LIST (a);\nCREATE TABLE foo_p1 PARTITION OF foo FOR VALUES IN (1);\nCREATE TABLE foo_p2 PARTITION OF foo FOR VALUES IN (2);\nINSERT INTO foo VALUES (1);\n\nCREATE OR REPLACE FUNCTION foo_trig_fn() RETURNS trigger AS\n$$\nBEGIN\n RAISE NOTICE 'Trigger: % %', TG_WHEN, TG_OP;\n IF TG_OP = 'DELETE' THEN RETURN OLD; END IF;\n RETURN NEW;\nEND\n$$ LANGUAGE plpgsql;\n\nCREATE TRIGGER foo_b_trig BEFORE INSERT OR UPDATE OR DELETE ON foo\n FOR EACH ROW EXECUTE FUNCTION foo_trig_fn();\nCREATE TRIGGER foo_a_trig AFTER INSERT OR UPDATE OR DELETE ON foo\n FOR EACH ROW EXECUTE FUNCTION foo_trig_fn();\n\nUPDATE foo SET a = 2 WHERE a = 1;\n\nNOTICE: Trigger: BEFORE UPDATE\nNOTICE: Trigger: BEFORE DELETE\nNOTICE: Trigger: BEFORE INSERT\nNOTICE: Trigger: AFTER DELETE\nNOTICE: Trigger: AFTER INSERT\nUPDATE 1\n\nMERGE INTO foo USING (VALUES (1)) AS v(a) ON true\n WHEN MATCHED THEN UPDATE SET a = v.a;\n\nNOTICE: Trigger: BEFORE UPDATE\nNOTICE: Trigger: BEFORE DELETE\nNOTICE: Trigger: BEFORE INSERT\nNOTICE: Trigger: AFTER DELETE\nNOTICE: Trigger: AFTER INSERT\nERROR: failed to fetch tuple2 for AFTER trigger\n\nThe attached patch fixes that, making the UPDATE path in\nExecMergeMatched() consistent with ExecUpdate().\n\n(If there were no AFTER ROW triggers, the old code would go on to do\nother unnecessary things, like WCO/RLS checks, which I didn't really\nlook into. This patch will stop any of that too.)\n\nRegards,\nDean",
"msg_date": "Tue, 7 Nov 2023 15:10:10 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "MERGE: AFTER ROW trigger failure for cross-partition update"
}
] |
[
{
"msg_contents": "Found this issue during my Fedora 39 upgrade. Tested that uninstalling \nopenssl still allows the various ssl tests to run and succeed.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 07 Nov 2023 16:06:56 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix use of openssl.path() if openssl isn't found"
},
{
"msg_contents": "On Tue, Nov 07, 2023 at 04:06:56PM -0600, Tristan Partin wrote:\n> Found this issue during my Fedora 39 upgrade. Tested that uninstalling\n> openssl still allows the various ssl tests to run and succeed.\n\nGood catch. You are right that this is inconsistent with what we\nexpect in the test.\n\n> +openssl_path = ''\n> +if openssl.found()\n> + openssl_path = openssl.path()\n> +endif\n> +\n> tests += {\n> 'name': 'ssl',\n> 'sd': meson.current_source_dir(),\n> @@ -7,7 +12,7 @@ tests += {\n> 'tap': {\n> 'env': {\n> 'with_ssl': ssl_library,\n> - 'OPENSSL': openssl.path(),\n> + 'OPENSSL': openssl_path,\n> },\n> 'tests': [\n> 't/001_ssltests.pl',\n\nOkay, that's a nit and it leads to the same result, but why not using\nthe same one-liner style like all the other meson.build files that\nrely on optional commands? See pg_verifybackup, pg_dump, etc. That\nwould be more consistent.\n--\nMichael",
"msg_date": "Wed, 8 Nov 2023 14:53:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix use of openssl.path() if openssl isn't found"
},
{
"msg_contents": "On Tue Nov 7, 2023 at 11:53 PM CST, Michael Paquier wrote:\n> On Tue, Nov 07, 2023 at 04:06:56PM -0600, Tristan Partin wrote:\n> > Found this issue during my Fedora 39 upgrade. Tested that uninstalling\n> > openssl still allows the various ssl tests to run and succeed.\n>\n> Good catch. You are right that this is inconsistent with what we\n> expect in the test.\n>\n> > +openssl_path = ''\n> > +if openssl.found()\n> > + openssl_path = openssl.path()\n> > +endif\n> > +\n> > tests += {\n> > 'name': 'ssl',\n> > 'sd': meson.current_source_dir(),\n> > @@ -7,7 +12,7 @@ tests += {\n> > 'tap': {\n> > 'env': {\n> > 'with_ssl': ssl_library,\n> > - 'OPENSSL': openssl.path(),\n> > + 'OPENSSL': openssl_path,\n> > },\n> > 'tests': [\n> > 't/001_ssltests.pl',\n>\n> Okay, that's a nit and it leads to the same result, but why not using\n> the same one-liner style like all the other meson.build files that\n> rely on optional commands? See pg_verifybackup, pg_dump, etc. That\n> would be more consistent.\n\nBecause I forgot there were ternary statements in Meson :). Thanks for \nthe review. Here is v2.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 08 Nov 2023 00:07:49 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix use of openssl.path() if openssl isn't found"
},
{
"msg_contents": "On Wed, Nov 08, 2023 at 12:07:49AM -0600, Tristan Partin wrote:\n> 'with_ssl': ssl_library,\n> - 'OPENSSL': openssl.path(),\n> + 'OPENSSL': openssl.found() ? openssl.path : '',\n\nExcept that this was incorrect. I've fixed the grammar and applied\nthat down to 16.\n--\nMichael",
"msg_date": "Wed, 8 Nov 2023 17:31:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix use of openssl.path() if openssl isn't found"
},
{
"msg_contents": "On Wed Nov 8, 2023 at 2:31 AM CST, Michael Paquier wrote:\n> On Wed, Nov 08, 2023 at 12:07:49AM -0600, Tristan Partin wrote:\n> > 'with_ssl': ssl_library,\n> > - 'OPENSSL': openssl.path(),\n> > + 'OPENSSL': openssl.found() ? openssl.path : '',\n>\n> Except that this was incorrect. I've fixed the grammar and applied\n> that down to 16.\n\nCoding at 12 in the morning is never conducive to coherent thought :). \nThanks. Sorry for the trouble.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 08 Nov 2023 09:59:40 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix use of openssl.path() if openssl isn't found"
}
] |
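For reference, the one-liner style Michael points to (as used by other meson.build files that handle optional commands) would presumably leave the committed form looking like the fragment below; the grammar slip in v2 was the missing call parentheses on openssl.path. This is a sketch of the corrected line, not a copy of the committed patch.

```meson
# src/test/ssl/meson.build (sketch): pass an empty string when the
# optional 'openssl' program was not found
'env': {
  'with_ssl': ssl_library,
  'OPENSSL': openssl.found() ? openssl.path() : '',
},
```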
[
{
"msg_contents": "Hi,\n\nWhile looking at GUC messages today I noticed the following:\n\ngettext_noop(\"The server will use the fsync() system call in several\nplaces to make \"\n \"sure that updates are physically written to disk. This insures \"\n \"that a database cluster will recover to a consistent state after \"\n \"an operating system or hardware crash.\")\n\n~\n\nI believe the word should have been \"ensures\"; not \"insures\".\n\nIn passing I found/fixed a bunch of similar misuses in comments.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Wed, 8 Nov 2023 17:55:37 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "ensure, not insure"
},
{
"msg_contents": "On Wed, 8 Nov 2023 at 19:56, Peter Smith <[email protected]> wrote:\n> gettext_noop(\"The server will use the fsync() system call in several\n> places to make \"\n> \"sure that updates are physically written to disk. This insures \"\n> \"that a database cluster will recover to a consistent state after \"\n> \"an operating system or hardware crash.\")\n>\n> ~\n>\n> I believe the word should have been \"ensures\"; not \"insures\".\n\nI agree. It's surprisingly ancient, having arrived in b700a672f (June 2003).\n\n> In passing I found/fixed a bunch of similar misuses in comments.\n\nThose all look fine to me too.\n\nDavid\n\n\n",
"msg_date": "Wed, 8 Nov 2023 20:31:28 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ensure, not insure"
},
{
"msg_contents": "On Wed, Nov 08, 2023 at 08:31:28PM +1300, David Rowley wrote:\n> On Wed, 8 Nov 2023 at 19:56, Peter Smith <[email protected]> wrote:\n>> In passing I found/fixed a bunch of similar misuses in comments.\n> \n> Those all look fine to me too.\n\n+1.\n--\nMichael",
"msg_date": "Thu, 9 Nov 2023 10:22:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ensure, not insure"
},
{
"msg_contents": "On Thu, 9 Nov 2023 at 14:22, Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Nov 08, 2023 at 08:31:28PM +1300, David Rowley wrote:\n> > Those all look fine to me too.\n>\n> +1.\n\nI've pushed this. I backpatched due to the typo in the fsync GUC\ndescription. I'd have only pushed to master if it were just the\ncomment typos.\n\nI noticed older versions had another instance of \"insure\" in a code\ncomment. I opted to leave that one alone since that file is now gone\nin more recent versions.\n\nDavid\n\n\n",
"msg_date": "Fri, 10 Nov 2023 00:20:59 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ensure, not insure"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> I've pushed this. I backpatched due to the typo in the fsync GUC\n> description. I'd have only pushed to master if it were just the\n> comment typos.\n\nFTR, I do not think you should have back-patched. You created extra\nwork for the translation team, and the mistake is subtle enough that\nit wasn't worth that. (My dictionary says that \"insure and ensure\nare often interchangeable, particularly in US English\".)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Nov 2023 10:00:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ensure, not insure"
}
] |
[
{
"msg_contents": "Hello!\n\nI noticed that the same block\n\n-- SET statements.\n-- These use two different strings, still they count as one entry.\nSET work_mem = '1MB';\nSet work_mem = '1MB';\nSET work_mem = '2MB';\nRESET work_mem;\nSET enable_seqscan = off;\nSET enable_seqscan = on;\nRESET enable_seqscan;\n\nis checked twice in contrib/pg_stat_statements/sql/utility.sql on lines 278-286 and 333-341. Is this on any purpose? I think the second set of tests is not needed and can be removed, as in the attached patch.\n\nregards, Sergei",
"msg_date": "Wed, 08 Nov 2023 10:33:23 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Doubled test for SET statements in pg_stat_statements tests"
},
{
"msg_contents": "On Wed, Nov 08, 2023 at 10:33:23AM +0300, Sergei Kornilov wrote:\n> is checked twice in contrib/pg_stat_statements/sql/utility.sql on\n> lines 278-286 and 333-341. Is this on any purpose? I think the\n> second set of tests is not needed and can be removed, as in the\n> attached patch.\n\nThanks, applied. This looks like a copy-paste mistake coming from\nde2aca288569, even if it has added more scenarios for patterns around\nSET.\n--\nMichael",
"msg_date": "Thu, 9 Nov 2023 12:58:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doubled test for SET statements in pg_stat_statements tests"
}
] |
[
{
"msg_contents": "Hi all,\nI have some questions about the implementation of vector32_is_highbit_set on arm.\nBelow is the comment and the implementation for this function.\n/*\n * Exactly like vector8_is_highbit_set except for the input type, so it\n * looks at each byte separately.\n *\n * XXX x86 uses the same underlying type for 8-bit, 16-bit, and 32-bit\n * integer elements, but Arm does not, hence the need for a separate\n * function. We could instead adopt the behavior of Arm's vmaxvq_u32(), i.e.\n * check each 32-bit element, but that would require an additional mask\n * operation on x86.\n */\n#ifndef USE_NO_SIMD\nstatic inline bool\nvector32_is_highbit_set(const Vector32 v)\n{\n#if defined(USE_NEON)\n return vector8_is_highbit_set((Vector8) v);\n#else\n return vector8_is_highbit_set(v);\n#endif\n}\n#endif /* ! USE_NO_SIMD */\n\nBut I still don't understand why the vmaxvq_u32 intrinsic is not used on the arm platform.\nWe have used the macro USE_NEON to distinguish different platforms.\nIn addition, according to the \"Arm Neoverse N1 Software Optimization Guide\",\nThe vmaxvq_u32 intrinsic has half the latency of vmaxvq_u8 and twice the bandwidth.\nSo I think just use vmaxvq_u32 directly.\n\nAny comments or feedback are welcome.\nIMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 07:44:11 +0000",
"msg_from": "Xiang Gao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about the Implementation of vector32_is_highbit_set on ARM"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 2:44 PM Xiang Gao <[email protected]> wrote:\n> * function. We could instead adopt the behavior of Arm's vmaxvq_u32(), i.e.\n> * check each 32-bit element, but that would require an additional mask\n> * operation on x86.\n> */\n\n> But I still don't understand why the vmaxvq_u32 intrinsic is not used on the arm platform.\n\nThe current use case expects all 1's or all 0's in a 32-bit lane. If\nanyone tried using it for arbitrary values, vmaxvq_u32 could give a\ndifferent answer than on x86 using _mm_movemask_epi8, so I think\nthat's the origin of that comment. But it's still a maintenance hazard\nas is, since x86 wouldn't work for arbitrary values. It seems the path\nforward is to rename this function to vector32_is_any_lane_set(), as\nin the attached (untested on Arm). That would allow each\nimplementation to use the most efficient path, whether it's by 8- or\n32-bit lanes. If we someday needed to look at only the high bits, we\nwould need a new function that performed the necessary masking on x86.\n\nIt's possible this method could shave cycles on Arm in some 8-bit lane\ncases where we don't actually care about the high bit specifically,\nsince the movemask equivalent is slow on that platform, but I haven't\nlooked yet.",
"msg_date": "Mon, 20 Nov 2023 16:05:43 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about the Implementation of vector32_is_highbit_set on\n ARM"
},
{
"msg_contents": "On Date: Mon, 20 Nov 2023 16:05:43PM +0700, John Naylor wrote:\r\n\r\n>On Wed, Nov 8, 2023 at 2:44=E2=80=AFPM Xiang Gao <[email protected]> wrote:\r\n>> * function. We could instead adopt the behavior of Arm's vmaxvq_u32(), i=\r\n>.e.\r\n>> * check each 32-bit element, but that would require an additional mask\r\n>> * operation on x86.\r\n>> */\r\n>\r\n>> But I still don't understand why the vmaxvq_u32 intrinsic is not used on=\r\n the arm platform.\r\n\r\n>The current use case expects all 1's or all 0's in a 32-bit lane. If\r\n>anyone tried using it for arbitrary values, vmaxvq_u32 could give a\r\n>different answer than on x86 using _mm_movemask_epi8, so I think\r\n>that's the origin of that comment. But it's still a maintenance hazard\r\n>as is, since x86 wouldn't work for arbitrary values. It seems the path\r\n>forward is to rename this function to vector32_is_any_lane_set(), as\r\n>in the attached (untested on Arm). That would allow each\r\n>implementation to use the most efficient path, whether it's by 8- or\r\n>32-bit lanes. If we someday needed to look at only the high bits, we\r\n>would need a new function that performed the necessary masking on x86.\r\n>\r\n>It's possible this method could shave cycles on Arm in some 8-bit lane\r\n>cases where we don't actually care about the high bit specifically,\r\n>since the movemask equivalent is slow on that platform, but I haven't\r\n>looked yet.\r\n\r\nThank you for your detailed explanation.\r\nCan I do some testing and submit this patch?\r\nIMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.\r\n",
"msg_date": "Thu, 23 Nov 2023 09:28:50 +0000",
"msg_from": "Xiang Gao <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Question about the Implementation of vector32_is_highbit_set on\n ARM"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 4:29 PM Xiang Gao <[email protected]> wrote:\n>\n> Thank you for your detailed explanation.\n> Can I do some testing and submit this patch?\n\nPlease do, thanks.\n\n\n",
"msg_date": "Thu, 23 Nov 2023 17:56:55 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about the Implementation of vector32_is_highbit_set on\n ARM"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nWhile tracking a buildfarm, I found that drongo failed the test pg_upgrade/003_logical_slots [1].\nA strange point is that the test passed in the next iteration. Currently I'm not\nsure the reason, but I will keep my eye for it and will investigate if it\nhappens again.\n\nI think this failure is not related with our logical slots work, whereas it\nfailed 003_logical_slots.pl. More detail, please see latter part.\n\nFor more investigation, a server log during the upgrade may be needed. It will\nbe in the data directory so BF system will not upload them. I may need additional\ninformation if it failed again.\n\n# Analysis of failure\n\nAccording to the output, pg_upgrade seemed to be failed while restoring objects\nto new cluster[2].\n\nAs code-level anaysis, pg_upgrade command failed in exec_prog().\nIn the function, pg_restore tried to be executed for database \"postgres\".\nBelow is a brief call-stack. Note that pg_restore is not used for migrating\nlogical replication slots, it is done by pg_upgrade binary itself. Also, the\nmigration is done after all objects are copied, not in create_new_objects().\n\n```\nexec_prog()\nparallel_exec_prog(\"pg_restore ... \") <-- Since -j option is not specified, it is just a wrapper\ncreate_new_objects()\nmain()\n```\n\nIn exec_prog(), system() system call was called but returned non-zero value.\nDoc said that sytem() returns value that is returned by the command interpreter,\nwhen input is not NULL [3]. Unfortunately, current code does not output the\nreturn code. 
Also, BF system does not upload data directory for failed tests.\nTherefore, I could not get more information for the investigation.\n\n[1]: https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2023-11-07%2013%3A43%3A23&stg=pg_upgrade-check\n[2]:\n```\n...\n# No postmaster PID for node \"oldpub\"\n# Running: pg_upgrade --no-sync -d C:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build/testrun/pg_upgrade/003_logical_slots\\\\data/t_003_logical_slots_oldpub_data/pgdata -D C:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build/testrun/pg_upgrade/003_logical_slots\\\\data/t_003_logical_slots_newpub_data/pgdata -b C:/prog/bf/root/HEAD/PGSQL~1.BUI/TMP_IN~1/prog/bf/root/HEAD/inst/bin -B C:/prog/bf/root/HEAD/PGSQL~1.BUI/TMP_IN~1/prog/bf/root/HEAD/inst/bin -s 127.0.0.1 -p 54813 -P 54814 --copy\nPerforming Consistency Checks\n...\nSetting frozenxid and minmxid counters in new cluster ok\nRestoring global objects in the new cluster ok\nRestoring database schemas in the new cluster \n*failure*\n\nConsult the last few lines of \"C:/prog/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/003_logical_slots/data/t_003_logical_slots_newpub_data/pgdata/pg_upgrade_output.d/20231107T142224.580/log/pg_upgrade_dump_5.log\" for\nthe probable cause of the failure.\nFailure, exiting\n[14:23:26.632](70.141s) not ok 10 - run of pg_upgrade of old cluster\n[14:23:26.632](0.000s) # Failed test 'run of pg_upgrade of old cluster'\n# at C:/prog/bf/root/HEAD/pgsql/src/bin/pg_upgrade/t/003_logical_slots.pl line 170.\n### Starting node \"newpub\"\n# Running: pg_ctl -w -D C:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build/testrun/pg_upgrade/003_logical_slots\\\\data/t_003_logical_slots_newpub_data/pgdata -l C:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build/testrun/pg_upgrade/003_logical_slots\\\\log/003_logical_slots_newpub.log -o --cluster-name=newpub start\nwaiting for server to start.... 
done\nserver started\n# Postmaster PID for node \"newpub\" is 4604\n[14:23:28.398](1.766s) not ok 11 - check the slot exists on new cluster\n[14:23:28.398](0.001s) # Failed test 'check the slot exists on new cluster'\n# at C:/prog/bf/root/HEAD/pgsql/src/bin/pg_upgrade/t/003_logical_slots.pl line 176.\n[14:23:28.399](0.000s) # got: ''\n# expected: 'regress_sub|t'\n...\n```\n[3]: https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/system-wsystem?view=msvc-170\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Wed, 8 Nov 2023 08:13:06 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Random pg_upgrade test failure on drongo "
},
{
"msg_contents": "Dear hackers,\n \n> While tracking a buildfarm, I found that drongo failed the test\n> pg_upgrade/003_logical_slots [1].\n> A strange point is that the test passed in the next iteration. Currently I'm not\n> sure the reason, but I will keep my eye for it and will investigate if it\n> happens again.\n \nThis email just tells an update. We found that fairywren was also failed due to\nthe same reason [2]. It fails inconsistently, but there might be a bad thing on\nwindows. I'm now trying to reproduce with my colleagues to analyze more detail.\nAlso, working with Andrew for getting logs emitted during the upgrade.\nI will continue to keep on my eye.\n \n[2]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-11-08%2010%3A22%3A45\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Thu, 16 Nov 2023 05:12:02 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Random pg_upgrade test failure on drongo "
},
{
"msg_contents": "Dear hackers,\n\nThis email tells an update. The machine drongo failed the test a week ago [1]\nand finally got logfiles. PSA files.\n\n## Observed failure\n\npg_upgrade_server.log is a server log during the pg_upgrade command. According to\nit, the TRUNCATE command seemed to be failed due to a \"File exists\" error.\n\n```\n2023-11-15 00:02:02.239 UTC [1752:18] 003_logical_slots.pl ERROR: could not create file \"base/1/2683\": File exists\n2023-11-15 00:02:02.239 UTC [1752:19] 003_logical_slots.pl STATEMENT: \n\t-- For binary upgrade, preserve pg_largeobject and index relfilenodes\n\tSELECT pg_catalog.binary_upgrade_set_next_index_relfilenode('2683'::pg_catalog.oid);\n\tSELECT pg_catalog.binary_upgrade_set_next_heap_relfilenode('2613'::pg_catalog.oid);\n\tTRUNCATE pg_catalog.pg_largeobject;\n...\n```\n\n## Analysis\n\nI think it caused due to the STATUS_DELETE_PENDING failure, not related with recent\nupdates for pg_upgrade.\n\nThe file \"base/1/2683\" is an index file for pg_largeobject_loid_pn_index, and the\noutput meant that file creation was failed. Below is a backtrace.\n\n```\npgwin32_open() // <-- this returns -1\nopen()\nBasicOpenFilePerm()\nPathNameOpenFilePerm()\nPathNameOpenFile()\nmdcreate()\nsmgrcreate()\nRelationCreateStorage()\nRelationSetNewRelfilenumber()\nExecuteTruncateGuts()\nExecuteTruncate()\n```\n\nBut this is strange. Before calling mdcreate(), we surely unlink the file which\nhave the same name. Below is a trace until unlink.\n\n```\npgunlink()\nunlink()\nmdunlinkfork()\nmdunlink()\nsmgrdounlinkall()\nRelationSetNewRelfilenumber() // common path with above\nExecuteTruncateGuts()\nExecuteTruncate()\n```\n\nI found Thomas said that [2] pgunlink sometimes could not remove file even if\nit returns OK, at that time NTSTATUS is STATUS_DELETE_PENDING. 
Also, a comment\nin pgwin32_open_handle() mentions the same thing:\n\n```\n\t\t/*\n\t\t * ERROR_ACCESS_DENIED is returned if the file is deleted but not yet\n\t\t * gone (Windows NT status code is STATUS_DELETE_PENDING). In that\n\t\t * case, we'd better ask for the NT status too so we can translate it\n\t\t * to a more Unix-like error. We hope that nothing clobbers the NT\n\t\t * status in between the internal NtCreateFile() call and CreateFile()\n\t\t * returning.\n\t\t *\n```\n\nThe definition of STATUS_DELETE_PENDING can be seen in [3]. Based on that, indeed,\nopen() would be able to fail with STATUS_DELETE_PENDING if the deletion is pending\nbut the file is opened again.\n\nAnother thread [4] also deals with this issue, hit while doing rmtree->unlink, and it retries\nthe removal if it fails with STATUS_DELETE_PENDING. So, should we retry the open when\nit fails as well? Anyway, such a fix seems out of scope for pg_upgrade.\n\nWhat do you think? Do you have any thoughts about it?\n\n## Acknowledgement\n\nI want to say thanks to Sholk and Vingesh for helping with the analysis.\n\n[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2023-11-15%2006%3A16%3A15\n[2]: https://www.postgresql.org/message-id/CA%2BhUKGKsdzw06c5nnb%3DKYG9GmvyykoVpJA_VR3k0r7cZOKcx6Q%40mail.gmail.com\n[3]: https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55\n[4]: https://www.postgresql.org/message-id/flat/20220919213217.ptqfdlcc5idk5xup%40awork3.anarazel.de#6ae5e2ba3dd6e1fd680dcc34eab710d5\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 21 Nov 2023 10:35:13 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Random pg_upgrade test failure on drongo "
},
{
"msg_contents": "Dear hackers,\n\n> This email tells an update. The machine drongo failed the test a week ago [1]\n> and finally got logfiles. PSA files.\n\nOh, sorry. I missed to attach files. You can see pg_upgrade_server.log for now.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Tue, 21 Nov 2023 10:37:03 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Random pg_upgrade test failure on drongo "
},
{
"msg_contents": "Hello Kuroda-san,\n\n21.11.2023 13:37, Hayato Kuroda (Fujitsu) wrote:\n>> This email tells an update. The machine drongo failed the test a week ago [1]\n>> and finally got logfiles. PSA files.\n> Oh, sorry. I missed to attach files. You can see pg_upgrade_server.log for now.\n>\n\nI can easily reproduce this failure on my workstation by running 5 tests\n003_logical_slots in parallel inside Windows VM with it's CPU resources\nlimited to 50%, like so:\nVBoxManage controlvm \"Windows\" cpuexecutioncap 50\n\nset PGCTLTIMEOUT=180\npython3 -c \"NUMITERATIONS=20;NUMTESTS=5;import os;tsts='';exec('for i in range(1,NUMTESTS+1): \ntsts+=f\\\"pg_upgrade_{i}/003_logical_slots \\\"'); exec('for i in range(1,NUMITERATIONS+1):print(f\\\"iteration {i}\\\"); \nassert(os.system(f\\\"meson test --num-processes {NUMTESTS} {tsts}\\\") == 0)')\"\n...\niteration 2\nninja: Entering directory `C:\\src\\postgresql\\build'\nninja: no work to do.\n1/5 postgresql:pg_upgrade_2 / pg_upgrade_2/003_logical_slots ERROR 60.30s exit status 25\n...\npg_restore: error: could not execute query: ERROR: could not create file \"base/1/2683\": File exists\n...\n\nI agree with your analysis and would like to propose a PoC fix (see\nattached). With this patch applied, 20 iterations succeeded for me.\n\nBest regards,\nAlexander",
"msg_date": "Thu, 23 Nov 2023 14:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Dear Alexander,\r\n\r\n> \r\n> I can easily reproduce this failure on my workstation by running 5 tests\r\n> 003_logical_slots in parallel inside Windows VM with it's CPU resources\r\n> limited to 50%, like so:\r\n> VBoxManage controlvm \"Windows\" cpuexecutioncap 50\r\n> \r\n> set PGCTLTIMEOUT=180\r\n> python3 -c \"NUMITERATIONS=20;NUMTESTS=5;import os;tsts='';exec('for i in\r\n> range(1,NUMTESTS+1):\r\n> tsts+=f\\\"pg_upgrade_{i}/003_logical_slots \\\"'); exec('for i in\r\n> range(1,NUMITERATIONS+1):print(f\\\"iteration {i}\\\");\r\n> assert(os.system(f\\\"meson test --num-processes {NUMTESTS} {tsts}\\\") == 0)')\"\r\n> ...\r\n> iteration 2\r\n> ninja: Entering directory `C:\\src\\postgresql\\build'\r\n> ninja: no work to do.\r\n> 1/5 postgresql:pg_upgrade_2 / pg_upgrade_2/003_logical_slots\r\n> ERROR 60.30s exit status 25\r\n> ...\r\n> pg_restore: error: could not execute query: ERROR: could not create file\r\n> \"base/1/2683\": File exists\r\n> ...\r\n\r\nGreat. I do not have such an environment so I could not find. This seemed to\r\nsuggest that the failure was occurred because the system was busy.\r\n\r\n> I agree with your analysis and would like to propose a PoC fix (see\r\n> attached). With this patch applied, 20 iterations succeeded for me.\r\n\r\nThanks, here are comments. I'm quite not sure for the windows, so I may say\r\nsomething wrong.\r\n\r\n* I'm not sure why the file/directory name was changed before doing a unlink.\r\n Could you add descriptions?\r\n* IIUC, the important points is the latter part, which waits until the status is\r\n changed. 
Based on that, can we remove a double rmtree() from cleanup_output_dirs()?\r\n They seems to be add for the similar motivation.\r\n\r\n```\r\n+\tloops = 0;\r\n+\twhile (lstat(curpath, &st) < 0 && lstat_error_was_status_delete_pending())\r\n+\t{\r\n+\t\tif (++loops > 100)\t\t/* time out after 10 sec */\r\n+\t\t\treturn -1;\r\n+\t\tpg_usleep(100000);\t\t/* us */\r\n+\t}\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Thu, 23 Nov 2023 12:15:22 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Hello Kuroda-san,\n\n23.11.2023 15:15, Hayato Kuroda (Fujitsu) wrote:\n>\n>> I agree with your analysis and would like to propose a PoC fix (see\n>> attached). With this patch applied, 20 iterations succeeded for me.\n> Thanks, here are comments. I'm quite not sure for the windows, so I may say\n> something wrong.\n>\n> * I'm not sure why the file/directory name was changed before doing a unlink.\n> Could you add descriptions?\n\nPlease look at the simple test program attached. It demonstrates the\nfailure for me when running in two sessions as follows:\nunlink-open test 150 1000\n unlink-open test2 150 1000\n...\niteration 60\niteration 61\nfopen() after unlink() failed (13)\n\nProcess Monitor shows:\n...\n 9:16:55.9249792 AM unlink-open.exe 3232 WriteFile C:\\src\\test2 SUCCESS Offset: 138,412,032, \nLength: 1,048,576\n### unlink() performed for the file \"test\":\n9:16:55.9852903 AM unlink-open.exe 4968 CreateFile C:\\src\\test SUCCESS Desired Access: Read Attributes, \nDelete, Disposition: Open, Options: Non-Directory File, Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, \nDelete, AllocationSize: n/a, OpenResult: Opened\n9:16:55.9853637 AM unlink-open.exe 4968 QueryAttributeTagFile C:\\src\\test SUCCESS Attributes: A, \nReparseTag: 0x0\n### file \"test\" gets into state DELETE PENDING:\n9:16:55.9853756 AM unlink-open.exe 4968 SetDispositionInformationFile C:\\src\\test SUCCESS Delete: True\n9:16:55.9853888 AM unlink-open.exe 4968 CloseFile C:\\src\\test SUCCESS\n### concurrent operations with file \"test2\":\n 9:16:55.9866362 AM unlink-open.exe 3232 WriteFile C:\\src\\test2 SUCCESS Offset: 139,460,608, \nLength: 1,048,576\n...\n 9:16:55.9972373 AM unlink-open.exe 3232 WriteFile C:\\src\\test2 SUCCESS Offset: 157,286,400, \nLength: 1,048,576\n 9:16:55.9979040 AM unlink-open.exe 3232 CloseFile C:\\src\\test2 SUCCESS\n### unlink() for \"test2\":\n 9:16:56.1029981 AM unlink-open.exe 3232 CreateFile C:\\src\\test2 SUCCESS Desired 
Access: Read \nAttributes, Delete, Disposition: Open, Options: Non-Directory File, Open Reparse Point, Attributes: n/a, ShareMode: \nRead, Write, Delete, AllocationSize: n/a, OpenResult: Opened\n 9:16:56.1030432 AM unlink-open.exe 3232 QueryAttributeTagFile C:\\src\\test2 SUCCESS Attributes: \nA, ReparseTag: 0x0\n### file \"test2\" gets into state DELETE PENDING:\n 9:16:56.1030517 AM unlink-open.exe 3232 SetDispositionInformationFile C:\\src\\test2 SUCCESS \nDelete: True\n 9:16:56.1030625 AM unlink-open.exe 3232 CloseFile C:\\src\\test2 SUCCESS\n### and then it opened successfully:\n 9:16:56.1189503 AM unlink-open.exe 3232 CreateFile C:\\src\\test2 SUCCESS Desired Access: Generic \nWrite, Read Attributes, Disposition: OverwriteIf, Options: Synchronous IO Non-Alert, Non-Directory File, Attributes: N, \nShareMode: Read, Write, AllocationSize: 0, OpenResult: Created\n 9:16:56.1192016 AM unlink-open.exe 3232 CloseFile C:\\src\\test2 SUCCESS\n### operations with file \"test2\" continued:\n 9:16:56.1193394 AM unlink-open.exe 3232 CreateFile C:\\src\\test2 SUCCESS Desired Access: Read \nAttributes, Delete, Disposition: Open, Options: Non-Directory File, Open Reparse Point, Attributes: n/a, ShareMode: \nRead, Write, Delete, AllocationSize: n/a, OpenResult: Opened\n 9:16:56.1193895 AM unlink-open.exe 3232 QueryAttributeTagFile C:\\src\\test2 SUCCESS Attributes: \nA, ReparseTag: 0x0\n 9:16:56.1194042 AM unlink-open.exe 3232 SetDispositionInformationFile C:\\src\\test2 SUCCESS \nDelete: True\n 9:16:56.1194188 AM unlink-open.exe 3232 CloseFile C:\\src\\test2 SUCCESS\n 9:16:56.1198459 AM unlink-open.exe 3232 CreateFile C:\\src\\test2 SUCCESS Desired Access: Generic \nWrite, Read Attributes, Disposition: OverwriteIf, Options: Synchronous IO Non-Alert, Non-Directory File, Attributes: N, \nShareMode: Read, Write, AllocationSize: 0, OpenResult: Created\n 9:16:56.1200302 AM unlink-open.exe 3232 WriteFile C:\\src\\test2 SUCCESS Offset: 0, Length: \n1,048,576, Priority: Normal\n...\n 
9:16:56.1275871 AM unlink-open.exe 3232 WriteFile C:\\src\\test2 SUCCESS Offset: 10,485,760, \nLength: 1,048,576\n\n### at the same time, CreateFile() for file \"test\" failed:\n9:16:56.1276453 AM unlink-open.exe 4968 CreateFile C:\\src\\test DELETE PENDING Desired Access: Generic \nWrite, Read Attributes, Disposition: OverwriteIf, Options: Synchronous IO Non-Alert, Non-Directory File, Attributes: N, \nShareMode: Read, Write, AllocationSize: 0\n9:16:56.1279359 AM unlink-open.exe 3232 WriteFile C:\\src\\test2 SUCCESS Offset: 11,534,336, Length: 1,048,576\n9:16:56.1283452 AM unlink-open.exe 3232 WriteFile C:\\src\\test2 SUCCESS Offset: 12,582,912, Length: 1,048,576\n...\n\nBut with rename(MoveFileEx), I see:\nunlink-open test 150 1000 rename\n...\n9:38:01.7035286 AM unlink-open.exe 10208 WriteFile C:\\src\\test SUCCESS Offset: 156,237,824, Length: 1,048,576\n9:38:01.7075621 AM unlink-open.exe 10208 WriteFile C:\\src\\test SUCCESS Offset: 157,286,400, Length: 1,048,576\n9:38:01.7101299 AM unlink-open.exe 10208 CloseFile C:\\src\\test SUCCESS\n9:38:01.7130802 AM unlink-open.exe 10208 CreateFile C:\\src\\test SUCCESS Desired Access: Read Attributes, \nDelete, Synchronize, Disposition: Open, Options: Synchronous IO Non-Alert, Open Reparse Point, Attributes: n/a, \nShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened\n9:38:01.7132844 AM unlink-open.exe 10208 QueryAttributeTagFile C:\\src\\test SUCCESS Attributes: A, \nReparseTag: 0x0\n9:38:01.7133420 AM unlink-open.exe 10208 QueryBasicInformationFile C:\\src\\test SUCCESS CreationTime: \n11/24/2023 9:38:01 AM, LastAccessTime: 11/24/2023 9:38:01 AM, LastWriteTime: 11/24/2023 9:38:01 AM, ChangeTime: \n11/24/2023 9:38:01 AM, FileAttributes: A\n9:38:01.7135191 AM unlink-open.exe 10208 CreateFile C:\\src SUCCESS Desired Access: Write Data/Add File, \nSynchronize, Disposition: Open, Options: , Attributes: n/a, ShareMode: Read, Write, AllocationSize: n/a, OpenResult: Opened\n\n### file \"test\" renamed to 
\"test.tmp\", it doesn't get into state DELETE PENDING\n9:38:01.7136221 AM unlink-open.exe 10208 SetRenameInformationFile C:\\src\\test SUCCESS ReplaceIfExists: True, \nFileName: C:\\src\\test.tmp\n9:38:01.8384110 AM unlink-open.exe 10208 CloseFile C:\\src SUCCESS\n9:38:01.8388203 AM unlink-open.exe 10208 CloseFile C:\\src\\test.tmp SUCCESS\n\n### then file \"test.tmp\" deleted as usual:\n9:38:01.8394278 AM unlink-open.exe 10208 CreateFile C:\\src\\test.tmp SUCCESS Desired Access: Read \nAttributes, Delete, Disposition: Open, Options: Non-Directory File, Open Reparse Point, Attributes: n/a, ShareMode: \nRead, Write, Delete, AllocationSize: n/a, OpenResult: Opened\n9:38:01.8396534 AM unlink-open.exe 10208 QueryAttributeTagFile C:\\src\\test.tmp SUCCESS Attributes: A, \nReparseTag: 0x0\n9:38:01.8396885 AM unlink-open.exe 10208 SetDispositionInformationFile C:\\src\\test.tmp SUCCESS Delete: True\n9:38:01.8397312 AM unlink-open.exe 10208 CloseFile C:\\src\\test.tmp SUCCESS\n9:38:01.9162566 AM unlink-open.exe 10208 CreateFile C:\\src\\test SUCCESS Desired Access: Generic Write, \nRead Attributes, Disposition: OverwriteIf, Options: Synchronous IO Non-Alert, Non-Directory File, Attributes: N, \nShareMode: Read, Write, AllocationSize: 0, OpenResult: Created\n9:38:01.9167628 AM unlink-open.exe 10208 CloseFile C:\\src\\test SUCCESS\n\nSo the same test run with MoveFileEx():\nunlink-open test 150 1000 rename\n unlink-open test2 150 1000 rename\nsuccessfully passes for me in the same environment (Windows VM slowed down to 50%).\n\nThat is, my idea was to try removing a file through renaming it as a fast\npath (thus avoiding that troublesome state DELETE PENDING), and if that\nfails, to perform removal as before. May be the whole function might be\nsimplified, but I'm not sure about special cases yet.\n\n> * IIUC, the important points is the latter part, which waits until the status is\n> changed. 
Based on that, can we remove a double rmtree() from cleanup_output_dirs()?\n> They seems to be add for the similar motivation.\n\nI couldn't yet reproduce a failure, which motivated that doubling (IIUC, it\nwas observed in [1]), with c28911750 reverted, so I need more time to\nresearch that issue to answer this question.\n\n[1] https://www.postgresql.org/message-id/20230131172806.GM22427%40telsasoft.com\n\nBest regards,\nAlexander",
"msg_date": "Fri, 24 Nov 2023 11:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Dear Alexander,\r\n\r\n> \r\n> Please look at the simple test program attached. It demonstrates the\r\n> failure for me when running in two sessions as follows:\r\n> unlink-open test 150 1000\r\n> unlink-open test2 150 1000\r\n\r\nThanks for attaching a program. This helps us to understand the issue.\r\nI wanted to confirm your env - this failure was occurred on windows server XXXX, right?\r\n\r\n> \r\n> That is, my idea was to try removing a file through renaming it as a fast\r\n> path (thus avoiding that troublesome state DELETE PENDING), and if that\r\n> fails, to perform removal as before. May be the whole function might be\r\n> simplified, but I'm not sure about special cases yet.\r\n\r\nI felt that your result showed pgrename() would be more rarely delayed than unlink().\r\nThat's why a file which has original name would not exist when subsequent open() was called.\r\n\r\nAbout special cases, I wanted seniors to check.\r\n\r\n> > * IIUC, the important points is the latter part, which waits until the status is\r\n> > changed. Based on that, can we remove a double rmtree() from\r\n> cleanup_output_dirs()?\r\n> > They seems to be add for the similar motivation.\r\n> \r\n> I couldn't yet reproduce a failure, which motivated that doubling (IIUC, it\r\n> was observed in [1]), with c28911750 reverted, so I need more time to\r\n> research that issue to answer this question.\r\n\r\nYeah, as the first place, this failure seldom occurred....\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Sat, 25 Nov 2023 15:19:22 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Hello Kuroda-san,\n\n25.11.2023 18:19, Hayato Kuroda (Fujitsu) wrote:\n> Thanks for attaching a program. This helps us to understand the issue.\n> I wanted to confirm your env - this failure was occurred on windows server XXXX, right?\n\nI see that behavior on:\nWindows 10 Version 1607 (OS Build 14393.0)\nWindows Server 2016 Version 1607 (OS Build 14393.0)\nWindows Server 2019 Version 1809 (OS Build 17763.1)\n\nBut it's not reproduced on:\nWindows 10 Version 1809 (OS Build 17763.1) (triple-checked)\nWindows Server 2019 Version 1809 (OS Build 17763.592)\nWindows 10 Version 22H2 (OS Build 19045.3693)\nWindows 11 Version 21H2 (OS Build 22000.613)\n\nSo it looks like the failure occurs depending not on Windows edition, but\nrather on it's build. For Windows Server 2019 the \"good\" build is\nsomewhere between 17763.1 and 17763.592, but for Windows 10 it's between\n14393.0 and 17763.1.\n(Maybe there was some change related to FILE_DISPOSITION_POSIX_SEMANTICS/\nFILE_DISPOSITION_ON_CLOSE implementation; I don't know where to find\ninformation about that change.)\n\nIt's also interesting, what is full version/build of OS on drongo and\nfairywren.\n\n>> That is, my idea was to try removing a file through renaming it as a fast\n>> path (thus avoiding that troublesome state DELETE PENDING), and if that\n>> fails, to perform removal as before. May be the whole function might be\n>> simplified, but I'm not sure about special cases yet.\n> I felt that your result showed pgrename() would be more rarely delayed than unlink().\n> That's why a file which has original name would not exist when subsequent open() was called.\n\nI think that's because unlink() is performed asynchronously on those old\nWindows versions, but rename() is always synchronous.\n\n>>> * IIUC, the important points is the latter part, which waits until the status is\n>>> changed. 
Based on that, can we remove a double rmtree() from\n>> cleanup_output_dirs()?\n>>> They seems to be add for the similar motivation.\n>> I couldn't yet reproduce a failure, which motivated that doubling (IIUC, it\n>> was observed in [1]), with c28911750 reverted, so I need more time to\n>> research that issue to answer this question.\n> Yeah, as the first place, this failure seldom occurred....\n\nI've managed to reproduce that issue (or at least a situation that\nmanifested similarly) with a sleep added in miscinit.c:\n ereport(IsPostmasterEnvironment ? LOG : NOTICE,\n (errmsg(\"database system is shut down\")));\n+ pg_usleep(500000L);\n\nWith this change, I get the same warning as in [1] when running in\nparallel 10 tests 002_pg_upgrade with a minimal olddump (on iterations\n33, 46, 8). And with my PoC patch applied, I could see the same warning\nas well (on iteration 6).\n\nI believe that's because rename() can't rename a directory containing an\nopen file, just as unlink() can't remove it.\n\nIn the light of the above, I think that the issue in question should be\nfixed in accordance with/as a supplement to [2].\n\n[1] https://www.postgresql.org/message-id/20230131172806.GM22427%40telsasoft.com\n[2] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BajSQ_8eu2AogTncOnZ5me2D-Cn66iN_-wZnRjLN%2Bicg%40mail.gmail.com\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 27 Nov 2023 15:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "\nOn 2023-11-27 Mo 07:00, Alexander Lakhin wrote:\n> Hello Kuroda-san,\n>\n> 25.11.2023 18:19, Hayato Kuroda (Fujitsu) wrote:\n>> Thanks for attaching a program. This helps us to understand the issue.\n>> I wanted to confirm your env - this failure was occurred on windows \n>> server XXXX, right?\n>\n> I see that behavior on:\n> Windows 10 Version 1607 (OS Build 14393.0)\n> Windows Server 2016 Version 1607 (OS Build 14393.0)\n> Windows Server 2019 Version 1809 (OS Build 17763.1)\n>\n> But it's not reproduced on:\n> Windows 10 Version 1809 (OS Build 17763.1) (triple-checked)\n> Windows Server 2019 Version 1809 (OS Build 17763.592)\n> Windows 10 Version 22H2 (OS Build 19045.3693)\n> Windows 11 Version 21H2 (OS Build 22000.613)\n>\n> So it looks like the failure occurs depending not on Windows edition, but\n> rather on it's build. For Windows Server 2019 the \"good\" build is\n> somewhere between 17763.1 and 17763.592, but for Windows 10 it's between\n> 14393.0 and 17763.1.\n> (Maybe there was some change related to FILE_DISPOSITION_POSIX_SEMANTICS/\n> FILE_DISPOSITION_ON_CLOSE implementation; I don't know where to find\n> information about that change.)\n>\n> It's also interesting, what is full version/build of OS on drongo and\n> fairywren.\n>\n>\n\nIt's WS 2019 1809/17763.4252. The latest available AFAICT is 17763.5122\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 27 Nov 2023 07:39:41 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "\nOn 2023-11-27 Mo 07:39, Andrew Dunstan wrote:\n>\n> On 2023-11-27 Mo 07:00, Alexander Lakhin wrote:\n>> Hello Kuroda-san,\n>>\n>> 25.11.2023 18:19, Hayato Kuroda (Fujitsu) wrote:\n>>> Thanks for attaching a program. This helps us to understand the issue.\n>>> I wanted to confirm your env - this failure was occurred on windows \n>>> server XXXX, right?\n>>\n>> I see that behavior on:\n>> Windows 10 Version 1607 (OS Build 14393.0)\n>> Windows Server 2016 Version 1607 (OS Build 14393.0)\n>> Windows Server 2019 Version 1809 (OS Build 17763.1)\n>>\n>> But it's not reproduced on:\n>> Windows 10 Version 1809 (OS Build 17763.1) (triple-checked)\n>> Windows Server 2019 Version 1809 (OS Build 17763.592)\n>> Windows 10 Version 22H2 (OS Build 19045.3693)\n>> Windows 11 Version 21H2 (OS Build 22000.613)\n>>\n>> So it looks like the failure occurs depending not on Windows edition, \n>> but\n>> rather on it's build. For Windows Server 2019 the \"good\" build is\n>> somewhere between 17763.1 and 17763.592, but for Windows 10 it's between\n>> 14393.0 and 17763.1.\n>> (Maybe there was some change related to \n>> FILE_DISPOSITION_POSIX_SEMANTICS/\n>> FILE_DISPOSITION_ON_CLOSE implementation; I don't know where to find\n>> information about that change.)\n>>\n>> It's also interesting, what is full version/build of OS on drongo and\n>> fairywren.\n>>\n>>\n>\n> It's WS 2019 1809/17763.4252. The latest available AFAICT is 17763.5122\n>\n>\n>\n\nI've updated it to 17763.5122 now.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 27 Nov 2023 08:58:35 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Dear Alexander, Andrew,\r\n \r\nThanks for your analysis!\r\n \r\n> I see that behavior on:\r\n> Windows 10 Version 1607 (OS Build 14393.0)\r\n> Windows Server 2016 Version 1607 (OS Build 14393.0)\r\n> Windows Server 2019 Version 1809 (OS Build 17763.1)\r\n>\r\n> But it's not reproduced on:\r\n> Windows 10 Version 1809 (OS Build 17763.1) (triple-checked)\r\n> Windows Server 2019 Version 1809 (OS Build 17763.592)\r\n> Windows 10 Version 22H2 (OS Build 19045.3693)\r\n> Windows 11 Version 21H2 (OS Build 22000.613)\r\n>\r\n> So it looks like the failure occurs depending not on Windows edition, but\r\n> rather on it's build. For Windows Server 2019 the \"good\" build is\r\n> somewhere between 17763.1 and 17763.592, but for Windows 10 it's between\r\n> 14393.0 and 17763.1.\r\n> (Maybe there was some change related to\r\n> FILE_DISPOSITION_POSIX_SEMANTICS/\r\n> FILE_DISPOSITION_ON_CLOSE implementation; I don't know where to find\r\n> information about that change.)\r\n>\r\n> It's also interesting, what is full version/build of OS on drongo and\r\n> fairywren.\r\n \r\nThanks for your interest for the issue. I have been tracking the failure but been not occurred.\r\nYour analysis seems to solve BF failures, by updating OSes.\r\n \r\n> I think that's because unlink() is performed asynchronously on those old\r\n> Windows versions, but rename() is always synchronous.\r\n \r\nOK. Actually I could not find descriptions about them, but your experiment showed facts.\r\n \r\n> I've managed to reproduce that issue (or at least a situation that\r\n> manifested similarly) with a sleep added in miscinit.c:\r\n> ereport(IsPostmasterEnvironment ? LOG : NOTICE,\r\n> (errmsg(\"database system is shut down\")));\r\n> + pg_usleep(500000L);\r\n>\r\n> With this change, I get the same warning as in [1] when running in\r\n> parallel 10 tests 002_pg_upgrade with a minimal olddump (on iterations\r\n> 33, 46, 8). 
And with my PoC patch applied, I could see the same warning\r\n> as well (on iteration 6).\r\n>\r\n> I believe that's because rename() can't rename a directory containing an\r\n> open file, just as unlink() can't remove it.\r\n>\r\n> In the light of the above, I think that the issue in question should be\r\n> fixed in accordance with/as a supplement to [2].\r\n \r\nOK, I understood that we need to fix more around here. For now, we should focus our points.\r\n \r\nYour patch seems good, but it needs more sight from windows-friendly developers.\r\nHow do other think?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 30 Nov 2023 10:00:21 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Hello Andrew and Kuroda-san,\n\n27.11.2023 16:58, Andrew Dunstan wrote:\n>>> It's also interesting, what is full version/build of OS on drongo and\n>>> fairywren.\n>>>\n>>\n>> It's WS 2019 1809/17763.4252. The latest available AFAICT is 17763.5122\n>>\n>\n> I've updated it to 17763.5122 now.\n>\n\nThank you for the information! It had pushed me to upgrade my Server\n2019 1809 17763.592 to 17763.4252. And then I discovered that I have\ndifficulties with reproducing the issue on all my VMs after reboot (even\non old versions/builds). It took me a while to understand what's going on\nand what affects reproduction of the issue.\nI was puzzled by the fact that I can't reproduce the issue with my\nunlink-open test program under seemingly the same conditions as before,\nuntil I realized that the issue reproduced only when the target directory\nopened in Windows Explorer.\n\nNow I'm sorry for bringing more mystery into the topic and for misleading\ninformation.\n\nSo, the issue reproduced only when something scans the working directory\nfor files/opens them. 
I added the same logic into my test program (see\nunlink-open-scandir attached) and now I see the failure on Windows Server\n2019 (Version 10.0.17763.4252).\nA script like this:\nstart cmd /c \"unlink-open-scandir test1 10 5000 >log1 2>&1\"\n...\nstart cmd /c \"unlink-open-scandir test10 10 5000 >log10 2>&1\"\n\nresults in:\nC:\\temp>find \"failed\" log*\n---------- LOG1\n---------- LOG10\nfopen() after unlink() failed (13)\n---------- LOG2\nfopen() after unlink() failed (13)\n---------- LOG3\nfopen() after unlink() failed (13)\n---------- LOG4\nfopen() after unlink() failed (13)\n---------- LOG5\nfopen() after unlink() failed (13)\n---------- LOG6\nfopen() after unlink() failed (13)\n---------- LOG7\nfopen() after unlink() failed (13)\n---------- LOG8\nfopen() after unlink() failed (13)\n---------- LOG9\nfopen() after unlink() failed (13)\n\nC:\\temp>type log10\n...\niteration 108\nfopen() after unlink() failed (13)\n\nThe same observed on:\nWindows 10 Version 1809 (OS Build 17763.1)\n\nBut no failures on:\nWindows 10 Version 22H2 (OS Build 19045.3693)\nWindows 11 Version 21H2 (OS Build 22000.613)\n\nSo the behavior change really took place, but my previous estimations were\nincorrect (my apologies).\n\nBTW, \"rename\" mode of the test program can produce more rare errors on\nrename:\n---------- LOG3\nMoveFileEx() failed (0)\n\nbut not on open.\n\n30.11.2023 13:00, Hayato Kuroda (Fujitsu) wrote:\n\n> Thanks for your interest for the issue. I have been tracking the failure but been not occurred.\n> Your analysis seems to solve BF failures, by updating OSes.\n\nYes, but I don't think that leaving Server 2019 behind (I suppose Windows\nServer 2019 build 20348 would have the same behaviour as Windows 10 19045)\nis affordable. 
(Though looking at Cirrus CI logs, I see that what is\nlabeled \"Windows Server 2019\" is in fact Windows Server 2022 there.)\n\n>> I think that's because unlink() is performed asynchronously on those old\n>> Windows versions, but rename() is always synchronous.\n> \n> OK. Actually I could not find descriptions about them, but your experiment showed facts.\n\nI don't know what this peculiarity is called, but it looks like when some\nother process captures the file handle, unlink() exits as if the file was\ndeleted completely, but the subsequent open() fails.\n\nBest regards,\nAlexander",
"msg_date": "Thu, 30 Nov 2023 19:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Dear Alexander, \r\n\r\n> I agree with your analysis and would like to propose a PoC fix (see\r\n> attached). With this patch applied, 20 iterations succeeded for me.\r\n\r\nThere are no reviewers so that I will review again. Let's move the PoC\r\nto the concrete patch. Note that I only focused on fixes of random failure,\r\nother parts are out-of-scope.\r\n\r\nBasically, code comments can be updated accordingly.\r\n\r\n01.\r\n\r\n```\r\n\t/*\r\n\t * This function might be called for a regular file or for a junction\r\n\t * point (which we use to emulate symlinks). The latter must be unlinked\r\n\t * with rmdir() on Windows. Before we worry about any of that, let's see\r\n\t * if we can unlink directly, since that's expected to be the most common\r\n\t * case.\r\n\t */\r\n\tsnprintf(tmppath, sizeof(tmppath), \"%s.tmp\", path);\r\n\tif (pgrename(path, tmppath) == 0)\r\n\t{\r\n\t\tif (unlink(tmppath) == 0)\r\n\t\t\treturn 0;\r\n\t\tcurpath = tmppath;\r\n\t}\r\n```\r\n\r\nYou can modify comments atop changes because it is not trivial.\r\nBelow is my draft:\r\n\r\n```\r\n\t * XXX: we rename the target file to \".tmp\" before calling unlink. The\r\n\t * removal may fail with STATUS_DELETE_PENDING status on Windows, so\r\n\t * creating the same file would fail. This assumes that renaming is a\r\n\t * synchronous operation.\r\n```\r\n\r\n02.\r\n\r\n```\r\n\tloops = 0;\r\n\twhile (lstat(curpath, &st) < 0 && lstat_error_was_status_delete_pending())\r\n\t{\r\n\t\tif (++loops > 100)\t\t/* time out after 10 sec */\r\n\t\t\treturn -1;\r\n\t\tpg_usleep(100000);\t\t/* us */\r\n\t}\r\n```\r\n\r\nComments can be added atop the part. Below one is my draft.\r\n\r\n```\r\n\t/*\r\n\t * Wait until the removal is really finished to avoid ERRORs for creating a\r\n\t * same file in other functions.\r\n\t */\r\n```\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 28 Dec 2023 03:08:19 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Hello Kuroda-san,\n\n28.12.2023 06:08, Hayato Kuroda (Fujitsu) wrote:\n> Dear Alexander,\n>\n>> I agree with your analysis and would like to propose a PoC fix (see\n>> attached). With this patch applied, 20 iterations succeeded for me.\n> There are no reviewers so that I will review again. Let's move the PoC\n> to the concrete patch. Note that I only focused on fixes of random failure,\n> other parts are out-of-scope.\n\nThinking about that fix more, I'm not satisfied with the approach proposed.\nFirst, it turns every unlink operation into two write operations\n(rename + unlink), not to say about new risks of having stale .tmp files\n(perhaps, it's ok for regular files (MoveFileEx can overwrite existing\nfiles), but not for directories)\nSecond, it does that on any Windows OS versions, including modern ones,\nwhich are not affected by the issue, as we know.\n\nSo I started to think about other approach: to perform unlink as it's\nimplemented now, but then wait until the DELETE_PENDING state is gone.\nAnd I was very surprised to see that this state is not transient in our case.\nAdditional investigation showed that the test fails not because some aside\nprocess opens a file (concretely, {template1_id/postgres_id}/2683), that is\nbeing deleted, but because of an internal process that opens the file and\nholds a handle to it indefinitely.\nAnd the internal process is ... 
background writer (BgBufferSync()).\n\nSo, I tried just adding bgwriter_lru_maxpages = 0 to postgresql.conf and\ngot 20 x 10 tests passing.\n\nThus, it we want just to get rid of the test failure, maybe it's enough to\nadd this to the test's config...\n\nThe other way to go is to find out whether the background writer process\nshould react on a shared-inval message, sent from smgrdounlinkall(), and\nclose that file's handle,\n\nMaybe we could also (after changing the bgwriter's behaviour) add a waiting\nloop into pgwin32_open_handle() to completely rule out transient open()\nfailures due to some other process (such as Windows Exporer) opening a file\nbeing deleted, but I would not complicate the things until we have a clear\nvision/plans of using modern APIs/relying of modern OS versions' behavior.\nI mean proceeding with something like:\nhttps://commitfest.postgresql.org/40/3951/\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 2 Jan 2024 08:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "On Tue, Jan 2, 2024 at 10:30 AM Alexander Lakhin <[email protected]> wrote:\n>\n> 28.12.2023 06:08, Hayato Kuroda (Fujitsu) wrote:\n> > Dear Alexander,\n> >\n> >> I agree with your analysis and would like to propose a PoC fix (see\n> >> attached). With this patch applied, 20 iterations succeeded for me.\n> > There are no reviewers so that I will review again. Let's move the PoC\n> > to the concrete patch. Note that I only focused on fixes of random failure,\n> > other parts are out-of-scope.\n>\n> Thinking about that fix more, I'm not satisfied with the approach proposed.\n> First, it turns every unlink operation into two write operations\n> (rename + unlink), not to say about new risks of having stale .tmp files\n> (perhaps, it's ok for regular files (MoveFileEx can overwrite existing\n> files), but not for directories)\n> Second, it does that on any Windows OS versions, including modern ones,\n> which are not affected by the issue, as we know.\n>\n> So I started to think about other approach: to perform unlink as it's\n> implemented now, but then wait until the DELETE_PENDING state is gone.\n>\n\nThere is a comment in the code which suggests we shouldn't wait\nindefinitely. See \"However, we won't wait indefinitely for someone\nelse to close the file, as the caller might be holding locks and\nblocking other backends.\"\n\n> And I was very surprised to see that this state is not transient in our case.\n> Additional investigation showed that the test fails not because some aside\n> process opens a file (concretely, {template1_id/postgres_id}/2683), that is\n> being deleted, but because of an internal process that opens the file and\n> holds a handle to it indefinitely.\n> And the internal process is ... 
background writer (BgBufferSync()).\n>\n> So, I tried just adding bgwriter_lru_maxpages = 0 to postgresql.conf and\n> got 20 x 10 tests passing.\n>\n> Thus, it we want just to get rid of the test failure, maybe it's enough to\n> add this to the test's config...\n>\n\nWhat about checkpoints? Can't it do the same while writing the buffers?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 3 Jan 2024 17:12:21 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Hello Amit,\n\n03.01.2024 14:42, Amit Kapila wrote:\n>\n>> So I started to think about other approach: to perform unlink as it's\n>> implemented now, but then wait until the DELETE_PENDING state is gone.\n>>\n> There is a comment in the code which suggests we shouldn't wait\n> indefinitely. See \"However, we won't wait indefinitely for someone\n> else to close the file, as the caller might be holding locks and\n> blocking other backends.\"\n\nYes, I saw it, but initially I thought that we have a transient condition\nthere, so waiting in open() (instead of failing immediately) seemed like a\ngood idea then...\n\n>> And the internal process is ... background writer (BgBufferSync()).\n>>\n>> So, I tried just adding bgwriter_lru_maxpages = 0 to postgresql.conf and\n>> got 20 x 10 tests passing.\n>>\n>> Thus, it we want just to get rid of the test failure, maybe it's enough to\n>> add this to the test's config...\n>>\n> What about checkpoints? Can't it do the same while writing the buffers?\n\nAs we deal here with pg_upgrade/pg_restore, it must not be very easy to get\nthe desired effect, but I think it's not impossible in principle.\nMore details below.\nWhat happens during the pg_upgrade execution is essentially:\n1) CREATE DATABASE \"postgres\" WITH TEMPLATE = template0 OID = 5 ...;\n-- this command flushes file buffers as well\n2) ALTER DATABASE postgres OWNER TO ...\n3) COMMENT ON DATABASE \"postgres\" IS ...\n4) -- For binary upgrade, preserve pg_largeobject and index relfilenodes\n SELECT pg_catalog.binary_upgrade_set_next_index_relfilenode('2683'::pg_catalog.oid);\n SELECT pg_catalog.binary_upgrade_set_next_heap_relfilenode('2613'::pg_catalog.oid);\n TRUNCATE pg_catalog.pg_largeobject;\n-- ^^^ here we can get the error \"could not create file \"base/5/2683\": File exists\"\n...\n\nWe get the effect discussed when the background writer process decides to\nflush a file buffer for pg_largeobject during stage 1.\n(Thus, if a checkpoint somehow 
happened to occur during CREATE DATABASE,\nthe result must be the same.)\nAnd another important factor is shared_buffers = 1MB (set during the test).\nWith the default setting of 128MB I couldn't see the failure.\n\nIt can be reproduced easily (on old Windows versions) just by running\npg_upgrade in a loop (I've got failures on iterations 22, 37, 17 (with the\ndefault cluster)).\nIf an old cluster contains a dozen databases, this increases the failure\nprobability significantly (with 10 additional databases I've got failures\non iterations 4, 1, 6).\n\nPlease see the reproducing script attached.\n\nBest regards,\nAlexander",
"msg_date": "Thu, 4 Jan 2024 15:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "On Thu, Jan 4, 2024 at 5:30 PM Alexander Lakhin <[email protected]> wrote:\n>\n> 03.01.2024 14:42, Amit Kapila wrote:\n> >\n>\n> >> And the internal process is ... background writer (BgBufferSync()).\n> >>\n> >> So, I tried just adding bgwriter_lru_maxpages = 0 to postgresql.conf and\n> >> got 20 x 10 tests passing.\n> >>\n> >> Thus, it we want just to get rid of the test failure, maybe it's enough to\n> >> add this to the test's config...\n> >>\n> > What about checkpoints? Can't it do the same while writing the buffers?\n>\n> As we deal here with pg_upgrade/pg_restore, it must not be very easy to get\n> the desired effect, but I think it's not impossible in principle.\n> More details below.\n> What happens during the pg_upgrade execution is essentially:\n> 1) CREATE DATABASE \"postgres\" WITH TEMPLATE = template0 OID = 5 ...;\n> -- this command flushes file buffers as well\n> 2) ALTER DATABASE postgres OWNER TO ...\n> 3) COMMENT ON DATABASE \"postgres\" IS ...\n> 4) -- For binary upgrade, preserve pg_largeobject and index relfilenodes\n> SELECT pg_catalog.binary_upgrade_set_next_index_relfilenode('2683'::pg_catalog.oid);\n> SELECT pg_catalog.binary_upgrade_set_next_heap_relfilenode('2613'::pg_catalog.oid);\n> TRUNCATE pg_catalog.pg_largeobject;\n> -- ^^^ here we can get the error \"could not create file \"base/5/2683\": File exists\"\n> ...\n>\n> We get the effect discussed when the background writer process decides to\n> flush a file buffer for pg_largeobject during stage 1.\n> (Thus, if a checkpoint somehow happened to occur during CREATE DATABASE,\n> the result must be the same.)\n> And another important factor is shared_buffers = 1MB (set during the test).\n> With the default setting of 128MB I couldn't see the failure.\n>\n> It can be reproduced easily (on old Windows versions) just by running\n> pg_upgrade in a loop (I've got failures on iterations 22, 37, 17 (with the\n> default cluster)).\n> If an old cluster contains dozen of databases, 
this increases the failure\n> probability significantly (with 10 additional databases I've got failures\n> on iterations 4, 1, 6).\n>\n\nI don't have an old Windows environment to test but I agree with your\nanalysis and theory. The question is what should we do for these new\nrandom BF failures? I think we should set bgwriter_lru_maxpages to 0\nand checkpoint_timeout to 1hr for these new tests. Doing some invasive\nfix as part of this doesn't sound reasonable because this is an\nexisting problem and there seems to be another patch by Thomas that\nprobably deals with the root cause of the existing problem [1] as\npointed out by you.\n\n[1] - https://commitfest.postgresql.org/40/3951/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 5 Jan 2024 09:49:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "On 1/4/24 10:19 PM, Amit Kapila wrote:\n> On Thu, Jan 4, 2024 at 5:30 PM Alexander Lakhin <[email protected]> wrote:\n>>\n>> 03.01.2024 14:42, Amit Kapila wrote:\n>>>\n>>\n>>>> And the internal process is ... background writer (BgBufferSync()).\n>>>>\n>>>> So, I tried just adding bgwriter_lru_maxpages = 0 to postgresql.conf and\n>>>> got 20 x 10 tests passing.\n>>>>\n>>>> Thus, it we want just to get rid of the test failure, maybe it's enough to\n>>>> add this to the test's config...\n>>>>\n>>> What about checkpoints? Can't it do the same while writing the buffers?\n>>\n>> As we deal here with pg_upgrade/pg_restore, it must not be very easy to get\n>> the desired effect, but I think it's not impossible in principle.\n>> More details below.\n>> What happens during the pg_upgrade execution is essentially:\n>> 1) CREATE DATABASE \"postgres\" WITH TEMPLATE = template0 OID = 5 ...;\n>> -- this command flushes file buffers as well\n>> 2) ALTER DATABASE postgres OWNER TO ...\n>> 3) COMMENT ON DATABASE \"postgres\" IS ...\n>> 4) -- For binary upgrade, preserve pg_largeobject and index relfilenodes\n>> SELECT pg_catalog.binary_upgrade_set_next_index_relfilenode('2683'::pg_catalog.oid);\n>> SELECT pg_catalog.binary_upgrade_set_next_heap_relfilenode('2613'::pg_catalog.oid);\n>> TRUNCATE pg_catalog.pg_largeobject;\n>> -- ^^^ here we can get the error \"could not create file \"base/5/2683\": File exists\"\n>> ...\n>>\n>> We get the effect discussed when the background writer process decides to\n>> flush a file buffer for pg_largeobject during stage 1.\n>> (Thus, if a checkpoint somehow happened to occur during CREATE DATABASE,\n>> the result must be the same.)\n>> And another important factor is shared_buffers = 1MB (set during the test).\n>> With the default setting of 128MB I couldn't see the failure.\n>>\n>> It can be reproduced easily (on old Windows versions) just by running\n>> pg_upgrade in a loop (I've got failures on iterations 22, 37, 17 (with the\n>> 
default cluster)).\n>> If an old cluster contains dozen of databases, this increases the failure\n>> probability significantly (with 10 additional databases I've got failures\n>> on iterations 4, 1, 6).\n>>\n> \n> I don't have an old Windows environment to test but I agree with your\n> analysis and theory. The question is what should we do for these new\n> random BF failures? I think we should set bgwriter_lru_maxpages to 0\n> and checkpoint_timeout to 1hr for these new tests. Doing some invasive\n> fix as part of this doesn't sound reasonable because this is an\n> existing problem and there seems to be another patch by Thomas that\n> probably deals with the root cause of the existing problem [1] as\n> pointed out by you.\n> \n> [1] - https://commitfest.postgresql.org/40/3951/\n\nIsn't this just sweeping the problem (non-POSIX behavior on SMB and \nReFS) under the carpet? I realize that synthetic test workloads like \npg_upgrade in a loop aren't themselves real-world scenarios, but what \nabout other cases? Even if we're certain it's not possible for these \nissues to wedge a server, it's still not a good experience for users to \nget random, unexplained IO-related errors...\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n\n",
"msg_date": "Mon, 8 Jan 2024 10:06:40 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "On Mon, Jan 8, 2024 at 9:36 PM Jim Nasby <[email protected]> wrote:\n>\n> On 1/4/24 10:19 PM, Amit Kapila wrote:\n> > On Thu, Jan 4, 2024 at 5:30 PM Alexander Lakhin <[email protected]> wrote:\n> >>\n> >> 03.01.2024 14:42, Amit Kapila wrote:\n> >>>\n> >>\n> >>>> And the internal process is ... background writer (BgBufferSync()).\n> >>>>\n> >>>> So, I tried just adding bgwriter_lru_maxpages = 0 to postgresql.conf and\n> >>>> got 20 x 10 tests passing.\n> >>>>\n> >>>> Thus, it we want just to get rid of the test failure, maybe it's enough to\n> >>>> add this to the test's config...\n> >>>>\n> >>> What about checkpoints? Can't it do the same while writing the buffers?\n> >>\n> >> As we deal here with pg_upgrade/pg_restore, it must not be very easy to get\n> >> the desired effect, but I think it's not impossible in principle.\n> >> More details below.\n> >> What happens during the pg_upgrade execution is essentially:\n> >> 1) CREATE DATABASE \"postgres\" WITH TEMPLATE = template0 OID = 5 ...;\n> >> -- this command flushes file buffers as well\n> >> 2) ALTER DATABASE postgres OWNER TO ...\n> >> 3) COMMENT ON DATABASE \"postgres\" IS ...\n> >> 4) -- For binary upgrade, preserve pg_largeobject and index relfilenodes\n> >> SELECT pg_catalog.binary_upgrade_set_next_index_relfilenode('2683'::pg_catalog.oid);\n> >> SELECT pg_catalog.binary_upgrade_set_next_heap_relfilenode('2613'::pg_catalog.oid);\n> >> TRUNCATE pg_catalog.pg_largeobject;\n> >> -- ^^^ here we can get the error \"could not create file \"base/5/2683\": File exists\"\n> >> ...\n> >>\n> >> We get the effect discussed when the background writer process decides to\n> >> flush a file buffer for pg_largeobject during stage 1.\n> >> (Thus, if a checkpoint somehow happened to occur during CREATE DATABASE,\n> >> the result must be the same.)\n> >> And another important factor is shared_buffers = 1MB (set during the test).\n> >> With the default setting of 128MB I couldn't see the failure.\n> >>\n> >> It 
can be reproduced easily (on old Windows versions) just by running\n> >> pg_upgrade in a loop (I've got failures on iterations 22, 37, 17 (with the\n> >> default cluster)).\n> >> If an old cluster contains dozen of databases, this increases the failure\n> >> probability significantly (with 10 additional databases I've got failures\n> >> on iterations 4, 1, 6).\n> >>\n> >\n> > I don't have an old Windows environment to test but I agree with your\n> > analysis and theory. The question is what should we do for these new\n> > random BF failures? I think we should set bgwriter_lru_maxpages to 0\n> > and checkpoint_timeout to 1hr for these new tests. Doing some invasive\n> > fix as part of this doesn't sound reasonable because this is an\n> > existing problem and there seems to be another patch by Thomas that\n> > probably deals with the root cause of the existing problem [1] as\n> > pointed out by you.\n> >\n> > [1] - https://commitfest.postgresql.org/40/3951/\n>\n> Isn't this just sweeping the problem (non-POSIX behavior on SMB and\n> ReFS) under the carpet? I realize that synthetic test workloads like\n> pg_upgrade in a loop aren't themselves real-world scenarios, but what\n> about other cases? Even if we're certain it's not possible for these\n> issues to wedge a server, it's still not a good experience for users to\n> get random, unexplained IO-related errors...\n>\n\nThe point is that this is an existing known Windows behavior and that\ntoo only in certain versions. The fix doesn't seem to be\nstraightforward, so it seems advisable to avoid random BF failures by\nhaving an appropriate configuration.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Jan 2024 08:35:05 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Dear Amit, Alexander,\r\n\r\n> > We get the effect discussed when the background writer process decides to\r\n> > flush a file buffer for pg_largeobject during stage 1.\r\n> > (Thus, if a checkpoint somehow happened to occur during CREATE DATABASE,\r\n> > the result must be the same.)\r\n> > And another important factor is shared_buffers = 1MB (set during the test).\r\n> > With the default setting of 128MB I couldn't see the failure.\r\n> >\r\n> > It can be reproduced easily (on old Windows versions) just by running\r\n> > pg_upgrade in a loop (I've got failures on iterations 22, 37, 17 (with the\r\n> > default cluster)).\r\n> > If an old cluster contains dozen of databases, this increases the failure\r\n> > probability significantly (with 10 additional databases I've got failures\r\n> > on iterations 4, 1, 6).\r\n> >\r\n> \r\n> I don't have an old Windows environment to test but I agree with your\r\n> analysis and theory. The question is what should we do for these new\r\n> random BF failures? I think we should set bgwriter_lru_maxpages to 0\r\n> and checkpoint_timeout to 1hr for these new tests. Doing some invasive\r\n> fix as part of this doesn't sound reasonable because this is an\r\n> existing problem and there seems to be another patch by Thomas that\r\n> probably deals with the root cause of the existing problem [1] as\r\n> pointed out by you.\r\n> \r\n> [1] - https://commitfest.postgresql.org/40/3951/\r\n\r\nBased on the suggestion by Amit, I have created a patch with the alternative\r\napproach. This just does GUC settings. The reported failure is only for\r\n003_logical_slots, but the patch also includes changes for the recently added\r\ntest, 004_subscription. IIUC, there is a possibility that 004 would fail as well.\r\n\r\nPer our understanding, this patch can stop random failures. Alexander, can you\r\ntest for the confirmation?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 9 Jan 2024 05:49:26 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Hello Kuroda-san,\n\n09.01.2024 08:49, Hayato Kuroda (Fujitsu) wrote:\n> Based on the suggestion by Amit, I have created a patch with the alternative\n> approach. This just does GUC settings. The reported failure is only for\n> 003_logical_slots, but the patch also includes changes for the recently added\n> test, 004_subscription. IIUC, there is a possibility that 004 would fail as well.\n>\n> Per our understanding, this patch can stop random failures. Alexander, can you\n> test for the confirmation?\n>\n\nYes, the patch fixes the issue for me (without the patch I observe failures\non iterations 1-2, with 10 tests running in parallel, but with the patch\n10 iterations succeeded).\n\nBut as far as I can see, 004_subscription is not affected by the issue,\nbecause it doesn't enable streaming for nodes new_sub, new_sub1.\nAs I noted before, I could see the failure only with\nshared_buffers = 1MB (which is set with allows_streaming => 'logical').\nSo I'm not sure whether we need to modify 004 (or any other test that\nruns pg_upgrade).\n\nAs to checkpoint_timeout, personally I would not increase it, because it\nseems unbelievable to me that pg_restore (with the cluster containing only\ntwo empty databases) can run for longer than 5 minutes. I'd rather\ninvestigate such a situation separately, in case we encounter it, but maybe\nit's only me.\nOn the other hand, if a checkpoint could occur for some reason within a\nshorter time span, then increasing the timeout would not matter, I suppose.\n(I've also tested the bgwriter_lru_maxpages-only modification of your patch\nand can confirm that it works as well.)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 9 Jan 2024 12:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 2:30 PM Alexander Lakhin <[email protected]> wrote:\n>\n> 09.01.2024 08:49, Hayato Kuroda (Fujitsu) wrote:\n> > Based on the suggestion by Amit, I have created a patch with the alternative\n> > approach. This just does GUC settings. The reported failure is only for\n> > 003_logical_slots, but the patch also includes changes for the recently added\n> > test, 004_subscription. IIUC, there is a possibility that 004 would fail as well.\n> >\n> > Per our understanding, this patch can stop random failures. Alexander, can you\n> > test for the confirmation?\n> >\n>\n> Yes, the patch fixes the issue for me (without the patch I observe failures\n> on iterations 1-2, with 10 tests running in parallel, but with the patch\n> 10 iterations succeeded).\n>\n> But as far I can see, 004_subscription is not affected by the issue,\n> because it doesn't enable streaming for nodes new_sub, new_sub1.\n> As I noted before, I could see the failure only with\n> shared_buffers = 1MB (which is set with allows_streaming => 'logical').\n> So I'm not sure, whether we need to modify 004 (or any other test that\n> runs pg_upgrade).\n>\n\nI see your point and the probable reason for failure with\nshared_buffers=1MB is that the probability of bgwriter holding the\nfile handle for pg_largeobject increases. So, let's change it only for\n003.\n\n> As to checkpoint_timeout, personally I would not increase it, because it\n> seems unbelievable to me that pg_restore (with the cluster containing only\n> two empty databases) can run for longer than 5 minutes. I'd rather\n> investigate such situation separately, in case we encounter it, but maybe\n> it's only me.\n>\n\nI feel it is okay to set a higher value of checkpoint_timeout due to\nthe same reason though the probability is less. I feel here it is\nimportant to explain in the comments why we are using these settings\nin the new test. 
I have thought of something like: \"During the\nupgrade, bgwriter or checkpointer could hold the file handle for some\nremoved file. Now, during restore when we try to create the file with\nthe same name, it errors out. This behavior is specific to only some\nspecific Windows versions and the probability of seeing this behavior\nis higher in this test because we use wal_level as logical via\nallows_streaming => 'logical' which in turn sets shared_buffers as\n1MB.\"\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Jan 2024 15:38:53 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Hello Amit,\n\n09.01.2024 13:08, Amit Kapila wrote:\n>\n>> As to checkpoint_timeout, personally I would not increase it, because it\n>> seems unbelievable to me that pg_restore (with the cluster containing only\n>> two empty databases) can run for longer than 5 minutes. I'd rather\n>> investigate such situation separately, in case we encounter it, but maybe\n>> it's only me.\n>>\n> I feel it is okay to set a higher value of checkpoint_timeout due to\n> the same reason though the probability is less. I feel here it is\n> important to explain in the comments why we are using these settings\n> in the new test. I have thought of something like: \"During the\n> upgrade, bgwriter or checkpointer could hold the file handle for some\n> removed file. Now, during restore when we try to create the file with\n> the same name, it errors out. This behavior is specific to only some\n> specific Windows versions and the probability of seeing this behavior\n> is higher in this test because we use wal_level as logical via\n> allows_streaming => 'logical' which in turn sets shared_buffers as\n> 1MB.\"\n>\n> Thoughts?\n\nI would describe that behavior as \"During upgrade, when pg_restore performs\nCREATE DATABASE, bgwriter or checkpointer may flush buffers and hold a file\nhandle for pg_largeobject, so a later TRUNCATE pg_largeobject command will\nfail if the OS (such as older Windows versions) doesn't remove an unlinked file\ncompletely while it's still open. ...\"\n\nBest regards,\nAlexander\n\n\n\n",
"msg_date": "Tue, 9 Jan 2024 14:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 4:30 PM Alexander Lakhin <[email protected]> wrote:\n>\n> 09.01.2024 13:08, Amit Kapila wrote:\n> >\n> >> As to checkpoint_timeout, personally I would not increase it, because it\n> >> seems unbelievable to me that pg_restore (with the cluster containing only\n> >> two empty databases) can run for longer than 5 minutes. I'd rather\n> >> investigate such situation separately, in case we encounter it, but maybe\n> >> it's only me.\n> >>\n> > I feel it is okay to set a higher value of checkpoint_timeout due to\n> > the same reason though the probability is less. I feel here it is\n> > important to explain in the comments why we are using these settings\n> > in the new test. I have thought of something like: \"During the\n> > upgrade, bgwriter or checkpointer could hold the file handle for some\n> > removed file. Now, during restore when we try to create the file with\n> > the same name, it errors out. This behavior is specific to only some\n> > specific Windows versions and the probability of seeing this behavior\n> > is higher in this test because we use wal_level as logical via\n> > allows_streaming => 'logical' which in turn sets shared_buffers as\n> > 1MB.\"\n> >\n> > Thoughts?\n>\n> I would describe that behavior as \"During upgrade, when pg_restore performs\n> CREATE DATABASE, bgwriter or checkpointer may flush buffers and hold a file\n> handle for pg_largeobject, so later TRUNCATE pg_largeobject command will\n> fail if OS (such as older Windows versions) doesn't remove an unlinked file\n> completely till it's open. ...\"\n>\n\nI am slightly hesitant to add any particular system table name in the\ncomments as this can happen for any other system table as well, so I\nslightly adjusted the comments in the attached. However, I think it is\nokay to mention the particular system table name in the commit\nmessage. Let me know what you think.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 10 Jan 2024 15:01:30 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "10.01.2024 12:31, Amit Kapila wrote:\n> I am slightly hesitant to add any particular system table name in the\n> comments as this can happen for any other system table as well, so\n> slightly adjusted the comments in the attached. However, I think it is\n> okay to mention the particular system table name in the commit\n> message. Let me know what do you think.\n\nThank you, Amit!\n\nI'd like to note that the culprit is exactly pg_largeobject as coded in\ndumpDatabase():\n /*\n * pg_largeobject comes from the old system intact, so set its\n * relfrozenxids, relminmxids and relfilenode.\n */\n if (dopt->binary_upgrade)\n...\n appendPQExpBufferStr(loOutQry,\n \"TRUNCATE pg_catalog.pg_largeobject;\\n\");\n\nI see no other TRUNCATEs (or similar logic) around, so I would specify the\ntable name in the comments. Though maybe I'm missing something...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 10 Jan 2024 13:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "On Wed, Jan 10, 2024 at 3:30 PM Alexander Lakhin <[email protected]> wrote:\n>\n> 10.01.2024 12:31, Amit Kapila wrote:\n> > I am slightly hesitant to add any particular system table name in the\n> > comments as this can happen for any other system table as well, so\n> > slightly adjusted the comments in the attached. However, I think it is\n> > okay to mention the particular system table name in the commit\n> > message. Let me know what do you think.\n>\n> Thank you, Amit!\n>\n> I'd like to note that the culprit is exactly pg_largeobject as coded in\n> dumpDatabase():\n> /*\n> * pg_largeobject comes from the old system intact, so set its\n> * relfrozenxids, relminmxids and relfilenode.\n> */\n> if (dopt->binary_upgrade)\n> ...\n> appendPQExpBufferStr(loOutQry,\n> \"TRUNCATE pg_catalog.pg_largeobject;\\n\");\n>\n> I see no other TRUNCATEs (or similar logic) around, so I would specify the\n> table name in the comments. Though maybe I'm missing something...\n>\n\nBut tomorrow it could be for other tables and if we change this\nTRUNCATE logic for pg_largeobject (of which chances are less) then\nthere is always a chance that one misses changing this comment. I feel\nkeeping it generic in this case would be better as the problem is\ngeneric but it is currently shown for pg_largeobject.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 10 Jan 2024 16:07:17 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "10.01.2024 13:37, Amit Kapila wrote:\n> But tomorrow it could be for other tables and if we change this\n> TRUNCATE logic for pg_largeobject (of which chances are less) then\n> there is always a chance that one misses changing this comment. I feel\n> keeping it generic in this case would be better as the problem is\n> generic but it is currently shown for pg_largeobject.\n\nYes, for sure. So let's keep it generic as you prefer.\n\nThank you!\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 10 Jan 2024 18:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Dear Alexander, Amit,\r\n\r\n> > But tomorrow it could be for other tables and if we change this\r\n> > TRUNCATE logic for pg_largeobject (of which chances are less) then\r\n> > there is always a chance that one misses changing this comment. I feel\r\n> > keeping it generic in this case would be better as the problem is\r\n> > generic but it is currently shown for pg_largeobject.\r\n> \r\n> Yes, for sure. So let's keep it generic as you prefer.\r\n> \r\n> Thank you!\r\n\r\nThanks for working on the patch. I'm also OK with pushing Amit's fix patch.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 11 Jan 2024 02:45:41 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 8:15 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > > But tomorrow it could be for other tables and if we change this\n> > > TRUNCATE logic for pg_largeobject (of which chances are less) then\n> > > there is always a chance that one misses changing this comment. I feel\n> > > keeping it generic in this case would be better as the problem is\n> > > generic but it is currently shown for pg_largeobject.\n> >\n> > Yes, for sure. So let's keep it generic as you prefer.\n> >\n> > Thank you!\n>\n> Thanks for working the patch. I'm also OK to push the Amit's fix patch.\n>\n\nThanks to both of you. I have pushed the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 Jan 2024 15:05:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random pg_upgrade test failure on drongo"
},
{
"msg_contents": "Dear hackers,\r\n\r\n> \r\n> Thanks to both of you. I have pushed the patch.\r\n>\r\n\r\nI have been tracking the BF animals these days, and this failure has not been seen anymore.\r\nI think we can close the topic. Again, thanks for all the efforts.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 17 Jan 2024 02:53:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Random pg_upgrade test failure on drongo"
}
] |
[
{
"msg_contents": "One of the new tests in the infinite interval patch has revealed a bug\nin our 64-bit integer subtraction code. Consider the following:\n\nselect 0::int8 - '-9223372036854775808'::int8;\n\nThis should overflow, since the correct result (+9223372036854775808)\nis out of range. However, on platforms without integer overflow\nbuiltins or 128-bit integers, pg_sub_s64_overflow() does the\nfollowing:\n\n if ((a < 0 && b > 0 && a < PG_INT64_MIN + b) ||\n (a > 0 && b < 0 && a > PG_INT64_MAX + b))\n {\n *result = 0x5EED; /* to avoid spurious warnings */\n return true;\n }\n *result = a - b;\n return false;\n\nwhich fails to spot the fact that overflow is also possible when a ==\n0. So on such platforms, it returns the wrong result.\n\nPatch attached.\n\nRegards,\nDean",
"msg_date": "Wed, 8 Nov 2023 11:58:18 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "64-bit integer subtraction bug on some platforms"
},
{
"msg_contents": "On Wed, 2023-11-08 at 11:58 +0000, Dean Rasheed wrote:\n> One of the new tests in the infinite interval patch has revealed a bug\n> in our 64-bit integer subtraction code. Consider the following:\n> \n> select 0::int8 - '-9223372036854775808'::int8;\n> \n> This should overflow, since the correct result (+9223372036854775808)\n> is out of range. However, on platforms without integer overflow\n> builtins or 128-bit integers, pg_sub_s64_overflow() does the\n> following:\n> \n> if ((a < 0 && b > 0 && a < PG_INT64_MIN + b) ||\n> (a > 0 && b < 0 && a > PG_INT64_MAX + b))\n> {\n> *result = 0x5EED; /* to avoid spurious warnings */\n> return true;\n> }\n> *result = a - b;\n> return false;\n> \n> which fails to spot the fact that overflow is also possible when a ==\n> 0. So on such platforms, it returns the wrong result.\n> \n> Patch attached.\n\nThe patch looks good to me.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 08 Nov 2023 13:15:35 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 64-bit integer subtraction bug on some platforms"
},
{
"msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Wed, 2023-11-08 at 11:58 +0000, Dean Rasheed wrote:\n>> This should overflow, since the correct result (+9223372036854775808)\n>> is out of range. However, on platforms without integer overflow\n>> builtins or 128-bit integers, pg_sub_s64_overflow() does the\n>> following:\n>> ...\n>> which fails to spot the fact that overflow is also possible when a ==\n>> 0. So on such platforms, it returns the wrong result.\n>> \n>> Patch attached.\n\n> The patch looks good to me.\n\n+1: good catch, fix looks correct.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Nov 2023 11:08:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 64-bit integer subtraction bug on some platforms"
}
] |
[
{
"msg_contents": "I found this in our logs, and reproduced it under v11-v16.\n\nCREATE TABLE t(a int, b int);\nINSERT INTO t SELECT generate_series(1,999);\nCREATE STATISTICS t_stats ON a,b FROM t;\n\nwhile :; do psql postgres -qtxc \"ANALYZE t\"; done &\nwhile :; do psql postgres -qtxc \"begin; DROP STATISTICS t_stats\"; done &\n\nIt's known that concurrent DDL can hit elog(). But in this case,\nthere's only one DDL operation.\n\n(gdb) bt\n#0 0x00000000009442a0 in pg_re_throw ()\n#1 0x0000000000943504 in errfinish ()\n#2 0x00000000004fcafe in simple_heap_delete ()\n#3 0x0000000000639d3f in RemoveStatisticsDataById ()\n#4 0x0000000000639d79 in RemoveStatisticsById ()\n#5 0x000000000057a428 in deleteObjectsInList ()\n#6 0x000000000057a8f0 in performMultipleDeletions ()\n#7 0x000000000060b5ed in RemoveObjects ()\n#8 0x00000000008099ce in ProcessUtilitySlow.isra.1 ()\n#9 0x0000000000808c71 in standard_ProcessUtility ()\n#10 0x00007efbfed7a508 in pgss_ProcessUtility () from /usr/pgsql-16/lib/pg_stat_statements.so\n#11 0x000000000080745a in PortalRunUtility ()\n#12 0x0000000000807579 in PortalRunMulti ()\n#13 0x00000000008079dc in PortalRun ()\n#14 0x0000000000803927 in exec_simple_query ()\n#15 0x0000000000803f28 in PostgresMain ()\n#16 0x000000000077bae6 in ServerLoop ()\n#17 0x000000000077cbaa in PostmasterMain ()\n#18 0x00000000004ba788 in main ()\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 8 Nov 2023 09:10:51 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "XX000: tuple concurrently deleted during DROP STATISTICS"
},
{
"msg_contents": "On 11/8/23 16:10, Justin Pryzby wrote:\n> I found this in our logs, and reproduced it under v11-v16.\n> \n> CREATE TABLE t(a int, b int);\n> INSERT INTO t SELECT generate_series(1,999);\n> CREATE STATISTICS t_stats ON a,b FROM t;\n> \n> while :; do psql postgres -qtxc \"ANALYZE t\"; done &\n> while :; do psql postgres -qtxc \"begin; DROP STATISTICS t_stats\"; done &\n> \n> It's known that concurrent DDL can hit elog(). But in this case,\n> there's only one DDL operation.\n> \nAFAICS this happens because store_statext (after ANALYZE builds the new\nstatistics) does this:\n\n----------------------------\n/*\n * Delete the old tuple if it exists, and insert a new one. It's easier\n * than trying to update or insert, based on various conditions.\n */\nRemoveStatisticsDataById(statOid, inh);\n\n/* form and insert a new tuple */\nstup = heap_form_tuple(RelationGetDescr(pg_stextdata), values, nulls);\nCatalogTupleInsert(pg_stextdata, stup);\n----------------------------\n\nSo it deletes the tuple first (if there's one), and then inserts the new\nstatistics tuple.\n\nWe could update the tuple instead, but that would be more complex (as\nthe comment explains), and it doesn't actually fix anything because then\nsimple_heap_delete just fails with TM_Updated instead.\n\nI think the only solution would be to lock the statistics tuple before\nrunning ANALYZE, or something like that. Or maybe we should even lock\nthe statistics object itself, so that ANALYZE and DROP can't run\nconcurrently on it?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Nov 2023 16:27:40 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XX000: tuple concurrently deleted during DROP STATISTICS"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 11/8/23 16:10, Justin Pryzby wrote:\n>> I found this in our logs, and reproduced it under v11-v16.\n>> \n>> CREATE TABLE t(a int, b int);\n>> INSERT INTO t SELECT generate_series(1,999);\n>> CREATE STATISTICS t_stats ON a,b FROM t;\n>> \n>> while :; do psql postgres -qtxc \"ANALYZE t\"; done &\n>> while :; do psql postgres -qtxc \"begin; DROP STATISTICS t_stats\"; done &\n>> \n>> It's known that concurrent DDL can hit elog(). But in this case,\n>> there's only one DDL operation.\n\n> AFAICS this happens because store_statext (after ANALYZE builds the new\n> statistics) does this:\n\nShouldn't DROP STATISTICS be taking a lock on the associated table\nthat is strong enough to lock out ANALYZE?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Nov 2023 10:52:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XX000: tuple concurrently deleted during DROP STATISTICS"
},
{
"msg_contents": "On 11/8/23 16:52, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 11/8/23 16:10, Justin Pryzby wrote:\n>>> I found this in our logs, and reproduced it under v11-v16.\n>>>\n>>> CREATE TABLE t(a int, b int);\n>>> INSERT INTO t SELECT generate_series(1,999);\n>>> CREATE STATISTICS t_stats ON a,b FROM t;\n>>>\n>>> while :; do psql postgres -qtxc \"ANALYZE t\"; done &\n>>> while :; do psql postgres -qtxc \"begin; DROP STATISTICS t_stats\"; done &\n>>>\n>>> It's known that concurrent DDL can hit elog(). But in this case,\n>>> there's only one DDL operation.\n> \n>> AFAICS this happens because store_statext (after ANALYZE builds the new\n>> statistics) does this:\n> \n> Shouldn't DROP STATISTICS be taking a lock on the associated table\n> that is strong enough to lock out ANALYZE?\n> \n\nYes, I think that's the correct thing to do. I recall having a\ndiscussion about this with someone while working on the patch, leading\nto the current code. But I haven't managed to find that particular bit\nin the archives :-(\n\nAnyway, the attached patch should fix this by getting the lock, I think.\n\n- RemoveStatisticsById is what gets called drop DROP STATISTICS (or for\ndependencies), so that's where we get the AE lock\n\n- RemoveStatisticsDataById gets called from ANALYZE, so that already\nshould have a lock (so no need to acquire another one)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 8 Nov 2023 20:16:05 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XX000: tuple concurrently deleted during DROP STATISTICS"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 11/8/23 16:52, Tom Lane wrote:\n>> Shouldn't DROP STATISTICS be taking a lock on the associated table\n>> that is strong enough to lock out ANALYZE?\n\n> Yes, I think that's the correct thing to do. I recall having a\n> discussion about this with someone while working on the patch, leading\n> to the current code. But I haven't managed to find that particular bit\n> in the archives :-(\n> Anyway, the attached patch should fix this by getting the lock, I think.\n\nThis looks generally correct, but surely we don't need it to be as\nstrong as AccessExclusiveLock? There seems no reason to conflict with\nordinary readers/writers of the table.\n\nANALYZE takes ShareUpdateExclusiveLock, and offhand I think this\ncommand should do the same.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Nov 2023 14:58:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XX000: tuple concurrently deleted during DROP STATISTICS"
},
{
"msg_contents": "On 11/8/23 20:58, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 11/8/23 16:52, Tom Lane wrote:\n>>> Shouldn't DROP STATISTICS be taking a lock on the associated table\n>>> that is strong enough to lock out ANALYZE?\n> \n>> Yes, I think that's the correct thing to do. I recall having a\n>> discussion about this with someone while working on the patch, leading\n>> to the current code. But I haven't managed to find that particular bit\n>> in the archives :-(\n>> Anyway, the attached patch should fix this by getting the lock, I think.\n> \n> This looks generally correct, but surely we don't need it to be as\n> strong as AccessExclusiveLock? There seems no reason to conflict with\n> ordinary readers/writers of the table.\n> \n> ANALYZE takes ShareUpdateExclusiveLock, and offhand I think this\n> command should do the same.\n> \n\nRight. I did copy that from DROP TRIGGER code somewhat mindlessly, but\nyou're right this does not need block readers/writers.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Nov 2023 22:25:50 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XX000: tuple concurrently deleted during DROP STATISTICS"
},
{
"msg_contents": "I've pushed a cleaned up version of the fix.\n\nI had to make some adjustments in the backbranches, because the way we\nstore the analyzed statistics evolved, and RemoveStatisticsById() used\nto do everything. I ended up introducing RemoveStatisticsDataById() in\nthe backbranches too, but only as a static function - that makes the\ncode much cleaner.\n\nregards\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 19 Nov 2023 21:12:44 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XX000: tuple concurrently deleted during DROP STATISTICS"
}
] |
[
{
"msg_contents": "clang and gcc both now support -fsanitize=address,undefined. These are \nreally useful to me personally when trying to debug issues. \nUnfortunately ecpg code has a ton of memory leaks, which makes builds \nreally painful. It would be great to fix all of them, but I don't have \nthe patience to try to read flex/bison code. Here are two memory leak \nfixes in any case.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 08 Nov 2023 11:01:06 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix some memory leaks in ecpg.addons"
},
{
"msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> clang and gcc both now support -fsanitize=address,undefined. These are \n> really useful to me personally when trying to debug issues. \n> Unfortunately ecpg code has a ton of memory leaks, which makes builds \n> really painful. It would be great to fix all of them, but I don't have \n> the patience to try to read flex/bison code. Here are two memory leak \n> fixes in any case.\n\nI'm kind of failing to see the point. As you say, the ecpg\npreprocessor leaks memory like there's no tomorrow. But given its\nusage (process one source file and exit) I'm not sure that is worth\nmuch effort to fix. And what does it buy to fix just two spots?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Nov 2023 12:07:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix some memory leaks in ecpg.addons"
},
{
"msg_contents": "Am Mittwoch, dem 08.11.2023 um 12:07 -0500 schrieb Tom Lane:\n> \"Tristan Partin\" <[email protected]> writes:\n> > clang and gcc both now support -fsanitize=address,undefined. These\n> > are \n> > really useful to me personally when trying to debug issues. \n> > Unfortunately ecpg code has a ton of memory leaks, which makes\n> > builds \n> > really painful. It would be great to fix all of them, but I don't\n> > have \n> > the patience to try to read flex/bison code. Here are two memory\n> > leak \n> > fixes in any case.\n> \n> I'm kind of failing to see the point. As you say, the ecpg\n> preprocessor leaks memory like there's no tomorrow. But given its\n> usage (process one source file and exit) I'm not sure that is worth\n> much effort to fix. And what does it buy to fix just two spots?\n\nAgreed, it's not exactly uncommon for tools like ecpg to not worry\nabout memory. After all it gets freed when the program ends.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De\nMichael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\n\n\n",
"msg_date": "Wed, 08 Nov 2023 18:18:06 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix some memory leaks in ecpg.addons"
},
{
"msg_contents": "On Wed Nov 8, 2023 at 11:18 AM CST, Michael Meskes wrote:\n> Am Mittwoch, dem 08.11.2023 um 12:07 -0500 schrieb Tom Lane:\n> > \"Tristan Partin\" <[email protected]> writes:\n> > > clang and gcc both now support -fsanitize=address,undefined. These\n> > > are \n> > > really useful to me personally when trying to debug issues. \n> > > Unfortunately ecpg code has a ton of memory leaks, which makes\n> > > builds \n> > > really painful. It would be great to fix all of them, but I don't\n> > > have \n> > > the patience to try to read flex/bison code. Here are two memory\n> > > leak \n> > > fixes in any case.\n> > \n> > I'm kind of failing to see the point. As you say, the ecpg\n> > preprocessor leaks memory like there's no tomorrow. But given its\n> > usage (process one source file and exit) I'm not sure that is worth\n> > much effort to fix. And what does it buy to fix just two spots?\n>\n> Agreed, it's not exactly uncommon for tools like ecpg to not worry\n> about memory. After all it gets freed when the program ends.\n\nIn the default configuration of AddressSanitizer, I can't even complete \na full build of Postgres.\n\n\tmeson setup build -Db_sanitize=address\n\tninja -C build\n\t[1677/1855] Generating src/interfaces/ecpg/test/compat_informix/rfmtlong.c with a custom command\n\tFAILED: src/interfaces/ecpg/test/compat_informix/rfmtlong.c \n\t/home/tristan957/Projects/work/postgresql/build/src/interfaces/ecpg/preproc/ecpg --regression -I../src/interfaces/ecpg/test/compat_informix -I../src/interfaces/ecpg/include/ -C INFORMIX -o src/interfaces/ecpg/test/compat_informix/rfmtlong.c ../src/interfaces/ecpg/test/compat_informix/rfmtlong.pgc\n\n\t=================================================================\n\t==114881==ERROR: LeakSanitizer: detected memory leaks\n\n\tDirect leak of 5 byte(s) in 1 object(s) allocated from:\n\t #0 0x7f88c34814a8 in strdup (/lib64/libasan.so.8+0x814a8) (BuildId: 6f17f87dc4c1aa9f9dde7c4856604c3a25ba4872)\n\t #1 0x4cfd93 in 
get_progname ../src/port/path.c:589\n\t #2 0x4b6dae in main ../src/interfaces/ecpg/preproc/ecpg.c:137\n\t #3 0x7f88c3246149 in __libc_start_call_main (/lib64/libc.so.6+0x28149) (BuildId: 651b2bed7ecaf18098a63b8f10299821749766e6)\n\t #4 0x7f88c324620a in __libc_start_main_impl (/lib64/libc.so.6+0x2820a) (BuildId: 651b2bed7ecaf18098a63b8f10299821749766e6)\n\t #5 0x402664 in _start (/home/tristan957/Projects/work/postgresql/build/src/interfaces/ecpg/preproc/ecpg+0x402664) (BuildId: fab06f774e305cbe628e03cdc22d935f7bb70a76)\n\n\tSUMMARY: AddressSanitizer: 5 byte(s) leaked in 1 allocation(s).\n\tninja: build stopped: subcommand failed.\n\nAre people using some suppression file or setting ASAN_OPTIONS to \nsomething?\n\nHere is a patch with a better solution.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 08 Nov 2023 11:37:46 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix some memory leaks in ecpg.addons"
},
{
"msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> On Wed Nov 8, 2023 at 11:18 AM CST, Michael Meskes wrote:\n>> Agreed, it's not exactly uncommon for tools like ecpg to not worry\n>> about memory. After all it gets freed when the program ends.\n\n> In the default configuration of AddressSanitizer, I can't even complete \n> a full build of Postgres.\n\nWhy is the meson stuff building ecpg test cases as part of the core build?\nThat seems wrong for a number of reasons, not only that we don't hold\nthat code to the same standards as the core server.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Nov 2023 12:52:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix some memory leaks in ecpg.addons"
},
{
"msg_contents": "Hello Tristan,\n\n08.11.2023 20:37, Tristan Partin wrote:\n> Are people using some suppression file or setting ASAN_OPTIONS to something?\n>\n\nI use the following:\nASAN_OPTIONS=detect_leaks=0:abort_on_error=1:print_stacktrace=1:\\\ndisable_coredump=0:strict_string_checks=1:check_initialization_order=1:\\\nstrict_init_order=1:detect_stack_use_after_return=0\n\n(You'll need to add detect_stack_use_after_return=0 with a newer clang\n(I use clang-18) to workaround an incompatibility of check_stack_depth()\nwith that sanitizer feature enabled by default.)\n\nThere is also another story with hwasan ([1]).\nand yet another incompatibility of check_stack_depth() related to the\naarch64-specific address tagging (TBI).\n\nSo I would say that fixing ecpg won't make postgres sanitizer-friendly in\na whole.\n\n[1] https://www.postgresql.org/message-id/dbf77bf7-6e54-ed8a-c4ae-d196eeb664ce%40gmail.com\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 8 Nov 2023 22:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix some memory leaks in ecpg.addons"
},
{
"msg_contents": "On Wed Nov 8, 2023 at 11:52 AM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > On Wed Nov 8, 2023 at 11:18 AM CST, Michael Meskes wrote:\n> >> Agreed, it's not exactly uncommon for tools like ecpg to not worry\n> >> about memory. After all it gets freed when the program ends.\n>\n> > In the default configuration of AddressSanitizer, I can't even complete \n> > a full build of Postgres.\n>\n> Why is the meson stuff building ecpg test cases as part of the core build?\n> That seems wrong for a number of reasons, not only that we don't hold\n> that code to the same standards as the core server.\n\nAfter looking into this a tiny bit more, we are building the \ndependencies of the ecpg tests.\n\n> foreach pgc_file : pgc_files\n> exe_input = custom_target('@[email protected]'.format(pgc_file),\n> input: '@[email protected]'.format(pgc_file),\n> output: '@[email protected]',\n> command: ecpg_preproc_test_command_start +\n> ['-C', 'ORACLE',] +\n> ecpg_preproc_test_command_end,\n> install: false,\n> build_by_default: false,\n> kwargs: exe_preproc_kw,\n> )\n> \n> ecpg_test_dependencies += executable(pgc_file,\n> exe_input,\n> kwargs: ecpg_test_exec_kw,\n> )\n> endforeach\n\nThis is the pattern that we have in all the ecpg/test/*/meson.build \nfiles. That ecpg_test_dependencies variable is then used in the actual \necpg tests:\n\n> tests += {\n> 'name': 'ecpg',\n> 'sd': meson.current_source_dir(),\n> 'bd': meson.current_build_dir(),\n> 'ecpg': {\n> 'expecteddir': meson.current_source_dir(),\n> 'inputdir': meson.current_build_dir(),\n> 'schedule': ecpg_test_files,\n> 'sql': [\n> 'sql/twophase',\n> ],\n> 'test_kwargs': {\n> 'depends': ecpg_test_dependencies,\n> },\n> 'dbname': 'ecpg1_regression,ecpg2_regression',\n> 'regress_args': ecpg_regress_args,\n> },\n> }\n\nSo in my opinion there is nothing wrong here. The build is working as \nintended. Does this make sense to you, Tom?\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 15 Nov 2023 04:14:50 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix some memory leaks in ecpg.addons"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-08 11:37:46 -0600, Tristan Partin wrote:\n> On Wed Nov 8, 2023 at 11:18 AM CST, Michael Meskes wrote:\n> > Am Mittwoch, dem 08.11.2023 um 12:07 -0500 schrieb Tom Lane:\n> > > \"Tristan Partin\" <[email protected]> writes:\n> > > > clang and gcc both now support -fsanitize=address,undefined. These\n> > > > are > > really useful to me personally when trying to debug issues.\n> > > > Unfortunately ecpg code has a ton of memory leaks, which makes\n> > > > builds > > really painful. It would be great to fix all of them, but\n> > I don't\n> > > > have > > the patience to try to read flex/bison code. Here are two\n> > memory\n> > > > leak > > fixes in any case.\n> > > > I'm kind of failing to see the point.� As you say, the ecpg\n> > > preprocessor leaks memory like there's no tomorrow.� But given its\n> > > usage (process one source file and exit) I'm not sure that is worth\n> > > much effort to fix.� And what does it buy to fix just two spots?\n> > \n> > Agreed, it's not exactly uncommon for tools like ecpg to not worry\n> > about memory. After all it gets freed when the program ends.\n> \n> In the default configuration of AddressSanitizer, I can't even complete a\n> full build of Postgres.\n\nI don't find the leak checks very useful for the moment. Leaks that happen\nonce in the lifetime of the program aren't problematic, and often tracking\nthem would make code more complicated. Perhaps we'll eventually change our\ntune on this, but I don't think it's worth fighting this windmill at this\npoint. I think at the very least we'd first want to port the memory context\ninfrastructure to frontend programs.\n\n> \n> Are people using some suppression file or setting ASAN_OPTIONS to something?\n\nYou pretty much have to. 
Locally I use this:\n\nexport ASAN_OPTIONS='debug=1:print_stacktrace=1:disable_coredump=0:abort_on_error=1:detect_leaks=0:detect_stack_use_after_return=0' UBSAN_OPTIONS='print_stacktrace=1:disable_coredump=0:abort_on_error=1'\n\nCI uses something similar.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 29 Nov 2023 11:20:14 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix some memory leaks in ecpg.addons"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-08 22:00:00 +0300, Alexander Lakhin wrote:\n> Hello Tristan,\n>\n> 08.11.2023 20:37, Tristan Partin wrote:\n> > Are people using some suppression file or setting ASAN_OPTIONS to something?\n> >\n>\n> I use the following:\n> ASAN_OPTIONS=detect_leaks=0:abort_on_error=1:print_stacktrace=1:\\\n> disable_coredump=0:strict_string_checks=1:check_initialization_order=1:\\\n> strict_init_order=1:detect_stack_use_after_return=0\n\nI wonder if we should add some of these options by default ourselves. We could\ne.g. add something like the __ubsan_default_options() in\nsrc/backend/main/main.c to src/port/... instead, and return a combination of\n\"our\" options (like detect_leaks=0) and the ones from the environment.\n\n\n> (You'll need to add detect_stack_use_after_return=0 with a newer clang\n> (I use clang-18) to workaround an incompatibility of check_stack_depth()\n> with that sanitizer feature enabled by default.)\n\nI have been wondering if we should work on fixing that. There are a few ways:\n\n- We can add a compiler parameter explicitly disabling the use-after-return\n checks - however, the checks are quite useful, so that'd be somewhat of a\n shame.\n\n- We could exempt the stack depth checking functions from being validated with\n asan, I think that should fix this issue. Looks like\n __attribute__((no_sanitize(\"address\")))\n would work\n\n- Asan has an interface for getting the real stack address. See\n https://github.com/llvm/llvm-project/blob/main/compiler-rt/include/sanitizer/asan_interface.h#L322\n\n\nISTM that, if it actually works as I theorize it should, using\n__attribute__((no_sanitize(\"address\"))) would be the easiest approach\nhere. 
Something like\n\n#if defined(__has_feature) && __has_feature(address_sanitizer)\n#define pg_attribute_no_asan __attribute__((no_sanitize(\"address\")))\n#else\n#define pg_attribute_no_asan\n#endif\n\nor such should work.\n\n\n> So I would say that fixing ecpg won't make postgres sanitizer-friendly in\n> a whole.\n\nOne thing that's been holding me back on trying to do something around this is\nthe basically non-existing documentation around all of this. I haven't even\nfound documentation referencing the fact that there are headers like\nsanitizer/asan_interface.h, you just have to figure that out yourself. Compare\nthat to something like valgrind, which has documented this at least somewhat.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 29 Nov 2023 11:39:20 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix some memory leaks in ecpg.addons"
}
] |
[
{
"msg_contents": "Hi,\n\nI happened to notice that some GUC names \"max_fsm_pages\" and\n\"max_fsm_relations\" are still mentioned in these translation files\n(from the REL_16_1 source zip)\n\nsrc\\backend\\po\\fr.po\nsrc\\backend\\po\\tr.po\n\n~~\n\nShould those be removed?\n\nThere was a commit [1] that said these all traces of those GUCs had\nbeen eliminated.\n\n======\n\n[1] https://github.com/postgres/postgres/commit/15c121b3ed7eb2f290e19533e41ccca734d23574#diff-65c699b5d467081e780d255ea0ed7d720b5bca2427e300f9fd0776bffe51560a\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 9 Nov 2023 10:51:06 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some deleted GUCs are still referred to"
},
{
"msg_contents": "> On 9 Nov 2023, at 00:51, Peter Smith <[email protected]> wrote:\n> \n> Hi,\n> \n> I happened to notice that some GUC names \"max_fsm_pages\" and\n> \"max_fsm_relations\" are still mentioned in these translation files\n> (from the REL_16_1 source zip)\n> \n> src\\backend\\po\\fr.po\n> src\\backend\\po\\tr.po\n> \n> ~~\n> \n> Should those be removed?\n\nThese mentions are only in comments and not in actual translations, so I don't\nthink they risk causing any issues.\n\n $ git grep max_fsm_ |cut -d\":\" -f 2 |grep -v \"^#\" | wc -l\n 0\n\nI don't know enough about the translation workflow to know how these comments\nare handled, sending this to pgsql-translators@ might be a better way to reach\nthe authors working on this.\n\n> There was a commit [1] that said these all traces of those GUCs had\n> been eliminated.\n\nTranslations are managed in an external repo and synced with the main repo at\nintervals, so such a commit couldn't have updated the master translation work\nfiles anyways.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 9 Nov 2023 10:12:03 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some deleted GUCs are still referred to"
},
{
"msg_contents": "On Thu, Nov 9, 2023 at 8:12 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 9 Nov 2023, at 00:51, Peter Smith <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > I happened to notice that some GUC names \"max_fsm_pages\" and\n> > \"max_fsm_relations\" are still mentioned in these translation files\n> > (from the REL_16_1 source zip)\n> >\n> > src\\backend\\po\\fr.po\n> > src\\backend\\po\\tr.po\n> >\n> > ~~\n> >\n> > Should those be removed?\n>\n> These mentions are only in comments and not in actual translations, so I don't\n> think they risk causing any issues.\n>\n> $ git grep max_fsm_ |cut -d\":\" -f 2 |grep -v \"^#\" | wc -l\n> 0\n>\n> I don't know enough about the translation workflow to know how these comments\n> are handled, sending this to pgsql-translators@ might be a better way to reach\n> the authors working on this.\n>\n\nThanks for the advice. Done that.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 14 Nov 2023 10:32:16 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some deleted GUCs are still referred to"
},
{
"msg_contents": "FYI.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n---------- Forwarded message ---------\nFrom: Daniel Gustafsson <[email protected]>\nDate: Thu, Nov 9, 2023 at 8:12 PM\nSubject: Re: Some deleted GUCs are still referred to\nTo: Peter Smith <[email protected]>\nCc: PostgreSQL Hackers <[email protected]>\n\n\n> On 9 Nov 2023, at 00:51, Peter Smith <[email protected]> wrote:\n>\n> Hi,\n>\n> I happened to notice that some GUC names \"max_fsm_pages\" and\n> \"max_fsm_relations\" are still mentioned in these translation files\n> (from the REL_16_1 source zip)\n>\n> src\\backend\\po\\fr.po\n> src\\backend\\po\\tr.po\n>\n> ~~\n>\n> Should those be removed?\n\nThese mentions are only in comments and not in actual translations, so I don't\nthink they risk causing any issues.\n\n $ git grep max_fsm_ |cut -d\":\" -f 2 |grep -v \"^#\" | wc -l\n 0\n\nI don't know enough about the translation workflow to know how these comments\nare handled, sending this to pgsql-translators@ might be a better way to reach\nthe authors working on this.\n\n> There was a commit [1] that said these all traces of those GUCs had\n> been eliminated.\n\nTranslations are managed in an external repo and synced with the main repo at\nintervals, so such a commit couldn't have updated the master translation work\nfiles anyways.\n\n--\nDaniel Gustafsson\n\n\n",
"msg_date": "Wed, 15 Nov 2023 12:20:10 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Some deleted GUCs are still referred to"
}
] |
[
{
"msg_contents": "Hi,\nI am trying to build postgres with meson on Windows. And I am stuck in\nthe process.\n\nSteps I followed:\n\n1. I clone postgres repo\n\n2.Installed meson and ninja\npip install meson ninja\n\n3. Then running following command:\nmeson setup build --buildtype debug\n\n4. Then I ran\ncd build\nninja\n\nGot following error\nD:\\project\\repo\\pg_meson\\postgres\\build>C:\\Users\\kyals\\AppData\\Roaming\\Python\\Python311\\Scripts\\ninja\nninja: error: 'src/backend/postgres_lib.a.p/meson_pch-c.obj', needed\nby 'src/backend/postgres.exe', missing and no known rule to make it.\n\nAny thoughts on how to resolve this error?\n\nThanks\nShlok Kumar Kyal\n\n\n",
"msg_date": "Thu, 9 Nov 2023 14:29:39 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Failure during Building Postgres in Windows with Meson"
},
{
"msg_contents": "Hi,\n\nAdding Kyotaro to CC because Kyotaro reported a similar issue before [1].\n\nOn Thu, 9 Nov 2023 at 11:59, Shlok Kyal <[email protected]> wrote:\n>\n> Hi,\n> I am trying to build postgres with meson on Windows. And I am stuck in\n> the process.\n>\n> Steps I followed:\n>\n> 1. I clone postgres repo\n>\n> 2.Installed meson and ninja\n> pip install meson ninja\n>\n> 3. Then running following command:\n> meson setup build --buildtype debug\n>\n> 4. Then I ran\n> cd build\n> ninja\n>\n> Got following error\n> D:\\project\\repo\\pg_meson\\postgres\\build>C:\\Users\\kyals\\AppData\\Roaming\\Python\\Python311\\Scripts\\ninja\n> ninja: error: 'src/backend/postgres_lib.a.p/meson_pch-c.obj', needed\n> by 'src/backend/postgres.exe', missing and no known rule to make it.\n\nI am able to reproduce the error. This error was introduced at meson\nv1.2.0, v1.1.0 and before work successfully. It seems meson tries to\nuse pch files although Postgres is compiled with b_pch=false.\nThis error occurs when Developer Powershell for VS is used and\nPostgres is compiled with b_pch=false option (which is the default on\nPostgres). If the -Db_pch=true option or the default powershell is\nused, Postgres gets built successfully.\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 9 Nov 2023 18:11:33 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure during Building Postgres in Windows with Meson"
},
{
"msg_contents": "Can you try with Meson v1.2.3?\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 09 Nov 2023 09:27:54 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure during Building Postgres in Windows with Meson"
},
{
"msg_contents": "Hi,\n\nOn Thu, 9 Nov 2023 at 18:27, Tristan Partin <[email protected]> wrote:\n>\n> Can you try with Meson v1.2.3?\n\nI tried with Meson v1.2.3 and upstream, both failed with the same error.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 9 Nov 2023 18:31:57 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure during Building Postgres in Windows with Meson"
},
{
"msg_contents": "On Thu Nov 9, 2023 at 9:31 AM CST, Nazir Bilal Yavuz wrote:\n> Hi,\n>\n> On Thu, 9 Nov 2023 at 18:27, Tristan Partin <[email protected]> wrote:\n> >\n> > Can you try with Meson v1.2.3?\n>\n> I tried with Meson v1.2.3 and upstream, both failed with the same error.\n\nPlease open a bug in the Meson repository which also mentions the last \nknown working version. I wonder what versions of Meson we use in the \nbuild farm.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 09 Nov 2023 09:42:17 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure during Building Postgres in Windows with Meson"
},
{
"msg_contents": "\nOn 2023-11-09 Th 10:42, Tristan Partin wrote:\n> On Thu Nov 9, 2023 at 9:31 AM CST, Nazir Bilal Yavuz wrote:\n>> Hi,\n>>\n>> On Thu, 9 Nov 2023 at 18:27, Tristan Partin <[email protected]> wrote:\n>> >\n>> > Can you try with Meson v1.2.3?\n>>\n>> I tried with Meson v1.2.3 and upstream, both failed with the same error.\n>\n> Please open a bug in the Meson repository which also mentions the last \n> known working version. I wonder what versions of Meson we use in the \n> build farm.\n\n\nfairywren / drongo have 1.0.1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 9 Nov 2023 14:44:36 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure during Building Postgres in Windows with Meson"
},
{
"msg_contents": "On Thu, 9 Nov 2023 at 21:12, Tristan Partin <[email protected]> wrote:\n>\n> On Thu Nov 9, 2023 at 9:31 AM CST, Nazir Bilal Yavuz wrote:\n> > Hi,\n> >\n> > On Thu, 9 Nov 2023 at 18:27, Tristan Partin <[email protected]> wrote:\n> > >\n> > > Can you try with Meson v1.2.3?\n> >\n> > I tried with Meson v1.2.3 and upstream, both failed with the same error.\n>\n> Please open a bug in the Meson repository which also mentions the last\n> known working version. I wonder what versions of Meson we use in the\n> build farm.\n\nShould we document the supported meson version that should be used for\nbuilding from source?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 10 Nov 2023 08:17:48 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure during Building Postgres in Windows with Meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-10 08:17:48 +0530, vignesh C wrote:\n> On Thu, 9 Nov 2023 at 21:12, Tristan Partin <[email protected]> wrote:\n> >\n> > On Thu Nov 9, 2023 at 9:31 AM CST, Nazir Bilal Yavuz wrote:\n> > > Hi,\n> > >\n> > > On Thu, 9 Nov 2023 at 18:27, Tristan Partin <[email protected]> wrote:\n> > > >\n> > > > Can you try with Meson v1.2.3?\n> > >\n> > > I tried with Meson v1.2.3 and upstream, both failed with the same error.\n> >\n> > Please open a bug in the Meson repository which also mentions the last\n> > known working version. I wonder what versions of Meson we use in the\n> > build farm.\n> \n> Should we document the supported meson version that should be used for\n> building from source?\n\nIt should be supported, I think we need to analyze the problem further\nfirst. It's extremely odd that the problem only happens when invoked from one\nversion of powershell but not the other. I assume there's a difference in\nPATH leading to a different version of *something* being used.\n\nBilal, you apparently can repro the failure happening in one shell but not the\nother? Could you send the PATH variable set in either shell and\nmeson-logs/meson-log.txt for both the working and non-working case?\n\nAndres\n\n\n",
"msg_date": "Thu, 9 Nov 2023 19:06:33 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure during Building Postgres in Windows with Meson"
},
{
"msg_contents": "Hi,\n\nOn Fri, 10 Nov 2023 at 06:06, Andres Freund <[email protected]> wrote:\n>\n> Bilal, you apparently can repro the failure happening in one shell but not the\n> other? Could you send the PATH variable set in either shell and\n> meson-logs/meson-log.txt for both the working and non-working case?\n\nYes, all of them are attached.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Fri, 10 Nov 2023 11:29:34 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure during Building Postgres in Windows with Meson"
},
{
"msg_contents": "On Thu Nov 9, 2023 at 9:31 AM CST, Nazir Bilal Yavuz wrote:\n> Hi,\n>\n> On Thu, 9 Nov 2023 at 18:27, Tristan Partin <[email protected]> wrote:\n> >\n> > Can you try with Meson v1.2.3?\n>\n> I tried with Meson v1.2.3 and upstream, both failed with the same error.\n\nAn employee at Collabora produced a fix[0]. It might still be worthwhile \nhowever to see why it happens in one shell and not the other.\n\n[0]: https://github.com/mesonbuild/meson/pull/12498\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 10 Nov 2023 12:53:12 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure during Building Postgres in Windows with Meson"
},
{
"msg_contents": "Hi, \n\nOn November 10, 2023 10:53:12 AM PST, Tristan Partin <[email protected]> wrote:\n>On Thu Nov 9, 2023 at 9:31 AM CST, Nazir Bilal Yavuz wrote:\n>> Hi,\n>> \n>> On Thu, 9 Nov 2023 at 18:27, Tristan Partin <[email protected]> wrote:\n>> >\n>> > Can you try with Meson v1.2.3?\n>> \n>> I tried with Meson v1.2.3 and upstream, both failed with the same error.\n\n>An employee at Collabora produced a fix[0].\n\nIf I understand correctly, you can thus work around the problem by enabling use of precompiled headers. Which also explains why CI didn't show this - normally on Windows you want to use pch. \n\n\n> It might still be worthwhile however to see why it happens in one shell and not the other.\n\nIt's gcc vs msvc due to path.\n\nAndres\n\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Fri, 10 Nov 2023 14:47:45 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure during Building Postgres in Windows with Meson"
}
] |
[
{
"msg_contents": "doc: fix wording describing the checkpoint_flush_after GUC\n\nReported-by: Evan Macbeth\n\nDiscussion: https://postgr.es/m/[email protected]\n\nBackpatch-through: master\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/5ba1ac99a8d8623604d3152be8fd9a201ba5240b\n\nModified Files\n--------------\ndoc/src/sgml/wal.sgml | 2 +-\n1 file changed, 1 insertion(+), 1 deletion(-)",
"msg_date": "Thu, 09 Nov 2023 22:51:42 +0000",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: doc: fix wording describing the checkpoint_flush_after GUC"
},
{
"msg_contents": "On 2023-Nov-09, Bruce Momjian wrote:\n\n> doc: fix wording describing the checkpoint_flush_after GUC\n\nHmm. Is this new wording really more clear than the original wording?\nI agree the original may not have been the most simple, but I don't\nthink it was wrong English.\n\nI'm not suggesting to revert this change, but rather I'd like to prevent\nfuture changes of this type. Just saying it'd be sad to turn all the\nPostgres documentation to using Basic English or whatever.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n\n",
"msg_date": "Mon, 13 Nov 2023 12:31:42 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: doc: fix wording describing the checkpoint_flush_after GUC"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-13 12:31:42 +0100, Alvaro Herrera wrote:\n> On 2023-Nov-09, Bruce Momjian wrote:\n>\n> > doc: fix wording describing the checkpoint_flush_after GUC\n>\n> Hmm. Is this new wording really more clear than the original wording?\n> I agree the original may not have been the most simple, but I don't\n> think it was wrong English.\n\nI think it was somewhat wrong (I probably wrote it) or at least awkwardly\nformulated. \"force the OS that pages .. should be flushed\" doesn't make a ton\nof sense.\n\nOTOH, the new formulation doesn't seem great either. The request(s) that we\nmake to the OS are not guaranteed to be followed, so the \"should be\" was\nactually a correct part of the sentence.\n\nIt probably should be something like:\n On Linux and POSIX platforms <xref linkend=\"guc-checkpoint-flush-after\"/>\n allows to request that the OS flushes pages written by the checkpoint to disk\n after a configurable number of bytes. Otherwise, these [...]\n\n\n> I'm not suggesting to revert this change, but rather I'd like to prevent\n> future changes of this type. Just saying it'd be sad to turn all the\n> Postgres documentation to using Basic English or whatever.\n\n+1 for the general notion.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 16:32:56 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: doc: fix wording describing the checkpoint_flush_after GUC"
},
{
"msg_contents": "Hola-hallo,\n\nOn 2023-Nov-13, Andres Freund wrote:\n\n> On 2023-11-13 12:31:42 +0100, Alvaro Herrera wrote:\n> > On 2023-Nov-09, Bruce Momjian wrote:\n> >\n> > > doc: fix wording describing the checkpoint_flush_after GUC\n> >\n> > Hmm. Is this new wording really more clear than the original wording?\n> > I agree the original may not have been the most simple, but I don't\n> > think it was wrong English.\n> \n> I think it was somewhat wrong (I probably wrote it) or at least awkwardly\n> formulated. \"force the OS that pages .. should be flushed\" doesn't make a ton\n> of sense.\n\nHeh, you know what? I was mistaken. There was indeed a grammatical\nerror being fixed. The complaint [1] was that \"you\" was missing in the\nsentence, and apparently that's correct [2]. \n\n[1] https://postgr.es/m/[email protected]\n[2] https://english.stackexchange.com/a/60285\n\nSo the core of the requested change was to turn \"allows to force\" into\n\"allows you to force\". And this means that your new proposal:\n\n> It probably should be something like:\n> On Linux and POSIX platforms <xref linkend=\"guc-checkpoint-flush-after\"/>\n> allows to request that the OS flushes pages written by the checkpoint to disk\n> after a configurable number of bytes. Otherwise, these [...]\n\nwould still fall afoul of the reported problem, because it still says\n\"allows to request\", which is bad English.\n\n> OTOH, the new formulation doesn't seem great either. The request(s) that we\n> make to the OS are not guaranteed to be followed, so the \"should be\" was\n> actually a correct part of the sentence.\n\nHmm, I hadn't noticed that nuance. Your text looks OK to me, except\nthat \"... after a configurable number of bytes\" reads odd after what's\nalready in the sentence. 
I would rewrite it in a different form, maybe\n\n On Linux and POSIX platforms, checkpoint_flush_after specifies the\n number of bytes written by a checkpoint after which the OS is requested\n to flush pages to disk. Otherwise, these pages ...\n\nCheers\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n\n\n",
"msg_date": "Tue, 14 Nov 2023 17:49:59 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: doc: fix wording describing the checkpoint_flush_after GUC"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-14 17:49:59 +0100, Alvaro Herrera wrote:\n> On 2023-Nov-13, Andres Freund wrote:\n> > On 2023-11-13 12:31:42 +0100, Alvaro Herrera wrote:\n> > > On 2023-Nov-09, Bruce Momjian wrote:\n> > >\n> > > > doc: fix wording describing the checkpoint_flush_after GUC\n> > >\n> > > Hmm. Is this new wording really more clear than the original wording?\n> > > I agree the original may not have been the most simple, but I don't\n> > > think it was wrong English.\n> >\n> > I think it was somewhat wrong (I probably wrote it) or at least awkwardly\n> > formulated. \"force the OS that pages .. should be flushed\" doesn't make a ton\n> > of sense.\n>\n> Heh, you know what? I was mistaken. There was indeed a grammatical\n> error being fixed. The complaint [1] was that \"you\" was missing in the\n> sentence, and apparently that's correct [2].\n\n> [1] https://postgr.es/m/[email protected]\n> [2] https://english.stackexchange.com/a/60285\n\nHm, I really can't get excited about this. To me the \"you\" sounds worse, but\nwhatever...\n\n\n> > OTOH, the new formulation doesn't seem great either. The request(s) that we\n> > make to the OS are not guaranteed to be followed, so the \"should be\" was\n> > actually a correct part of the sentence.\n>\n> Hmm, I hadn't noticed that nuance. Your text looks OK to me, except\n> that \"... after a configurable number of bytes\" reads odd after what's\n> already in the sentence. I would rewrite it in a different form, maybe\n>\n> On Linux and POSIX platforms, checkpoint_flush_after specifies the\n> number of bytes written by a checkpoint after which the OS is requested\n> to flush pages to disk. Otherwise, these pages ...\n\nThat works for me!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Nov 2023 12:01:47 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: doc: fix wording describing the checkpoint_flush_after GUC"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 3:01 PM Andres Freund <[email protected]> wrote:\n> Hm, I really can't get excited about this. To me the \"you\" sounds worse, but\n> whatever...\n\nTo me, it seems flat-out incorrect without the \"you\".\n\nIt might be better to rephrase the whole thing entirely so that it\ndoesn't need to address the reader, like allows you to force ->\nforces.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 11:15:58 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: doc: fix wording describing the checkpoint_flush_after GUC"
}
] |
[
{
"msg_contents": "Hi,\n\nI just created a primary with wal_segment_size=512. Then tried to create a\nstandby via pg_basebackup. The pg_basebackup appeared to just hang, for quite\na while, but did eventually complete. Over a minute for an empty cluster, when\nusing -c fast.\n\nIn this case I had used wal_sync_method=open_datasync - it's often faster and\nif we want to scale WAL writes more we'll have to use it more widely (you\ncan't have multiple fdatasyncs in progress and reason about which one affects\nwhat, but you can have multiple DSYNC writes in progress at the same time).\n\nAfter a bit of confused staring and debugging I figured out that the problem\nis that the RequestXLogSwitch() within the code for starting a basebackup was\ntriggering writing back the WAL in individual 8kB writes via\nGetXLogBuffer()->AdvanceXLInsertBuffer(). With open_datasync each of these\nwrites is durable - on this drive each takes about 1ms.\n\n\nNormally we write out WAL in bigger chunks - but as it turns out, we don't\nhave any logic for doing larger writes when AdvanceXLInsertBuffers() is called\nfrom within GetXLogBuffer(). We just try to make enough space so that one\nbuffer can be replaced.\n\n\nThe times for a single SELECT pg_switch_wal() on this system, when using\nopen_datasync and a 512MB segment, are:\n\nwal_buffers time for pg_switch_xlog()\n16 64s\n100 53s\n400 13s\n600 1.3s\n\nThat's pretty bad. We don't really benefit from more buffering here, it just\navoids flushing in tiny increments. With a smaller wal_buffers, the large\nrecord by pg_switch_xlog() needs to replace buffers it itself inserted, and\ndoes so one-by-one. 
If we never re-encounter a buffer we inserted ourselves\nearlier due to a larger wal_buffers, the problem isn't present.\n\nThis can bite with smaller segments too; it doesn't require large ones.\n\n\nThe reason this doesn't constantly become an issue is that walwriter normally\ntries to write out WAL, and if it does, the AdvanceXLInsertBuffers() called in\nbackends doesn't need to (walsender also calls AdvanceXLInsertBuffers(), but\nit won't ever write out data).\n\nIn my case, walsender is actually trying to do something - but it never gets\nWALWriteLock. The semaphore does get set after AdvanceXLInsertBuffers()\nreleases WALWriteLock, but on this system walwriter never succeeds in taking the\nlwlock before AdvanceXLInsertBuffers() succeeds in re-acquiring it.\n\n\nI think it might be a lucky accident that the problem was visible this\nblatantly in this one case - I suspect that this behaviour is encountered\nduring normal operation in the wild, but is much harder to pinpoint, because it\ndoesn't happen \"exclusively\".\n\nE.g. I see a lot higher throughput bulk-loading data with larger wal_buffers\nwhen using open_datasync, but basically no difference when using\nfdatasync. And there are a lot of wal_buffers_full writes.\n\n\nTo fix this, I suspect we need to make\nGetXLogBuffer()->AdvanceXLInsertBuffer() flush more aggressively. In this\nspecific case, we even know for sure that we are going to fill a lot more\nbuffers, so no heuristic would be needed. In other cases however we need some\nheuristic to know how much to write out.\n\nGiven how *extremely* aggressive we are about flushing out nearly all pending\nWAL in XLogFlush(), I'm not sure there's much point in not also being somewhat\naggressive in GetXLogBuffer()->AdvanceXLInsertBuffer().\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Nov 2023 19:54:22 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "AdvanceXLInsertBuffers() vs wal_sync_method=open_datasync"
},
{
"msg_contents": "On 10/11/2023 05:54, Andres Freund wrote:\n> In this case I had used wal_sync_method=open_datasync - it's often faster and\n> if we want to scale WAL writes more we'll have to use it more widely (you\n> can't have multiple fdatasyncs in progress and reason about which one affects\n> what, but you can have multiple DSYNC writes in progress at the same time).\n\nNot sure I understand that. If you issue an fdatasync, it will sync all \nwrites that were complete before the fdatasync started. Right? If you \nhave multiple fdatasyncs in progress, that's true for each fdatasync. Or \nis there a bottleneck in the kernel with multiple in-progress fdatasyncs \nor something?\n\n> After a bit of confused staring and debugging I figured out that the problem\n> is that the RequestXLogSwitch() within the code for starting a basebackup was\n> triggering writing back the WAL in individual 8kB writes via\n> GetXLogBuffer()->AdvanceXLInsertBuffer(). With open_datasync each of these\n> writes is durable - on this drive each take about 1ms.\n\nI see. So the assumption in AdvanceXLInsertBuffer() is that XLogWrite() \nis relatively fast. But with open_datasync, it's not.\n\n> To fix this, I suspect we need to make\n> GetXLogBuffer()->AdvanceXLInsertBuffer() flush more aggressively. In this\n> specific case, we even know for sure that we are going to fill a lot more\n> buffers, so no heuristic would be needed. In other cases however we need some\n> heuristic to know how much to write out.\n\n+1. Maybe use the same logic as in XLogFlush().\n\nI wonder if the 'flexible' argument to XLogWrite() is too inflexible. It \nwould be nice to pass a hard minimum XLogRecPtr that it must write up \nto, but still allow it to write more than that if it's convenient.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 10 Nov 2023 17:16:35 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AdvanceXLInsertBuffers() vs wal_sync_method=open_datasync"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-10 17:16:35 +0200, Heikki Linnakangas wrote:\n> On 10/11/2023 05:54, Andres Freund wrote:\n> > In this case I had used wal_sync_method=open_datasync - it's often faster and\n> > if we want to scale WAL writes more we'll have to use it more widely (you\n> > can't have multiple fdatasyncs in progress and reason about which one affects\n> > what, but you can have multiple DSYNC writes in progress at the same time).\n>\n> Not sure I understand that. If you issue an fdatasync, it will sync all\n> writes that were complete before the fdatasync started. Right? If you have\n> multiple fdatasyncs in progress, that's true for each fdatasync. Or is there\n> a bottleneck in the kernel with multiple in-progress fdatasyncs or\n> something?\n\nMany filesystems only allow a single fdatasync to really be in progress at the\nsame time; they eventually acquire an inode-specific lock. More problematic\ncases include things like a write followed by an fdatasync, followed by a\nwrite of the same block in another process/thread - there's very little\nguarantee about which contents of that block are now durable.\n\nBut more importantly, using fdatasync doesn't scale because it effectively has\nto flush the entire write cache on the device - which often contains plenty of\nother dirty data. Whereas O_DSYNC can use FUA writes, which just makes the\nindividual WAL writes write through the cache, while leaving the rest of the\ncache \"unaffected\".\n\n\n> > After a bit of confused staring and debugging I figured out that the problem\n> > is that the RequestXLogSwitch() within the code for starting a basebackup was\n> > triggering writing back the WAL in individual 8kB writes via\n> > GetXLogBuffer()->AdvanceXLInsertBuffer(). With open_datasync each of these\n> > writes is durable - on this drive each take about 1ms.\n>\n> I see. So the assumption in AdvanceXLInsertBuffer() is that XLogWrite() is\n> relatively fast. 
But with open_datasync, it's not.\n\nI'm not sure that was an explicit assumption rather than just how it worked\nout.\n\n\n> > To fix this, I suspect we need to make\n> > GetXLogBuffer()->AdvanceXLInsertBuffer() flush more aggressively. In this\n> > specific case, we even know for sure that we are going to fill a lot more\n> > buffers, so no heuristic would be needed. In other cases however we need some\n> > heuristic to know how much to write out.\n>\n> +1. Maybe use the same logic as in XLogFlush().\n\nI've actually been wondering about moving all the handling of WALWriteLock to\nXLogWrite() and/or a new function called from all the places calling\nXLogWrite().\n\nI suspect we can't quite use the same logic in AdvanceXLInsertBuffer() as we\ndo in XLogFlush() - we e.g. don't ever want to trigger flushing out a\npartially filled page, for example. Or really ever want to unnecessarily wait\nfor a WAL insertion to complete when we don't have to.\n\n\n> I wonder if the 'flexible' argument to XLogWrite() is too inflexible. It\n> would be nice to pass a hard minimum XLogRecPtr that it must write up to,\n> but still allow it to write more than that if it's convenient.\n\nYes, I've also thought that. In the AIOified WAL code I ended up tracking\n\"minimum\" and \"optimal\" write/flush locations.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Nov 2023 09:39:57 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AdvanceXLInsertBuffers() vs wal_sync_method=open_datasync"
}
] |
[
{
"msg_contents": "In MobilityDB\nhttps://github.com/MobilityDB/MobilityDB\nwe have defined a tstzspan type which is a fixed-size equivalent of the tstzrange type in PostgreSQL.\n\nWe have a span_union aggregate function which is the equivalent of the range_agg function in PostgreSQL defined as follows\n\nCREATE FUNCTION tstzspan_union_finalfn(internal)\n RETURNS tstzspanset\n AS 'MODULE_PATHNAME', 'Span_union_finalfn'\n LANGUAGE C IMMUTABLE PARALLEL SAFE;\n\nCREATE AGGREGATE span_union(tstzspan) (\n SFUNC = array_agg_transfn,\n STYPE = internal,\n COMBINEFUNC = array_agg_combine,\n SERIALFUNC = array_agg_serialize,\n DESERIALFUNC = array_agg_deserialize,\n FINALFUNC = tstzspan_union_finalfn\n);\n\nAs can be seen, we reuse the array_agg function to accumulate the values in an array and the final function just does similar work as the range_agg_finalfn to merge the overlapping spans.\n\nI am testing the parallel aggregate features of PG 16.1\n\ntest=# select version();\n version\n-------------------------------------------------------------------------------------------------------\n PostgreSQL 16.1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0, 64-bit\n\nI create a table with 1M random spans and another one with the same data converted to tstzrange\n\nCREATE TABLE tbl_tstzspan_1M AS\nSELECT k, random_tstzspan('2001-01-01', '2002-12-31', 10) AS t\nFROM generate_series(1, 1e6) AS k;\n\nCREATE TABLE tbl_tstzrange_1M AS\nSELECT k, t::tstzrange\nFROM tbl_tstzspan_1M;\n\ntest=# analyze;\nANALYZE\ntest=#\n\nThe tstzrange DOES NOT support parallel aggregates\n\ntest=# EXPLAIN\nSELECT k%10, range_agg(t) AS t\nFROM tbl_tstzrange_1M\ngroup by k%10\norder by k%10;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n GroupAggregate (cost=66706.17..203172.65 rows=1000000 width=64)\n Group Key: ((k % '10'::numeric))\n -> Gather Merge (cost=66706.17..183172.65 rows=1000000 width=54)\n 
Workers Planned: 2\n -> Sort (cost=65706.15..66747.81 rows=416667 width=54)\n Sort Key: ((k % '10'::numeric))\n -> Parallel Seq Scan on tbl_tstzrange_1m (cost=0.00..12568.33 rows=416667 width=54)\n(7 rows)\n\nThe array_agg function supports parallel aggregates\n\ntest=# EXPLAIN\nSELECT k%10, array_agg(t) AS t\nFROM tbl_tstzspan_1M\ngroup by k%10\norder by k%10;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Finalize GroupAggregate (cost=66706.17..193518.60 rows=1000000 width=64)\n Group Key: ((k % '10'::numeric))\n -> Gather Merge (cost=66706.17..172268.60 rows=833334 width=64)\n Workers Planned: 2\n -> Partial GroupAggregate (cost=65706.15..75081.15 rows=416667 width=64)\n Group Key: ((k % '10'::numeric))\n -> Sort (cost=65706.15..66747.81 rows=416667 width=56)\n Sort Key: ((k % '10'::numeric))\n -> Parallel Seq Scan on tbl_tstzspan_1m (cost=0.00..12568.33 rows=416667 width=56)\n(9 rows)\n\nWe are not able to make span_union aggregate support parallel aggregates\n\ntest=# EXPLAIN\nSELECT k%10, span_union(t) AS t\nFROM tbl_tstzspan_1M\ngroup by k%10\norder by k%10;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n GroupAggregate (cost=187879.84..210379.84 rows=1000000 width=64)\n Group Key: ((k % '10'::numeric))\n -> Sort (cost=187879.84..190379.84 rows=1000000 width=56)\n Sort Key: ((k % '10'::numeric))\n -> Seq Scan on tbl_tstzspan_1m (cost=0.00..19860.00 rows=1000000 width=56)\n\nAny suggestion?\n\nThanks\n\nEsteban",
"msg_date": "Fri, 10 Nov 2023 10:47:42 +0000",
"msg_from": "ZIMANYI Esteban <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parallel aggregates in PG 16.1"
},
{
"msg_contents": "On Fri, 10 Nov 2023 at 11:47, ZIMANYI Esteban <[email protected]> wrote:\n>\n> In MobilityDB\n> https://github.com/MobilityDB/MobilityDB\n> we have defined a tstzspan type which is a fixed-size equivalent of the tstzrange type in PostgreSQL.\n>\n> We have a span_union aggregate function which is the equivalent of the range_agg function in PostgreSQL defined as follows\n>\n> CREATE FUNCTION tstzspan_union_finalfn(internal)\n> RETURNS tstzspanset\n> AS 'MODULE_PATHNAME', 'Span_union_finalfn'\n> LANGUAGE C IMMUTABLE PARALLEL SAFE;\n>\n> CREATE AGGREGATE span_union(tstzspan) (\n> SFUNC = array_agg_transfn,\n> STYPE = internal,\n> COMBINEFUNC = array_agg_combine,\n> SERIALFUNC = array_agg_serialize,\n> DESERIALFUNC = array_agg_deserialize,\n> FINALFUNC = tstzspan_union_finalfn\n> );\n>\n> As can be seen, we reuse the array_agg function to accumulate the values in an array and the final function just does similar work as the range_agg_finalfn to merge the overlapping spans.\n\nDid you note the following section in the CREATE AGGREGATE documentation [0]?\n\n\"\"\"\nAn aggregate can optionally support partial aggregation, as described\nin Section 38.12.4.\nThis requires specifying the COMBINEFUNC parameter. If the\nstate_data_type is internal, it's usually also appropriate to provide\nthe SERIALFUNC and DESERIALFUNC parameters so that parallel\naggregation is possible.\nNote that the aggregate must also be marked PARALLEL SAFE to enable\nparallel aggregation.\n\"\"\"\n\n From this, it seems like the PARALLEL = SAFE argument is missing from\nyour aggregate definition as provided above.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/docs/16/sql-createaggregate.html\n\n\n",
"msg_date": "Fri, 10 Nov 2023 13:27:58 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel aggregates in PG 16.1"
}
] |
[
{
"msg_contents": "Moving this to a new thread and adding it to the January commitfest.\n\nOn Thu, Nov 09, 2023 at 03:27:33PM -0600, Nathan Bossart wrote:\n> On Tue, Nov 07, 2023 at 04:58:16PM -0800, Andres Freund wrote:\n>> However, even if there's likely some other implied memory barrier that we\n>> could piggyback on, the patch much simpler to understand if it doesn't change\n>> coherency rules. There's no way the overhead could matter.\n> \n> I wonder if it's worth providing a set of \"locked read\" functions. Those\n> could just do a compare/exchange with 0 in the generic implementation. For\n> patches like this one where the overhead really shouldn't matter, I'd\n> encourage folks to use those to make it easy to reason about correctness.\n\nConcretely, like this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 10 Nov 2023 14:51:28 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "locked reads for atomics"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-10 14:51:28 -0600, Nathan Bossart wrote:\n> Moving this to a new thread and adding it to the January commitfest.\n> \n> On Thu, Nov 09, 2023 at 03:27:33PM -0600, Nathan Bossart wrote:\n> > On Tue, Nov 07, 2023 at 04:58:16PM -0800, Andres Freund wrote:\n> >> However, even if there's likely some other implied memory barrier that we\n> >> could piggyback on, the patch much simpler to understand if it doesn't change\n> >> coherency rules. There's no way the overhead could matter.\n> > \n> > I wonder if it's worth providing a set of \"locked read\" functions. Those\n> > could just do a compare/exchange with 0 in the generic implementation. For\n> > patches like this one where the overhead really shouldn't matter, I'd\n> > encourage folks to use those to make it easy to reason about correctness.\n> \n> Concretely, like this.\n\nI don't think \"locked\" is a good name - there's no locking. I think that'll\ndeter their use, because it will make it sound more expensive than they are.\n\npg_atomic_read_membarrier_u32()?\n\n\n\n> @@ -228,7 +228,8 @@ pg_atomic_init_u32(volatile pg_atomic_uint32 *ptr, uint32 val)\n> * The read is guaranteed to return a value as it has been written by this or\n> * another process at some point in the past. There's however no cache\n> * coherency interaction guaranteeing the value hasn't since been written to\n> - * again.\n> + * again. Consider using pg_atomic_locked_read_u32() unless you have a strong\n> + * reason (e.g., performance) to use unlocked reads.\n\nI think that's too strong an endorsement. 
Often there will be no difference in\ndifficulty analysing correctness, because the barrier pairing / the\ninteraction with the surrounding code needs to be analyzed just as much.\n\n\n> * No barrier semantics.\n> */\n> @@ -239,6 +240,24 @@ pg_atomic_read_u32(volatile pg_atomic_uint32 *ptr)\n> \treturn pg_atomic_read_u32_impl(ptr);\n> }\n> \n> +/*\n> + * pg_atomic_read_u32 - locked read from atomic variable.\n\nUn-updated name...\n\n\n> + * This read is guaranteed to read the current value,\n\nIt doesn't guarantee that *at all*. What it guarantees is solely that the\ncurrent CPU won't be doing something that could lead to reading an outdated\nvalue. To actually ensure the value is up2date, the modifying side also needs\nto have used a form of barrier (in the form of fetch_add, compare_exchange,\netc or an explicit barrier).\n\n\n> +#ifndef PG_HAVE_ATOMIC_LOCKED_READ_U32\n> +#define PG_HAVE_ATOMIC_LOCKED_READ_U32\n> +static inline uint32\n> +pg_atomic_locked_read_u32_impl(volatile pg_atomic_uint32 *ptr)\n> +{\n> +\tuint32 old = 0;\n> +\n> +\t/*\n> +\t * In the generic implementation, locked reads are implemented as a\n> +\t * compare/exchange with 0. That'll fail or succeed, but always return the\n> +\t * most up-to-date value. It might also store a 0, but only if the\n> +\t * previous value was also a zero, i.e., harmless.\n> +\t */\n> +\tpg_atomic_compare_exchange_u32_impl(ptr, &old, 0);\n> +\n> +\treturn old;\n> +}\n> +#endif\n\nI suspect implementing it with an atomic fetch_add of 0 would be faster...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Nov 2023 15:11:50 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 03:11:50PM -0800, Andres Freund wrote:\n> On 2023-11-10 14:51:28 -0600, Nathan Bossart wrote:\n>> + * This read is guaranteed to read the current value,\n> \n> It doesn't guarantee that *at all*. What it guarantees is solely that the\n> current CPU won't be doing something that could lead to reading an outdated\n> value. To actually ensure the value is up2date, the modifying side also needs\n> to have used a form of barrier (in the form of fetch_add, compare_exchange,\n> etc or an explicit barrier).\n\nOkay, I think I was missing that this doesn't work along with\npg_atomic_write_u32() because that doesn't have any barrier semantics\n(probably because the spinlock version does). IIUC you'd want to use\npg_atomic_exchange_u32() to write the value instead, which seems to really\njust be another compare/exchange under the hood.\n\nSpeaking of the spinlock implementation of pg_atomic_write_u32(), I've been\nstaring at this comment for a while and can't make sense of it:\n\n\tvoid\n\tpg_atomic_write_u32_impl(volatile pg_atomic_uint32 *ptr, uint32 val)\n\t{\n\t\t/*\n\t\t * One might think that an unlocked write doesn't need to acquire the\n\t\t * spinlock, but one would be wrong. Even an unlocked write has to cause a\n\t\t * concurrent pg_atomic_compare_exchange_u32() (et al) to fail.\n\t\t */\n\t\tSpinLockAcquire((slock_t *) &ptr->sema);\n\t\tptr->value = val;\n\t\tSpinLockRelease((slock_t *) &ptr->sema);\n\t}\n\nIt refers to \"unlocked writes,\" but this isn't\npg_atomic_unlocked_write_u32_impl(). The original thread for this comment\n[0] doesn't offer any hints, either. Does \"unlocked\" mean something\ndifferent here, such as \"write without any barrier semantics?\"\n\n[0] https://postgr.es/m/14947.1475690465%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 10 Nov 2023 20:38:13 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-10 20:38:13 -0600, Nathan Bossart wrote:\n> On Fri, Nov 10, 2023 at 03:11:50PM -0800, Andres Freund wrote:\n> > On 2023-11-10 14:51:28 -0600, Nathan Bossart wrote:\n> >> + * This read is guaranteed to read the current value,\n> > \n> > It doesn't guarantee that *at all*. What it guarantees is solely that the\n> > current CPU won't be doing something that could lead to reading an outdated\n> > value. To actually ensure the value is up2date, the modifying side also needs\n> > to have used a form of barrier (in the form of fetch_add, compare_exchange,\n> > etc or an explicit barrier).\n> \n> Okay, I think I was missing that this doesn't work along with\n> pg_atomic_write_u32() because that doesn't have any barrier semantics\n> (probably because the spinlock version does). IIUC you'd want to use\n> pg_atomic_exchange_u32() to write the value instead, which seems to really\n> just be another compare/exchange under the hood.\n\nYes. We should optimize pg_atomic_exchange_u32() one of these days - it can be\ndone *far* faster than a cmpxchg. When I was adding the atomic abstraction\nthere was concern with utilizing too many different atomic instructions. I\ndidn't really agree back then, but these days I really don't see a reason to\nnot use a few more intrinsics.\n\n\n> Speaking of the spinlock implementation of pg_atomic_write_u32(), I've been\n> staring at this comment for a while and can't make sense of it:\n> \n> \tvoid\n> \tpg_atomic_write_u32_impl(volatile pg_atomic_uint32 *ptr, uint32 val)\n> \t{\n> \t\t/*\n> \t\t * One might think that an unlocked write doesn't need to acquire the\n> \t\t * spinlock, but one would be wrong. 
Even an unlocked write has to cause a\n> \t\t * concurrent pg_atomic_compare_exchange_u32() (et al) to fail.\n> \t\t */\n> \t\tSpinLockAcquire((slock_t *) &ptr->sema);\n> \t\tptr->value = val;\n> \t\tSpinLockRelease((slock_t *) &ptr->sema);\n> \t}\n> \n> It refers to \"unlocked writes,\" but this isn't\n> pg_atomic_unlocked_write_u32_impl(). The original thread for this comment\n> [0] doesn't offer any hints, either. Does \"unlocked\" mean something\n> different here, such as \"write without any barrier semantics?\"\n\nIt's just about not using the spinlock. If we were to *not* use a spinlock\nhere, we'd break pg_atomic_compare_exchange_u32(), because the\nspinlock-implementation of pg_atomic_compare_exchange_u32() needs to actually\nbe able to rely on no concurrent changes to the value to happen.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Nov 2023 18:48:39 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 06:48:39PM -0800, Andres Freund wrote:\n> Yes. We should optimize pg_atomic_exchange_u32() one of these days - it can be\n> done *far* faster than a cmpxchg. When I was adding the atomic abstraction\n> there was concern with utilizing too many different atomic instructions. I\n> didn't really agree back then, but these days I really don't see a reason to\n> not use a few more intrinsics.\n\nI might give this a try, if for no other reason than it'd force me to\nimprove my mental model of this stuff. :)\n\n>> It refers to \"unlocked writes,\" but this isn't\n>> pg_atomic_unlocked_write_u32_impl(). The original thread for this comment\n>> [0] doesn't offer any hints, either. Does \"unlocked\" mean something\n>> different here, such as \"write without any barrier semantics?\"\n> \n> It's just about not using the spinlock. If we were to *not* use a spinlock\n> here, we'd break pg_atomic_compare_exchange_u32(), because the\n> spinlock-implementation of pg_atomic_compare_exchange_u32() needs to actually\n> be able to rely on no concurrent changes to the value to happen.\n\nThanks for clarifying. I thought it might've been hinting at something\nbeyond the compare/exchange implications.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 10 Nov 2023 20:55:29 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "Here's a v2 of the patch set in which I've attempted to address all\nfeedback. I've also added a pg_write_membarrier_u* pair of functions that\nprovide an easy way to write to an atomic variable with full barrier\nsemantics. In the generic implementation, these are just aliases for an\natomic exchange.\n\n0002 demonstrates how these functions might be used to eliminate the\narch_lck spinlock, which is only ever used for one boolean variable. My\nhope is that the membarrier functions make eliminating spinlocks for\nnon-performance-sensitive code easy to reason about.\n\n(We might be able to use a pg_atomic_flag instead for 0002, but that code\nseems intended for a slightly different use-case and has more complicated\nbarrier semantics.)\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 27 Nov 2023 15:00:30 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "\r\n\r\n> On Nov 28, 2023, at 05:00, Nathan Bossart <[email protected]> wrote:\r\n> \r\n> External Email\r\n> \r\n> Here's a v2 of the patch set in which I've attempted to address all\r\n> feedback. I've also added a pg_write_membarrier_u* pair of functions that\r\n> provide an easy way to write to an atomic variable with full barrier\r\n> semantics. In the generic implementation, these are just aliases for an\r\n> atomic exchange.\r\n> \r\n> 0002 demonstrates how these functions might be used to eliminate the\r\n> arch_lck spinlock, which is only ever used for one boolean variable. My\r\n> hope is that the membarrier functions make eliminating spinlocks for\r\n> non-performance-sensitive code easy to reason about.\r\n> \r\n> (We might be able to use a pg_atomic_flag instead for 0002, but that code\r\n> seems intended for a slightly different use-case and has more complicated\r\n> barrier semantics.)\r\n> \r\n> --\r\n> Nathan Bossart\r\n> Amazon Web Services: https://aws.amazon.com\r\n\r\nHi Nathan,\r\n\r\nThe patch looks good to me.\r\n\r\nThe patch adds two pairs of atomic functions that provide full-barrier semantics to atomic read/write operations. The patch also includes an example of how this new functions can be used to replace spin locks.\r\n\r\nThe patch applies cleanly to HEAD. “make check-world” also runs cleanly with no error. I am moving it to Ready for Committers.\r\n\r\nRegards,\r\nYong",
"msg_date": "Wed, 17 Jan 2024 03:48:43 +0000",
"msg_from": "\"Li, Yong\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 2:30 AM Nathan Bossart <[email protected]> wrote:\n>\n> Here's a v2 of the patch set in which I've attempted to address all\n> feedback. I've also added a pg_write_membarrier_u* pair of functions that\n\nThere's some immediate use for reads/writes with barrier semantics -\nhttps://www.postgresql.org/message-id/CALj2ACXrePj4E6ocKr-%2Bb%3DrjT-8yeMmHnEeWQP1bc-WXETfTVw%40mail.gmail.com.\nAny plan for taking this forward?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 22 Feb 2024 12:58:37 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "On Thu, 2024-02-22 at 12:58 +0530, Bharath Rupireddy wrote:\n> There's some immediate use for reads/writes with barrier semantics -\n\nIs this mainly a convenience for safety/readability? Or is it faster in\nsome cases than doing an atomic access with separate memory barriers?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 22 Feb 2024 11:53:50 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 11:53:50AM -0800, Jeff Davis wrote:\n> On Thu, 2024-02-22 at 12:58 +0530, Bharath Rupireddy wrote:\n>> There's some immediate use for reads/writes with barrier semantics -\n> \n> Is this mainly a convenience for safety/readability? Or is it faster in\n> some cases than doing an atomic access with separate memory barriers?\n\nThe former. Besides the 0002 patch tracked here, there's at least one\nother patch [0] that could probably use these new functions. The idea is\nto provide an easy way to remove spinlocks, etc. and use atomics for less\nperformance-sensitive stuff. The implementations are intended to be\nrelatively inexpensive and might continue to improve in the future, but the\nfunctions are primarily meant to help reason about correctness.\n\nI don't mind prioritizing these patches, especially since there now seems\nto be multiple patches waiting on it. IIRC I was worried about not having\nenough support for this change, but I might now have it.\n\n[0] https://commitfest.postgresql.org/47/4330/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 23 Feb 2024 10:17:58 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "On Fri, 2024-02-23 at 10:17 -0600, Nathan Bossart wrote:\n> The idea is\n> to provide an easy way to remove spinlocks, etc. and use atomics for\n> less\n> performance-sensitive stuff. The implementations are intended to be\n> relatively inexpensive and might continue to improve in the future,\n> but the\n> functions are primarily meant to help reason about correctness.\n\nTo be clear:\n\n x = pg_atomic_[read|write]_membarrier_u64(&v);\n\nis semantically equivalent to:\n\n pg_memory_barrier();\n x = pg_atomic_[read|write]_u64(&v);\n pg_memory_barrier();\n\n?\n\nIf so, that does seem more convenient.\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Fri, 23 Feb 2024 10:25:00 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 10:25:00AM -0800, Jeff Davis wrote:\n> To be clear:\n> \n> x = pg_atomic_[read|write]_membarrier_u64(&v);\n> \n> is semantically equivalent to:\n> \n> pg_memory_barrier();\n> x = pg_atomic_[read|write]_u64(&v);\n> pg_memory_barrier();\n> \n> ?\n> \n> If so, that does seem more convenient.\n\nI think that's about right. The upthread feedback from Andres [0] provides\nsome additional context.\n\n[0] https://postgr.es/m/20231110231150.fjm77gup2i7xu6hc%40alap3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 23 Feb 2024 13:32:47 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "Here is a v3 of the patch set with the first draft of the commit messages.\nThere are no code differences between v2 and v3.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 23 Feb 2024 14:58:12 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-23 10:25:00 -0800, Jeff Davis wrote:\n> On Fri, 2024-02-23 at 10:17 -0600, Nathan Bossart wrote:\n> > The idea is\n> > to provide an easy way to remove spinlocks, etc. and use atomics for\n> > less\n> > performance-sensitive stuff.� The implementations are intended to be\n> > relatively inexpensive and might continue to improve in the future,\n> > but the\n> > functions are primarily meant to help reason about correctness.\n> \n> To be clear:\n> \n> x = pg_atomic_[read|write]_membarrier_u64(&v);\n> \n> is semantically equivalent to:\n> \n> pg_memory_barrier();\n> x = pg_atomic_[read|write]_u64(&v);\n> pg_memory_barrier();\n> ?\n> \n> If so, that does seem more convenient.\n\nKinda. Semantically I think that's correct, however it doesn't commonly make\nsense to have both those memory barriers, so you wouldn't really write code\nlike that and thus comparing on the basis of convenience doesn't quite seem\nright.\n\nRather than convenience, I think performance and simplicity are better\narguments. If you're going to execute a read and then a memory barrier, it's\ngoing to be faster to just do a single atomic operation. And it's a bit\nsimpler to analyze on which \"side\" of the read/write the barrier is needed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Feb 2024 17:30:26 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-23 14:58:12 -0600, Nathan Bossart wrote:\n> +/*\n> + * pg_atomic_write_membarrier_u32 - write with barrier semantics.\n> + *\n> + * The write is guaranteed to succeed as a whole, i.e., it's not possible to\n> + * observe a partial write for any reader. Note that this correctly interacts\n> + * with both pg_atomic_compare_exchange_u32() and\n> + * pg_atomic_read_membarrier_u32(). While this may be less performant than\n> + * pg_atomic_write_u32() and pg_atomic_unlocked_write_u32(), it may be easier\n> + * to reason about correctness with this function in less performance-sensitive\n> + * code.\n> + *\n> + * Full barrier semantics.\n> + */\n\nThe callout to pg_atomic_unlocked_write_u32() is wrong. The reason to use\npg_atomic_unlocked_write_u32() is for variables where we do not ever want to\nfall back to spinlocks/semaphores, because the underlying variable isn't\nactually shared. In those cases using the other variants is a bug. The only\nuse of pg_atomic_unlocked_write_u32() is temp-table buffers which share the\ndata structure with the shared buffers case.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Feb 2024 17:34:49 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 05:34:49PM -0800, Andres Freund wrote:\n> On 2024-02-23 14:58:12 -0600, Nathan Bossart wrote:\n>> +/*\n>> + * pg_atomic_write_membarrier_u32 - write with barrier semantics.\n>> + *\n>> + * The write is guaranteed to succeed as a whole, i.e., it's not possible to\n>> + * observe a partial write for any reader. Note that this correctly interacts\n>> + * with both pg_atomic_compare_exchange_u32() and\n>> + * pg_atomic_read_membarrier_u32(). While this may be less performant than\n>> + * pg_atomic_write_u32() and pg_atomic_unlocked_write_u32(), it may be easier\n>> + * to reason about correctness with this function in less performance-sensitive\n>> + * code.\n>> + *\n>> + * Full barrier semantics.\n>> + */\n> \n> The callout to pg_atomic_unlocked_write_u32() is wrong. The reason to use\n> pg_atomic_unlocked_write_u32() is for variables where we do not ever want to\n> fall back to spinlocks/semaphores, because the underlying variable isn't\n> actually shared. In those cases using the other variants is a bug. The only\n> use of pg_atomic_unlocked_write_u32() is temp-table buffers which share the\n> data structure with the shared buffers case.\n\nI removed the reference to pg_atomic_unlocked_write_u32().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 24 Feb 2024 09:27:34 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "Committed. Thank you for reviewing!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Feb 2024 10:24:57 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locked reads for atomics"
}
] |
[
{
"msg_contents": ">I wonder if it's worth providing a set of \"locked read\" functions.\n\nMost out-of-order machines include “read acquire” and “write release” which are pretty close to what you’re suggesting. With the current routines, we only have “read relaxed” and “write relaxed”. I think implementing acquire/release semantics is a very good idea,\n\nI would also like to clarify the properties of atomics. One very important question: Are atomics also volatile? If so, the compiler has very limited ability to move them around. If not, it is difficult to tell when or where they will take place unless the surrounding code is peppered with barriers.",
"msg_date": "Fri, 10 Nov 2023 21:49:06 +0000",
"msg_from": "John Morris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 09:49:06PM +0000, John Morris wrote:\n> Most out-of-order machines include “read acquire” and “write release”\n> which are pretty close to what you’re suggesting. With the current\n> routines, we only have “read relaxed” and “write relaxed”. I think\n> implementing acquire/release semantics is a very good idea,\n\nWe do have both pg_atomic_write_u32() and pg_atomic_unlocked_write_u32()\n(see commit b0779ab), but AFAICT those only differ in the fallback/spinlock\nimplementations. I suppose there could be an unlocked 64-bit write on\nplatforms that have 8-byte single-copy atomicity but still need to use the\nfallback/spinlock implementation for some reason, but that might be a bit\nof a stretch, and the use-cases might be few and far between...\n\n> I would also like to clarify the properties of atomics. One very\n> important question: Are atomics also volatile?\n\nThe PostgreSQL atomics support appears to ensure they are volatile.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 10 Nov 2023 16:55:22 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locked reads for atomics"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-10 21:49:06 +0000, John Morris wrote:\n> >I wonder if it's worth providing a set of \"locked read\" functions.\n> \n> Most out-of-order machines include “read acquire” and “write release” which\n> are pretty close to what you’re suggesting.\n\nIs that really true? It's IA64 lingo. X86 doesn't have them, while arm has\nmore granular barriers, they don't neatly map onto acquire/release either.\n\nI don't think an acquire here would actually be equivalent to making this a\nfull barrier - an acquire barrier allows moving reads or stores from *before*\nthe barrier to be moved after the barrier. It just prevents the opposite.\n\n\nAnd for proper use of acquire/release semantics we'd need to pair operations\nmuch more closely. Right now we often rely on another preceding memory barrier\nto ensure correct ordering, having to use paired operations everywhere would\nlead to slower code.\n\n\nI thoroughly dislike how strongly C++11/C11 prefer paired atomics *on the same\naddress* over \"global\" fences. It often leads to substantially slower\ncode. And they don't at all map neatly on hardware, where largely barrier\nsemantics are *not* tied to individual addresses. And the fence specification\nis just about unreadable (although I think they did fix some of the worst\nissues).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Nov 2023 15:22:12 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locked reads for atomics"
}
] |
[
{
"msg_contents": "An \"en_US\" user doing:\n\n CREATE TABLE foo(t TEXT PRIMARY KEY);\n\nis providing no indication that they want an index tailored to their\nlocale. Yet we are creating the index with the \"en_US\" collation and\ntherefore imposing huge performance costs (something like 2X slower\nindex build time than the \"C\" locale), and also huge dependency\nversioning risks that could lead to index corruption and/or wrong\nresults.\n\nSimilarly, a user doing:\n\n SELECT DISTINCT t FROM bar;\n\nis providing no indication that they care about the collation of \"t\"\n(we are free to choose a HashAgg which makes no ordering guarantee at\nall). Yet if we choose Sort+GroupAgg, the Sort will be performed in the\n\"en_US\" locale, which is something like 2X slower than the \"C\" locale.\n\nOne of the strongest arguments for using a non-C collation in these\ncases is the chance to use a non-deterministic collation, like a case-\ninsensitive one. But the database collation is always deterministic,\nand all deterministic collations have exactly the same definition of\nequality, so there's no reason not to use \"C\".\n\nAnother argument is that, if the column is the database collation and\nthe index is \"C\", then the index is unusable for text range scans, etc.\nBut there are two ways to solve that problem:\n\n 1. Set the column collation to \"C\"; or\n 2. Set the index collation to the database collation.\n\nRange scans are often most useful when the text is not actually natural\nlanguage, but instead is some kind of formatted text representing\nanother type of thing, often in ASCII. In that case, the range scan is\nreally some kind of prefix search or partitioning, and the \"C\" locale\nis probably the right thing to use, and #1 wins.\n\nGranted, there are reasons to want an index to have a particular\ncollation, in which case it makes sense to opt-in to #2. 
But in the\ncommon case, the high performance costs and dependency versioning risks\naren't worth it.\n\nThoughts?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 10 Nov 2023 16:03:16 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-10 16:03:16 -0800, Jeff Davis wrote:\n> An \"en_US\" user doing:\n>\n> CREATE TABLE foo(t TEXT PRIMARY KEY);\n>\n> is providing no indication that they want an index tailored to their\n> locale. Yet we are creating the index with the \"en_US\" collation and\n> therefore imposing huge performance costs (something like 2X slower\n> index build time than the \"C\" locale), and also huge dependency\n> versioning risks that could lead to index corruption and/or wrong\n> results.\n\nI guess you are arguing that the user didn't intend to create an index here? I\ndon't think that is true - users know that pkeys create indexes. If we used C\nhere, users would often need to create a second index on the same column using\nthe actual database collation - I think we'd very commonly end up with\ncomplaints that the pkey index doesn't work, because it's been automatically\ncreated with a different collation than the column.\n\nAlso, wouldn't the intent to use a different collation for the column be\nexpressed by changing the column's collation?\n\n\n> Similarly, a user doing:\n>\n> SELECT DISTINCT t FROM bar;\n>\n> is providing no indication that they care about the collation of \"t\"\n> (we are free to choose a HashAgg which makes no ordering guarantee at\n> all). Yet if we choose Sort+GroupAgg, the Sort will be performed in the\n> \"en_US\" locale, which is something like 2X slower than the \"C\" locale.\n\nOTOH, if we are choosing a groupagg, we might be able to implement that using\nan index, which is more likely to exist in the database's collation. Looks like\nwe even just look for indexes that are in the database collation.\n\nMight be worth teaching the planner additional smarts here.\n\n\n> Thoughts?\n\nI seriously doubt it's a good idea to change which collations primary keys use\nby default. 
But I think there's a decent bit of work we could do in the\nplanner, e.g:\n\n- Teach the planner to take collation costs into account for costing - right\n now index scans with \"C\" cost the same as index scans with more expensive\n collations. That seems wrong even for equality lookups and would make it\n hard to make improvements to prefer cheaper collations in other situations.\n\n- Teach the planner to use cheaper collations when ordering for reasons other\n than the user's direct request (e.g. DISTINCT/GROUP BY, merge joins).\n \n\nI think we should also explain in our docs that C can be considerably faster -\nI couldn't find anything in a quick look.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Nov 2023 17:19:43 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On Fri, 2023-11-10 at 17:19 -0800, Andres Freund wrote:\n> I guess you are arguing that the user didn't intend to create an\n> index here?\n\nNo, obviously the user should expect an index when a primary key is\ncreated. But that doesn't mean that it necessarily needs to be ordered\naccording to the database collation.\n\nUnfortunately, right now the planner doesn't understand that an index\nin the \"C\" locale can satisfy equality searches and constraint\nenforcement for \"en_US\" (or any other deterministic collation). That's\nprobably the first thing to fix.\n\nInequalities and ORDER BYs can't benefit from an index with a different\ncollation, but lots of indexes don't need that.\n\n> Also, wouldn't the intent to use a different collation for the column\n> be\n> expressed by changing the column's collation?\n\nThe column collation expresses the semantics of that column. If the\nuser has a database collation of \"en_US\", they should expect ORDER BY\non that column to be according to that locale unless otherwise\nspecified.\n\nThat doesn't imply that indexes must have a matching collation. In fact\nwe already allow the column and index collations to differ, it just\ndoesn't work as well as it should.\n\n> \n> OTOH, if we are choosing a groupagg, we might be able to implement\n> that using\n> an index, which is more likey to exist in the databases collation. \n> Looks like\n> we even just look for indexes that are in the database collation.\n> \n> Might be worth teaching the planner additional smarts here.\n\nYeah, we don't need to force anything, we could just create a few paths\nwith appropriate path key information and cost them.\n\n> \n> - Teach the planner to take collation costs into account for costing\n\n+1. 
I noticed that GroupAgg+Sort is often in the same ballpark as\nHashAgg in runtime when the collation is \"C\", but HashAgg is way faster\nwhen the collation is something else.\n\n> - Teach the planner to use cheaper collations when ordering for\n> reasons other\n> than the user's direct request (e.g. DISTINCT/GROUP BY, merge\n> joins).\n\n+1. Where \"cheaper\" comes from is an interesting question -- is it a\nproperty of the provider or the specific collation? Or do we just call\n\"C\" special?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 11 Nov 2023 23:19:55 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On 11.11.23 01:03, Jeff Davis wrote:\n> But the database collation is always deterministic,\n\nSo far!\n\n\n",
"msg_date": "Mon, 13 Nov 2023 13:43:04 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On Mon, 2023-11-13 at 13:43 +0100, Peter Eisentraut wrote:\n> On 11.11.23 01:03, Jeff Davis wrote:\n> > But the database collation is always deterministic,\n> \n> So far!\n\nYeah, if we did that, clearly the index collation would need to match\nthat of the database to be useful. What are the main challenges in\nallowing non-deterministic collations at the database level?\n\nIf someone opts into a collation (and surely a non-deterministic\ncollation would be opt-in), then I think it makes sense that they\naccept some performance costs and dependency versioning risks for the\nfunctionality.\n\nMy point still stands that all deterministic collations are, at least\nfor equality, identical.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 13 Nov 2023 08:49:59 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
        "msg_contents": "Hi,\n\nOn 2023-11-11 23:19:55 -0800, Jeff Davis wrote:\n> On Fri, 2023-11-10 at 17:19 -0800, Andres Freund wrote:\n> > I guess you are arguing that the user didn't intend to create an\n> > index here?\n>\n> No, obviously the user should expect an index when a primary key is\n> created. But that doesn't mean that it necessarily needs to be ordered\n> according to the database collation.\n>\n> Unfortunately, right now the planner doesn't understand that an index\n> in the \"C\" locale can satisfy equality searches and constraint\n> enforcement for \"en_US\" (or any other deterministic collation). That's\n> probably the first thing to fix.\n>\n> Inequalities and ORDER BYs can't benefit from an index with a different\n> collation, but lots of indexes don't need that.\n\nBut we don't know whether the index is used for that. If we just change the\nbehaviour, there will be lots of pain around upgrades, because queries will\ncontinue to work but be dog slow.\n\n\n> > Also, wouldn't the intent to use a different collation for the column\n> > be\n> > expressed by changing the column's collation?\n>\n> The column collation expresses the semantics of that column. If the\n> user has a database collation of \"en_US\", they should expect ORDER BY\n> on that column to be according to that locale unless otherwise\n> specified.\n\nThat makes no sense to me. Either the user cares about ordering, in which case\nthe index needs to be in that ordering for efficient ORDER BY, or they don't,\nin which case neither index nor column needs a non-C collation. 
You partially\npremised your argument on the content of primary keys typically making non-C\ncollations undesirable!\n\n\n> > OTOH, if we are choosing a groupagg, we might be able to implement\n> > that using\n> > an index, which is more likely to exist in the database's collation.\n> > Looks like\n> > we even just look for indexes that are in the database collation.\n> >\n> > Might be worth teaching the planner additional smarts here.\n>\n> Yeah, we don't need to force anything, we could just create a few paths\n> with appropriate path key information and cost them.\n\nI'm not sure it's quite that easy. One issue is obviously that this could lead\nto a huge increase in paths we need to keep around due to differing path\nkeys. We might need to be a bit more aggressive about pruning such paths than\nI think we would be today.\n\n\n> > - Teach the planner to use cheaper collations when ordering for\n> > reasons other\n> > than the user's direct request (e.g. DISTINCT/GROUP BY, merge\n> > joins).\n>\n> +1. Where \"cheaper\" comes from is an interesting question -- is it a\n> property of the provider or the specific collation? Or do we just call\n> \"C\" special?\n\nI'd think the specific collation. Even if we initially perhaps just get the\ndefault cost from the provider such, it structurally seems the sanest place to\nlocate the cost.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 10:02:47 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "\n\nOn 11/13/23 19:02, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-11 23:19:55 -0800, Jeff Davis wrote:\n>> On Fri, 2023-11-10 at 17:19 -0800, Andres Freund wrote:\n>>> I guess you are arguing that the user didn't intend to create an\n>>> index here?\n>>\n>> No, obviously the user should expect an index when a primary key is\n>> created. But that doesn't mean that it necessarily needs to be ordered\n>> according to the database collation.\n>>\n>> Unfortunately, right now the planner doesn't understand that an index\n>> in the \"C\" locale can satisfy equality searches and constraint\n>> enforcement for \"en_US\" (or any other deterministic collation). That's\n>> probably the first thing to fix.\n>>\n>> Inequalities and ORDER BYs can't benefit from an index with a different\n>> collation, but lots of indexes don't need that.\n> \n> But we don't know whether the index is used for that. If we just change the\n> behaviour, there will be lots of pain around upgrades, because queries will\n> continue to work but be dog slow.\n> \n\nYeah. I don't quite agree with the initial argument that not specifying\nthe collation explicitly in CREATE TABLE or a query means the user does\nnot care about the collation. We do have the sensible behavior that if\nyou don't specify a collation, you get the database one as a default.\n\nI don't think we can just arbitrarily override the default because we\nhappen to think \"C\" is going to be faster. If we could prove that using\n\"C\" is going to produce exactly the same results as for the implicit\ncollation (for a given operation), then we can simply do that. Not sure\nif such proof is possible, though.\n\nFor example, I don't see how we could arbitrarily override the collation\nfor indexes backing primary keys, because how would you know the user\nwill never do a sort on it? 
Not that uncommon with natural primary keys,\nI think (not a great practice, but people do that).\n\nPerhaps we could allow the PK index to have a different collation, say\nby supporting something like this:\n\n ALTER TABLE distributors ADD PRIMARY KEY (dist_id COLLATE \"C\");\n\nAnd then the planner would just pick the right index, I think.\n\n> \n>>> Also, wouldn't the intent to use a different collation for the column\n>>> be\n>>> expressed by changing the column's collation?\n>>\n>> The column collation expresses the semantics of that column. If the\n>> user has a database collation of \"en_US\", they should expect ORDER BY\n>> on that column to be according to that locale unless otherwise\n>> specified.\n> \n> That makes no sense to me. Either the user cares about ordering, in which case\n> the index needs to be in that ordering for efficient ORDER BY, or they don't,\n> in which case neither index nor column needs a non-C collation. You partially\n> premised your argument on the content of primary keys typically making non-C\n> collations undesirable!\n> \n\nI may be missing something, but what's the disagreement here? If the\nuser cares about ordering, they'll specify ORDER BY with either an\nexplicit or the default collation. If the index collation matches, it\nmay be useful for the ordering.\n\nOf course, if we feel entitled to create the primary key index with a\ncollation of our choosing, that'd make this unpredictable.\n\n> \n>>> OTOH, if we are choosing a groupagg, we might be able to implement\n>>> that using\n>>> an index, which is more likely to exist in the database's collation.\n>>> Looks like\n>>> we even just look for indexes that are in the database collation.\n>>>\n>>> Might be worth teaching the planner additional smarts here.\n>>\n>> Yeah, we don't need to force anything, we could just create a few paths\n>> with appropriate path key information and cost them.\n> \n> I'm not sure it's quite that easy. 
One issue is obviously that this could lead\n> to a huge increase in paths we need to keep around due to differing path\n> keys. We might need to be a bit more aggressive about pruning such paths than\n> I think we would be today.\n> \n\nRight. There's also the challenge that we determine \"interesting\npathkeys\" very early, and I'm not sure if we can decide which pathkeys\n(for different collations) are cheaper at that point.\n\n> \n>>> - Teach the planner to use cheaper collations when ordering for\n>>> reasons other\n>>> than the user's direct request (e.g. DISTINCT/GROUP BY, merge\n>>> joins).\n>>\n>> +1. Where \"cheaper\" comes from is an interesting question -- is it a\n>> property of the provider or the specific collation? Or do we just call\n>> \"C\" special?\n> \n> I'd think the specific collation. Even if we initially perhaps just get the\n> default cost from the provider such, it structurally seems the sanest place to\n> locate the cost.\n> \n\nISTM it's about how complex the rules implemented by the collation are,\nso I agree the cost should be a feature of collations not providers.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 Nov 2023 22:36:24 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-13 22:36:24 +0100, Tomas Vondra wrote:\n> I don't think we can just arbitrarily override the default because we\n> happen to think \"C\" is going to be faster. If we could prove that using\n> \"C\" is going to produce exactly the same results as for the implicit\n> collation (for a given operation), then we can simply do that. Not sure\n> if such proof is possible, though.\n\nYea, I don't know if there's any interesting cases where we could prove that.\n\nI think there *are* interesting cases where we should prove that non-C\ncollations are identical. It's imo bonkers that we consider the\n\"collname.encname\" collations distinct from the equivalent \"collname\"\ncollation.\n\nWe document that we consider \"collname\" equivalent to\n \"collname.<database encoding>\":\n\n> Within any particular database, only collations that use that database's\n> encoding are of interest. Other entries in pg_collation are ignored. Thus, a\n> stripped collation name such as de_DE can be considered unique within a\n> given database even though it would not be unique globally. Use of the\n> stripped collation names is recommended, since it will make one fewer thing\n> you need to change if you decide to change to another database\n> encoding. Note however that the default, C, and POSIX collations can be used\n> regardless of the database encoding.\n\nFollowed by:\n\n\n> PostgreSQL considers distinct collation objects to be incompatible even when\n> they have identical properties. Thus for example, [...] Mixing stripped and\n> non-stripped collation names is therefore not recommended.\n\nWhy on earth are we solving this by having multiple pg_collation entries for\nexactly the same collation, instead of normalizing the collation-name during\nlookup by adding the relevant encoding name if not explicitly specified? 
It\nmakes a lot of sense to not force the user to specify the encoding when it\ncan't differ.\n\n\nIt's imo similarly absurd that an index with \"default\" collation cannot be\nused when specifying the equivalent collation explicitly in the query and vice\nversa.\n\n\n\n\n> >>> Also, wouldn't the intent to use a different collation for the column\n> >>> be\n> >>> expressed by changing the column's collation?\n> >>\n> >> The column collation expresses the semantics of that column. If the\n> >> user has a database collation of \"en_US\", they should expect ORDER BY\n> >> on that column to be according to that locale unless otherwise\n> >> specified.\n> >\n> > That makes no sense to me. Either the user cares about ordering, in which case\n> > the index needs to be in that ordering for efficient ORDER BY, or they don't,\n> > in which case neither index nor column needs a non-C collation. You partially\n> > premised your argument on the content of primary keys typically making non-C\n> > collations undesirable!\n> >\n>\n> I may be missing something, but what's the disagreement here? If the\n> user cares about ordering, they'll specify ORDER BY with either an\n> explicit or the default collation. If the index collation matches, it\n> may be useful for the ordering.\n>\n> Of course, if we feel entitled to create the primary key index with a\n> collation of our choosing, that'd make this unpredictable.\n\nJeff was saying that textual primary keys typically don't need sorting and\nbecause of that we could default to \"C\", for performance. Part of my response\nwas that I think the user's intent could be expressed by specifying the column\ncollation as \"C\" - to which Jeff replied that that would change the\nsemantics. Which, to me, seems to completely run counter to his argument that\nwe could just use \"C\" for such indexes.\n\n\n\n> >>> - Teach the planner to use cheaper collations when ordering for\n> >>> reasons other\n> >>> than the user's direct request (e.g. 
DISTINCT/GROUP BY, merge\n> >>> joins).\n> >>\n> >> +1. Where \"cheaper\" comes from is an interesting question -- is it a\n> >> property of the provider or the specific collation? Or do we just call\n> >> \"C\" special?\n> >\n> > I'd think the specific collation. Even if we initially perhaps just get the\n> > default cost from the provider such, it structurally seems the sanest place to\n> > locate the cost.\n> >\n>\n> ISTM it's about how complex the rules implemented by the collation are,\n> so I agree the cost should be a feature of collations not providers.\n\nI'm not sure analysing the complexity in detail is worth it. ISTM there's a\nfew \"levels\" of costliness:\n\n1) memcmp() suffices\n2) can safely use strxfrm() (i.e. ICU), possibly limited to when we sort\n3) deterministic collations\n4) non-deterministic collations\n\nI'm sure there are gradations, particularly within 3), but I'm not sure it's\nrealistic / worthwhile to go to that detail. I think a cost model like the\nabove would provide enough detail to make better decisions than today...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 14:12:12 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On 11/13/23 23:12, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-13 22:36:24 +0100, Tomas Vondra wrote:\n>> I don't think we can just arbitrarily override the default because we\n>> happen to think \"C\" is going to be faster. If we could prove that using\n>> \"C\" is going to produce exactly the same results as for the implicit\n>> collation (for a given operation), then we can simply do that. Not sure\n>> if such proof is possible, though.\n> \n> Yea, I don't know if there's any interesting cases where we could prove that.\n> \n> I think there *are* interesting cases where we should prove that non-C\n> collations are identical. It's imo bonkers that we consider the\n> \"collname.encname\" collations distinct from the equivalent \"collname\"\n> collation.\n> \n\nYeah, I agree that seems a bit ... strange.\n\n> We document that we consider \"collname\" equivalent to\n> \"collname.<database encoding>\":\n> \n>> Within any particular database, only collations that use that database's\n>> encoding are of interest. Other entries in pg_collation are ignored. Thus, a\n>> stripped collation name such as de_DE can be considered unique within a\n>> given database even though it would not be unique globally. Use of the\n>> stripped collation names is recommended, since it will make one fewer thing\n>> you need to change if you decide to change to another database\n>> encoding. Note however that the default, C, and POSIX collations can be used\n>> regardless of the database encoding.\n> \n> Followed by:\n> \n> \n>> PostgreSQL considers distinct collation objects to be incompatible even when\n>> they have identical properties. Thus for example, [...] 
Mixing stripped and\n>> non-stripped collation names is therefore not recommended.\n> \n> Why on earth are we solving this by having multiple pg_collation entries for\n> exactly the same collation, instead of normalizing the collation-name during\n> lookup by adding the relevant encoding name if not explicitly specified? It\n> makes a lot of sense to not force the user to specify the encoding when it\n> can't differ.\n> \n\nTrue, insisting on having multiple separate entries for the same\ncollation (and not recognizing which collations are the same) seems\nsomewhat inconvenient.\n\n> \n> It's imo similarly absurd that an index with \"default\" collation cannot be\n> used when specifying the equivalent collation explicitly in the query and vice\n> versa.\n> \n\nRight. Having to spell\n\n COLLATE \"default\"\n\nand not the actual collation it refers to is weird. Similarly, I\njust realized that the collation names in pg_database and pg_collation\nare not quite consistent. Consider this:\n\nselect datcollate from pg_database where datname = 'test';\n\n datcollate\n------------\n C.UTF-8\n(1 row)\n\nbut then\n\n test=# select * from t where c = 'x' collate \"C.UTF-8\";\n ERROR: collation \"C.UTF-8\" for encoding \"UTF8\" does not exist\n LINE 1: select * from t where c = 'x' collate \"C.UTF-8\";\n\nbecause the collation is actually known as C.utf8.\n\n\n> \n> \n> \n>>>>> Also, wouldn't the intent to use a different collation for the column\n>>>>> be\n>>>>> expressed by changing the column's collation?\n>>>>\n>>>> The column collation expresses the semantics of that column. If the\n>>>> user has a database collation of \"en_US\", they should expect ORDER BY\n>>>> on that column to be according to that locale unless otherwise\n>>>> specified.\n>>>\n>>> That makes no sense to me. 
Either the user cares about ordering, in which case\n>>> the index needs to be in that ordering for efficient ORDER BY, or they don't,\n>>> in which case neither index nor column needs a non-C collation. You partially\n>>> premised your argument on the content of primary keys typically making non-C\n>>> collations undesirable!\n>>>\n>>\n>> I may be missing something, but what's the disagreement here? If the\n>> user cares about ordering, they'll specify ORDER BY with either an\n>> explicit or the default collation. If the index collation matches, it\n>> may be useful for the ordering.\n>>\n>> Of course, if we feel entitled to create the primary key index with a\n>> collation of our choosing, that'd make this unpredictable.\n> \n> Jeff was saying that textual primary keys typically don't need sorting and\n> because of that we could default to \"C\", for performance. Part of my response\n> was that I think the user's intent could be expressed by specifying the column\n> collation as \"C\" - to which Jeff replied that that would change the\n> semantics. Which, to me, seems to completely run counter to his argument that\n> we could just use \"C\" for such indexes.\n> \n\nTrue. I think that's a somewhat self-contradictory argument.\n\nIt's not clear to me if the argument is meant to apply to indexes on all\ncolumns or just those backing primary keys, but I guess it's the latter.\nBut that (forcing users to specify collation for PK columns, while using\nthe default for non-PK columns) seems like a recipe for subtle bugs in\napplications.\n\n> \n> \n>>>>> - Teach the planner to use cheaper collations when ordering for\n>>>>> reasons other\n>>>>> than the user's direct request (e.g. DISTINCT/GROUP BY, merge\n>>>>> joins).\n>>>>\n>>>> +1. Where \"cheaper\" comes from is an interesting question -- is it a\n>>>> property of the provider or the specific collation? Or do we just call\n>>>> \"C\" special?\n>>>\n>>> I'd think the specific collation. 
Even if we initially perhaps just get the\n>>> default cost from the provider such, it structurally seems the sanest place to\n>>> locate the cost.\n>>>\n>>\n>> ISTM it's about how complex the rules implemented by the collation are,\n>> so I agree the cost should be a feature of collations not providers.\n> \n> I'm not sure analysing the complexity in detail is worth it. ISTM there's a\n> few \"levels\" of costliness:\n> \n> 1) memcmp() suffices\n> 2) can safely use strxfrm() (i.e. ICU), possibly limited to when we sort\n> 3) deterministic collations\n> 4) non-deterministic collations\n> \n> I'm sure there are gradations, particularly within 3), but I'm not sure it's\n> realistic / worthwhile to go to that detail. I think a cost model like the\n> above would provide enough detail to make better decisions than today...\n> \n\nI'm not saying we have to analyze the complexity of the rules. I was\nsimply agreeing with you that the \"cost\" should be associated with\nindividual collations, not the providers.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 14 Nov 2023 00:02:13 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
        "msg_contents": "Hi,\n\nOn 2023-11-14 00:02:13 +0100, Tomas Vondra wrote:\n> On 11/13/23 19:02, Andres Freund wrote:\n> > On 2023-11-13 22:36:24 +0100, Tomas Vondra wrote:\n> >> ISTM it's about how complex the rules implemented by the collation are,\n> >> so I agree the cost should be a feature of collations not providers.\n> > \n> > I'm not sure analysing the complexity in detail is worth it. ISTM there's a\n> > few \"levels\" of costliness:\n> > \n> > 1) memcmp() suffices\n> > 2) can safely use strxfrm() (i.e. ICU), possibly limited to when we sort\n> > 3) deterministic collations\n> > 4) non-deterministic collations\n> > \n> > I'm sure there are gradations, particularly within 3), but I'm not sure it's\n> > realistic / worthwhile to go to that detail. I think a cost model like the\n> > above would provide enough detail to make better decisions than today...\n> > \n> \n> I'm not saying we have to analyze the complexity of the rules. I was\n> simply agreeing with you that the \"cost\" should be associated with\n> individual collations, not the providers.\n\nJust to be clear, I didn't intend to contradict you or anything - I was just\noutlining my initial thoughts of how we could model the costs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 15:38:05 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
        "msg_contents": "On Mon, 2023-11-13 at 10:02 -0800, Andres Freund wrote:\n> > Inequalities and ORDER BYs can't benefit from an index with a\n> > different\n> > collation, but lots of indexes don't need that.\n> \n> But we don't know whether the index is used for that.\n\nThat will be hard to quantify, but perhaps we can discuss in terms of\nthe conditions that must be satisfied for pathkeys provided by an index\nto be useful (the following is a bit fuzzy, let me know if there's\nsomething major I'm missing):\n\n  a. There needs to be an ORDER BY or inequality somewhere in a query\nthat involves that text field.\n  b. The index must be correlated, or the data cached well enough, or\nthere must be a highly selective inequality.\n  c. For ORDER BY, the result size needs to be large enough for the\npathkeys to really matter vs just sorting at the top of the plan.\n  d. The pathkeys must be part of a winning plan (e.g. the winning plan\nmust keep the indexed field on the outer of a join, or use MergeJoin).\n\nIn my experience, considering a text index specifically: queries are\nless likely to use inequalities on text fields than a timestamp field;\nand indexes on text are less likely to be correlated than an index on a\ntimestamp field. That pushes down the probabilities of (a) and (b)\ncompared with timestamps. (Timestamps obviously don't have collation;\nI'm just using timestamps as a point of reference where index pathkeys\nare really useful.)\n\nI know the above are hard to quantify (and not statistically\nindependent), but I don't think we should take it for granted that\npathkeys on a text index are overwhelmingly useful. I would describe\nthem as \"sometimes useful\".\n\n> > \n> That makes no sense to me. Either the user cares about ordering, in\n> which case\n> the index needs to be in that ordering for efficient ORDER BY\n\nI disagree. 
The user may want top-level ORDER BYs on that field to\nreturn 'a' before 'Z', and that's a very reasonable expectation that\nrequires a non-\"C\" collation.\n\nBut that does not imply much about what order an index on that field\nshould be. The fact that an ORDER BY exists satisfies only condition\n(a) above. If the other conditions are not met, then the pathkeys\nprovided by the index are close to useless anyway.\n\nThe index itself might be useful for other reasons though, like\nconstraints or equality lookups. But indexes for those purposes don't\nneed to provide pathkeys.\n\n> You partially\n> premised your argument on the content of primary keys typically\n> making non-C\n> collations undesirable!\n\nPrimary keys require (in a practical sense) an index to be created,\nand that index should be useful for other purposes, too.\n\nEquality lookups are clearly required to implement a primary key, so of\ncourse the index should be useful for any other equality lookups as\nwell, because that has zero cost.\n\nBut \"useful for other purposes\" is not a blank check. Providing useful\npathkeys offers some marginal utility (assuming the conditions (a)-(d)\nare satisfied), but also has a marginal cost (build time and versioning\nrisks). For typical cases I believe Postgres is on the wrong side of\nthat trade; that's all I'm saying.\n\n> I'm not sure it's quite that easy. One issue is obviously that this\n> could lead\n> to a huge increase in paths we need to keep \n\nIf there's a particularly bad case you have in mind, please let me\nknow. Otherwise we can sort the details out when it comes to a patch.\n\n> > \n> I'd think the specific collation. Even if we initially perhaps just\n> get the\n> default cost from the provider such, it structurally seems the sanest\n> place to\n> locate the cost.\n\nMakes sense, though I'm thinking we'd still want to special case the\nfastest collation as \"C\".\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 13 Nov 2023 15:55:59 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
        "msg_contents": "On Mon, 2023-11-13 at 22:36 +0100, Tomas Vondra wrote:\n> Yeah. I don't quite agree with the initial argument that not\n> specifying\n> the collation explicitly in CREATE TABLE or a query means the user\n> does\n> not care about the collation.\n\nI didn't argue that the user doesn't care about collation -- we need to\nhonor the collation semantics of the column. And a column with\nunspecified collation must be the database collation (otherwise what\nwould the database collation mean?). But the index's collation is an\nimplementation detail that is not necessary to provide the requested\nsemantics.\n\nI'm arguing that pathkeys are often not even useful for providing the\nrequested semantics, so why should the user have the pain of poor\nperformance and versioning risks for every text index in their system?\nIf the user just wants PK/FK constraints, and equality lookups, then an\nindex with the \"C\" collation makes a lot of sense to serve those\npurposes.\n\n> For example, I don't see how we could arbitrarily override the\n> collation\n> for indexes backing primary keys, because how would you know the user\n> will never do a sort on it?\n\nThe column collation and index collation are tracked separately in the\ncatalog. The column collation cannot be overridden because it's\nsemantically significant, but there are at least some degrees of freedom\nwe have with the index collation.\n\nI don't think we can completely change the default index collation to\nbe \"C\", but perhaps there could be a database-level option to do so,\nand that would have no effect on semantics at all. 
If the user notices\nsome queries that could benefit from an index with a non-\"C\" collation,\nthey can add/replace an index as they see fit.\n\n> Not that uncommon with natural primary keys,\n> I think (not a great practice, but people do that).\n\nNatural keys often have an uncorrelated index, and if the index is not\ncorrelated, it's often not useful for ORDER BY.\n\nWhen I actually think about schemas and plans I've seen in the wild, I\nstruggle to think of many cases that would really benefit from an index\nin a non-\"C\" collation. The best cases I can think of are where it's\ndoing some kind of prefix search. That's not rare, but it's also not so\ncommon that I'd like to risk index corruption on every index in the\nsystem by default in case a prefix search is performed.\n\n> Perhaps we could allow the PK index to have a different collation,\n> say\n> by supporting something like this:\n> \n>   ALTER TABLE distributors ADD PRIMARY KEY (dist_id COLLATE \"C\");\n\nYes, I'd like something like that to be supported. We'd have to check\nthat, if the collations are different, both are deterministic.\n\n> And then the planner would just pick the right index, I think.\n\nRight now the planner doesn't seem to understand that an index in the\n\"C\" collation works just fine for answering equality queries. That\nshould be fixed.\n\n> If the\n> user cares about ordering, they'll specify ORDER BY with either an\n> explicit or the default collation. If the index collation matches, it\n> may be useful for the ordering.\n\nExactly.\n\n> Of course, if we feel entitled to create the primary key index with a\n> collation of our choosing, that'd make this unpredictable.\n\nI wouldn't describe it as \"unpredictable\". We'd have some defined way\nof defaulting the collation of an index which might be affected by a\ndatabase option or something. In any case, it would be visible with \\d.\n\nRegards,\n\tJeff Davis\n\n> \n\n\n",
"msg_date": "Mon, 13 Nov 2023 17:58:54 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On Mon, 2023-11-13 at 22:36 +0100, Tomas Vondra wrote:\n> Perhaps we could allow the PK index to have a different collation, say\n> by supporting something like this:\n> \n> ALTER TABLE distributors ADD PRIMARY KEY (dist_id COLLATE \"C\");\n\nAn appealing idea! While at it, we could add an INCLUDE clause...\n\nThe risk here would be extending standard syntax in a way that might\npossibly conflict with future changes to the standard.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 14 Nov 2023 05:48:38 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "\n\nOn 11/14/23 02:58, Jeff Davis wrote:\n> On Mon, 2023-11-13 at 22:36 +0100, Tomas Vondra wrote:\n>> Yeah. I don't quite agree with the initial argument that not\n>> specifying\n>> the collation explicitly in CREATE TABLE or a query means the user\n>> does\n>> not care about the collation.\n> \n> I didn't argue that the user doesn't care about collation -- we need to\n> honor the collation semantics of the column. And a column with\n> unspecified collation must be the database collation (otherwise what\n> would the database collation mean?). But the index's collation is an\n> implementation detail that is not necessary to provide the requested\n> semantics.\n> \n> I'm arguing that pathkeys are often not even useful for providing the\n> requested semantics, so why should the user have the pain of poor\n> performance and versioning risks for every text index in their system?\n> If the user just wants PK/FK constraints, and equality lookups, then an\n> index with the \"C\" collation makes a lot of sense to serve those\n> purposes.\n> \n\nThanks for the clarification. I agree index's collation can be seen as\nan implementation detail, as long as it produces the correct results\n(with respect to the column's collation). I'm somewhat skeptical about\ndoing this automatically, because the collations may be equivalent only\nfor some operations (and we don't know what the user will do).\n\nMy concern is we'll decide to alter the index collation, and then the\nuser will face the consequences. Presumably we'd no generate incorrect\nresults, but we'd not be able use an index, causing performance issues.\n\nAFAICS this is a trade-off between known benefits (faster equality\nsearches, which are common for PK columns) vs. unknown downsides\n(performance penalty for operations with unknown frequency).\n\nNot sure it's a decision we can make automatically. 
But it's mostly\nachievable manually, if the user specifies COLLATE \"C\" for the column.\nYou're right that changes the semantics of the column, but if the user\nonly does equality searches, that shouldn't be an issue. And if an\nordering is needed after all, it's possible to specify the collation in\nthe ORDER BY clause.\n\nI realize you propose to do this automatically for everyone, because few\npeople will realize how much faster this can be. But maybe there's a way\nto make this manual approach more convenient? Say, by allowing the PK to\nhave a different collation (which I don't think is allowed now).\n\nFWIW I wonder what the impact of doing this automatically would be in\npractice. I mean, in my experience the number of tables with TEXT (or\ntypes sensitive to collations) primary keys is fairly low, especially\nfor tables of non-trivial size (where the performance impact might be\nmeasurable).\n\n>> For example, I don't see how we could arbitrarily override the\n>> collation\n>> for indexes backing primary keys, because how would you know the user\n>> will never do a sort on it?\n> \n> The column collation and index collation are tracked separately in the\n> catalog. The column collation cannot be overridden because it's\n> semantically significant, but there are at least some degrees of freedom\n> we have with the index collation.\n> \n> I don't think we can completely change the default index collation to\n> be \"C\", but perhaps there could be a database-level option to do so,\n> and that would have no effect on semantics at all. If the user notices\n> some queries that could benefit from an index with a non-\"C\" collation,\n> they can add/replace an index as they see fit.\n> \n\nTrue. 
What about trying to allow a separate collation for the PK\nconstraint (and the backing index)?\n\n>> Not that uncommon with natural primary keys,\n>> I think (not a great practice, but people do that).\n> \n> Natural keys often have an uncorrelated index, and if the index is not\n> correlated, it's often not useful ORDER BY.\n> \n> When I actually think about schemas and plans I've seen in the wild, I\n> struggle to think of many cases that would really benefit from an index\n> in a non-\"C\" collation. The best cases I can think of are where it's\n> doing some kind of prefix search. That's not rare, but it's also not so\n> common that I'd like to risk index corruption on every index in the\n> system by default in case a prefix search is performed.\n> \n\nOK. I personally don't recall any case where I'd see a collation on PK\nindexes as a performance issue. Or maybe I just didn't realize it.\n\nBut speaking of natural keys, I recall a couple schemas with natural\nkeys in code/dimension tables, and it's not uncommon to cluster those\nslow-moving tables once in a while. I don't know if ORDER BY queries\nwere very common on those tables, though.\n\n>> Perhaps we could allow the PK index to have a different collation,\n>> say\n>> by supporting something like this:\n>>\n>> ALTER TABLE distributors ADD PRIMARY KEY (dist_id COLLATE \"C\");\n> \n> Yes, I'd like something like that to be supported. We'd have to check\n> that, if the collations are different, that both are deterministic.\n> \n\nOK, I think this answers my earlier question. 
Now that I think about\nthis, the one confusing thing with this syntax is that it seems to\nassign the collation to the constraint, but in reality we want the\nconstraint to be enforced with the column's collation and the\nalternative collation is for the index.\n\n>> And then the planner would just pick the right index, I think.\n> \n> Right now the planner doesn't seem to understand that an index in the\n> \"C\" collation works just fine for answering equality queries. That\n> should be fixed.\n> \n>> If the\n>> user cares about ordering, they'll specify ORDER BY with either an\n>> explicit or the default collation. If the index collation matches, it\n>> may be useful for the ordering.\n> \n> Exactly.\n> \n>> Of course, if we feel entitled to create the primary key index with a\n>> collation of our choosing, that'd make this unpredictable.\n> \n> I wouldn't describe it as \"unpredictable\". We'd have some defined way\n> of defaulting the collation of an index which might be affected by a\n> database option or something. In any case, it would be visible with \\d.\n> \n\nPerhaps \"unpredictable\" was not the right word. What I meant to express\nis that it happens in the background, possibly confusing for the user.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 14 Nov 2023 13:01:13 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On 13.11.23 17:49, Jeff Davis wrote:\n> On Mon, 2023-11-13 at 13:43 +0100, Peter Eisentraut wrote:\n>> On 11.11.23 01:03, Jeff Davis wrote:\n>>> But the database collation is always deterministic,\n>>\n>> So far!\n> \n> Yeah, if we did that, clearly the index collation would need to match\n> that of the database to be useful. What are the main challenges in\n> allowing non-deterministic collations at the database level?\n\nText pattern matching operations (LIKE, ~) don't work.\n\n\"Allowing\" here is the right word. We could enable it, and it would \nwork just fine, but because of the restriction I mentioned, the \nexperience would not be very pleasant.\n\n\n\n",
"msg_date": "Tue, 14 Nov 2023 17:05:35 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On 14.11.23 02:58, Jeff Davis wrote:\n> If the user just wants PK/FK constraints, and equality lookups, then an\n> index with the \"C\" collation makes a lot of sense to serve those\n> purposes.\n\nThe problem is that the user has no way to declare whether they just \nwant this. The default assumption is that you get a btree and that is \nuseful for range queries. If the user just wants equality lookups, they \ncould use a hash index. Hash indexes kind of work like what we \ndiscussed in another message: They use C collation semantics unless the \ncollation is declared nondeterministic. Of course, hash indexes don't \nsupport uniqueness, but maybe that could be fixed? And/or we could \nprovide some other syntax that say, I want a btree but I just want \nequality lookups?\n\n\n\n",
"msg_date": "Tue, 14 Nov 2023 17:15:49 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On Tue, 2023-11-14 at 17:15 +0100, Peter Eisentraut wrote:\n> On 14.11.23 02:58, Jeff Davis wrote:\n> > If the user just wants PK/FK constraints, and equality lookups,\n> > then an\n> > index with the \"C\" collation makes a lot of sense to serve those\n> > purposes.\n> \n> The problem is that the user has no way to declare whether they just \n> want this.\n\nWe should add a way to declare that a primary key should create an\nindex in a particular collation. We need to be careful not to interfere\nwith the SQL standard, but other than that, I think this is non-\ncontroversial.\n\n> The default assumption is that you get a btree and that is \n> useful for range queries.\n\nAs I've said elsewhere in this thread, I think the benefit of these\npathkeys are overstated, and the costs of providing those pathkeys with\nan index (performance and corruption risk) are understated.\n\nThat being said, obviously we don't want to make any sudden change to\nthe default behavior that would regress lots of users. But there's lots\nof stuff we can do that is not so radical.\n\n> If the user just wants equality lookups, they \n> could use a hash index.\n\nThat's a good point, and we should probably support hash indexes for\nprimary keys. But I don't see a reason to push users toward hash\nindexes if they aren't already inclined to use hash over btree. Btree\nindexes in the \"C\" collation work just fine if we fix a planner issue\nor two.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 14 Nov 2023 11:28:50 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Tue, 2023-11-14 at 17:15 +0100, Peter Eisentraut wrote:\n>> The problem is that the user has no way to declare whether they just \n>> want this.\n\n> We should add a way to declare that a primary key should create an\n> index in a particular collation.\n\nWhy should that ever be different from the column's own declared\ncollation?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Nov 2023 14:47:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On Tue, 2023-11-14 at 14:47 -0500, Tom Lane wrote:\n> Why should that ever be different from the column's own declared\n> collation?\n\nBecause an index with the \"C\" collation is more efficient in terms of\nbuilding/maintaining/searching the index, and it also doesn't carry\nrisks of corrupting your PK index when you upgrade libc or other\ndependency headaches.\n\nA \"C\" collation index is also perfectly capable of performing the\nduties of a PK index: equality means the exact same thing in every\ndeterministic collation, so it can enforce the same notion of\nuniqueness. It can also be used for ordinary equality lookups in the\nsame way, though currently our planner doesn't do that (I'll take a\nshot at fixing that).\n\nOf course such an index won't offer range scans or pathkeys useful for\nORDER BY on that text column. But as I've argued elsewhere in this\nthread, that's less useful than it may seem at first (text indexes are\noften uncorrelated). It seems valid to offer this as a trade-off that\nusers can make.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 14 Nov 2023 15:28:24 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On Wed, 15 Nov 2023 at 00:28, Jeff Davis <[email protected]> wrote:\n>\n> On Tue, 2023-11-14 at 14:47 -0500, Tom Lane wrote:\n> > Why should that ever be different from the column's own declared\n> > collation?\n>\n> Because an index with the \"C\" collation is more efficient in terms of\n> building/maintaining/searching the index, and it also doesn't carry\n> risks of corrupting your PK index when you upgrade libc or other\n> dependency headaches.\n\nThat doesn't really answer the question for me. Why would you have a\nprimary key that has different collation rules (which include equality\nrules) than the columns that this primary key contains? It is not\nunlikely that users are misinformed about the behaviour of the\ncollation they're creating, thus breaking any primary key or equality\nlookup that uses indexes auto-converted from that collation to the \"C\"\ncollation.\n\nIf the collation on my primary key's columns changes from one that is\ndeterministic to one that isn't, then my primary key surely has to be\nreindexed. If the collation of the underlying index was overwritten to\n'C' for performance, then that's a problem, right, as we wouldn't have\nthe expectation that the index is based on the columns' actual\ncollation's properties?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 15 Nov 2023 00:52:19 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On Wed, 2023-11-15 at 00:52 +0100, Matthias van de Meent wrote:\n> That doesn't really answer the question for me. Why would you have a\n> primary key that has different collation rules (which include\n> equality\n> rules)\n\nThe equality rules for all deterministic collations are the same: if\nthe bytes are identical, the values are considered equal; and if the\nbytes are not identical, the values are considered unequal.\n\nThat's the basis for this entire thread. The \"C\" collation provides the\nsame equality semantics as every other deterministic collation, but\nwith better performance and lower risk. (As long as you don't actually\nneed range scans or path keys from the index.)\n\nSee varstr_cmp() or varstrfastcmp_locale(). Those functions first check\nfor identical bytes and return 0 if so. If the bytes aren't equal, it\npasses it to the collation provider, but if the collation provider\nreturns 0, we do a final memcmp() to break the tie. You can also see\nthis in hashtext(), where for deterministic collations it just calls\nhash_any() on the bytes.\n\nNone of this works for non-deterministic collations (e.g. case\ninsensitive), but that would be easy to block where necessary.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 14 Nov 2023 16:13:49 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On Tue, 2023-11-14 at 13:01 +0100, Tomas Vondra wrote:\n\n> Presumably we'd no generate incorrect\n> results, but we'd not be able use an index, causing performance\n> issues.\n\nCouldn't use the index for its pathkeys or range scans, but could use\nit for equality.\n\n> AFAICS this is a trade-off between known benefits (faster equality\n> searches, which are common for PK columns) vs. unknown downsides\n> (performance penalty for operations with unknown frequency).\n\nDon't forget the dependency versioning risk: changes to the collation\nprovider library can corrupt your indexes. That affects even small\ntables where performance is not a concern.\n\n> Not sure it's a decision we can make automatically. But it's mostly\n> achievable manually, if the user specifies COLLATE \"C\" for the\n> column.\n\nChanging the column collation is the wrong place to do it, in my\nopinion. It conflates semantics with performance considerations and\ndependency risks.\n\nWe already have the catalog support for indexes with a different\ncollation:\n\n CREATE TABLE foo (t TEXT COLLATE \"en_US\");\n INSERT INTO foo SELECT g::TEXT FROM generate_series(1,1000000) g;\n CREATE INDEX foo_idx ON foo (t COLLATE \"C\");\n ANALYZE foo;\n\nThe problem is that:\n\n EXPLAIN SELECT * FROM foo WHERE t = '345678';\n\ndoesn't use the index. And also that there's no way to do it for a PK\nindex.\n\n> I realize you propose to do this automatically for everyone,\n\nI don't think I proposed that. Perhaps we nudge users in that direction\nover time as the utility becomes clear, but I'm not trying to push for\na sudden radical change.\n\nPerhaps many read $SUBJECT as a rhetorical question, but I really do\nwant to know if I am missing important and common use cases for indexes\nin a non-\"C\" collation.\n\n> But maybe there's a way\n> to make this manual approach more convenient? 
Say, by allowing the PK\n> to\n> have a different collation (which I don't think is allowed now).\n\nYeah, that should be fairly non-controversial.\n\n> FWIW I wonder what the impact of doing this automatically would be in\n> practice. I mean, in my experience the number of tables with TEXT (or\n> types sensitive to collations) primary keys is fairly low, especially\n> for tables of non-trivial size (where the performance impact might be\n> measurable).\n\nI think a lot more users would be helped than hurt. But in absolute\nnumbers, the latter group still represents a lot of regressions, so\nlet's not do anything radical.\n\n> > \n> True. What about trying to allow a separate collation for the PK\n> constraint (and the backing index)?\n\n+1.\n\n> > \n> OK. I personally don't recall any case where I'd see a collation on\n> PK\n> indexes as a performance issue. Or maybe I just didn't realize it.\n\nTry a simple experiment building an index with the \"C\" collation and\nthen try with a different locale on the same data. Numbers vary, but\nI've seen 1.5X to 4X on some simple generated data. Others have\nreported much worse numbers on some versions of glibc that are\nespecially slow with lots of non-latin characters.\n\n> But speaking of natural keys, I recall a couple schemas with natural\n> keys in code/dimension tables, and it's not uncommon to cluster those\n> slow-moving tables once in a while. I don't know if ORDER BY queries\n> were very common on those tables, though.\n\nYeah, not exactly a common case though.\n\n> > \n> OK, I think this answers my earlier question. Now that I think about\n> this, the one confusing thing with this syntax is that it seems to\n> assign the collation to the constraint, but in reality we want the\n> constraint to be enforced with the column's collation and the\n> alternative collation is for the index.\n\nYeah, let's be careful about that. It's still technically correct:\nuniqueness in either collation makes sense. 
But it could be confusing\nanyway.\n\n> > \nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 15 Nov 2023 11:09:11 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
},
{
"msg_contents": "On Mon, 2023-11-13 at 14:12 -0800, Andres Freund wrote:\n> Why on earth are we solving this by having multiple pg_collation\n> entries for\n> exactly the same collation, instead of normalizing the collation-name\n> during\n> lookup by adding the relevant encoding name if not explicitly\n> specified? It\n> makes a lot of sense to not force the user to specify the encoding\n> when it\n> can't differ.\n\nI'm not aware of it being a common practical problem, so perhaps lack\nof motivation. But you're right that it doesn't look very efficient.\n\nWe can even go deeper into ICU if we wanted to: lots of locales are\nactually aliases to a much smaller number of actual collators. And a\nlot are just aliases to the root locale. It's not trivial to reliably\ntell if two collators are identical, but in principle it should be\npossible: each collation is just a set of tailorings on top of the root\nlocale, so I suppose if those are equal it's the same collator, right?\n\n> It's imo similarly absurd that an index with \"default\" collation\n> cannot be\n> used when specifying the equivalent collation explicitly in the query\n> and vice\n> versa.\n\nThe catalog representation is not ideal to treat the database collation\nconsistently with other collations. It would be nice to fix that.\n\n> > > > \n> Jeff was saying that textual primary keys typically don't need\n> sorting and\n> because of that we could default to \"C\", for performance. Part of my\n> response\n> was that I think the user's intent could be expressed by specifying\n> the column\n> collation as \"C\" - to which Jeff replied that that would change the\n> semantics. Which, to me, seems to completely run counter to his\n> argument that\n> we could just use \"C\" for such indexes.\n\nI am saying we shouldn't prematurely optimize for the case of ORDER BY\non a text PK case by making a an index with a non-\"C\" collation, given\nthe costs and risks of non-\"C\" indexes. 
Particularly because, even if\nthere is an ORDER BY, there are several common reasons such an index\nwould not help anyway.\n\n> > > > \nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 17 Nov 2023 11:39:40 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do indexes and sorts use the database collation?"
}
] |
[
{
"msg_contents": "Hi,\n\nI was adding support for the new pg_stat_statements JIT deform_counter in PoWA\nwhen I realized that those were added after jit_generation_time in the\ndocumentation while they're actually at the end of the view. I'm adding Daniel\nin Cc as the committer of the original patch.\n\nIt looks like there was some will to put them earlier in the view too, but\nsince it would require some additional tests in the SRF they probably just\nended up at the end of the view while still being earlier in the struct.\nAnyway, it's not a big problem but all other fields are documented in the\ncorrect position so let's be consistent.\n\nTrivial patch attached.",
"msg_date": "Sat, 11 Nov 2023 17:26:47 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix documentation for pg_stat_statements JIT deform_counter"
},
{
"msg_contents": "> On 11 Nov 2023, at 10:26, Julien Rouhaud <[email protected]> wrote:\n\n> I was adding support for the new pg_stat_statements JIT deform_counter in PoWA\n> when I realized that those were added after jit_generation_time in the\n> documentation while they're actually at the end of the view.\n\nNice catch, that was indeed an omission in the original commit. Thanks for the\npatch, I'll apply that shortly.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 15 Nov 2023 13:53:13 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix documentation for pg_stat_statements JIT deform_counter"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 01:53:13PM +0100, Daniel Gustafsson wrote:\n> > On 11 Nov 2023, at 10:26, Julien Rouhaud <[email protected]> wrote:\n>\n> > I was adding support for the new pg_stat_statements JIT deform_counter in PoWA\n> > when I realized that those were added after jit_generation_time in the\n> > documentation while they're actually at the end of the view.\n>\n> Nice catch, that was indeed an omission in the original commit. Thanks for the\n> patch, I'll apply that shortly.\n\nThanks!\n\n\n",
"msg_date": "Wed, 15 Nov 2023 23:03:23 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix documentation for pg_stat_statements JIT deform_counter"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm not sure if these ideas were circulated before or not.\nWe use auto_explain a lot to investigate slow/problematic queries.\nOne of the main issues with its usability comes from the fact that EXPLAIN\noutput is logged rather than returned to the caller in some way. If you\nhave a large cluster with lots of replicas, there is also an extra\ninconvenience of log accumulation, search, etc.\nWhy not have an option to return EXPLAIN results as a NoticeResponse\ninstead? That would make its usage more convenient.\n\nAnother thing is tangentially related...\nI think it may be good to have a number of options to generate\nsignificantly shorter output similar to EXPLAIN. EXPLAIN is great, but\nsometimes people need more concise and specific information, for example\ntotal number of buffers and reads by certain query (this is pretty common),\nwhether or not we had certain nodes in the plan (seq scan, scan of certain\nindex(es)), how bad was cardinality misprediction on certain nodes, etc.\nIt's not totally clear yet what would be the best way to define those\nrules, but I think we can come up with something reasonable. Logging or\nreturning shorter messages like that can cause less overhead than logging\nfull EXPLAIN and can potentially allow for better query monitoring overall.\n\nDo you see any potential issues with implementing those? Of course there\nshould be more details, like what kind of configuration parameters to add,\nhow to define rules for the 2nd case, etc. Just wanted to check if there\nare any objections in general.\n\nThank you,\n-Vladimir Churyukin.\n\nHello,I'm not sure if these ideas were circulated before or not.We use auto_explain a lot to investigate slow/problematic queries.One of the main issues with its usability comes from the fact that EXPLAIN output is logged rather than returned to the caller in some way. 
If you have a large cluster with lots of replicas, there is also an extra inconvenience of log accumulation, search, etc. Why not have an option to return EXPLAIN results as a NoticeResponse instead? That would make its usage more convenient. Another thing is tangentially related...I think it may be good to have a number of options to generate significantly shorter output similar to EXPLAIN. EXPLAIN is great, but sometimes people need more concise and specific information, for example total number of buffers and reads by certain query (this is pretty common), whether or not we had certain nodes in the plan (seq scan, scan of certain index(es)), how bad was cardinality misprediction on certain nodes, etc. It's not totally clear yet what would be the best way to define those rules, but I think we can come up with something reasonable. Logging or returning shorter messages like that can cause less overhead than logging full EXPLAIN and can potentially allow for better query monitoring overall.Do you see any potential issues with implementing those? Of course there should be more details, like what kind of configuration parameters to add, how to define rules for the 2nd case, etc. Just wanted to check if there are any objections in general. Thank you,-Vladimir Churyukin.",
"msg_date": "Sat, 11 Nov 2023 02:17:17 -0800",
"msg_from": "Vladimir Churyukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Making auto_explain more useful / convenient"
},
{
"msg_contents": "Hello\n\nauto_explain.log_level is available since postgresql 12.\n\npostgres=# load 'auto_explain';\nLOAD\npostgres=# set auto_explain.log_min_duration to 0;\nSET\npostgres=# set auto_explain.log_level to 'notice';\nSET\npostgres=# select 1;\nNOTICE: duration: 0.010 ms plan:\nQuery Text: select 1;\nResult (cost=0.00..0.01 rows=1 width=4)\n ?column? \n----------\n 1\n\nregards, Sergei\n\n\n",
"msg_date": "Sat, 11 Nov 2023 13:43:47 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:Making auto_explain more useful / convenient"
},
{
"msg_contents": "Vladimir Churyukin <[email protected]> writes:\n> Why not have an option to return EXPLAIN results as a NoticeResponse\n> instead? That would make its usage more convenient.\n\nThat seems quite useless to me, and likely actually counterproductive.\nIf you are manually investigating query performance, you can just use\nEXPLAIN directly. The point of auto_explain, ISTM, is to capture info\nabout queries issued by automated applications. So something like the\nabove could only work if you taught every one of your applications to\ncapture the NOTICE output, separate it from random other NOTICE\noutput, and then (probably) log it somewhere central for later\ninspection. That's a lot of code to write, and at the end you'd\nonly have effectively duplicated existing tooling such as pgbadger.\nAlso, what happens in applications you forgot to convert?\n\n> Another thing is tangentially related...\n> I think it may be good to have a number of options to generate\n> significantly shorter output similar to EXPLAIN. EXPLAIN is great, but\n> sometimes people need more concise and specific information, for example\n> total number of buffers and reads by certain query (this is pretty common),\n> whether or not we had certain nodes in the plan (seq scan, scan of certain\n> index(es)), how bad was cardinality misprediction on certain nodes, etc.\n\nMaybe, but again I'm a bit skeptical. IME you frequently don't know\nwhat you're looking for until you've seen the bigger picture. Zeroing\nin on details like this could be pretty misleading.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Nov 2023 10:49:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making auto_explain more useful / convenient"
},
{
"msg_contents": "Thank you, that answers the first part of my question.\n\nOn Sat, Nov 11, 2023 at 2:43 AM Sergei Kornilov <[email protected]> wrote:\n\n> Hello\n>\n> auto_explain.log_level is available since postgresql 12.\n>\n> postgres=# load 'auto_explain';\n> LOAD\n> postgres=# set auto_explain.log_min_duration to 0;\n> SET\n> postgres=# set auto_explain.log_level to 'notice';\n> SET\n> postgres=# select 1;\n> NOTICE: duration: 0.010 ms plan:\n> Query Text: select 1;\n> Result (cost=0.00..0.01 rows=1 width=4)\n> ?column?\n> ----------\n> 1\n>\n> regards, Sergei\n>\n\nThank you, that answers the first part of my question.On Sat, Nov 11, 2023 at 2:43 AM Sergei Kornilov <[email protected]> wrote:Hello\n\nauto_explain.log_level is available since postgresql 12.\n\npostgres=# load 'auto_explain';\nLOAD\npostgres=# set auto_explain.log_min_duration to 0;\nSET\npostgres=# set auto_explain.log_level to 'notice';\nSET\npostgres=# select 1;\nNOTICE: duration: 0.010 ms plan:\nQuery Text: select 1;\nResult (cost=0.00..0.01 rows=1 width=4)\n ?column? \n----------\n 1\n\nregards, Sergei",
"msg_date": "Sat, 11 Nov 2023 08:20:02 -0800",
"msg_from": "Vladimir Churyukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Making auto_explain more useful / convenient"
},
{
"msg_contents": "On Sat, Nov 11, 2023 at 7:49 AM Tom Lane <[email protected]> wrote:\n\n> Vladimir Churyukin <[email protected]> writes:\n> > Why not have an option to return EXPLAIN results as a NoticeResponse\n> > instead? That would make its usage more convenient.\n>\n> That seems quite useless to me, and likely actually counterproductive.\n> If you are manually investigating query performance, you can just use\n> EXPLAIN directly. The point of auto_explain, ISTM, is to capture info\n> about queries issued by automated applications. So something like the\n> above could only work if you taught every one of your applications to\n> capture the NOTICE output, separate it from random other NOTICE\n> output, and then (probably) log it somewhere central for later\n> inspection. That's a lot of code to write, and at the end you'd\n> only have effectively duplicated existing tooling such as pgbadger.\n> Also, what happens in applications you forgot to convert?\n>\n>\nSergey Kornilov just gave the right answer above in the thread for this one.\nUnfortunately, there are a lot of scenarios where you can't use pgbadger or\nany other log analysis or it's not convenient.\nThere are a bunch of cloud hosted forks of postgres for example, not all of\nthem give you this functionality.\nIn AWS for example you need to download all the logs first, which\ncomplicates it significantly.\nThe goal of this is not investigating performance of a single query but\nrather constant monitoring of a bunch (or all) queries, so you can detect\nplan degradations right away.\n\n\n> > Another thing is tangentially related...\n> > I think it may be good to have a number of options to generate\n> > significantly shorter output similar to EXPLAIN. EXPLAIN is great, but\n> > sometimes people need more concise and specific information, for example\n> > total number of buffers and reads by certain query (this is pretty\n> common),\n> > whether or not we had certain nodes in the plan (seq scan, scan of\n> certain\n> > index(es)), how bad was cardinality misprediction on certain nodes, etc.\n>\n> Maybe, but again I'm a bit skeptical. IME you frequently don't know\n> what you're looking for until you've seen the bigger picture. Zeroing\n> in on details like this could be pretty misleading.\n>\n>\nIf you don't know what you're looking for, then it's not very useful, I\nagree.\nBut in many cases you know. There are certain generic \"signs of trouble\"\nthat you can detect by\nthe amount of data the query processor scans, by cache hit rate for certain\nqueries. presence of seq scans or scans of certain indexes,\nlarge differences between predicted and actual rows, some other stuff that\nmay be relevant to your app/queries specifically that you want to monitor.\nWe're already doing similar analysis on our side (a multi-terabyte db\ncluster with hundreds of millions to billions queries running daily).\nBut it's not efficient enough because:\n1. the problem I mentioned above, access to logs is limited on cloud\nenvironments\n2. explain output could be huge, it causes performance issues because of\nits size. compact output is much more preferable for mass processing\n(it's even more important if this output is to notice messages rather than\nto logs, that's why I said it's tangentially related)\n\nSince it seems the notice output is already possible, half of the problem\nis solved already.\nI'll try to come up with possible options for more compact output\nthen, unless you think it's completely futile.\n\nthank you,\n-Vladimir Churyukin",
"msg_date": "Sat, 11 Nov 2023 09:03:48 -0800",
"msg_from": "Vladimir Churyukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Making auto_explain more useful / convenient"
}
] |
[
{
"msg_contents": "Hackers,\n\nAs promised in [1], attached are some basic tests for the low-level \nbackup method.\n\nThere are currently no tests for the low-level backup method. \npg_backup_start() and pg_backup_stop() are called but not exercised in \nany real fashion.\n\nThere is a lot more that can be done, but this at least supplies some \nbasic tests and provides a place for future improvement.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/2daf8adc-8db7-4204-a7f2-a7e94e2bfa4b%40pgmasters.net",
"msg_date": "Sat, 11 Nov 2023 15:21:12 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add basic tests for the low-level backup method."
},
{
"msg_contents": "On 11/12/23 08:21, David Steele wrote:\n> \n> As promised in [1], attached are some basic tests for the low-level \n> backup method.\n\nAdded to the 2024-03 CF.\n\nRegards,\n-David\n\n\n",
"msg_date": "Thu, 29 Feb 2024 10:30:52 +1300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 10:30:52AM +1300, David Steele wrote:\n> On 11/12/23 08:21, David Steele wrote:\n>> As promised in [1], attached are some basic tests for the low-level\n>> backup method.\n> \n> Added to the 2024-03 CF.\n\nThere is already 040_standby_failover_slots_sync.pl in recovery/ that\nuses the number of your test script. You may want to bump it, that's\na nit.\n\n+unlink(\"$backup_dir/postmaster.pid\")\n+\tor BAIL_OUT(\"unable to unlink $backup_dir/postmaster.pid\");\n+unlink(\"$backup_dir/postmaster.opts\")\n+\tor BAIL_OUT(\"unable to unlink $backup_dir/postmaster.opts\");\n+unlink(\"$backup_dir/global/pg_control\")\n+\tor BAIL_OUT(\"unable to unlink $backup_dir/global/pg_control\");\n\nRELCACHE_INIT_FILENAME as well?\n\n+# Rather than writing out backup_label, try to recover the backup without\n+# backup_label to demonstrate that recovery will not work correctly without it,\n+# i.e. the canary table will be missing and the cluster will be corrupt. Provide\n+# only the WAL segment that recovery will think it needs.\n\nOkay, why not. No objections to this addition. I am a bit surprised\nthat this is not scanning some of the logs produced by the startup\nprocess for particular patterns.\n\n+# Save backup_label into the backup directory and recover using the primary's\n+# archive. This time recovery will succeed and the canary table will be\n+# present. \n\nHere are well, I think that we should add some log_contains() with\npre-defined patterns to show that recovery has completed the way we\nwant it with a backup_label up to the end-of-backup record.\n--\nMichael",
"msg_date": "Thu, 29 Feb 2024 12:55:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On 2/29/24 16:55, Michael Paquier wrote:\n> On Thu, Feb 29, 2024 at 10:30:52AM +1300, David Steele wrote:\n>> On 11/12/23 08:21, David Steele wrote:\n>>> As promised in [1], attached are some basic tests for the low-level\n>>> backup method.\n>>\n>> Added to the 2024-03 CF.\n> \n> There is already 040_standby_failover_slots_sync.pl in recovery/ that\n> uses the number of your test script. You may want to bump it, that's\n> a nit.\n\nRenamed to 042_low_level_backup.pl.\n\n> +unlink(\"$backup_dir/postmaster.pid\")\n> +\tor BAIL_OUT(\"unable to unlink $backup_dir/postmaster.pid\");\n> +unlink(\"$backup_dir/postmaster.opts\")\n> +\tor BAIL_OUT(\"unable to unlink $backup_dir/postmaster.opts\");\n> +unlink(\"$backup_dir/global/pg_control\")\n> +\tor BAIL_OUT(\"unable to unlink $backup_dir/global/pg_control\");\n> \n> RELCACHE_INIT_FILENAME as well?\n\nI'm not trying to implement the full exclusion list here, just enough to \nget the test working. Since exclusions are optional according to the \ndocs I don't think we need them for a valid tests.\n\n> +# Rather than writing out backup_label, try to recover the backup without\n> +# backup_label to demonstrate that recovery will not work correctly without it,\n> +# i.e. the canary table will be missing and the cluster will be corrupt. Provide\n> +# only the WAL segment that recovery will think it needs.\n> \n> Okay, why not. No objections to this addition. I am a bit surprised\n> that this is not scanning some of the logs produced by the startup\n> process for particular patterns.\n\nNot sure what to look for here. There are no distinct messages for crash \nrecovery. Perhaps there should be?\n\n> +# Save backup_label into the backup directory and recover using the primary's\n> +# archive. This time recovery will succeed and the canary table will be\n> +# present.\n> \n> Here are well, I think that we should add some log_contains() with\n> pre-defined patterns to show that recovery has completed the way we\n> want it with a backup_label up to the end-of-backup record.\n\nSure, I added a check for the new log message when recovering with a \nbackup_label.\n\nRegards,\n-David",
"msg_date": "Wed, 13 Mar 2024 13:12:28 +1300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 01:12:28PM +1300, David Steele wrote:\n> On 2/29/24 16:55, Michael Paquier wrote:\n>> +unlink(\"$backup_dir/postmaster.pid\")\n>> +\tor BAIL_OUT(\"unable to unlink $backup_dir/postmaster.pid\");\n>> +unlink(\"$backup_dir/postmaster.opts\")\n>> +\tor BAIL_OUT(\"unable to unlink $backup_dir/postmaster.opts\");\n>> +unlink(\"$backup_dir/global/pg_control\")\n>> +\tor BAIL_OUT(\"unable to unlink $backup_dir/global/pg_control\");\n>> \n>> RELCACHE_INIT_FILENAME as well?\n> \n> I'm not trying to implement the full exclusion list here, just enough to get\n> the test working. Since exclusions are optional according to the docs I\n> don't think we need them for a valid tests.\n\nOkay. Fine by me at the end.\n\n>> +# Rather than writing out backup_label, try to recover the backup without\n>> +# backup_label to demonstrate that recovery will not work correctly without it,\n>> +# i.e. the canary table will be missing and the cluster will be corrupt. Provide\n>> +# only the WAL segment that recovery will think it needs.\n>> \n>> Okay, why not. No objections to this addition. I am a bit surprised\n>> that this is not scanning some of the logs produced by the startup\n>> process for particular patterns.\n> \n> Not sure what to look for here. There are no distinct messages for crash\n> recovery. Perhaps there should be?\n\nThe closest thing I can think of here would be \"database system was\nnot properly shut down; automatic recovery in progress\" as we don't\nhave InArchiveRecovery, after checking that the canary is missing. If\nyou don't like this suggestion, feel free to say so, of course :)\n\n>> +# Save backup_label into the backup directory and recover using the primary's\n>> +# archive. This time recovery will succeed and the canary table will be\n>> +# present.\n>> \n>> Here are well, I think that we should add some log_contains() with\n>> pre-defined patterns to show that recovery has completed the way we\n>> want it with a backup_label up to the end-of-backup record.\n> \n> Sure, I added a check for the new log message when recovering with a\n> backup_label.\n\n+ok($node_replica->log_contains('completed backup recovery with redo LSN'),\n+ 'verify backup recovery performed with backup_label');\n\nOkay for this choice. I was thinking first about \"starting backup\nrecovery with redo LSN\", closer to the area where the backup_label is\nread.\n--\nMichael",
"msg_date": "Wed, 13 Mar 2024 15:15:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On 3/13/24 19:15, Michael Paquier wrote:\n> On Wed, Mar 13, 2024 at 01:12:28PM +1300, David Steele wrote:\n>>\n>> Not sure what to look for here. There are no distinct messages for crash\n>> recovery. Perhaps there should be?\n> \n> The closest thing I can think of here would be \"database system was\n> not properly shut down; automatic recovery in progress\" as we don't\n> have InArchiveRecovery, after checking that the canary is missing. If\n> you don't like this suggestion, feel free to say so, of course :)\n\nThat works for me. I think I got it confused with \"database system was \ninterrupted...\" when I was looking at the success vs. fail logs.\n\n>> Sure, I added a check for the new log message when recovering with a\n>> backup_label.\n> \n> +ok($node_replica->log_contains('completed backup recovery with redo LSN'),\n> + 'verify backup recovery performed with backup_label');\n> \n> Okay for this choice. I was thinking first about \"starting backup\n> recovery with redo LSN\", closer to the area where the backup_label is\n> read.\n\nI think you are right that the start message is better since it can only \nappear once when the backup_label is found. The completed message could \nin theory appear after a restart, though the backup_label must have been \nfound at some point.\n\nRegards,\n-David",
"msg_date": "Thu, 14 Mar 2024 09:12:52 +1300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 09:12:52AM +1300, David Steele wrote:\n> I think you are right that the start message is better since it can only\n> appear once when the backup_label is found. The completed message could in\n> theory appear after a restart, though the backup_label must have been found\n> at some point.\n\nSo, I've given a try to this patch with 99b4a63bef94, to note that\nsidewinder failed because of a timing issue on Windows: the recovery\nof the node without backup_label, expected to fail, would try to\nbackup the last segment it has replayed, because, as it has no\nbackup_label, it behaves like the primary. It would try to use the\nsame archive location as the primary, leading to a conflict failure on\nWindows. This one was easy to fix, by overwritting postgresql.conf on\nthe node to not do archiving.\n\nFollowing that, I've noticed a second race condition: we don't wait\nfor the segment after pg_switch_wal() to be archived. This one can be\neasily avoided with a poll on pg_stat_archiver.\n\nAfter that, also because I've initially managed to, cough, forget an\nupdate of meson.build to list the new test, I've noticed a third\nfailure on Windows for the case of the node that has a backup_label.\nHere is one of the failures:\nhttps://cirrus-ci.com/task/5245341683941376\n\nregress_log_042_low_level_backup and\n042_low_level_backup_replica_success.log have all the information\nneeded, that can be summarized like that:\nThe system cannot find the file specified.\n2024-03-14 06:02:37.670 GMT [560][startup] FATAL: invalid data in file \"backup_label\"\n\nThe first message is something new to me, that seems to point to a\ncorruption failure of the file system. Why don't we see something\nsimilar in other tests, then? Leaving that aside..\n\nThe second LOG is something that can be acted on. I've added some\ndebugging to the parsing of the backup_label file in the backend, and\nnoticed that the first fscanf() for START WAL LOCATION is failing\nbecause the last %c is detected as \\r rather than \\n. Tweaking the\ncontents stored from pg_backend_stop() with a sed won't help, because\nthe issue is that we write the CRLFs with append_to_file, and the\nstartup process cannot cope with that. The simplest method I can\nthink of is to use binmode, as of the attached.\n\nIt is the first time that we'd take the contents received from a\nBackgroundPsql and write them to a file parsed by the backend, so\nperhaps we should try to do that in a more general way, but I'm not\nsure how, tbh, and the case of this test is special while adding\nhandling for \\r when reading the backup_label got discussed in the\npast but we were OK with what we are doing now on HEAD.\n\nOn top of all that, note that I have removed remove_tree as I am not\nsure if this would be OK in all the buildfarm animals, added a quit()\nfor BackgroundPsql, moved queries to use less BackgroundPsql, as well\nas a few other things like avoiding the hardcoded segment names.\nmeson.build is.. Cough.. Updated now.\n\nI am attaching an updated patch with all that fixed, which is stable\nin the CI and any tests I've run. Do you have any comments about\nthat?\n--\nMichael",
"msg_date": "Thu, 14 Mar 2024 16:00:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On 3/14/24 20:00, Michael Paquier wrote:\n> On Thu, Mar 14, 2024 at 09:12:52AM +1300, David Steele wrote:\n>> I think you are right that the start message is better since it can only\n>> appear once when the backup_label is found. The completed message could in\n>> theory appear after a restart, though the backup_label must have been found\n>> at some point.\n> \n> So, I've given a try to this patch with 99b4a63bef94, to note that\n> sidewinder failed because of a timing issue on Windows: the recovery\n> of the node without backup_label, expected to fail, would try to\n> backup the last segment it has replayed, because, as it has no\n> backup_label, it behaves like the primary. It would try to use the\n> same archive location as the primary, leading to a conflict failure on\n> Windows. This one was easy to fix, by overwritting postgresql.conf on\n> the node to not do archiving.\n\nHmmm, I wonder why this did not show up in the Windows tests on CI?\n\n> Following that, I've noticed a second race condition: we don't wait\n> for the segment after pg_switch_wal() to be archived. This one can be\n> easily avoided with a poll on pg_stat_archiver.\n\nUgh, yeah, good change.\n\n> After that, also because I've initially managed to, cough, forget an\n> update of meson.build to list the new test, I've noticed a third\n> failure on Windows for the case of the node that has a backup_label.\n> Here is one of the failures:\n> https://cirrus-ci.com/task/5245341683941376\n\nIs the missing test in meson the reason we did not see test failures for \nWindows in CI?\n\n> regress_log_042_low_level_backup and\n> 042_low_level_backup_replica_success.log have all the information\n> needed, that can be summarized like that:\n> The system cannot find the file specified.\n> 2024-03-14 06:02:37.670 GMT [560][startup] FATAL: invalid data in file \"backup_label\"\n> \n> The first message is something new to me, that seems to point to a\n> corruption failure of the file system. Why don't we see something\n> similar in other tests, then? Leaving that aside..\n> \n> The second LOG is something that can be acted on. I've added some\n> debugging to the parsing of the backup_label file in the backend, and\n> noticed that the first fscanf() for START WAL LOCATION is failing\n> because the last %c is detected as \\r rather than \\n. Tweaking the\n> contents stored from pg_backend_stop() with a sed won't help, because\n> the issue is that we write the CRLFs with append_to_file, and the\n> startup process cannot cope with that. The simplest method I can\n> think of is to use binmode, as of the attached.\n\nYeah, that makes sense.\n\n> It is the first time that we'd take the contents received from a\n> BackgroundPsql and write them to a file parsed by the backend, so\n> perhaps we should try to do that in a more general way, but I'm not\n> sure how, tbh, and the case of this test is special while adding\n> handling for \\r when reading the backup_label got discussed in the\n> past but we were OK with what we are doing now on HEAD.\n\nI think it makes sense to leave the parsing code as is and make the \nchange in the test. If we add more tests to this module we'll probably \nneed a function to avoid repeating code.\n\n> On top of all that, note that I have removed remove_tree as I am not\n> sure if this would be OK in all the buildfarm animals, added a quit()\n> for BackgroundPsql, moved queries to use less BackgroundPsql, as well\n> as a few other things like avoiding the hardcoded segment names.\n> meson.build is.. Cough.. Updated now.\n\nOK.\n\n> I am attaching an updated patch with all that fixed, which is stable\n> in the CI and any tests I've run. Do you have any comments about\n\nThese changes look good to me. Sure wish we had an easier to way to test \ncommits in the build farm.\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 15 Mar 2024 09:40:38 +1300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 09:40:38AM +1300, David Steele wrote:\n> Is the missing test in meson the reason we did not see test failures for\n> Windows in CI?\n\nThe test has to be listed in src/test/recovery/meson.build or the CI\nwould ignore it.\n\n>> The second LOG is something that can be acted on. I've added some\n>> debugging to the parsing of the backup_label file in the backend, and\n>> noticed that the first fscanf() for START WAL LOCATION is failing\n>> because the last %c is detected as \\r rather than \\n. Tweaking the\n>> contents stored from pg_backend_stop() with a sed won't help, because\n>> the issue is that we write the CRLFs with append_to_file, and the\n>> startup process cannot cope with that. The simplest method I can\n>> think of is to use binmode, as of the attached.\n> \n> Yeah, that makes sense.\n\nI am wondering if there is a better trick here that would not require\nchanges in the backend to make the backup_label parsing more flexible,\nthough.\n\n>> I am attaching an updated patch with all that fixed, which is stable\n>> in the CI and any tests I've run. Do you have any comments about\n> \n> These changes look good to me. Sure wish we had an easier to way to test\n> commits in the build farm.\n\nThat's why these tests are not that easy, they can be racy. I've run\nthe test 5~10 times in the CI this time to gain more confidence, and\nsaw zero failures with the stability fixes in place including Windows.\nI've applied it now, as I can still monitor the buildfarm for a few\nmore days. Let's see what happens, but that should be better.\n--\nMichael",
"msg_date": "Fri, 15 Mar 2024 08:38:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On 3/15/24 12:38, Michael Paquier wrote:\n> On Fri, Mar 15, 2024 at 09:40:38AM +1300, David Steele wrote:\n>> Is the missing test in meson the reason we did not see test failures for\n>> Windows in CI?\n> \n> The test has to be listed in src/test/recovery/meson.build or the CI\n> would ignore it.\n\nRight -- I will keep this in mind for the future.\n\n>>> The second LOG is something that can be acted on. I've added some\n>>> debugging to the parsing of the backup_label file in the backend, and\n>>> noticed that the first fscanf() for START WAL LOCATION is failing\n>>> because the last %c is detected as \\r rather than \\n. Tweaking the\n>>> contents stored from pg_backend_stop() with a sed won't help, because\n>>> the issue is that we write the CRLFs with append_to_file, and the\n>>> startup process cannot cope with that. The simplest method I can\n>>> think of is to use binmode, as of the attached.\n>>\n>> Yeah, that makes sense.\n> \n> I am wondering if there is a better trick here that would not require\n> changes in the backend to make the backup_label parsing more flexible,\n> though.\n\nWell, this is what we recommend in the docs, i.e. using bin mode to save \nbackup_label, so it seems OK to me.\n\n>>> I am attaching an updated patch with all that fixed, which is stable\n>>> in the CI and any tests I've run. Do you have any comments about\n>>\n>> These changes look good to me. Sure wish we had an easier to way to test\n>> commits in the build farm.\n> \n> That's why these tests are not that easy, they can be racy. I've run\n> the test 5~10 times in the CI this time to gain more confidence, and\n> saw zero failures with the stability fixes in place including Windows.\n> I've applied it now, as I can still monitor the buildfarm for a few\n> more days. Let's see what happens, but that should be better.\n\nAt least sidewinder is happy now -- and the build farm in general as far \nas I can see.\n\nThank you for your help on this!\n-David\n\n\n",
"msg_date": "Fri, 15 Mar 2024 18:23:15 +1300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 08:38:47AM +0900, Michael Paquier wrote:\n> That's why these tests are not that easy, they can be racy. I've run\n> the test 5~10 times in the CI this time to gain more confidence, and\n> saw zero failures with the stability fixes in place including Windows.\n> I've applied it now, as I can still monitor the buildfarm for a few\n> more days. Let's see what happens, but that should be better.\n\nSo, it looks like the buildfarm is clear. sidewinder has reported a\ngreen state, and the recent runs of the CFbot across all the patches\nare looking stable as well on all platforms. There are still a few\nbuildfarm members on Windows that will take time more time before\nrunning.\n--\nMichael",
"msg_date": "Fri, 15 Mar 2024 14:25:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 06:23:15PM +1300, David Steele wrote:\n> Well, this is what we recommend in the docs, i.e. using bin mode to save\n> backup_label, so it seems OK to me.\n\nIndeed, I didn't notice that this is actually documented, so what I\ndid took the right angle. French flair, perhaps..\n--\nMichael",
"msg_date": "Fri, 15 Mar 2024 14:32:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On 3/15/24 18:32, Michael Paquier wrote:\n> On Fri, Mar 15, 2024 at 06:23:15PM +1300, David Steele wrote:\n>> Well, this is what we recommend in the docs, i.e. using bin mode to save\n>> backup_label, so it seems OK to me.\n> \n> Indeed, I didn't notice that this is actually documented, so what I\n> did took the right angle. French flair, perhaps..\n\nThis seems like a reasonable explanation to me.\n\n-David\n\n\n",
"msg_date": "Fri, 15 Mar 2024 18:37:35 +1300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add basic tests for the low-level backup method."
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 06:37:35PM +1300, David Steele wrote:\n> This seems like a reasonable explanation to me.\n\nFYI, drongo has just passed the test. fairywren uses TAP, does not\nrun the recovery tests.\n--\nMichael",
"msg_date": "Sat, 16 Mar 2024 08:43:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add basic tests for the low-level backup method."
}
] |
[
{
"msg_contents": "We're about 1/3 of the way through.\n\nAt start:\nNeeds review: 210. Waiting on Author: 42. Ready for Committer: 29.\nCommitted: 55. Withdrawn: 10. Returned with Feedback: 1. Total: 347.\n\nToday:\nNeeds review: 197. Waiting on Author: 45. Ready for Committer: 27.\nCommitted: 63. Withdrawn: 10. Returned with Feedback: 4. Rejected: 1.\nTotal: 347.\n\nThis seems in line with September, i.e. not a whole lot of change, but\nplenty of discussion in various threads. We also had activity for a\nminor release recently.\n\n--\nJohn Naylor\n\n\n",
"msg_date": "Sun, 12 Nov 2023 12:53:25 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest 2023-11 update 1"
}
] |
[
{
"msg_contents": "Greetings,\n\nI am getting the following error\nbuilding on HEAD\n\nLibrary crypto found: YES\nChecking for function \"CRYPTO_new_ex_data\" with dependencies -lssl,\n-lcrypto: NO\n\nI have openssl 1.1.1 installed\n\nDave Cramer",
"msg_date": "Sun, 12 Nov 2023 07:57:14 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "building with meson on windows with ssl"
},
{
"msg_contents": "On Sun, 12 Nov 2023 at 07:57, Dave Cramer <[email protected]> wrote:\n\n> Greetings,\n>\n> I am getting the following error\n> building on HEAD\n>\n> Library crypto found: YES\n> Checking for function \"CRYPTO_new_ex_data\" with dependencies -lssl,\n> -lcrypto: NO\n>\n\nSo this is the error you get if you mix a 64 bit version of openssl and\nbuild with x86 tools. Clearly my problem, but the error message is less\nthan helpful\n\nDave\n\n>",
"msg_date": "Sun, 12 Nov 2023 11:41:15 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: building with meson on windows with ssl"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-12 11:41:15 -0500, Dave Cramer wrote:\n> On Sun, 12 Nov 2023 at 07:57, Dave Cramer <[email protected]> wrote:\n> > I am getting the following error\n> > building on HEAD\n> >\n> > Library crypto found: YES\n> > Checking for function \"CRYPTO_new_ex_data\" with dependencies -lssl,\n> > -lcrypto: NO\n> >\n> \n> So this is the error you get if you mix a 64 bit version of openssl and\n> build with x86 tools. Clearly my problem, but the error message is less\n> than helpful\n\nThere probably is more detail in meson-logs/meson-log.txt - could you post\nthat?\n\n\nThe problem could be related to the fact that on windows you (I think) can\nhave binaries with both 32bit and 64bit libraries loaded.\n\nWas this with msvc or gcc/mingw?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 17:56:44 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: building with meson on windows with ssl"
},
{
"msg_contents": "On Mon, 13 Nov 2023 at 20:56, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-11-12 11:41:15 -0500, Dave Cramer wrote:\n> > On Sun, 12 Nov 2023 at 07:57, Dave Cramer <[email protected]> wrote:\n> > > I am getting the following error\n> > > building on HEAD\n> > >\n> > > Library crypto found: YES\n> > > Checking for function \"CRYPTO_new_ex_data\" with dependencies -lssl,\n> > > -lcrypto: NO\n> > >\n> >\n> > So this is the error you get if you mix a 64 bit version of openssl and\n> > build with x86 tools. Clearly my problem, but the error message is less\n> > than helpful\n>\n> There probably is more detail in meson-logs/meson-log.txt - could you post\n> that?\n>\nI'd have to undo what I did to fix it, but if I find time I will\n\n>\n>\n> The problem could be related to the fact that on windows you (I think) can\n> have binaries with both 32bit and 64bit libraries loaded.\n>\n\nI was building with the 32bit tools by mistake.\n\n>\n> Was this with msvc or gcc/mingw?\n>\n\nmsvc\n\nDave\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nOn Mon, 13 Nov 2023 at 20:56, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2023-11-12 11:41:15 -0500, Dave Cramer wrote:\n> On Sun, 12 Nov 2023 at 07:57, Dave Cramer <[email protected]> wrote:\n> > I am getting the following error\n> > building on HEAD\n> >\n> > Library crypto found: YES\n> > Checking for function \"CRYPTO_new_ex_data\" with dependencies -lssl,\n> > -lcrypto: NO\n> >\n> \n> So this is the error you get if you mix a 64 bit version of openssl and\n> build with x86 tools. Clearly my problem, but the error message is less\n> than helpful\n\nThere probably is more detail in meson-logs/meson-log.txt - could you post\nthat?I'd have to undo what I did to fix it, but if I find time I will \n\n\nThe problem could be related to the fact that on windows you (I think) can\nhave binaries with both 32bit and 64bit libraries loaded.I was building with the 32bit tools by mistake. 
\n\nWas this with msvc or gcc/mingw?msvcDave \n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 14 Nov 2023 05:21:09 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: building with meson on windows with ssl"
}
] |
[
{
"msg_contents": "In PostgreSQL, when a backend process crashes, it can cause other backend processes to also require a restart, primarily to ensure data consistency. I understand that the correct approach is to analyze and identify the cause of the crash and resolve it. However, it is also important to be able to handle a backend process crash without affecting the operation of other processes, thus minimizing the scope of negative impact and improving availability. To achieve this goal, could we mimic the Oracle process by introducing a \"pmon\" process dedicated to rolling back crashed process transactions and performing resource cleanup? I wonder if anyone has attempted such a strategy or if there have been previous discussions on this topic.\nIn PostgreSQL, when a backend process crashes, it can cause other backend processes to also require a restart, primarily to ensure data consistency. I understand that the correct approach is to analyze and identify the cause of the crash and resolve it. However, it is also important to be able to handle a backend process crash without affecting the operation of other processes, thus minimizing the scope of negative impact and improving availability. To achieve this goal, could we mimic the Oracle process by introducing a \"pmon\" process dedicated to rolling back crashed process transactions and performing resource cleanup? I wonder if anyone has attempted such a strategy or if there have been previous discussions on this topic.",
"msg_date": "Mon, 13 Nov 2023 10:30:44 +0800 (CST)",
"msg_from": "yuansong <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to solve the problem of one backend process crashing and\n causing other processes to restart?"
},
{
"msg_contents": "yuansong <[email protected]> writes:\n> In PostgreSQL, when a backend process crashes, it can cause other backend processes to also require a restart, primarily to ensure data consistency. I understand that the correct approach is to analyze and identify the cause of the crash and resolve it. However, it is also important to be able to handle a backend process crash without affecting the operation of other processes, thus minimizing the scope of negative impact and improving availability. To achieve this goal, could we mimic the Oracle process by introducing a \"pmon\" process dedicated to rolling back crashed process transactions and performing resource cleanup? I wonder if anyone has attempted such a strategy or if there have been previous discussions on this topic.\n\nThe reason we force a database-wide restart is that there's no way to\nbe certain that the crashed process didn't corrupt anything in shared\nmemory. (Even with the forced restart, there's a window where bad\ndata could reach disk before we kill off the other processes that\nmight write it. But at least it's a short window.) \"Corruption\"\nhere doesn't just involve bad data placed into disk buffers; more\noften it's things like unreleased locks, which would block other\nprocesses indefinitely.\n\nI seriously doubt that anything like what you're describing\ncould be made reliable enough to be acceptable. \"Oracle does\nit like this\" isn't a counter-argument: they have a much different\n(and non-extensible) architecture, and they also have an army of\nprogrammers to deal with minutiae like undoing resource acquisition.\nEven with that, you'd have to wonder about the number of bugs\nexisting in such necessarily-poorly-tested code paths.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Nov 2023 21:55:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to solve the problem of one backend process crashing and\n causing other processes to restart?"
},
{
"msg_contents": "On Sun, 2023-11-12 at 21:55 -0500, Tom Lane wrote:\n> yuansong <[email protected]> writes:\n> > In PostgreSQL, when a backend process crashes, it can cause other backend\n> > processes to also require a restart, primarily to ensure data consistency.\n> > I understand that the correct approach is to analyze and identify the\n> > cause of the crash and resolve it. However, it is also important to be\n> > able to handle a backend process crash without affecting the operation of\n> > other processes, thus minimizing the scope of negative impact and\n> > improving availability. To achieve this goal, could we mimic the Oracle\n> > process by introducing a \"pmon\" process dedicated to rolling back crashed\n> > process transactions and performing resource cleanup? I wonder if anyone\n> > has attempted such a strategy or if there have been previous discussions\n> > on this topic.\n> \n> The reason we force a database-wide restart is that there's no way to\n> be certain that the crashed process didn't corrupt anything in shared\n> memory. (Even with the forced restart, there's a window where bad\n> data could reach disk before we kill off the other processes that\n> might write it. But at least it's a short window.) \"Corruption\"\n> here doesn't just involve bad data placed into disk buffers; more\n> often it's things like unreleased locks, which would block other\n> processes indefinitely.\n> \n> I seriously doubt that anything like what you're describing\n> could be made reliable enough to be acceptable. 
\"Oracle does\n> it like this\" isn't a counter-argument: they have a much different\n> (and non-extensible) architecture, and they also have an army of\n> programmers to deal with minutiae like undoing resource acquisition.\n> Even with that, you'd have to wonder about the number of bugs\n> existing in such necessarily-poorly-tested code paths.\n\nYes.\nI think that PostgreSQL's approach is superior: rather than investing in\ncode to mitigate the impact of data corruption caused by a crash, invest\nin quality code that doesn't crash in the first place.\n\nEuphemistically naming a crash \"ORA-600 error\" seems to be part of\ntheir strategy.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 13 Nov 2023 06:53:29 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to solve the problem of one backend process crashing and\n causing other processes to restart?"
},
{
"msg_contents": "Enhancing the overall fault tolerance of the entire system for this feature is quite important. No one can avoid bugs, and I don't believe that this approach is a more advanced one. It might be worth considering adding it to the roadmap so that interested parties can conduct relevant research.\n\nThe current major issue is that when one process crashes, resetting all connections has a significant impact on other connections. Is it possible to only disconnect the crashed connection and have the other connections simply roll back the current transaction without reconnecting? Perhaps this problem can be mitigated through the use of a connection pool.\n\nIf we want to implement this feature, would it be sufficient to clean up or restore the shared memory and disk changes caused by the crashed backend? Besides clearing various known locks, what else needs to be changed? Does anyone have any insights? Please help me. Thank you.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAt 2023-11-13 13:53:29, \"Laurenz Albe\" <[email protected]> wrote:\n>On Sun, 2023-11-12 at 21:55 -0500, Tom Lane wrote:\n>> yuansong <[email protected]> writes:\n>> > In PostgreSQL, when a backend process crashes, it can cause other backend\n>> > processes to also require a restart, primarily to ensure data consistency.\n>> > I understand that the correct approach is to analyze and identify the\n>> > cause of the crash and resolve it. However, it is also important to be\n>> > able to handle a backend process crash without affecting the operation of\n>> > other processes, thus minimizing the scope of negative impact and\n>> > improving availability. To achieve this goal, could we mimic the Oracle\n>> > process by introducing a \"pmon\" process dedicated to rolling back crashed\n>> > process transactions and performing resource cleanup? 
I wonder if anyone\n>> > has attempted such a strategy or if there have been previous discussions\n>> > on this topic.\n>> \n>> The reason we force a database-wide restart is that there's no way to\n>> be certain that the crashed process didn't corrupt anything in shared\n>> memory. (Even with the forced restart, there's a window where bad\n>> data could reach disk before we kill off the other processes that\n>> might write it. But at least it's a short window.) \"Corruption\"\n>> here doesn't just involve bad data placed into disk buffers; more\n>> often it's things like unreleased locks, which would block other\n>> processes indefinitely.\n>> \n>> I seriously doubt that anything like what you're describing\n>> could be made reliable enough to be acceptable. \"Oracle does\n>> it like this\" isn't a counter-argument: they have a much different\n>> (and non-extensible) architecture, and they also have an army of\n>> programmers to deal with minutiae like undoing resource acquisition.\n>> Even with that, you'd have to wonder about the number of bugs\n>> existing in such necessarily-poorly-tested code paths.\n>\n>Yes.\n>I think that PostgreSQL's approach is superior: rather than investing in\n>code to mitigate the impact of data corruption caused by a crash, invest\n>in quality code that doesn't crash in the first place.\n>\n>Euphemistically naming a crash \"ORA-600 error\" seems to be part of\n>their strategy.\n>\n>Yours,\n>Laurenz Albe\n>\n\nEnhancing the overall fault tolerance of the entire system for this feature is quite important. No one can avoid bugs, and I don't believe that this approach is a more advanced one. It might be worth considering adding it to the roadmap so that interested parties can conduct relevant research.The current major issue is that when one process crashes, resetting all connections has a significant impact on other connections. 
Is it possible to only disconnect the crashed connection and have the other connections simply roll back the current transaction without reconnecting? Perhaps this problem can be mitigated through the use of a connection pool.If we want to implement this feature, would it be sufficient to clean up or restore the shared memory and disk changes caused by the crashed backend? Besides clearing various known locks, what else needs to be changed? Does anyone have any insights? Please help me. Thank you.At 2023-11-13 13:53:29, \"Laurenz Albe\" <[email protected]> wrote:\n>On Sun, 2023-11-12 at 21:55 -0500, Tom Lane wrote:\n>> yuansong <[email protected]> writes:\n>> > In PostgreSQL, when a backend process crashes, it can cause other backend\n>> > processes to also require a restart, primarily to ensure data consistency.\n>> > I understand that the correct approach is to analyze and identify the\n>> > cause of the crash and resolve it. However, it is also important to be\n>> > able to handle a backend process crash without affecting the operation of\n>> > other processes, thus minimizing the scope of negative impact and\n>> > improving availability. To achieve this goal, could we mimic the Oracle\n>> > process by introducing a \"pmon\" process dedicated to rolling back crashed\n>> > process transactions and performing resource cleanup? I wonder if anyone\n>> > has attempted such a strategy or if there have been previous discussions\n>> > on this topic.\n>> \n>> The reason we force a database-wide restart is that there's no way to\n>> be certain that the crashed process didn't corrupt anything in shared\n>> memory. (Even with the forced restart, there's a window where bad\n>> data could reach disk before we kill off the other processes that\n>> might write it. But at least it's a short window.) 
\"Corruption\"\n>> here doesn't just involve bad data placed into disk buffers; more\n>> often it's things like unreleased locks, which would block other\n>> processes indefinitely.\n>> \n>> I seriously doubt that anything like what you're describing\n>> could be made reliable enough to be acceptable. \"Oracle does\n>> it like this\" isn't a counter-argument: they have a much different\n>> (and non-extensible) architecture, and they also have an army of\n>> programmers to deal with minutiae like undoing resource acquisition.\n>> Even with that, you'd have to wonder about the number of bugs\n>> existing in such necessarily-poorly-tested code paths.\n>\n>Yes.\n>I think that PostgreSQL's approach is superior: rather than investing in\n>code to mitigate the impact of data corruption caused by a crash, invest\n>in quality code that doesn't crash in the first place.\n>\n>Euphemistically naming a crash \"ORA-600 error\" seems to be part of\n>their strategy.\n>\n>Yours,\n>Laurenz Albe\n>",
"msg_date": "Mon, 13 Nov 2023 17:13:20 +0800 (CST)",
"msg_from": "yuansong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:Re: How to solve the problem of one backend process crashing and\n causing other processes to restart?"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 5:14 PM yuansong <[email protected]> wrote:\n>\n> Enhancing the overall fault tolerance of the entire system for this feature is quite important. No one can avoid bugs, and I don't believe that this approach is a more advanced one. It might be worth considering adding it to the roadmap so that interested parties can conduct relevant research.\n>\n> The current major issue is that when one process crashes, resetting all connections has a significant impact on other connections. Is it possible to only disconnect the crashed connection and have the other connections simply roll back the current transaction without reconnecting? Perhaps this problem can be mitigated through the use of a connection pool.\n\nIt's not about the other connections, it's that the crashed connection\nhas no way to rollback.\n\n>\n> If we want to implement this feature, would it be sufficient to clean up or restore the shared memory and disk changes caused by the crashed backend? Besides clearing various known locks, what else needs to be changed? Does anyone have any insights? Please help me. Thank you.\n>\n>\n>\n>\n>\n>\n>\n> At 2023-11-13 13:53:29, \"Laurenz Albe\" <[email protected]> wrote:\n> >On Sun, 2023-11-12 at 21:55 -0500, Tom Lane wrote:\n> >> yuansong <[email protected]> writes:\n> >> > In PostgreSQL, when a backend process crashes, it can cause other backend\n> >> > processes to also require a restart, primarily to ensure data consistency.\n> >> > I understand that the correct approach is to analyze and identify the\n> >> > cause of the crash and resolve it. However, it is also important to be\n> >> > able to handle a backend process crash without affecting the operation of\n> >> > other processes, thus minimizing the scope of negative impact and\n> >> > improving availability. 
To achieve this goal, could we mimic the Oracle\n> >> > process by introducing a \"pmon\" process dedicated to rolling back crashed\n> >> > process transactions and performing resource cleanup? I wonder if anyone\n> >> > has attempted such a strategy or if there have been previous discussions\n> >> > on this topic.\n> >>\n> >> The reason we force a database-wide restart is that there's no way to\n> >> be certain that the crashed process didn't corrupt anything in shared\n> >> memory. (Even with the forced restart, there's a window where bad\n> >> data could reach disk before we kill off the other processes that\n> >> might write it. But at least it's a short window.) \"Corruption\"\n> >> here doesn't just involve bad data placed into disk buffers; more\n> >> often it's things like unreleased locks, which would block other\n> >> processes indefinitely.\n> >>\n> >> I seriously doubt that anything like what you're describing\n> >> could be made reliable enough to be acceptable. \"Oracle does\n> >> it like this\" isn't a counter-argument: they have a much different\n> >> (and non-extensible) architecture, and they also have an army of\n> >> programmers to deal with minutiae like undoing resource acquisition.\n> >> Even with that, you'd have to wonder about the number of bugs\n> >> existing in such necessarily-poorly-tested code paths.\n> >\n> >Yes.\n> >I think that PostgreSQL's approach is superior: rather than investing in\n> >code to mitigate the impact of data corruption caused by a crash, invest\n> >in quality code that doesn't crash in the first place.\n> >\n> >Euphemistically naming a crash \"ORA-600 error\" seems to be part of\n> >their strategy.\n> >\n> >Yours,\n> >Laurenz Albe\n> >\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Mon, 13 Nov 2023 18:42:08 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: How to solve the problem of one backend process crashing and\n causing other processes to restart?"
},
{
"msg_contents": "On 11/13/23 00:53, Laurenz Albe wrote:\n> On Sun, 2023-11-12 at 21:55 -0500, Tom Lane wrote:\n>> yuansong <[email protected]> writes:\n>> > In PostgreSQL, when a backend process crashes, it can cause other backend\n>> > processes to also require a restart, primarily to ensure data consistency.\n>> > I understand that the correct approach is to analyze and identify the\n>> > cause of the crash and resolve it. However, it is also important to be\n>> > able to handle a backend process crash without affecting the operation of\n>> > other processes, thus minimizing the scope of negative impact and\n>> > improving availability. To achieve this goal, could we mimic the Oracle\n>> > process by introducing a \"pmon\" process dedicated to rolling back crashed\n>> > process transactions and performing resource cleanup? I wonder if anyone\n>> > has attempted such a strategy or if there have been previous discussions\n>> > on this topic.\n>> \n>> The reason we force a database-wide restart is that there's no way to\n>> be certain that the crashed process didn't corrupt anything in shared\n>> memory. (Even with the forced restart, there's a window where bad\n>> data could reach disk before we kill off the other processes that\n>> might write it. But at least it's a short window.) \"Corruption\"\n>> here doesn't just involve bad data placed into disk buffers; more\n>> often it's things like unreleased locks, which would block other\n>> processes indefinitely.\n>> \n>> I seriously doubt that anything like what you're describing\n>> could be made reliable enough to be acceptable. 
\"Oracle does\n>> it like this\" isn't a counter-argument: they have a much different\n>> (and non-extensible) architecture, and they also have an army of\n>> programmers to deal with minutiae like undoing resource acquisition.\n>> Even with that, you'd have to wonder about the number of bugs\n>> existing in such necessarily-poorly-tested code paths.\n> \n> Yes.\n> I think that PostgreSQL's approach is superior: rather than investing in\n> code to mitigate the impact of data corruption caused by a crash, invest\n> in quality code that doesn't crash in the first place.\n\n\nWhile true, this does nothing to prevent OOM kills, which are becoming \nmore prevalent as, for example, running Postgres in a container (or \notherwise) with a cgroup memory limit becomes more popular.\n\nAnd in any case, there are enterprise use cases that necessarily avoid \nPostgres due to this behavior, which is a shame.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 13 Nov 2023 11:57:56 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to solve the problem of one backend process crashing and\n causing other processes to restart?"
},
{
"msg_contents": "Hi yuansong\r\n there is connnection pool path (https://commitfest.postgresql.org/34/3043/) ,but it has been dormant for few years,You can check this patch to get what you want to need\r\n________________________________\r\n发件人: yuansong <[email protected]>\r\n发送时间: 2023年11月13日 17:13\r\n收件人: Laurenz Albe <[email protected]>\r\n抄送: [email protected] <[email protected]>\r\n主题: Re:Re: How to solve the problem of one backend process crashing and causing other processes to restart?\r\n\r\n\r\nEnhancing the overall fault tolerance of the entire system for this feature is quite important. No one can avoid bugs, and I don't believe that this approach is a more advanced one. It might be worth considering adding it to the roadmap so that interested parties can conduct relevant research.\r\n\r\nThe current major issue is that when one process crashes, resetting all connections has a significant impact on other connections. Is it possible to only disconnect the crashed connection and have the other connections simply roll back the current transaction without reconnecting? Perhaps this problem can be mitigated through the use of a connection pool.\r\n\r\nIf we want to implement this feature, would it be sufficient to clean up or restore the shared memory and disk changes caused by the crashed backend? Besides clearing various known locks, what else needs to be changed? Does anyone have any insights? Please help me. Thank you.\r\n\r\n\r\n\r\n\r\n\r\n\r\nAt 2023-11-13 13:53:29, \"Laurenz Albe\" <[email protected]> wrote:\r\n>On Sun, 2023-11-12 at 21:55 -0500, Tom Lane wrote:\r\n>> yuansong <[email protected]> writes:\r\n>> > In PostgreSQL, when a backend process crashes, it can cause other backend\r\n>> > processes to also require a restart, primarily to ensure data consistency.\r\n>> > I understand that the correct approach is to analyze and identify the\r\n>> > cause of the crash and resolve it. 
However, it is also important to be\r\n>> > able to handle a backend process crash without affecting the operation of\r\n>> > other processes, thus minimizing the scope of negative impact and\r\n>> > improving availability. To achieve this goal, could we mimic the Oracle\r\n>> > process by introducing a \"pmon\" process dedicated to rolling back crashed\r\n>> > process transactions and performing resource cleanup? I wonder if anyone\r\n>> > has attempted such a strategy or if there have been previous discussions\r\n>> > on this topic.\r\n>>\r\n>> The reason we force a database-wide restart is that there's no way to\r\n>> be certain that the crashed process didn't corrupt anything in shared\r\n>> memory. (Even with the forced restart, there's a window where bad\r\n>> data could reach disk before we kill off the other processes that\r\n>> might write it. But at least it's a short window.) \"Corruption\"\r\n>> here doesn't just involve bad data placed into disk buffers; more\r\n>> often it's things like unreleased locks, which would block other\r\n>> processes indefinitely.\r\n>>\r\n>> I seriously doubt that anything like what you're describing\r\n>> could be made reliable enough to be acceptable. 
\"Oracle does\r\n>> it like this\" isn't a counter-argument: they have a much different\r\n>> (and non-extensible) architecture, and they also have an army of\r\n>> programmers to deal with minutiae like undoing resource acquisition.\r\n>> Even with that, you'd have to wonder about the number of bugs\r\n>> existing in such necessarily-poorly-tested code paths.\r\n>\r\n>Yes.\r\n>I think that PostgreSQL's approach is superior: rather than investing in\r\n>code to mitigate the impact of data corruption caused by a crash, invest\r\n>in quality code that doesn't crash in the first place.\r\n>\r\n>Euphemistically naming a crash \"ORA-600 error\" seems to be part of\r\n>their strategy.\r\n>\r\n>Yours,\r\n>Laurenz Albe\r\n>\r\n\r\n\n\n\n\n\n\n\nHi yuansong\n there is connnection pool path (https://commitfest.postgresql.org/34/3043/) ,but it has been dormant for few years,You can check this patch to get what you want to need\n\n\n发件人: yuansong <[email protected]>\n发送时间: 2023年11月13日 17:13\n收件人: Laurenz Albe <[email protected]>\n抄送: [email protected] <[email protected]>\n主题: Re:Re: How to solve the problem of one backend process crashing and causing other processes to restart?\n \n\n\n\n\nEnhancing the overall fault tolerance of the entire system for this feature is quite important. No one can avoid bugs, and I don't believe that this approach is a more advanced one. It might be worth considering adding it to the roadmap so that interested parties\n can conduct relevant research.\n\nThe current major issue is that when one process crashes, resetting all connections has a significant impact on other connections. Is it possible to only disconnect the crashed connection and have the other connections simply roll back the current transaction\n without reconnecting? 
Perhaps this problem can be mitigated through the use of a connection pool.\n\nIf we want to implement this feature, would it be sufficient to clean up or restore the shared memory and disk changes caused by the crashed backend? Besides clearing various known locks, what else needs to be changed? Does anyone have any insights? Please\n help me. Thank you.\n\n\n\n\n\n\n\n\n\n\n\n\nAt 2023-11-13 13:53:29, \"Laurenz Albe\" <[email protected]> wrote:\n>On Sun, 2023-11-12 at 21:55 -0500, Tom Lane wrote:\n>> yuansong <[email protected]> writes:\n>> > In PostgreSQL, when a backend process crashes, it can cause other backend\n>> > processes to also require a restart, primarily to ensure data consistency.\n>> > I understand that the correct approach is to analyze and identify the\n>> > cause of the crash and resolve it. However, it is also important to be\n>> > able to handle a backend process crash without affecting the operation of\n>> > other processes, thus minimizing the scope of negative impact and\n>> > improving availability. To achieve this goal, could we mimic the Oracle\n>> > process by introducing a \"pmon\" process dedicated to rolling back crashed\n>> > process transactions and performing resource cleanup? I wonder if anyone\n>> > has attempted such a strategy or if there have been previous discussions\n>> > on this topic.\n>> \n>> The reason we force a database-wide restart is that there's no way to\n>> be certain that the crashed process didn't corrupt anything in shared\n>> memory. (Even with the forced restart, there's a window where bad\n>> data could reach disk before we kill off the other processes that\n>> might write it. But at least it's a short window.) 
\"Corruption\"\n>> here doesn't just involve bad data placed into disk buffers; more\n>> often it's things like unreleased locks, which would block other\n>> processes indefinitely.\n>> \n>> I seriously doubt that anything like what you're describing\n>> could be made reliable enough to be acceptable. \"Oracle does\n>> it like this\" isn't a counter-argument: they have a much different\n>> (and non-extensible) architecture, and they also have an army of\n>> programmers to deal with minutiae like undoing resource acquisition.\n>> Even with that, you'd have to wonder about the number of bugs\n>> existing in such necessarily-poorly-tested code paths.\n>\n>Yes.\n>I think that PostgreSQL's approach is superior: rather than investing in\n>code to mitigate the impact of data corruption caused by a crash, invest\n>in quality code that doesn't crash in the first place.\n>\n>Euphemistically naming a crash \"ORA-600 error\" seems to be part of\n>their strategy.\n>\n>Yours,\n>Laurenz Albe\n>",
"msg_date": "Tue, 14 Nov 2023 01:41:03 +0000",
"msg_from": "Thomas wen <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?gb2312?B?u9i4tDogUmU6UmU6IEhvdyB0byBzb2x2ZSB0aGUgcHJvYmxlbSBvZiBvbmUg?=\n =?gb2312?B?YmFja2VuZCBwcm9jZXNzIGNyYXNoaW5nIGFuZCBjYXVzaW5nIG90aGVyIHBy?=\n =?gb2312?Q?ocesses_to_restart=3F?="
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 3:14 AM yuansong <[email protected]> wrote:\n\n> Enhancing the overall fault tolerance of the entire system for this\n> feature is quite important. No one can avoid bugs, and I don't believe that\n> this approach is a more advanced one. It might be worth considering adding\n> it to the roadmap so that interested parties can conduct relevant research.\n>\n> The current major issue is that when one process crashes, resetting all\n> connections has a significant impact on other connections. Is it possible\n> to only disconnect the crashed connection and have the other connections\n> simply roll back the current transaction without reconnecting? Perhaps this\n> problem can be mitigated through the use of a connection pool.\n>\n> If we want to implement this feature, would it be sufficient to clean up\n> or restore the shared memory and disk changes caused by the crashed\n> backend? Besides clearing various known locks, what else needs to be\n> changed? Does anyone have any insights? Please help me. Thank you.\n>\n\nOne thing that's really key to understand about postgres is that there are\na different set of rules regarding what is the database's job to solve vs\nsupporting libraries and frameworks. It isn't that hard to wait and retry\na query in most applications, and it is up to you to do that. There are\nalso various connection poolers that might implement retry logic, and not\nhaving to work through those concerns keeps the code lean and has other\nbenefits. While postgres might implement things like a built in connection\npooler, 'o_direct' type memory management, and things like that, there are\nlong term costs to doing them.\n\nThere's another side to this. Suppose I had to choose between a\nhypothetical postgres that had some kind of process local crash recovery\nand the current implementation. 
I might still choose the current\nimplementation because, in general, crashes are good, and the full reset\nhas a much better chance of clearing the underlying issue that caused the\nproblem vs managing the symptoms of it.\n\nmerlin",
"msg_date": "Mon, 13 Nov 2023 20:03:23 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: How to solve the problem of one backend process crashing and\n causing other processes to restart?"
},
{
"msg_contents": "thanks,After reconsideration, I realized that what I really want is for other connections to remain unaffected when a process crashes. This is something that a connection pool cannot solve.\n\n\n\n\n\n\n\nAt 2023-11-14 09:41:03, \"Thomas wen\" <[email protected]> wrote:\n\nHi yuansong\n there is connnection pool path (https://commitfest.postgresql.org/34/3043/) ,but it has been dormant for few years,You can check this patch to get what you want to need\n发件人: yuansong <[email protected]>\n发送时间: 2023年11月13日 17:13\n收件人: Laurenz Albe <[email protected]>\n抄送: [email protected] <[email protected]>\n主题: Re:Re: How to solve the problem of one backend process crashing and causing other processes to restart?\n \n\nEnhancing the overall fault tolerance of the entire system for this feature is quite important. No one can avoid bugs, and I don't believe that this approach is a more advanced one. It might be worth considering adding it to the roadmap so that interested parties can conduct relevant research.\n\nThe current major issue is that when one process crashes, resetting all connections has a significant impact on other connections. Is it possible to only disconnect the crashed connection and have the other connections simply roll back the current transaction without reconnecting? Perhaps this problem can be mitigated through the use of a connection pool.\n\nIf we want to implement this feature, would it be sufficient to clean up or restore the shared memory and disk changes caused by the crashed backend? Besides clearing various known locks, what else needs to be changed? Does anyone have any insights? Please help me. 
Thank you.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAt 2023-11-13 13:53:29, \"Laurenz Albe\" <[email protected]> wrote:\n>On Sun, 2023-11-12 at 21:55 -0500, Tom Lane wrote:\n>> yuansong <[email protected]> writes:\n>> > In PostgreSQL, when a backend process crashes, it can cause other backend\n>> > processes to also require a restart, primarily to ensure data consistency.\n>> > I understand that the correct approach is to analyze and identify the\n>> > cause of the crash and resolve it. However, it is also important to be\n>> > able to handle a backend process crash without affecting the operation of\n>> > other processes, thus minimizing the scope of negative impact and\n>> > improving availability. To achieve this goal, could we mimic the Oracle\n>> > process by introducing a \"pmon\" process dedicated to rolling back crashed\n>> > process transactions and performing resource cleanup? I wonder if anyone\n>> > has attempted such a strategy or if there have been previous discussions\n>> > on this topic.\n>> \n>> The reason we force a database-wide restart is that there's no way to\n>> be certain that the crashed process didn't corrupt anything in shared\n>> memory. (Even with the forced restart, there's a window where bad\n>> data could reach disk before we kill off the other processes that\n>> might write it. But at least it's a short window.) \"Corruption\"\n>> here doesn't just involve bad data placed into disk buffers; more\n>> often it's things like unreleased locks, which would block other\n>> processes indefinitely.\n>> \n>> I seriously doubt that anything like what you're describing\n>> could be made reliable enough to be acceptable. 
\"Oracle does\n>> it like this\" isn't a counter-argument: they have a much different\n>> (and non-extensible) architecture, and they also have an army of\n>> programmers to deal with minutiae like undoing resource acquisition.\n>> Even with that, you'd have to wonder about the number of bugs\n>> existing in such necessarily-poorly-tested code paths.\n>\n>Yes.\n>I think that PostgreSQL's approach is superior: rather than investing in\n>code to mitigate the impact of data corruption caused by a crash, invest\n>in quality code that doesn't crash in the first place.\n>\n>Euphemistically naming a crash \"ORA-600 error\" seems to be part of\n>their strategy.\n>\n>Yours,\n>Laurenz Albe\n>\n\nthanks,After reconsideration, I realized that what I really want is for other connections to remain unaffected when a process crashes. This is something that a connection pool cannot solve.At 2023-11-14 09:41:03, \"Thomas wen\" <[email protected]> wrote:\n\n\nHi yuansong\n there is connnection pool path (https://commitfest.postgresql.org/34/3043/) ,but it has been dormant for few years,You can check this patch to get what you want to need\n\n\n发件人: yuansong <[email protected]>\n发送时间: 2023年11月13日 17:13\n收件人: Laurenz Albe <[email protected]>\n抄送: [email protected] <[email protected]>\n主题: Re:Re: How to solve the problem of one backend process crashing and causing other processes to restart?\n \n\n\n\n\nEnhancing the overall fault tolerance of the entire system for this feature is quite important. No one can avoid bugs, and I don't believe that this approach is a more advanced one. It might be worth considering adding it to the roadmap so that interested parties\n can conduct relevant research.\n\nThe current major issue is that when one process crashes, resetting all connections has a significant impact on other connections. Is it possible to only disconnect the crashed connection and have the other connections simply roll back the current transaction\n without reconnecting? 
Perhaps this problem can be mitigated through the use of a connection pool.\n\nIf we want to implement this feature, would it be sufficient to clean up or restore the shared memory and disk changes caused by the crashed backend? Besides clearing various known locks, what else needs to be changed? Does anyone have any insights? Please\n help me. Thank you.\n\n\n\n\n\n\n\n\n\n\n\n\nAt 2023-11-13 13:53:29, \"Laurenz Albe\" <[email protected]> wrote:\n>On Sun, 2023-11-12 at 21:55 -0500, Tom Lane wrote:\n>> yuansong <[email protected]> writes:\n>> > In PostgreSQL, when a backend process crashes, it can cause other backend\n>> > processes to also require a restart, primarily to ensure data consistency.\n>> > I understand that the correct approach is to analyze and identify the\n>> > cause of the crash and resolve it. However, it is also important to be\n>> > able to handle a backend process crash without affecting the operation of\n>> > other processes, thus minimizing the scope of negative impact and\n>> > improving availability. To achieve this goal, could we mimic the Oracle\n>> > process by introducing a \"pmon\" process dedicated to rolling back crashed\n>> > process transactions and performing resource cleanup? I wonder if anyone\n>> > has attempted such a strategy or if there have been previous discussions\n>> > on this topic.\n>> \n>> The reason we force a database-wide restart is that there's no way to\n>> be certain that the crashed process didn't corrupt anything in shared\n>> memory. (Even with the forced restart, there's a window where bad\n>> data could reach disk before we kill off the other processes that\n>> might write it. But at least it's a short window.) 
\"Corruption\"\n>> here doesn't just involve bad data placed into disk buffers; more\n>> often it's things like unreleased locks, which would block other\n>> processes indefinitely.\n>> \n>> I seriously doubt that anything like what you're describing\n>> could be made reliable enough to be acceptable. \"Oracle does\n>> it like this\" isn't a counter-argument: they have a much different\n>> (and non-extensible) architecture, and they also have an army of\n>> programmers to deal with minutiae like undoing resource acquisition.\n>> Even with that, you'd have to wonder about the number of bugs\n>> existing in such necessarily-poorly-tested code paths.\n>\n>Yes.\n>I think that PostgreSQL's approach is superior: rather than investing in\n>code to mitigate the impact of data corruption caused by a crash, invest\n>in quality code that doesn't crash in the first place.\n>\n>Euphemistically naming a crash \"ORA-600 error\" seems to be part of\n>their strategy.\n>\n>Yours,\n>Laurenz Albe\n>",
"msg_date": "Tue, 21 Nov 2023 10:02:51 +0800 (CST)",
"msg_from": "yuansong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:Re:Re: How to solve the problem of one backend process crashing\n and causing other processes to restart?"
}
] |
[
{
"msg_contents": "When creating a partitioned index, the partition key must be a subset of \nthe index's columns. DefineIndex() explains:\n\n * If this table is partitioned and we're creating a unique index, \nprimary\n * key, or exclusion constraint, make sure that the partition key is a\n * subset of the index's columns. Otherwise it would be possible to\n * violate uniqueness by putting values that ought to be unique in\n * different partitions.\n\nBut this currently doesn't check that the collations between the \npartition key and the index definition match. So you can construct a \nunique index that fails to enforce uniqueness.\n\nHere is a non-partitioned case for reference:\n\ncreate collation case_insensitive (provider=icu, \nlocale='und-u-ks-level2', deterministic=false);\ncreate table t0 (a int, b text);\ncreate unique index i0 on t0 (b collate case_insensitive);\ninsert into t0 values (1, 'a'), (2, 'A'); -- violates unique constraint\n\nHere is a partitioned case that doesn't work correctly:\n\ncreate table t1 (a int, b text) partition by hash (b);\ncreate table t1a partition of t1 for values with (modulus 2, remainder 0);\ncreate table t1b partition of t1 for values with (modulus 2, remainder 1);\ncreate unique index i1 on t1 (b collate case_insensitive);\ninsert into t1 values (1, 'a'), (2, 'A'); -- this succeeds\n\nThe attached patch adds the required collation check. In the example, \nit would not allow the index i1 to be created.",
"msg_date": "Mon, 13 Nov 2023 10:24:03 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "should check collations when creating partitioned index"
},
{
"msg_contents": "On Mon, 2023-11-13 at 10:24 +0100, Peter Eisentraut wrote:\n> * If this table is partitioned and we're creating a unique index, primary\n> * key, or exclusion constraint, make sure that the partition key is a\n> * subset of the index's columns. Otherwise it would be possible to\n> * violate uniqueness by putting values that ought to be unique in\n> * different partitions.\n> \n> But this currently doesn't check that the collations between the \n> partition key and the index definition match. So you can construct a \n> unique index that fails to enforce uniqueness.\n> \n> Here is a partitioned case that doesn't work correctly:\n> \n> create collation case_insensitive (provider=icu,\n> locale='und-u-ks-level2', deterministic=false);\n> create table t1 (a int, b text) partition by hash (b);\n> create table t1a partition of t1 for values with (modulus 2, remainder 0);\n> create table t1b partition of t1 for values with (modulus 2, remainder 1);\n> create unique index i1 on t1 (b collate case_insensitive);\n> insert into t1 values (1, 'a'), (2, 'A'); -- this succeeds\n> \n> The attached patch adds the required collation check. In the example, \n> it would not allow the index i1 to be created.\n\nThe patch looks good, but I think the error message needs love:\n\n> +\t\tereport(ERROR,\n> +\t\t\terrcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\terrmsg(\"collation of column \\\"%s\\\" does not match between partition key and index definition\",\n> +\t\t\t\t NameStr(att->attname)));\n\n\"does not match between\" sounds weird. How about\n\n collation of index column \\\"%s\\\" must match collation of the partitioning key column\n\nThis will be backpatched, right? What if somebody already created an index like that?\nDoes this warrant an entry in the \"however\" for the release notes, or is the case\nexotic enough that we can assume that nobody is affected?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 13 Nov 2023 21:04:05 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: should check collations when creating partitioned index"
},
{
"msg_contents": "On 13.11.23 21:04, Laurenz Albe wrote:\n> On Mon, 2023-11-13 at 10:24 +0100, Peter Eisentraut wrote:\n>> * If this table is partitioned and we're creating a unique index, primary\n>> * key, or exclusion constraint, make sure that the partition key is a\n>> * subset of the index's columns. Otherwise it would be possible to\n>> * violate uniqueness by putting values that ought to be unique in\n>> * different partitions.\n>>\n>> But this currently doesn't check that the collations between the\n>> partition key and the index definition match. So you can construct a\n>> unique index that fails to enforce uniqueness.\n>>\n>> Here is a partitioned case that doesn't work correctly:\n>>\n>> create collation case_insensitive (provider=icu,\n>> locale='und-u-ks-level2', deterministic=false);\n>> create table t1 (a int, b text) partition by hash (b);\n>> create table t1a partition of t1 for values with (modulus 2, remainder 0);\n>> create table t1b partition of t1 for values with (modulus 2, remainder 1);\n>> create unique index i1 on t1 (b collate case_insensitive);\n>> insert into t1 values (1, 'a'), (2, 'A'); -- this succeeds\n>>\n>> The attached patch adds the required collation check. In the example,\n>> it would not allow the index i1 to be created.\n> \n> The patch looks good, but I think the error message needs love:\n> \n>> +\t\tereport(ERROR,\n>> +\t\t\terrcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> +\t\t\terrmsg(\"collation of column \\\"%s\\\" does not match between partition key and index definition\",\n>> +\t\t\t\t NameStr(att->attname)));\n> \n> \"does not match between\" sounds weird. How about\n> \n> collation of index column \\\"%s\\\" must match collation of the partitioning key column\n> \n> This will be backpatched, right? 
What if somebody already created an index like that?\n> Does this warrant an entry in the \"however\" for the release notes, or is the case\n> exotic enough that we can assume that nobody is affected?\n\nI think it's exotic enough that I wouldn't bother backpatching it. But \nI welcome input on this.\n\n\n\n",
"msg_date": "Tue, 14 Nov 2023 17:03:18 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: should check collations when creating partitioned index"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 13.11.23 21:04, Laurenz Albe wrote:\n>> This will be backpatched, right? What if somebody already created an index like that?\n>> Does this warrant an entry in the \"however\" for the release notes, or is the case\n>> exotic enough that we can assume that nobody is affected?\n\n> I think it's exotic enough that I wouldn't bother backpatching it. But \n> I welcome input on this.\n\nI think it should be back-patched.\n\nI don't love the patch details though. It seems entirely wrong to check\nthis before we check the opclass match. Also, in at least some cases\nthe code presses on looking for another match if the current opclass\ndoesn't match; you've broken such cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Nov 2023 11:15:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: should check collations when creating partitioned index"
},
{
"msg_contents": "On Mon, 2023-11-13 at 10:24 +0100, Peter Eisentraut wrote:\n> create table t1 (a int, b text) partition by hash (b);\n> create table t1a partition of t1 for values with (modulus 2,\n> remainder 0);\n> create table t1b partition of t1 for values with (modulus 2,\n> remainder 1);\n> create unique index i1 on t1 (b collate case_insensitive);\n> insert into t1 values (1, 'a'), (2, 'A'); -- this succeeds\n> \n> The attached patch adds the required collation check. In the\n> example, \n> it would not allow the index i1 to be created.\n\nIn the patch, you check for an exact collation match. Considering this\ncase only depends on equality, I think it would be correct if the\nrequirement was that (a) both collations are deterministic; or (b) the\ncollations match exactly.\n\nThis is related to the discussion here:\n\nhttps://postgr.es/m/[email protected]\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 17 Nov 2023 12:02:33 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: should check collations when creating partitioned index"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> In the patch, you check for an exact collation match. Considering this\n> case only depends on equality, I think it would be correct if the\n> requirement was that (a) both collations are deterministic; or (b) the\n> collations match exactly.\n\nYou keep harping on this idea that we are only concerned with equality,\nbut I think you are wrong. We expect a btree index to provide ordering\nnot only equality, and this example definitely is a btree index.\n\nPossibly, with a great deal more specificity added to the check, we\ncould distinguish the cases where ordering can't matter and allow\ncollation variance then. I do not see the value of that, especially\nnot when measured against the risk of introducing subtle bugs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Nov 2023 15:18:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: should check collations when creating partitioned index"
},
{
"msg_contents": "On Fri, 2023-11-17 at 15:18 -0500, Tom Lane wrote:\n> You keep harping on this idea that we are only concerned with\n> equality,\n> but I think you are wrong. We expect a btree index to provide\n> ordering\n> not only equality, and this example definitely is a btree index.\n> \n> Possibly, with a great deal more specificity added to the check, we\n> could distinguish the cases where ordering can't matter and allow\n> collation variance then. I do not see the value of that, especially\n> not when measured against the risk of introducing subtle bugs.\n\nFair point.\n\nAs background, I don't see a complete solution to our collation\nproblems and on the horizon. You've probably noticed that I'm looking\nfor various ways to mitigate the problem, and this thread was about\nreducing the number of situations in which we rely on collation.\n\nI'll focus on other potential improvements/mitigations and see if I can\nmake progress somewhere else.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 17 Nov 2023 13:08:09 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: should check collations when creating partitioned index"
},
{
"msg_contents": "On 14.11.23 17:15, Tom Lane wrote:\n> I don't love the patch details though. It seems entirely wrong to check\n> this before we check the opclass match.\n\nNot sure why? The order doesn't seem to matter?\n\n> Also, in at least some cases\n> the code presses on looking for another match if the current opclass\n> doesn't match; you've broken such cases.\n\nI see. That means we shouldn't raise an error on a mismatch but just do\n\n if (key->partcollation[i] != collationIds[j])\n continue;\n\nand then let the existing error under if (!found) complain.\n\nI suppose we could move that into the\n\n if (get_opclass_opfamily_and_input_type(...))\n\nblock. I'm not sure I see the difference.\n\n\n\n",
"msg_date": "Mon, 20 Nov 2023 11:21:41 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: should check collations when creating partitioned index"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 14.11.23 17:15, Tom Lane wrote:\n>> I don't love the patch details though. It seems entirely wrong to check\n>> this before we check the opclass match.\n\n> Not sure why? The order doesn't seem to matter?\n\nThe case that was bothering me was if we had a non-collated type\nversus a collated type. That would result in throwing an error\nabout collation mismatch, when complaining about the opclass seems\nmore apropos. However, if we do this:\n\n> I see. That means we shouldn't raise an error on a mismatch but just do\n> if (key->partcollation[i] != collationIds[j])\n> continue;\n\nit might not matter much.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Nov 2023 11:25:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: should check collations when creating partitioned index"
},
{
"msg_contents": "On 20.11.23 17:25, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> On 14.11.23 17:15, Tom Lane wrote:\n>>> I don't love the patch details though. It seems entirely wrong to check\n>>> this before we check the opclass match.\n> \n>> Not sure why? The order doesn't seem to matter?\n> \n> The case that was bothering me was if we had a non-collated type\n> versus a collated type. That would result in throwing an error\n> about collation mismatch, when complaining about the opclass seems\n> more apropos. However, if we do this:\n> \n>> I see. That means we shouldn't raise an error on a mismatch but just do\n>> if (key->partcollation[i] != collationIds[j])\n>> continue;\n> \n> it might not matter much.\n\nHere is an updated patch that works as indicated above.\n\nThe behavior if you try to create an index with mismatching collations \nnow is that it will skip over the column and complain at the end with \nsomething like\n\nERROR: 0A000: unique constraint on partitioned table must include all \npartitioning columns\nDETAIL: UNIQUE constraint on table \"t1\" lacks column \"b\" which is part \nof the partition key.\n\nwhich perhaps isn't intuitive, but I think it would be the same if you \nsomehow tried to build an index with different operator classes than the \npartitioning. I think these less-specific error messages are ok in such \nedge cases.",
"msg_date": "Thu, 23 Nov 2023 11:01:38 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: should check collations when creating partitioned index"
},
{
"msg_contents": "On 23.11.23 11:01, Peter Eisentraut wrote:\n> On 20.11.23 17:25, Tom Lane wrote:\n>> Peter Eisentraut <[email protected]> writes:\n>>> On 14.11.23 17:15, Tom Lane wrote:\n>>>> I don't love the patch details though. It seems entirely wrong to \n>>>> check\n>>>> this before we check the opclass match.\n>>\n>>> Not sure why? The order doesn't seem to matter?\n>>\n>> The case that was bothering me was if we had a non-collated type\n>> versus a collated type. That would result in throwing an error\n>> about collation mismatch, when complaining about the opclass seems\n>> more apropos. However, if we do this:\n>>\n>>> I see. That means we shouldn't raise an error on a mismatch but just do\n>>> if (key->partcollation[i] != collationIds[j])\n>>> continue;\n>>\n>> it might not matter much.\n> \n> Here is an updated patch that works as indicated above.\n> \n> The behavior if you try to create an index with mismatching collations \n> now is that it will skip over the column and complain at the end with \n> something like\n> \n> ERROR: 0A000: unique constraint on partitioned table must include all \n> partitioning columns\n> DETAIL: UNIQUE constraint on table \"t1\" lacks column \"b\" which is part \n> of the partition key.\n> \n> which perhaps isn't intuitive, but I think it would be the same if you \n> somehow tried to build an index with different operator classes than the \n> partitioning. I think these less-specific error messages are ok in such \n> edge cases.\n\nIf there are no further comments on this patch version, I plan to go \nahead and commit it soon.\n\n\n\n",
"msg_date": "Thu, 30 Nov 2023 05:50:27 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: should check collations when creating partitioned index"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Here is an updated patch that works as indicated above.\n>> \n>> The behavior if you try to create an index with mismatching collations \n>> now is that it will skip over the column and complain at the end with \n>> something like\n>> \n>> ERROR: 0A000: unique constraint on partitioned table must include all \n>> partitioning columns\n>> DETAIL: UNIQUE constraint on table \"t1\" lacks column \"b\" which is part \n>> of the partition key.\n>> \n>> which perhaps isn't intuitive, but I think it would be the same if you \n>> somehow tried to build an index with different operator classes than the \n>> partitioning. I think these less-specific error messages are ok in such \n>> edge cases.\n\n> If there are no further comments on this patch version, I plan to go \n> ahead and commit it soon.\n\nSorry for slow response --- I've been dealing with a little too much\n$REAL_LIFE lately. Anyway, I'm content with the v2 patch. I see\nthat the existing code works a little harder than this to produce\nan on-point error message for mismatching operator, but after\nstudying that I'm not totally convinced that it's ideal behavior\neither. I think we can wait for some field complaints to see if\nwe need a better error message for mismatching collation, and\nif so what the shape of the bad input is exactly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Nov 2023 18:04:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: should check collations when creating partitioned index"
}
] |
[
{
"msg_contents": "Hello Hackers!\n\nCurrently, pgbench will log individual transactions to a logfile when the\n`--log` parameter flag is provided. The logfile, however, does not include\ncolumn header. It has become a fairly standard expectation of users to have\ncolumn headers present in flat files. Without the header in the pgbench log\nfiles, new users must navigate to the docs and piece together the column\nheaders themselves. Most industry leading frameworks have tooling built in\nto read column headers though, for example python/pandas read_csv().\n\nWe can improve the experience for users by adding column headers to pgbench\nlogfiles with an optional command line flag, `--log-header`. This will keep\nthe API backwards compatible by making users opt-in to the column headers.\nIt follows the existing pattern of having conditional flags in pgbench’s\nAPI; the `--log` option would have both –log-prefix and –log-header if this\nwork is accepted.\n\nThe implementation considers the column headers only when the\n`--log-header` flag is present. The values for the columns are taken\ndirectly from the “Per-Transaction Logging” section in\nhttps://www.postgresql.org/docs/current/pgbench.html and takes into account\nthe conditional columns `schedule_lag` and `retries`.\n\n\nBelow is an example of what that logfile will look like:\n\n\npgbench postgres://postgres:postgres@localhost:5432/postgres --log\n--log-header\n\nclient_id transaction_no time script_no time_epoch time_us\n\n0 1 1863 0 1699555588 791102\n\n0 2 706 0 1699555588 791812\n\n\nIf the interface and overall approach makes sense, I will work on adding\ndocumentation and tests for this too.\n\nRespectfully,\n\nAdam Hendel",
"msg_date": "Mon, 13 Nov 2023 11:55:07 -0600",
"msg_from": "Adam Hendel <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] pgbench log file headers"
},
{
"msg_contents": "Hi Adam. Column headers in pgbench log files seem helpful. Besides\nprograms, it seems helpful for humans to understand the column data as\nwell. I was able to apply your patch and verify that the headers are added\nto the log file:\n\nandy@MacBook-Air-4 ~/P/postgres (master)> rm pgbench_log.*\n\nandy@MacBook-Air-4 ~/P/postgres (master)> src/bin/pgbench/pgbench\npostgres://andy:@localhost:5432/postgres --log --log-header\n\npgbench (17devel)\n\n....\n\n\nandy@MacBook-Air-4 ~/P/postgres (master)> cat pgbench_log.*\n\nclient_id transaction_no time script_no time_epoch time_us\n\n0 1 8435 0 1699902315 902700\n\n0 2 1130 0 1699902315 903973\n\n...\n\n\n\nThe generated pgbench_log.62387 log file showed headers \"client_id\ntransaction_no time script_no time_epoch time_us\". Hope that helps with\nyour patch acceptance journey.\n\n\nGood luck!\n\n\nAndrew Atkinson\n\nOn Mon, Nov 13, 2023 at 11:55 AM Adam Hendel <[email protected]> wrote:\n\n> Hello Hackers!\n>\n> Currently, pgbench will log individual transactions to a logfile when the\n> `--log` parameter flag is provided. The logfile, however, does not include\n> column header. It has become a fairly standard expectation of users to have\n> column headers present in flat files. Without the header in the pgbench log\n> files, new users must navigate to the docs and piece together the column\n> headers themselves. Most industry leading frameworks have tooling built in\n> to read column headers though, for example python/pandas read_csv().\n>\n> We can improve the experience for users by adding column headers to\n> pgbench logfiles with an optional command line flag, `--log-header`. This\n> will keep the API backwards compatible by making users opt-in to the column\n> headers. 
It follows the existing pattern of having conditional flags in\n> pgbench’s API; the `--log` option would have both –log-prefix and\n> –log-header if this work is accepted.\n>\n> The implementation considers the column headers only when the\n> `--log-header` flag is present. The values for the columns are taken\n> directly from the “Per-Transaction Logging” section in\n> https://www.postgresql.org/docs/current/pgbench.html and takes into\n> account the conditional columns `schedule_lag` and `retries`.\n>\n>\n> Below is an example of what that logfile will look like:\n>\n>\n> pgbench postgres://postgres:postgres@localhost:5432/postgres --log\n> --log-header\n>\n> client_id transaction_no time script_no time_epoch time_us\n>\n> 0 1 1863 0 1699555588 791102\n>\n> 0 2 706 0 1699555588 791812\n>\n>\n> If the interface and overall approach makes sense, I will work on adding\n> documentation and tests for this too.\n>\n> Respectfully,\n>\n> Adam Hendel\n>\n>",
"msg_date": "Mon, 13 Nov 2023 13:12:31 -0600",
"msg_from": "Andrew Atkinson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench log file headers"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-13 11:55:07 -0600, Adam Hendel wrote:\n> Currently, pgbench will log individual transactions to a logfile when the\n> `--log` parameter flag is provided. The logfile, however, does not include\n> column header. It has become a fairly standard expectation of users to have\n> column headers present in flat files. Without the header in the pgbench log\n> files, new users must navigate to the docs and piece together the column\n> headers themselves. Most industry leading frameworks have tooling built in\n> to read column headers though, for example python/pandas read_csv().\n\nThe disadvantage of doing that is that a single pgbench run with --log will\ngenerate many files when using -j, to avoid contention. The easiest way to\ndeal with that after the run is to cat all the log files to a larger one. If\nyou do that with headers present, you obviously have a few bogus rows (the\nheaders from the various files).\n\nI guess you could avoid the \"worst\" of that by documenting that the combined\nlog file should be built by\n cat {$log_prefix}.${pid} {$log_prefix}.${pid}.*\nand only outputting the header in the file generated by the main thread.\n\n\n> We can improve the experience for users by adding column headers to pgbench\n> logfiles with an optional command line flag, `--log-header`. This will keep\n> the API backwards compatible by making users opt-in to the column headers.\n> It follows the existing pattern of having conditional flags in pgbench’s\n> API; the `--log` option would have both –log-prefix and –log-header if this\n> work is accepted.\n\n> The implementation considers the column headers only when the\n> `--log-header` flag is present. 
The values for the columns are taken\n> directly from the “Per-Transaction Logging” section in\n> https://www.postgresql.org/docs/current/pgbench.html and takes into account\n> the conditional columns `schedule_lag` and `retries`.\n> \n> \n> Below is an example of what that logfile will look like:\n> \n> \n> pgbench postgres://postgres:postgres@localhost:5432/postgres --log\n> --log-header\n> \n> client_id transaction_no time script_no time_epoch time_us\n\nIndependent of your patch, but we imo ought to combine time_epoch time_us in\nthe log output. There's no point in forcing consumers to combine those fields,\nand it makes logging more expensive... And if we touch this, we should just\nswitch to outputting nanoseconds instead of microseconds.\n\nIt also is quite odd that we have \"time\" and \"time_epoch\", \"time_us\", where\ntime is the elapsed time of an individual \"transaction\" and time_epoch +\ntime_us together are a wall-clock timestamp. Without differentiating between\nthose, the column headers seem not very useful, because one needs to look in\nthe documentation to understand the fields.\n\n\nI don't think there's all that strong a need for backward compatibility in\npgbench. We could just change the columns as I suggest above and then always\nemit the header in the \"main\" log file.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 16:01:22 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench log file headers"
},
{
"msg_contents": "Hello\n\n\nOn Mon, Nov 13, 2023 at 6:01 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-11-13 11:55:07 -0600, Adam Hendel wrote:\n> > Currently, pgbench will log individual transactions to a logfile when the\n> > `--log` parameter flag is provided. The logfile, however, does not\n> include\n> > column header. It has become a fairly standard expectation of users to\n> have\n> > column headers present in flat files. Without the header in the pgbench\n> log\n> > files, new users must navigate to the docs and piece together the column\n> > headers themselves. Most industry leading frameworks have tooling built\n> in\n> > to read column headers though, for example python/pandas read_csv().\n>\n> The disadvantage of doing that is that a single pgbench run with --log will\n> generate many files when using -j, to avoid contention. The easiest way to\n> deal with that after the run is to cat all the log files to a larger one.\n> If\n> you do that with headers present, you obviously have a few bogus rows (the\n> heades from the various files).\n>\n> I guess you could avoid the \"worst\" of that by documenting that the\n> combined\n> log file should be built by\n> cat {$log_prefix}.${pid} {$log_prefix}.${pid}.*\n> and only outputting the header in the file generated by the main thread.\n>\n>\n> We can improve the experience for users by adding column headers to\n> pgbench\n> > logfiles with an optional command line flag, `--log-header`. This will\n> keep\n> > the API backwards compatible by making users opt-in to the column\n> headers.\n> > It follows the existing pattern of having conditional flags in pgbench’s\n> > API; the `--log` option would have both –log-prefix and –log-header if\n> this\n> > work is accepted.\n>\n> > The implementation considers the column headers only when the\n> > `--log-header` flag is present. 
The values for the columns are taken\n> > directly from the “Per-Transaction Logging” section in\n> > https://www.postgresql.org/docs/current/pgbench.html and takes into\n> account\n> > the conditional columns `schedule_lag` and `retries`.\n> >\n> >\n> > Below is an example of what that logfile will look like:\n> >\n> >\n> > pgbench postgres://postgres:postgres@localhost:5432/postgres --log\n> > --log-header\n> >\n> > client_id transaction_no time script_no time_epoch time_us\n>\n> Independent of your patch, but we imo ought to combine time_epoch time_us\n> in\n> the log output. There's no point in forcing consumers to combine those\n> fields,\n> and it makes logging more expensive... And if we touch this, we should\n> just\n> switch to outputting nanoseconds instead of microseconds.\n>\n> It also is quite odd that we have \"time\" and \"time_epoch\", \"time_us\", where\n> time is the elapsed time of an individual \"transaction\" and time_epoch +\n> time_us together are a wall-clock timestamp. Without differentiating\n> between\n> those, the column headers seem not very useful, because one needs to look\n> in\n> the documentation to understand the fields.\n\n\n> I don't think there's all that strong a need for backward compatibility in\n> pgbench. We could just change the columns as I suggest above and then\n> always\n> emit the header in the \"main\" log file.\n>\n>\nDo you think this should be done in separate patches?\n\nFirst patch: log the column headers in the \"main\" log file. 
No --log-header\nflag, make it the default behavior of --log.\n\nNext patch: collapse \"time_epoch\" and \"time_us\" in the log output and give\nthe \"time\" column a name that is more clear.\n\nAdam\n\n\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Wed, 15 Nov 2023 08:16:20 -0600",
"msg_from": "Adam Hendel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pgbench log file headers"
},
{
"msg_contents": "Hello,\n\nOn Mon, Nov 13, 2023 at 6:01 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-11-13 11:55:07 -0600, Adam Hendel wrote:\n> > Currently, pgbench will log individual transactions to a logfile when the\n> > `--log` parameter flag is provided. The logfile, however, does not\n> include\n> > column header. It has become a fairly standard expectation of users to\n> have\n> > column headers present in flat files. Without the header in the pgbench\n> log\n> > files, new users must navigate to the docs and piece together the column\n> > headers themselves. Most industry leading frameworks have tooling built\n> in\n> > to read column headers though, for example python/pandas read_csv().\n>\n> The disadvantage of doing that is that a single pgbench run with --log will\n> generate many files when using -j, to avoid contention. The easiest way to\n> deal with that after the run is to cat all the log files to a larger one.\n> If\n> you do that with headers present, you obviously have a few bogus rows (the\n> heades from the various files).\n>\n> I guess you could avoid the \"worst\" of that by documenting that the\n> combined\n> log file should be built by\n> cat {$log_prefix}.${pid} {$log_prefix}.${pid}.*\n> and only outputting the header in the file generated by the main thread.\n>\n>\n> > We can improve the experience for users by adding column headers to\n> pgbench\n> > logfiles with an optional command line flag, `--log-header`. This will\n> keep\n> > the API backwards compatible by making users opt-in to the column\n> headers.\n> > It follows the existing pattern of having conditional flags in pgbench’s\n> > API; the `--log` option would have both –log-prefix and –log-header if\n> this\n> > work is accepted.\n>\n> > The implementation considers the column headers only when the\n> > `--log-header` flag is present. 
The values for the columns are taken\n> > directly from the “Per-Transaction Logging” section in\n> > https://www.postgresql.org/docs/current/pgbench.html and takes into\n> account\n> > the conditional columns `schedule_lag` and `retries`.\n> >\n> >\n> > Below is an example of what that logfile will look like:\n> >\n> >\n> > pgbench postgres://postgres:postgres@localhost:5432/postgres --log\n> > --log-header\n> >\n> > client_id transaction_no time script_no time_epoch time_us\n>\n> Independent of your patch, but we imo ought to combine time_epoch time_us\n> in\n> the log output. There's no point in forcing consumers to combine those\n> fields,\n> and it makes logging more expensive... And if we touch this, we should\n> just\n> switch to outputting nanoseconds instead of microseconds.\n>\n> It also is quite odd that we have \"time\" and \"time_epoch\", \"time_us\", where\n> time is the elapsed time of an individual \"transaction\" and time_epoch +\n> time_us together are a wall-clock timestamp. Without differentiating\n> between\n> those, the column headers seem not very useful, because one needs to look\n> in\n> the documentation to understand the fields.\n>\n>\n> I don't think there's all that strong a need for backward compatibility in\n> pgbench. We could just change the columns as I suggest above and then\n> always\n> emit the header in the \"main\" log file.\n>\n\nI updated the patch to always log the header and only off the main thread.\nAs for the time headers, I will work on renaming/combining those in a\nseparate commit as:\n\ntime -> time_elapsed\ntime_epoch + time_us -> time_completion_us\n\n\n\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Mon, 20 Nov 2023 22:22:21 -0600",
"msg_from": "Adam Hendel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pgbench log file headers"
},
{
"msg_contents": "On Tue, 21 Nov 2023 at 09:52, Adam Hendel <[email protected]> wrote:\n>\n> Hello,\n>\n> On Mon, Nov 13, 2023 at 6:01 PM Andres Freund <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> On 2023-11-13 11:55:07 -0600, Adam Hendel wrote:\n>> > Currently, pgbench will log individual transactions to a logfile when the\n>> > `--log` parameter flag is provided. The logfile, however, does not include\n>> > column header. It has become a fairly standard expectation of users to have\n>> > column headers present in flat files. Without the header in the pgbench log\n>> > files, new users must navigate to the docs and piece together the column\n>> > headers themselves. Most industry leading frameworks have tooling built in\n>> > to read column headers though, for example python/pandas read_csv().\n>>\n>> The disadvantage of doing that is that a single pgbench run with --log will\n>> generate many files when using -j, to avoid contention. The easiest way to\n>> deal with that after the run is to cat all the log files to a larger one. If\n>> you do that with headers present, you obviously have a few bogus rows (the\n>> heades from the various files).\n>>\n>> I guess you could avoid the \"worst\" of that by documenting that the combined\n>> log file should be built by\n>> cat {$log_prefix}.${pid} {$log_prefix}.${pid}.*\n>> and only outputting the header in the file generated by the main thread.\n>>\n>>\n>> > We can improve the experience for users by adding column headers to pgbench\n>> > logfiles with an optional command line flag, `--log-header`. This will keep\n>> > the API backwards compatible by making users opt-in to the column headers.\n>> > It follows the existing pattern of having conditional flags in pgbench’s\n>> > API; the `--log` option would have both –log-prefix and –log-header if this\n>> > work is accepted.\n>>\n>> > The implementation considers the column headers only when the\n>> > `--log-header` flag is present. 
The values for the columns are taken\n>> > directly from the “Per-Transaction Logging” section in\n>> > https://www.postgresql.org/docs/current/pgbench.html and takes into account\n>> > the conditional columns `schedule_lag` and `retries`.\n>> >\n>> >\n>> > Below is an example of what that logfile will look like:\n>> >\n>> >\n>> > pgbench postgres://postgres:postgres@localhost:5432/postgres --log\n>> > --log-header\n>> >\n>> > client_id transaction_no time script_no time_epoch time_us\n>>\n>> Independent of your patch, but we imo ought to combine time_epoch time_us in\n>> the log output. There's no point in forcing consumers to combine those fields,\n>> and it makes logging more expensive... And if we touch this, we should just\n>> switch to outputting nanoseconds instead of microseconds.\n>>\n>> It also is quite odd that we have \"time\" and \"time_epoch\", \"time_us\", where\n>> time is the elapsed time of an individual \"transaction\" and time_epoch +\n>> time_us together are a wall-clock timestamp. Without differentiating between\n>> those, the column headers seem not very useful, because one needs to look in\n>> the documentation to understand the fields.\n>>\n>>\n>> I don't think there's all that strong a need for backward compatibility in\n>> pgbench. We could just change the columns as I suggest above and then always\n>> emit the header in the \"main\" log file.\n>\n>\n> I updated the patch to always log the header and only off the main thread. 
As for the time headers, I will work on renaming/combining those in a separate commit as:\n\nOne of the test has failed in CFBOT at[1] with:\n[09:15:00.526](0.000s) not ok 422 - transaction count for\n/tmp/cirrus-ci-build/build/testrun/pgbench/001_pgbench_with_server/data/t_001_pgbench_with_server_main_data/001_pgbench_log_3.25193\n(11)\n[09:15:00.526](0.000s) # Failed test 'transaction count for\n/tmp/cirrus-ci-build/build/testrun/pgbench/001_pgbench_with_server/data/t_001_pgbench_with_server_main_data/001_pgbench_log_3.25193\n(11)'\n# at /tmp/cirrus-ci-build/src/bin/pgbench/t/001_pgbench_with_server.pl\nline 1257.\n[09:15:00.526](0.000s) not ok 423 - transaction format for 001_pgbench_log_3\n[09:15:00.526](0.000s) # Failed test 'transaction format for\n001_pgbench_log_3'\n# at /tmp/cirrus-ci-build/src/bin/pgbench/t/001_pgbench_with_server.pl\nline 1257.\n# Log entry not matching: client_id transaction_no time script_no\ntime_epoch time_us\n# Running: pgbench --no-vacuum -f\n/tmp/cirrus-ci-build/build/testrun/pgbench/001_pgbench_with_server/data/t_001_pgbench_with_server_main_data/001_pgbench_incomplete_transaction_block\n\nMore details for the same is available at [2]\n\n[1] - https://cirrus-ci.com/task/5139049757802496\n[2] - https://api.cirrus-ci.com/v1/artifact/task/5139049757802496/testrun/build/testrun/pgbench/001_pgbench_with_server/log/regress_log_001_pgbench_with_server\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 6 Jan 2024 20:50:20 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench log file headers"
},
{
"msg_contents": "\n\n> On 6 Jan 2024, at 20:20, vignesh C <[email protected]> wrote:\n> \n> One of the test has failed in CFBOT at[1] with:\n\nHi Adam,\n\nThis is a kind reminder that CF entry [0] is waiting for an update. Thanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4660/\n\n\n",
"msg_date": "Sun, 31 Mar 2024 10:10:43 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench log file headers"
}
] |
[
{
"msg_contents": "I just found myself researching the difference between MemoryContextReset()\nand MemoryContextResetAndDeleteChildren(), and it turns out that as of\ncommit eaa5808 (2015), there is none.\nMemoryContextResetAndDeleteChildren() is just a backwards compatibility\nmacro for MemoryContextReset(). I found this surprising because it sounds\nlike they do very different things.\n\nShall we retire this backwards compatibility macro at this point? A search\nof https://codesearch.debian.net/ does reveal a few external uses, so we\ncould alternatively leave it around and just update Postgres to stop using\nit, but I don't think it would be too burdensome for extension authors to\nfix if we removed it completely.\n\nPatch attached.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Nov 2023 12:59:50 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "retire MemoryContextResetAndDeleteChildren backwards compatibility\n macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 12:30 AM Nathan Bossart <[email protected]>\nwrote:\n\n> I just found myself researching the difference between MemoryContextReset()\n> and MemoryContextResetAndDeleteChildren(), and it turns out that as of\n> commit eaa5808 (2015), there is none.\n> MemoryContextResetAndDeleteChildren() is just a backwards compatibility\n> macro for MemoryContextReset(). I found this surprising because it sounds\n> like they do very different things.\n>\n> Shall we retire this backwards compatibility macro at this point? A search\n> of https://codesearch.debian.net/ does reveal a few external uses, so we\n> could alternatively leave it around and just update Postgres to stop using\n> it, but I don't think it would be too burdensome for extension authors to\n> fix if we removed it completely.\n>\n\n+1\n\nPatch attached.\n>\n\nChanges look pretty much straightforward, but the patch failed to apply on the\nlatest master head (b41b1a7f490) for me.\n\nRegards,\nAmul",
"msg_date": "Tue, 14 Nov 2023 16:25:24 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 04:25:24PM +0530, Amul Sul wrote:\n> Changes looks pretty much straight forward, but patch failed to apply on the\n> latest master head(b41b1a7f490) at me.\n\nThanks for taking a look. Would you mind sharing the error(s) you are\nseeing? The patch applies fine on cfbot and my machine, and check-world\ncontinues to pass.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 09:51:05 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Tue, Nov 14, 2023 at 04:25:24PM +0530, Amul Sul wrote:\n>> Changes looks pretty much straight forward, but patch failed to apply on the\n>> latest master head(b41b1a7f490) at me.\n\n> Thanks for taking a look. Would you mind sharing the error(s) you are\n> seeing? The patch applies fine on cfbot and my machine, and check-world\n> continues to pass.\n\nIt may be a question of the tool used to apply the patch. IME,\n\"patch\" is pretty forgiving, \"git am\" very much less so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Nov 2023 10:59:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 10:59:04AM -0500, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> On Tue, Nov 14, 2023 at 04:25:24PM +0530, Amul Sul wrote:\n>>> Changes looks pretty much straight forward, but patch failed to apply on the\n>>> latest master head(b41b1a7f490) at me.\n> \n>> Thanks for taking a look. Would you mind sharing the error(s) you are\n>> seeing? The patch applies fine on cfbot and my machine, and check-world\n>> continues to pass.\n> \n> It may be a question of the tool used to apply the patch. IME,\n> \"patch\" is pretty forgiving, \"git am\" very much less so.\n\nAh. I just did a 'git diff > file_name' for this one, so you'd indeed need\nto use git-apply instead of git-am. (I ordinarily use git-format-patch,\nbut I sometimes use git-diff for trivial or prototype patches.)\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 10:05:53 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On 2023-Nov-13, Nathan Bossart wrote:\n\n> Shall we retire this backwards compatibility macro at this point? A search\n> of https://codesearch.debian.net/ does reveal a few external uses, so we\n> could alternatively leave it around and just update Postgres to stop using\n> it, but I don't think it would be too burdensome for extension authors to\n> fix if we removed it completely.\n\nLet's leave the macro around and just remove its uses in PGDG-owned\ncode. Having the macro around hurts nothing, and we can remove it in 15\nyears or so.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Entristecido, Wutra (canción de Las Barreras)\necha a Freyr a rodar\ny a nosotros al mar\"\n\n\n",
"msg_date": "Tue, 14 Nov 2023 17:20:16 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 05:20:16PM +0100, Alvaro Herrera wrote:\n> Let's leave the macro around and just remove its uses in PGDG-owned\n> code. Having the macro around hurts nothing, and we can remove it in 15\n> years or so.\n\nWFM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 10:33:39 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n\n> On 2023-Nov-13, Nathan Bossart wrote:\n>\n>> Shall we retire this backwards compatibility macro at this point? A search\n>> of https://codesearch.debian.net/ does reveal a few external uses, so we\n>> could alternatively leave it around and just update Postgres to stop using\n>> it, but I don't think it would be too burdensome for extension authors to\n>> fix if we removed it completely.\n>\n> Let's leave the macro around and just remove its uses in PGDG-owned\n> code. Having the macro around hurts nothing, and we can remove it in 15\n> years or so.\n\nIs there a preprocessor symbol that is defined when building Postgres\nitself (and extensions in /contrib/), but not third-party extensions (or\nvice versa)? If so, the macro could be guarded by that, so that uses\ndon't accidentally sneak back in.\n\nThere's also __attribute__((deprecated)) (and __declspec(deprecated)\nfor MSVC), but that can AFAIK only be attached to functions and\nvariables, not macros, so it would have to be changed to a static inline\nfunction.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 14 Nov 2023 16:36:44 +0000",
"msg_from": "Dagfinn Ilmari Mannsåker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 04:36:44PM +0000, Dagfinn Ilmari Mannsåker wrote:\n> Is there a preprocessor symbol that is defined when building Postgres\n> itself (and extensions in /contrib/), but not third-party extensions (or\n> vice versa)? If so, the macro could be guarded by that, so that uses\n> don't accidentally sneak back in.\n\nI'm not aware of anything like that.\n\n> There's also __attribute__((deprecated)) (and __declspec(deprecated)\n> for MSVC), but that can AFAIK only be attached to functions and\n> variables, not macros, so it would have to be changed to a static inline\n> function.\n\nIt might be worth introducing pg_attribute_deprecated() in c.h. I'm not\ntoo worried about this particular macro, but it seems handy in general.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 11:01:15 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 11:01:15AM -0600, Nathan Bossart wrote:\n> On Tue, Nov 14, 2023 at 04:36:44PM +0000, Dagfinn Ilmari Mannsåker wrote:\n>> There's also __attribute__((deprecated)) (and __declspec(deprecated)\n>> for MSVC), but that can AFAIK only be attached to functions and\n>> variables, not macros, so it would have to be changed to a static inline\n>> function.\n> \n> It might be worth introducing pg_attribute_deprecated() in c.h. I'm not\n> too worried about this particular macro, but it seems handy in general.\n\nHuh, this was brought up before [0].\n\n[0] https://postgr.es/m/20200825183002.fkvzxtneijsdgrfv%40alap3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 11:04:51 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n>> It might be worth introducing pg_attribute_deprecated() in c.h. I'm not\n>> too worried about this particular macro, but it seems handy in general.\n\n> Huh, this was brought up before [0].\n> [0] https://postgr.es/m/20200825183002.fkvzxtneijsdgrfv%40alap3.anarazel.de\n\nFWIW, I think it's fine to just nuke MemoryContextResetAndDeleteChildren.\nWe ask extension authors to deal with much more significant API changes\nthan that in every release, and versions where the updated code wouldn't\nwork are long gone. And, as you say, the existence of that separate from\nMemoryContextReset creates confusion, which has nonzero cost in itself.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Nov 2023 12:10:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 9:50 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Nov-13, Nathan Bossart wrote:\n>\n> > Shall we retire this backwards compatibility macro at this point? A search\n> > of https://codesearch.debian.net/ does reveal a few external uses, so we\n> > could alternatively leave it around and just update Postgres to stop using\n> > it, but I don't think it would be too burdensome for extension authors to\n> > fix if we removed it completely.\n>\n> Let's leave the macro around and just remove its uses in PGDG-owned\n> code. Having the macro around hurts nothing, and we can remove it in 15\n> years or so.\n\nFWIW, there are other backward compatibility macros out there like\ntuplestore_donestoring which was introduced by commit dd04e95 21 years\nago and SPI_push() and its friends which were made no-ops macros by\ncommit 1833f1a 7 years ago. Debian code search shows very minimal\nusages of the above macros. Can we do away with\ntuplestore_donestoring?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 22:46:25 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 12:10:41PM -0500, Tom Lane wrote:\n> FWIW, I think it's fine to just nuke MemoryContextResetAndDeleteChildren.\n> We ask extension authors to deal with much more significant API changes\n> than that in every release, and versions where the updated code wouldn't\n> work are long gone. And, as you say, the existence of that separate from\n> MemoryContextReset creates confusion, which has nonzero cost in itself.\n\nThat is my preference as well. Alvaro, AFAICT you are the only vote\nagainst removing it completely. If you feel ѕtrongly about it, I don't\nmind going the __attribute__((deprecated)) route, but otherwise, I'd\nprobably just remove it completely.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 11:59:17 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 10:46:25PM +0530, Bharath Rupireddy wrote:\n> FWIW, there are other backward compatibility macros out there like\n> tuplestore_donestoring which was introduced by commit dd04e95 21 years\n> ago and SPI_push() and its friends which were made no-ops macros by\n> commit 1833f1a 7 years ago. Debian code search shows very minimal\n> usages of the above macros. Can we do away with\n> tuplestore_donestoring?\n\nCan we take these other things to a separate thread?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 11:59:53 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On 2023-Nov-14, Nathan Bossart wrote:\n\n> On Tue, Nov 14, 2023 at 12:10:41PM -0500, Tom Lane wrote:\n> > FWIW, I think it's fine to just nuke MemoryContextResetAndDeleteChildren.\n> > We ask extension authors to deal with much more significant API changes\n> > than that in every release, and versions where the updated code wouldn't\n> > work are long gone. And, as you say, the existence of that separate from\n> > MemoryContextReset creates confusion, which has nonzero cost in itself.\n> \n> That is my preference as well. Alvaro, AFAICT you are the only vote\n> against removing it completely. If you feel ѕtrongly about it,\n\nOh, I don't. (But I wouldn't mind putting pg_attribute_deprecated to\ngood use elsewhere ... not that I have any specific examples handy.)\n\nYour S key seems to be doing some funny business.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Hay dos momentos en la vida de un hombre en los que no debería\nespecular: cuando puede permitírselo y cuando no puede\" (Mark Twain)\n\n\n",
"msg_date": "Tue, 14 Nov 2023 20:20:08 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 08:20:08PM +0100, Alvaro Herrera wrote:\n> Oh, I don't. (But I wouldn't mind putting pg_attribute_deprecated to\n> good use elsewhere ... not that I have any specific examples handy.)\n\nAgreed.\n\n> Your S key seems to be doing some funny business.\n\nI seem to have accidentally enabled \"digraph\" in my .vimrc at some point...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 14:22:52 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 9:21 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Tue, Nov 14, 2023 at 04:25:24PM +0530, Amul Sul wrote:\n> > Changes looks pretty much straight forward, but patch failed to apply on\n> the\n> > latest master head(b41b1a7f490) at me.\n>\n> Thanks for taking a look. Would you mind sharing the error(s) you are\n> seeing? The patch applies fine on cfbot and my machine, and check-world\n> continues to pass.\n>\n\nNevermind, I usually use git apply or git am, here are those errors:\n\nPG/ - (master) $ git apply ~/Downloads/retire_compatibility_macro_v1.patch\nerror: patch failed: src/backend/access/brin/brin.c:297\nerror: src/backend/access/brin/brin.c: patch does not apply\nerror: patch failed: src/backend/access/gin/ginscan.c:251\nerror: src/backend/access/gin/ginscan.c: patch does not apply\nerror: patch failed: src/backend/access/transam/xact.c:1933\nerror: src/backend/access/transam/xact.c: patch does not apply\nerror: patch failed: src/backend/commands/analyze.c:583\nerror: src/backend/commands/analyze.c: patch does not apply\nerror: patch failed: src/backend/executor/nodeRecursiveunion.c:317\nerror: src/backend/executor/nodeRecursiveunion.c: patch does not apply\nerror: patch failed: src/backend/executor/nodeSetOp.c:631\nerror: src/backend/executor/nodeSetOp.c: patch does not apply\nerror: patch failed: src/backend/executor/nodeWindowAgg.c:216\nerror: src/backend/executor/nodeWindowAgg.c: patch does not apply\nerror: patch failed: src/backend/executor/spi.c:547\nerror: src/backend/executor/spi.c: patch does not apply\nerror: patch failed: src/backend/postmaster/autovacuum.c:555\nerror: src/backend/postmaster/autovacuum.c: patch does not apply\nerror: patch failed: src/backend/postmaster/bgwriter.c:182\nerror: src/backend/postmaster/bgwriter.c: patch does not apply\nerror: patch failed: src/backend/postmaster/checkpointer.c:290\nerror: src/backend/postmaster/checkpointer.c: patch does not apply\nerror: patch 
failed: src/backend/postmaster/walwriter.c:178\nerror: src/backend/postmaster/walwriter.c: patch does not apply\nerror: patch failed: src/backend/replication/logical/worker.c:3647\nerror: src/backend/replication/logical/worker.c: patch does not apply\nerror: patch failed: src/backend/statistics/extended_stats.c:2237\nerror: src/backend/statistics/extended_stats.c: patch does not apply\nerror: patch failed: src/backend/tcop/postgres.c:4457\nerror: src/backend/tcop/postgres.c: patch does not apply\nerror: patch failed: src/backend/utils/cache/evtcache.c:91\nerror: src/backend/utils/cache/evtcache.c: patch does not apply\nerror: patch failed: src/backend/utils/error/elog.c:1833\nerror: src/backend/utils/error/elog.c: patch does not apply\nerror: patch failed: src/include/utils/memutils.h:66\nerror: src/include/utils/memutils.h: patch does not apply\nPG/ - (master) $\n\nRegards,\nAmul\n",
"msg_date": "Wed, 15 Nov 2023 09:27:18 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 09:27:18AM +0530, Amul Sul wrote:\n> Nevermind, I usually use git apply or git am, here are those errors:\n> \n> PG/ - (master) $ git apply ~/Downloads/retire_compatibility_macro_v1.patch\n> error: patch failed: src/backend/access/brin/brin.c:297\n> error: src/backend/access/brin/brin.c: patch does not apply\n\nI wonder if your mail client is modifying the patch. Do you have the same\nissue if you download it from the archives?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Nov 2023 09:56:00 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "Committed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Nov 2023 13:45:14 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 9:26 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Wed, Nov 15, 2023 at 09:27:18AM +0530, Amul Sul wrote:\n> > Nevermind, I usually use git apply or git am, here are those errors:\n> >\n> > PG/ - (master) $ git apply\n> ~/Downloads/retire_compatibility_macro_v1.patch\n> > error: patch failed: src/backend/access/brin/brin.c:297\n> > error: src/backend/access/brin/brin.c: patch does not apply\n>\n> I wonder if your mail client is modifying the patch. Do you have the same\n> issue if you download it from the archives?\n>\n\nYes, you are correct. Surprisingly, the archive version applied cleanly.\n\nGmail is doing something, I usually use web login on chrome browser, I\nnever\nfaced such issues with other's patches. Anyway, will try both the versions\nnext\ntime for the same kind of issue, sorry for the noise.\n\nRegards,\nAmul\n",
"msg_date": "Thu, 16 Nov 2023 11:37:26 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 11:29 PM Nathan Bossart\n<[email protected]> wrote:\n>\n> On Tue, Nov 14, 2023 at 10:46:25PM +0530, Bharath Rupireddy wrote:\n> > FWIW, there are other backward compatibility macros out there like\n> > tuplestore_donestoring which was introduced by commit dd04e95 21 years\n> > ago and SPI_push() and its friends which were made no-ops macros by\n> > commit 1833f1a 7 years ago. Debian code search shows very minimal\n> > usages of the above macros. Can we do away with\n> > tuplestore_donestoring?\n>\n> Can we take these other things to a separate thread?\n\nSure. Here it is -\nhttps://www.postgresql.org/message-id/CALj2ACVeO58JM5tK2Qa8QC-%3DkC8sdkJOTd4BFU%3DK8zs4gGYpjQ%40mail.gmail.com.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 19:13:27 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: retire MemoryContextResetAndDeleteChildren backwards\n compatibility macro"
}
] |
[
{
"msg_contents": "Improve default and empty privilege outputs in psql.\n\nDefault privileges are represented as NULL::aclitem[] in catalog ACL\ncolumns, while revoking all privileges leaves an empty aclitem[].\nThese two cases used to produce identical output in psql meta-commands\nlike \\dp. Using something like \"\\pset null '(default)'\" as a\nworkaround for spotting the difference did not work, because null\nvalues were always displayed as empty strings by describe.c's\nmeta-commands.\n\nThis patch improves that with two changes:\n\n1. Print \"(none)\" for empty privileges so that the user is able to\n distinguish them from default privileges, even without special\n workarounds.\n\n2. Remove the special handling of null values in describe.c,\n so that \"\\pset null\" is honored like everywhere else.\n (This affects all output from these commands, not only ACLs.)\n\nThe privileges shown by \\dconfig+ and \\ddp as well as the column\nprivileges shown by \\dp are not affected by change #1, because the\nrespective aclitem[] is reset to NULL or deleted from the catalog\ninstead of leaving an empty array.\n\nErik Wienhold and Laurenz Albe\n\nDiscussion: https://postgr.es/m/[email protected]\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d1379ebf4c2d3d399e739966dbfa34e92a53b727\n\nModified Files\n--------------\ndoc/src/sgml/ddl.sgml | 16 ++++++++++-\nsrc/bin/psql/describe.c | 57 +++++++++-----------------------------\nsrc/test/regress/expected/psql.out | 55 ++++++++++++++++++++++++++++++++++++\nsrc/test/regress/sql/psql.sql | 32 +++++++++++++++++++++\n4 files changed, 115 insertions(+), 45 deletions(-)",
"msg_date": "Mon, 13 Nov 2023 20:41:43 +0000",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Improve default and empty privilege outputs in psql."
},
{
"msg_contents": "Re: Tom Lane\n> Improve default and empty privilege outputs in psql.\n\nI'm sorry to report this when 17.0 has already been wrapped, but this\nchange is breaking `psql -l` against 9.3-or-earlier servers:\n\n$ /usr/lib/postgresql/17/bin/psql\npsql (17rc1 (Debian 17~rc1-1.pgdg+2), Server 9.3.25)\nGeben Sie »help« für Hilfe ein.\n\npostgres =# \\set ECHO_HIDDEN on\npostgres =# \\l\n/******* ANFRAGE ********/\nSELECT\n d.datname as \"Name\",\n pg_catalog.pg_get_userbyid(d.datdba) as \"Owner\",\n pg_catalog.pg_encoding_to_char(d.encoding) as \"Encoding\",\n 'libc' AS \"Locale Provider\",\n d.datcollate as \"Collate\",\n d.datctype as \"Ctype\",\n NULL as \"Locale\",\n NULL as \"ICU Rules\",\n CASE WHEN pg_catalog.cardinality(d.datacl) = 0 THEN '(none)' ELSE pg_catalog.array_to_string(d.datacl, E'\\n') END AS \"Access privileges\"\nFROM pg_catalog.pg_database d\nORDER BY 1;\n/************************/\n\nFEHLER: 42883: Funktion pg_catalog.cardinality(aclitem[]) existiert nicht\nZEILE 10: CASE WHEN pg_catalog.cardinality(d.datacl) = 0 THEN '(none...\n ^\nTIP: Keine Funktion stimmt mit dem angegebenen Namen und den Argumenttypen überein. Sie müssen möglicherweise ausdrückliche Typumwandlungen hinzufügen.\nORT: ParseFuncOrColumn, parse_func.c:299\n\n\nThe psql docs aren't really explicit about which old versions are\nstill supported, but there's some mentioning that \\d should work back\nto 9.2:\n\n <para><application>psql</application> works best with servers of the same\n or an older major version. Backslash commands are particularly likely\n to fail if the server is of a newer version than <application>psql</application>\n itself. However, backslash commands of the <literal>\\d</literal> family should\n work with servers of versions back to 9.2, though not necessarily with\n servers newer than <application>psql</application> itself.\n\n\\l seems a tad more important even.\n\nChristoph\n\n\n",
"msg_date": "Tue, 24 Sep 2024 17:14:42 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Improve default and empty privilege outputs in psql."
},
{
"msg_contents": "Christoph Berg <[email protected]> writes:\n> Re: Tom Lane\n>> Improve default and empty privilege outputs in psql.\n\n> I'm sorry to report this when 17.0 has already been wrapped, but this\n> change is breaking `psql -l` against 9.3-or-earlier servers:\n> FEHLER: 42883: Funktion pg_catalog.cardinality(aclitem[]) existiert nicht\n\nGrumble. Well, if that's the worst bug in 17.0 we should all be\nvery pleased indeed ;-). I'll see about fixing it after the\nrelease freeze lifts.\n\n> The psql docs aren't really explicit about which old versions are\n> still supported, but there's some mentioning that \\d should work back\n> to 9.2:\n\nYes, that's the expectation. I'm sure we can think of a more\nbackwards-compatible way to test for empty datacl, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Sep 2024 12:27:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Improve default and empty privilege outputs in psql."
},
{
"msg_contents": "I wrote:\n> Yes, that's the expectation. I'm sure we can think of a more\n> backwards-compatible way to test for empty datacl, though.\n\nLooks like the attached should be sufficient. As penance, I tried\nall the commands in describe.c against 9.2 (or the oldest supported\nserver version), and none of them fail now.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 24 Sep 2024 13:57:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Improve default and empty privilege outputs in psql."
}
] |
[
{
"msg_contents": "The documentation for max_connections does not mention that just by having\na higher value for max_connections, PostgreSQL will use more resources.\n\nWhile working with different customers, I noticed that several of them set\nmax_connections to very high numbers, even though they never expected to\nactually have that many connections to their PostgreSQL instance.\n\nIn one extreme case, the user set max_connections to 200000 and was\nbefuddled that the instance was using more memory than another with the\nsame number of connections.\n\nThis patch adds language to the documentation pointing to the fact that\nhigher value of max_connections leads to higher consumption of resources by\nPostgres, adding one paragraph to doc/src/sgml/config.sgml\n\n <para>\n PostgreSQL sizes certain resources based directly on the value of\n <varname>max_connections</varname>. Increasing its value leads to\n higher allocation of those resources, including shared memory.\n </para>\n\nSincerely,\n\nRoberto Mello",
"msg_date": "Mon, 13 Nov 2023 14:40:08 -0700",
"msg_from": "Roberto Mello <[email protected]>",
"msg_from_op": true,
"msg_subject": "[DOC] Add detail regarding resource consumption wrt max_connections"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nI think it is good to warn the user about the increased allocation of memory for certain parameters so that they do not abuse it by setting it to a huge number without knowing the consequences.\r\n\r\nIt is true that max_connections can increase the size of proc array and other resources, which are allocated in the shared buffer, which also means less shared buffer to perform regular data operations. I am sure this is not the only parameter that affects the memory allocation. \"max_prepared_xacts\" can also affect the shared memory allocation too so the same warning message applies here as well. Maybe there are other parameters with similar effects. \r\n\r\nInstead of stating that higher max_connections results in higher allocation, It may be better to tell the user that if the value needs to be set much higher, consider increasing the \"shared_buffers\" setting as well.\r\n\r\nthank you\r\n\r\n-----------------------\r\nCary Huang\r\nHighgo Software Canada\r\nwww.highgo.ca",
"msg_date": "Fri, 12 Jan 2024 22:14:38 +0000",
"msg_from": "Cary Huang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 3:15 PM Cary Huang <[email protected]> wrote:\n\n> I think it is good to warn the user about the increased allocation of\n> memory for certain parameters so that they do not abuse it by setting it to\n> a huge number without knowing the consequences.\n>\n> It is true that max_connections can increase the size of proc array and\n> other resources, which are allocated in the shared buffer, which also means\n> less shared buffer to perform regular data operations. I am sure this is\n> not the only parameter that affects the memory allocation.\n> \"max_prepared_xacts\" can also affect the shared memory allocation too so\n> the same warning message applies here as well. Maybe there are other\n> parameters with similar effects.\n>\n> Instead of stating that higher max_connections results in higher\n> allocation, It may be better to tell the user that if the value needs to be\n> set much higher, consider increasing the \"shared_buffers\" setting as well.\n>\n\nAppreciate the review, Cary.\n\nMy goal was to inform the reader that there are implications to setting\nmax_connections higher. I've personally seen a user mindlessly set this to\n50k connections, unaware it would cause unintended consequences.\n\nI can add a suggestion for the user to consider increasing shared_buffers\nin accordance with higher max_connections, but it would be better if there\nwas a \"rule of thumb\" guideline to go along. 
I'm open to suggestions.\n\nI can revise with a similar warning in max_prepared_xacts as well.\n\nSincerely,\n\nRoberto\n",
"msg_date": "Sat, 13 Jan 2024 10:31:42 -0700",
"msg_from": "Roberto Mello <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "On Sat, 2024-01-13 at 10:31 -0700, Roberto Mello wrote:\n> On Fri, Jan 12, 2024 at 3:15 PM Cary Huang <[email protected]>\n> wrote:\n> > I think it is good to warn the user about the increased allocation\n> > of memory for certain parameters so that they do not abuse it by\n> > setting it to a huge number without knowing the consequences.\n> > \n> > It is true that max_connections can increase the size of proc array\n> > and other resources, which are allocated in the shared buffer,\n> > which also means less shared buffer to perform regular data\n> > operations. I am sure this is not the only parameter that affects\n> > the memory allocation. \"max_prepared_xacts\" can also affect the\n> > shared memory allocation too so the same warning message applies\n> > here as well. Maybe there are other parameters with similar\n> > effects. \n> > \n> > Instead of stating that higher max_connections results in higher\n> > allocation, It may be better to tell the user that if the value\n> > needs to be set much higher, consider increasing the\n> > \"shared_buffers\" setting as well.\n> > \n> \n> \n> Appreciate the review, Cary.\n> \n> My goal was to inform the reader that there are implications to\n> setting max_connections higher. I've personally seen a user\n> mindlessly set this to 50k connections, unaware it would cause\n> unintended consequences. 
\n> \n> I can add a suggestion for the user to consider increasing\n> shared_buffers in accordance with higher max_connections, but it\n> would be better if there was a \"rule of thumb\" guideline to go along.\n> I'm open to suggestions.\n> \n> I can revise with a similar warning in max_prepared_xacts as well.\n> \n> Sincerely,\n> \n> Roberto\n\nCan a \"close enough\" rule of thumb be calculated from:\npostgresql.conf -> log_min_messages = debug3\n\nstart postgresql with varying max_connections to get\nCreateSharedMemoryAndSemaphores() sizes to generate a rough equation\n\npostgresql-12-main.log\n\nmax_connections=100\n75:2024-01-19 17:04:56.544 EST [2762535] DEBUG: invoking\nIpcMemoryCreate(size=149110784)\n0.149110784GB\n\nmax_connections=10000\n1203:2024-01-19 17:06:13.502 EST [2764895] DEBUG: invoking\nIpcMemoryCreate(size=644997120)\n0.64499712GB\n\nmax_connections=20000\n5248:2024-01-19 17:24:27.956 EST [2954550] DEBUG: invoking\nIpcMemoryCreate(size=1145774080)\n1.14577408GB\n\nmax_connections=50000\n2331:2024-01-19 17:07:27.716 EST [2767079] DEBUG: invoking\nIpcMemoryCreate(size=2591490048)\n2.591490048GB\n\n\nfrom lines 184-186\n\n$ rg -B28 -A35 'invoking IpcMemoryCreate'\nbackend/storage/ipc/ipci.c\n158-/*\n159- * CreateSharedMemoryAndSemaphores\n160- * Creates and initializes shared memory and semaphores.\n161- *\n162- * This is called by the postmaster or by a standalone backend.\n163- * It is also called by a backend forked from the postmaster in the\n164- * EXEC_BACKEND case. In the latter case, the shared memory segment\n165- * already exists and has been physically attached to, but we have\nto\n166- * initialize pointers in local memory that reference the shared\nstructures,\n167- * because we didn't inherit the correct pointer values from the\npostmaster\n168- * as we do in the fork() scenario. The easiest way to do that is\nto run\n169- * through the same code as before. 
(Note that the called routines\nmostly\n170- * check IsUnderPostmaster, rather than EXEC_BACKEND, to detect\nthis case.\n171- * This is a bit code-wasteful and could be cleaned up.)\n172- */\n173-void\n174-CreateSharedMemoryAndSemaphores(void)\n175-{\n176- PGShmemHeader *shim = NULL;\n177-\n178- if (!IsUnderPostmaster)\n179- {\n180- PGShmemHeader *seghdr;\n181- Size size;\n182- int numSemas;\n183-\n184- /* Compute the size of the shared-memory block */\n185- size = CalculateShmemSize(&numSemas);\n186: elog(DEBUG3, \"invoking IpcMemoryCreate(size=%zu)\", size);\n187-\n188- /*\n189- * Create the shmem segment\n190- */\n191- seghdr = PGSharedMemoryCreate(size, &shim);\n192-\n193- InitShmemAccess(seghdr);\n194-\n195- /*\n196- * Create semaphores\n197- */\n198- PGReserveSemaphores(numSemas);\n199-\n200- /*\n201- * If spinlocks are disabled, initialize emulation layer (which\n202- * depends on semaphores, so the order is important here).\n203- */\n204-#ifndef HAVE_SPINLOCKS\n205- SpinlockSemaInit();\n206-#endif\n207- }\n208- else\n209- {\n210- /*\n211- * We are reattaching to an existing shared memory segment. This\n212- * should only be reached in the EXEC_BACKEND case.\n213- */\n214-#ifndef EXEC_BACKEND\n215- elog(PANIC, \"should be attached to shared memory already\");\n216-#endif\n217- }\n218-\n219- /*\n220- * Set up shared memory allocation mechanism\n221- */\n",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
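As an illustrative aside (not from the thread itself): the four (max_connections, size) pairs reported above are nearly linear, so a plain least-squares fit gives a rough rule of thumb. The figures are specific to that PostgreSQL 12 build; treat the fit as a back-of-the-envelope estimate only.

```python
# Rough "rule of thumb" fit for the IpcMemoryCreate sizes reported above.
# Data points: (max_connections, shared memory size in bytes) taken from the
# PostgreSQL 12 DEBUG3 log lines quoted in the thread.
points = [
    (100, 149110784),
    (10000, 644997120),
    (20000, 1145774080),
    (50000, 2591490048),
]

n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n

# Ordinary least squares: slope = Sxy / Sxx, intercept = mean_y - slope * mean_x.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in points)
sxx = sum((x - mean_x) ** 2 for x, _ in points)
slope = sxy / sxx                    # extra bytes of shared memory per connection
intercept = mean_y - slope * mean_x  # fixed overhead independent of connections

print(f"~{slope / 1024:.0f} KiB per connection, ~{intercept / 1024**2:.0f} MiB base")
```

On these points the fit comes out to roughly 48 KiB of shared memory per connection on top of a ~150 MiB base, and reproduces each measured size to within a few percent; the exact figures depend on version, build options, and other GUCs.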
{
"msg_contents": "On Fri, 2024-01-19 at 17:37 -0500, [email protected] wrote:\n> On Sat, 2024-01-13 at 10:31 -0700, Roberto Mello wrote:\n> > \n> > I can add a suggestion for the user to consider increasing\n> > shared_buffers in accordance with higher max_connections, but it\n> > would be better if there was a \"rule of thumb\" guideline to go\n> > along. I'm open to suggestions.\n> > \n> > I can revise with a similar warning in max_prepared_xacts as well.\n> > \n> > Sincerely,\n> > \n> > Roberto\n> \n> Can a \"close enough\" rule of thumb be calculated from:\n> postgresql.conf -> log_min_messages = debug3\n> \n> start postgresql with varying max_connections to get\n> CreateSharedMemoryAndSemaphores() sizes to generate a rough equation\n> \n\nor maybe it would be sufficient to advise to set log_min_messages =\ndebug3 on a test DB and start/stop it with varying values of\nmax_connections and look at the differing values in \nDEBUG: invoking IpcMemoryCreate(size=...) log messages for themselves.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 08:58:23 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
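A small illustrative sketch (the helper below is hypothetical, not something from the thread or from PostgreSQL): pulling the sizes out of such DEBUG log lines is a one-liner with a regular expression, which beats eyeballing the log by hand.

```python
import re

# Hypothetical helper: extract the shared memory size from the DEBUG3 lines
# the thread suggests comparing, e.g.
#   DEBUG:  invoking IpcMemoryCreate(size=149110784)
IPC_RE = re.compile(r"invoking IpcMemoryCreate\(size=(\d+)\)")

def shmem_sizes(log_lines):
    """Return every IpcMemoryCreate size (in bytes) found in the given lines."""
    return [int(m.group(1)) for line in log_lines if (m := IPC_RE.search(line))]

sample = [
    "2024-01-19 17:04:56.544 EST [2762535] DEBUG:  invoking IpcMemoryCreate(size=149110784)",
    "2024-01-19 17:06:13.502 EST [2764895] DEBUG:  invoking IpcMemoryCreate(size=644997120)",
    "2024-01-19 17:06:14.001 EST [2764895] LOG:  database system is ready to accept connections",
]
print(shmem_sizes(sample))  # -> [149110784, 644997120]
```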
{
"msg_contents": "On Mon, Jan 22, 2024 at 8:58 AM <[email protected]> wrote:\n> On Fri, 2024-01-19 at 17:37 -0500, [email protected] wrote:\n> > On Sat, 2024-01-13 at 10:31 -0700, Roberto Mello wrote:\n> > >\n> > > I can add a suggestion for the user to consider increasing\n> > > shared_buffers in accordance with higher max_connections, but it\n> > > would be better if there was a \"rule of thumb\" guideline to go\n> > > along. I'm open to suggestions.\n> > >\n> > > I can revise with a similar warning in max_prepared_xacts as well.\n> > >\n> > > Sincerely,\n> > >\n> > > Roberto\n> >\n> > Can a \"close enough\" rule of thumb be calculated from:\n> > postgresql.conf -> log_min_messages = debug3\n> >\n> > start postgresql with varying max_connections to get\n> > CreateSharedMemoryAndSemaphores() sizes to generate a rough equation\n> >\n>\n> or maybe it would be sufficient to advise to set log_min_messages =\n> debug3 on a test DB and start/stop it with varying values of\n> max_connections and look at the differing values in\n> DEBUG: invoking IpcMemoryCreate(size=...) log messages for themselves.\n>\n>\n\nI'm of the opinion that advice suggesting DBAs set things to DEBUG 3\nis unfriendly at best. If you really want to add more, there is an\nexisting unfriendly section of the docs at\nhttps://www.postgresql.org/docs/devel/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT\nthat mentions this problem, specifically:\n\n\"If PostgreSQL itself is the cause of the system running out of\nmemory, you can avoid the problem by changing your configuration. In\nsome cases, it may help to lower memory-related configuration\nparameters, particularly shared_buffers, work_mem, and\nhash_mem_multiplier. In other cases, the problem may be caused by\nallowing too many connections to the database server itself. 
In many\ncases, it may be better to reduce max_connections and instead make use\nof external connection-pooling software.\"\n\nI couldn't really find a spot to add in your additional info, but\nmaybe you can find a spot that fits? Or maybe a well written\nwalk-through of this would make for a good wiki page in case people\nreally want to dig in.\n\nIn any case, I think Roberto's original language is an improvement\nover what we have now, so I'd probably recommend just going with that,\nalong with a similar note to max_prepared_xacts, and optionally a\npointer to the shared mem section of the docs.\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Fri, 8 Mar 2024 09:52:27 -0500",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "Hi,\n \nOn Fri, Jan 12, 2024 at 10:14:38PM +0000, Cary Huang wrote:\n> I think it is good to warn the user about the increased allocation of\n> memory for certain parameters so that they do not abuse it by setting\n> it to a huge number without knowing the consequences.\n\nRight, and I think it might be useful to log (i.e. at LOG not DEBUG3\nlevel, with a nicer message) the amount of memory we allocate on\nstartup, that is just one additional line per instance lifetime but\nmight be quite useful to admins. Or maybe two lines if we log whether we\ncould allocate it as huge pages or not as well:\n\n|2024-03-08 16:46:13.117 CET [237899] DEBUG: invoking IpcMemoryCreate(size=145145856)\n|2024-03-08 16:46:13.117 CET [237899] DEBUG: mmap(146800640) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory\n \n> It is true that max_connections can increase the size of proc array\n> and other resources, which are allocated in the shared buffer, which\n> also means less shared buffer to perform regular data operations.\n\nAFAICT, those resources are allocated on top of shared_buffers, i.e. the\ntotal allocated memory is shared_buffers + (some resources) *\nmax_connections + (other resources) * other_factors.\n\n> Instead of stating that higher max_connections results in higher\n> allocation, It may be better to tell the user that if the value needs\n> to be set much higher, consider increasing the \"shared_buffers\"\n> setting as well.\n\nOnly if what you say above is true and I am at fault.\n\n\nMichael\n\n\n",
"msg_date": "Fri, 8 Mar 2024 16:46:48 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "On Fri, Mar 8, 2024 at 10:47 AM Michael Banck <[email protected]> wrote:\n>\n> Hi,\n>\n> On Fri, Jan 12, 2024 at 10:14:38PM +0000, Cary Huang wrote:\n> > I think it is good to warn the user about the increased allocation of\n> > memory for certain parameters so that they do not abuse it by setting\n> > it to a huge number without knowing the consequences.\n>\n> Right, and I think it might be useful to log (i.e. at LOG not DEBUG3\n> level, with a nicer message) the amount of memory we allocate on\n> startup, that is just one additional line per instance lifetime but\n> might be quite useful to admins. Or maybe two lines if we log whether we\n> could allocate it as huge pages or not as well:\n>\n> |2024-03-08 16:46:13.117 CET [237899] DEBUG: invoking IpcMemoryCreate(size=145145856)\n> |2024-03-08 16:46:13.117 CET [237899] DEBUG: mmap(146800640) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory\n>\n\nIf we were going to add these details (and I very much like the idea),\nI would advocate that we put it somewhere more permanent than a single\nlog entry at start-up. Given that database up-times easily run months\nand sometimes years, it is hard to imagine we'd always have access to\nthe log files to figure this out on any actively running systems.\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Sun, 10 Mar 2024 09:58:25 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "Hi,\n\nOn Sun, Mar 10, 2024 at 09:58:25AM -0400, Robert Treat wrote:\n> On Fri, Mar 8, 2024 at 10:47 AM Michael Banck <[email protected]> wrote:\n> > On Fri, Jan 12, 2024 at 10:14:38PM +0000, Cary Huang wrote:\n> > > I think it is good to warn the user about the increased allocation of\n> > > memory for certain parameters so that they do not abuse it by setting\n> > > it to a huge number without knowing the consequences.\n> >\n> > Right, and I think it might be useful to log (i.e. at LOG not DEBUG3\n> > level, with a nicer message) the amount of memory we allocate on\n> > startup, that is just one additional line per instance lifetime but\n> > might be quite useful to admins. Or maybe two lines if we log whether we\n> > could allocate it as huge pages or not as well:\n> >\n> > |2024-03-08 16:46:13.117 CET [237899] DEBUG: invoking IpcMemoryCreate(size=145145856)\n> > |2024-03-08 16:46:13.117 CET [237899] DEBUG: mmap(146800640) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory\n> >\n> \n> If we were going to add these details (and I very much like the idea),\n> I would advocate that we put it somewhere more permanent than a single\n> log entry at start-up. Given that database up-times easily run months\n> and sometimes years, it is hard to imagine we'd always have access to\n> the log files to figure this out on any actively running systems.\n\nWell actually, those two numbers are already available at runtime, via\nthe shared_memory_size and (from 17 on) huge_pages_status GUCs.\n\nSo this would be geared at admins that keep logs in long-term storage and\nwant to know what the numbers were a while ago. Maybe it is not that\ninteresting, but I think one or two lines at startup would not hurt.\n\n\nMichael\n\n\n",
"msg_date": "Sun, 10 Mar 2024 15:24:05 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "On Fri, Mar 8, 2024 at 9:52 AM Robert Treat <[email protected]> wrote:\n> I'm of the opinion that advice suggestingDBA's set things to DEBUG 3\n> is unfriendly at best. If you really want to add more, there is an\n> existing unfriendly section of the docs at\n> https://www.postgresql.org/docs/devel/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT\n> that mentions this problem, specifically:\n>\n> \"If PostgreSQL itself is the cause of the system running out of\n> memory, you can avoid the problem by changing your configuration. In\n> some cases, it may help to lower memory-related configuration\n> parameters, particularly shared_buffers, work_mem, and\n> hash_mem_multiplier. In other cases, the problem may be caused by\n> allowing too many connections to the database server itself. In many\n> cases, it may be better to reduce max_connections and instead make use\n> of external connection-pooling software.\"\n>\n> I couldn't really find a spot to add in your additional info, but\n> maybe you can find a spot that fits? Or maybe a well written\n> walk-through of this would make for a good wiki page in case people\n> really want to dig in.\n>\n> In any case, I think Roberto's original language is an improvement\n> over what we have now, so I'd probably recommend just going with that,\n> along with a similar note to max_prepared_xacts, and optionally a\n> pointer to the shared mem section of the docs.\n\nI agree with this.\n\nI don't agree with Cary's statement that if you increase\nmax_connections you should increase shared_buffers as well. That seems\nsituation-dependent to me, and it's also missing Roberto's point,\nwhich is that JUST increasing max_connections without doing anything\nelse uses more shared memory.\n\nSimilarly, I don't think we need to document a detailed testing\nprocedure, as proposed by Reid. 
If users want to know exactly how many\nadditional resources are used, they can test; either using the DEBUG3\napproach, or perhaps more simply via the pg_shmem_allocations view.\nBut I think it's overkill for us to recommend any specific testing\nprocedure here.\n\nRather, I think that it's entirely appropriate to do what Roberto\nsuggested, which is to say, let users know that they're going to use\nsome extra resources if they increase the setting, and then let them\nfigure out what if anything they want to do about that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 13:57:53 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 1:57 PM Robert Haas <[email protected]> wrote:\n> Rather, I think that it's entirely appropriate to do what Roberto\n> suggested, which is to say, let users know that they're going to use\n> some extra resources if they increase the setting, and then let them\n> figure out what if anything they want to do about that.\n\nConsidering that, and the lack of further comment, I propose to commit\nthe original patch.\n\nObjections?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 11:14:42 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "On Wed, May 15, 2024 at 11:14 AM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Mar 22, 2024 at 1:57 PM Robert Haas <[email protected]> wrote:\n> > Rather, I think that it's entirely appropriate to do what Roberto\n> > suggested, which is to say, let users know that they're going to use\n> > some extra resources if they increase the setting, and then let them\n> > figure out what if anything they want to do about that.\n>\n> Considering that, and the lack of further comment, I propose to commit\n> the original patch.\n>\n> Objections?\n>\n\nI think the only unresolved question in my mind was if we should add a\nsimilar note to the original patch to max_prepared_xacts as well; do\nyou intend to do that?\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Wed, 15 May 2024 15:59:51 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "On Wed, May 15, 2024 at 4:00 PM Robert Treat <[email protected]> wrote:\n> I think the only unresolved question in my mind was if we should add a\n> similar note to the original patch to max_prepared_xacts as well; do\n> you intend to do that?\n\nI didn't intend to do that. I don't think it would be incorrect to do\nso, but then we're kind of getting into a slippery slope of trying to\nlabel every parameter that increases shared memory usage or any\nother kind of resource consumption, and there are probably (pulls\nnumber out of the air) twenty of those. It seems more worthwhile to\nmention it for max_connections than the other (deducts one from\nprevious random guess) nineteen because it affects a whole lot more\nthings, like the size of the fsync queue and the size of the lock\ntable, and also because it tends to get set to relatively large\nvalues, unlike, for example, autovacuum_max_workers. If you think we\nshould go further than just doing max_connections, then I think we\neither need to (a) add a note to every single bloody parameter that\naffects the size of shared memory or (b) prove that the subset where\nwe add such a note have a significantly larger impact than the others\nwhere we don't. Do you think we should get into all that?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 16:05:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "On Wed, May 15, 2024 at 4:05 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, May 15, 2024 at 4:00 PM Robert Treat <[email protected]> wrote:\n> > I think the only unresolved question in my mind was if we should add a\n> > similar note to the original patch to max_prepared_xacts as well; do\n> > you intend to do that?\n>\n> I didn't intend to do that. I don't think it would be incorrect to do\n> so, but then we're kind of getting into a slippery slope of trying to\n> label every parameter that has increases shared memory usage or any\n> other kind of research consumption, and there are probably (pulls\n> number out of the air) twenty of those. It seems more worthwhile to\n> mention it for max_connections than the other (deducts one from\n> previous random guess) nineteen because it affects a whole lot more\n> things, like the size of the fsync queue and the size of the lock\n> table, and also because it tends to get set to relatively large\n> values, unlike, for example, autovacuum_max_workers. If you think we\n> should go further than just doing max_connections, then I think we\n> either need to (a) add a note to every single bloody parameter that\n> affects the size of shared memory or (b) prove that the subset where\n> we add such a note have a significantly larger impact than the others\n> where we don't. Do you think we should get into all that?\n>\n\nNope. Let's do the best bang for the buck improvement and we can see\nif we get any feedback that indicates more needs to be done.\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Wed, 15 May 2024 16:22:43 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
},
{
"msg_contents": "On Wed, May 15, 2024 at 4:22 PM Robert Treat <[email protected]> wrote:\n> Nope. Let's do the best bang for the buck improvement and we can see\n> if we get any feedback that indicates more needs to be done.\n\nDone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 May 2024 08:55:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Add detail regarding resource consumption wrt\n max_connections"
}
]
[
{
"msg_contents": "Hi,\n\nI noticed that in lazy_scan_heap(), when there are no indexes on the\ntable being vacuumed, we don't release the lock on the heap page buffer\nbefore vacuuming the freespace map. Other call sites of\nFreeSpaceMapVacuumRange() hold no such lock. It seems like a waste to\nhold a lock we don't need.\n\nISTM the fix (attached) is just to move down the call to\nFreeSpaceMapVacuumRange() to after we've released the lock and recorded\nthe space we just freed.\n\n- Melanie",
"msg_date": "Mon, 13 Nov 2023 17:13:32 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "lazy_scan_heap() should release lock on buffer before vacuuming FSM"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-13 17:13:32 -0500, Melanie Plageman wrote:\n> I noticed that in lazy_scan_heap(), when there are no indexes on the\n> table being vacuumed, we don't release the lock on the heap page buffer\n> before vacuuming the freespace map. Other call sites of\n> FreeSpaceMapVacuumRange() hold no such lock. It seems like a waste to\n> hold a lock we don't need.\n\nI think this undersells the situation a bit. We right now do\nFreeSpaceMapVacuumRange() for 8GB of data (VACUUM_FSM_EVERY_PAGES) in the main\nfork, while holding an exclusive page level lock. There's no guarantee (or\neven high likelihood) that those pages are currently in the page cache, given\nthat we have processed up to 8GB of heap data since. 8GB of data is roughly\n2MB of FSM with the default compilation options.\n\nOf course processing 2MB of FSM doesn't take that long, but still, it seems\nworse than just reading a page or two.\n\n\n\n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index 6985d299b2..8b729828ce 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -1046,18 +1046,6 @@ lazy_scan_heap(LVRelState *vacrel)\n> \t\t\t\t/* Forget the LP_DEAD items that we just vacuumed */\n> \t\t\t\tdead_items->num_items = 0;\n> \n> -\t\t\t\t/*\n> -\t\t\t\t * Periodically perform FSM vacuuming to make newly-freed\n> -\t\t\t\t * space visible on upper FSM pages. 
Note we have not yet\n> -\t\t\t\t * performed FSM processing for blkno.\n> -\t\t\t\t */\n> -\t\t\t\tif (blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES)\n> -\t\t\t\t{\n> -\t\t\t\t\tFreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum,\n> -\t\t\t\t\t\t\t\t\t\t\tblkno);\n> -\t\t\t\t\tnext_fsm_block_to_vacuum = blkno;\n> -\t\t\t\t}\n> -\n> \t\t\t\t/*\n> \t\t\t\t * Now perform FSM processing for blkno, and move on to next\n> \t\t\t\t * page.\n> @@ -1071,6 +1059,18 @@ lazy_scan_heap(LVRelState *vacrel)\n> \n> \t\t\t\tUnlockReleaseBuffer(buf);\n> \t\t\t\tRecordPageWithFreeSpace(vacrel->rel, blkno, freespace);\n> +\n> +\t\t\t\t/*\n> +\t\t\t\t * Periodically perform FSM vacuuming to make newly-freed\n> +\t\t\t\t * space visible on upper FSM pages.\n> +\t\t\t\t */\n> +\t\t\t\tif (blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES)\n> +\t\t\t\t{\n> +\t\t\t\t\tFreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum,\n> +\t\t\t\t\t\t\t\t\t\t\tblkno);\n> +\t\t\t\t\tnext_fsm_block_to_vacuum = blkno;\n> +\t\t\t\t}\n> +\n> \t\t\t\tcontinue;\n> \t\t\t}\n\nPreviously there was this comment about \"not yet\", hinting at that being\nimportant for FreeSpaceMapVacuumRange's API. I think as-is we now don't vacuum\nthe actually freshly updated page contents.\n\nFreeSpaceMapVacuumRange()'s comment says:\n * As above, but assume that only heap pages between start and end-1 inclusive\n * have new free-space information, so update only the upper-level slots\n * covering that block range. 
end == InvalidBlockNumber is equivalent to\n * \"all the rest of the relation\".\n\nSo FreeSpaceMapVacuumRange(..., blkno) will not actually process the \"effects\"\nof the RecordPageWithFreeSpace() above it - which seems confusing.\n\n\n\nAside:\n\nI just tried to reach the path and noticed something odd:\n\n=# show autovacuum;\n┌────────────┐\n│ autovacuum │\n├────────────┤\n│ off │\n└────────────┘\n(1 row)\n\n=# \\dt+ copytest_0\n List of relations\n┌────────┬────────────┬───────┬────────┬─────────────┬───────────────┬─────────┬─────────────┐\n│ Schema │ Name │ Type │ Owner │ Persistence │ Access method │ Size │ Description │\n├────────┼────────────┼───────┼────────┼─────────────┼───────────────┼─────────┼─────────────┤\n│ public │ copytest_0 │ table │ andres │ permanent │ heap │ 1143 MB │ │\n└────────┴────────────┴───────┴────────┴─────────────┴───────────────┴─────────┴─────────────┘\n\n=# DELETE FROM copytest_0;\n=# VACUUM (VERBOSE) copytest_0;\n...\nINFO: 00000: table \"copytest_0\": truncated 146264 to 110934 pages\n...\ntuples missed: 5848 dead from 89 pages not removed due to cleanup lock contention\n...\n\nA bit of debugging later I figured out that this is due to the background\nwriter. If I SIGSTOP bgwriter, the whole relation is truncated...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 17:26:42 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
},
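To put rough numbers on the "8GB of data is roughly 2MB of FSM" claim above, here is a back-of-the-envelope sketch (not from the thread; the ~4000 slots-per-FSM-page figure is approximate, since the exact SlotsPerFSMPage depends on page headers and build options):

```python
# Back-of-the-envelope FSM size for 8GB of heap with the default 8KB block
# size. Each FSM leaf page tracks free space for roughly 4000 heap pages
# (one byte per heap page, arranged in a small binary tree), so this is an
# approximation rather than the exact SlotsPerFSMPage value.
BLCKSZ = 8192
APPROX_SLOTS_PER_FSM_PAGE = 4000

heap_bytes = 8 * 1024**3
heap_pages = heap_bytes // BLCKSZ  # 1,048,576 pages

# The leaf level dominates; the upper levels add only a handful of pages.
fsm_leaf_pages = -(-heap_pages // APPROX_SLOTS_PER_FSM_PAGE)  # ceiling division
fsm_bytes = fsm_leaf_pages * BLCKSZ

print(f"{fsm_leaf_pages} FSM pages ~= {fsm_bytes / 1024**2:.1f} MiB")
```

This lands at about 2 MiB, which lines up with the figure quoted in the thread.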
{
"msg_contents": "On Mon, Nov 13, 2023 at 8:26 PM Andres Freund <[email protected]> wrote:\r\n> On 2023-11-13 17:13:32 -0500, Melanie Plageman wrote:\r\n> > diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\r\n> > index 6985d299b2..8b729828ce 100644\r\n> > --- a/src/backend/access/heap/vacuumlazy.c\r\n> > +++ b/src/backend/access/heap/vacuumlazy.c\r\n> > @@ -1046,18 +1046,6 @@ lazy_scan_heap(LVRelState *vacrel)\r\n> > /* Forget the LP_DEAD items that we just vacuumed */\r\n> > dead_items->num_items = 0;\r\n> >\r\n> > - /*\r\n> > - * Periodically perform FSM vacuuming to make newly-freed\r\n> > - * space visible on upper FSM pages. Note we have not yet\r\n> > - * performed FSM processing for blkno.\r\n> > - */\r\n> > - if (blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES)\r\n> > - {\r\n> > - FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum,\r\n> > - blkno);\r\n> > - next_fsm_block_to_vacuum = blkno;\r\n> > - }\r\n> > -\r\n> > /*\r\n> > * Now perform FSM processing for blkno, and move on to next\r\n> > * page.\r\n> > @@ -1071,6 +1059,18 @@ lazy_scan_heap(LVRelState *vacrel)\r\n> >\r\n> > UnlockReleaseBuffer(buf);\r\n> > RecordPageWithFreeSpace(vacrel->rel, blkno, freespace);\r\n> > +\r\n> > + /*\r\n> > + * Periodically perform FSM vacuuming to make newly-freed\r\n> > + * space visible on upper FSM pages.\r\n> > + */\r\n> > + if (blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES)\r\n> > + {\r\n> > + FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum,\r\n> > + blkno);\r\n> > + next_fsm_block_to_vacuum = blkno;\r\n> > + }\r\n> > +\r\n> > continue;\r\n> > }\r\n>\r\n> Previously there was this comment about \"not yet\", hinting at that being\r\n> important for FreeSpaceMapVacuumRange's API. 
I think as-is we now don't vacuum\r\n> the actually freshly updated page contents.\r\n>\r\n> FreeSpaceMapVacuumRange()'s comment says:\r\n> * As above, but assume that only heap pages between start and end-1 inclusive\r\n> * have new free-space information, so update only the upper-level slots\r\n> * covering that block range. end == InvalidBlockNumber is equivalent to\r\n> * \"all the rest of the relation\".\r\n>\r\n> So FreeSpaceMapVacuumRange(..., blkno) will not actually process the \"effects\"\r\n> of the RecordPageWithFreeSpace() above it - which seems confusing.\r\n\r\nAh, so shall I pass blkno + 1 as end?\r\n\r\n> Aside:\r\n>\r\n> I just tried to reach the path and noticed something odd:\r\n>\r\n> =# show autovacuum;\r\n> ┌────────────┐\r\n> │ autovacuum │\r\n> ├────────────┤\r\n> │ off │\r\n> └────────────┘\r\n> (1 row)\r\n>\r\n> =# \\dt+ copytest_0\r\n> List of relations\r\n> ┌────────┬────────────┬───────┬────────┬─────────────┬───────────────┬─────────┬─────────────┐\r\n> │ Schema │ Name │ Type │ Owner │ Persistence │ Access method │ Size │ Description │\r\n> ├────────┼────────────┼───────┼────────┼─────────────┼───────────────┼─────────┼─────────────┤\r\n> │ public │ copytest_0 │ table │ andres │ permanent │ heap │ 1143 MB │ │\r\n> └────────┴────────────┴───────┴────────┴─────────────┴───────────────┴─────────┴─────────────┘\r\n>\r\n> =# DELETE FROM copytest_0;\r\n> =# VACUUM (VERBOSE) copytest_0;\r\n> ...\r\n> INFO: 00000: table \"copytest_0\": truncated 146264 to 110934 pages\r\n> ...\r\n> tuples missed: 5848 dead from 89 pages not removed due to cleanup lock contention\r\n> ...\r\n>\r\n> A bit of debugging later I figured out that this is due to the background\r\n> writer. If I SIGSTOP bgwriter, the whole relation is truncated...\r\n\r\nThat's a bit sad. But isn't that what you would expect bgwriter to do?\r\nWrite out dirty buffers? It doesn't know that those pages consist of\r\nonly dead tuples and that you will soon truncate them. 
Or are you\r\nsuggesting that bgwriter should do something like use the FSM to avoid\r\nflushing pages which have a lot of free space? Would the FSM even be\r\nupdated in this case to reflect that those pages have free space?\r\n\r\n- Melanie\r\n",
"msg_date": "Tue, 14 Nov 2023 07:46:10 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-14 07:46:10 -0500, Melanie Plageman wrote:\n> > FreeSpaceMapVacuumRange()'s comment says:\n> > * As above, but assume that only heap pages between start and end-1 inclusive\n> > * have new free-space information, so update only the upper-level slots\n> > * covering that block range. end == InvalidBlockNumber is equivalent to\n> > * \"all the rest of the relation\".\n> >\n> > So FreeSpaceMapVacuumRange(..., blkno) will not actually process the \"effects\"\n> > of the RecordPageWithFreeSpace() above it - which seems confusing.\n>\n> Ah, so shall I pass blkno + 1 as end?\n\nI think there's no actual overflow danger, because MaxBlockNumber + 1 is\nInvalidBlockNumber, which scans the rest of the relation (i.e. exactly the\nintended block). Perhaps worth noting?\n\n\n\n> > =# DELETE FROM copytest_0;\n> > =# VACUUM (VERBOSE) copytest_0;\n> > ...\n> > INFO: 00000: table \"copytest_0\": truncated 146264 to 110934 pages\n> > ...\n> > tuples missed: 5848 dead from 89 pages not removed due to cleanup lock contention\n> > ...\n> >\n> > A bit of debugging later I figured out that this is due to the background\n> > writer. If I SIGSTOP bgwriter, the whole relation is truncated...\n>\n> That's a bit sad. But isn't that what you would expect bgwriter to do?\n\nI mainly noted this so that if somebody else tries this they don't also spend\n30 minutes being confused. I'm not quite sure what a good solution here would\nbe.\n\n\n> Write out dirty buffers? It doesn't know that those pages consist of\n> only dead tuples and that you will soon truncate them.\n\nI think the issue is more that it feels wrong that a pin by bgwriter blocks\nvacuum cleaning up. I think the same happens with on-access pruning - where\nit's more likely for bgwriter to focus on those pages. Checkpointer likely\ncauses the same. 
Even normal backends can cause this while writing out the\npage.\n\nIt's not like bgwriter/checkpointer would actually have a problem if the page\nwere pruned - the page is locked while being written out and neither keep\npointers into the page after unlocking it again.\n\n\nAt this point I started to wonder if we should invent a separate type of pin\nfor a buffer undergoing IO. We basically have the necessary state already:\nBM_IO_IN_PROGRESS. We'd need to look at that in some places, e.g. in\nInvalidateBuffer(), instead of BUF_STATE_GET_REFCOUNT(), we'd need to also\nlook at BM_IO_IN_PROGRESS.\n\n\nExcept of course that that'd not change anything -\nConditionalLockBufferForCleanup() locks the buffer conditionally, *before*\neven looking at the refcount and returns false if not. And writing out a\nbuffer takes a content lock. Which made me realize that\n \"tuples missed: 5848 dead from 89 pages not removed due to cleanup lock contention\"\n\nis often kinda wrong - the contention is *not* cleanup lock specific. It's\noften just plain contention on the lwlock.\n\n\nPerhaps we should just treat IO_IN_PROGRESS buffers differently in\nlazy_scan_heap() and heap_page_prune_opt() and wait?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Nov 2023 16:15:29 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 7:15 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-11-14 07:46:10 -0500, Melanie Plageman wrote:\n> > > FreeSpaceMapVacuumRange()'s comment says:\n> > > * As above, but assume that only heap pages between start and end-1 inclusive\n> > > * have new free-space information, so update only the upper-level slots\n> > > * covering that block range. end == InvalidBlockNumber is equivalent to\n> > > * \"all the rest of the relation\".\n> > >\n> > > So FreeSpaceMapVacuumRange(..., blkno) will not actually process the \"effects\"\n> > > of the RecordPageWithFreeSpace() above it - which seems confusing.\n> >\n> > Ah, so shall I pass blkno + 1 as end?\n>\n> I think there's no actual overflow danger, because MaxBlockNumber + 1 is\n> InvalidBlockNumber, which scans the rest of the relation (i.e. exactly the\n> intended block). Perhaps worth noting?\n\nAttached\n\n> > > =# DELETE FROM copytest_0;\n> > > =# VACUUM (VERBOSE) copytest_0;\n> > > ...\n> > > INFO: 00000: table \"copytest_0\": truncated 146264 to 110934 pages\n> > > ...\n> > > tuples missed: 5848 dead from 89 pages not removed due to cleanup lock contention\n> > > ...\n> > >\n> > > A bit of debugging later I figured out that this is due to the background\n> > > writer. If I SIGSTOP bgwriter, the whole relation is truncated...\n> >\n> > That's a bit sad. But isn't that what you would expect bgwriter to do?\n> > Write out dirty buffers? It doesn't know that those pages consist of\n> > only dead tuples and that you will soon truncate them.\n>\n> I think the issue more that it's feels wrong that a pin by bgwriter blocks\n> vacuum cleaning up. I think the same happens with on-access pruning - where\n> it's more likely for bgwriter to focus on those pages. Checkpointer likely\n> causes the same. 
Even normal backends can cause this while writing out the\n> page.\n>\n> It's not like bgwriter/checkpointer would actually have a problem if the page\n> were pruned - the page is locked while being written out and neither keep\n> pointers into the page after unlocking it again.\n>\n>\n> At this point I started to wonder if we should invent a separate type of pin\n> for a buffer undergoing IO. We basically have the necessary state already:\n> BM_IO_IN_PROGRESS. We'd need to look at that in some places, e.g. in\n> InvalidateBuffer(), instead of BUF_STATE_GET_REFCOUNT(), we'd need to also\n> look at BM_IO_IN_PROGRESS.\n>\n>\n> Except of course that that'd not change anything -\n> ConditionalLockBufferForCleanup() locks the buffer conditionally, *before*\n> even looking at the refcount and returns false if not. And writing out a\n> buffer takes a content lock. Which made me realize that\n> \"tuples missed: 5848 dead from 89 pages not removed due to cleanup lock contention\"\n>\n> is often kinda wrong - the contention is *not* cleanup lock specific. It's\n> often just plain contention on the lwlock.\n\nDo you think we should change the error message?\n\n> Perhaps we should just treat IO_IN_PROGRESS buffers differently in\n> lazy_scan_heap() and heap_page_prune_opt() and wait?\n\nHmm. This is an interesting idea. lazy_scan_heap() waiting for\ncheckpointer to write out a buffer could have an interesting property\nof shifting who writes out the FPI. I wonder if it would matter.\n\n- Melanie",
"msg_date": "Wed, 15 Nov 2023 16:21:45 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 8:26 PM Andres Freund <[email protected]> wrote:\n> I think this undersells the situation a bit. We right now do\n> FreeSpaceMapVacuumRange() for 8GB of data (VACUUM_FSM_EVERY_PAGES) in the main\n> fork, while holding an exclusive page level lock.\n\nThat sounds fairly horrific?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Nov 2023 16:32:48 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-15 16:32:48 -0500, Robert Haas wrote:\n> On Mon, Nov 13, 2023 at 8:26 PM Andres Freund <[email protected]> wrote:\n> > I think this undersells the situation a bit. We right now do\n> > FreeSpaceMapVacuumRange() for 8GB of data (VACUUM_FSM_EVERY_PAGES) in the main\n> > fork, while holding an exclusive page level lock.\n> \n> That sounds fairly horrific?\n\nIt's decidedly not great, indeed. I couldn't come up with a clear risk of\ndeadlock, but I wouldn't want to bet against there being a deadlock risk.\n\nI think the rarity of it does ameliorate the performance issues to some\ndegree.\n\nThoughts on whether to backpatch? It'd probably be at least a bit painful,\nthere have been a lot of changes in the surrounding code in the last 5 years.\n\n- Andres\n\n\n",
"msg_date": "Wed, 15 Nov 2023 15:17:18 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 6:17 PM Andres Freund <[email protected]> wrote:\n> Thoughts on whether to backpatch? It'd probably be at least a bit painful,\n> there have been a lot of changes in the surrounding code in the last 5 years.\n\nI guess the main point that I'd make here is that we shouldn't\nback-patch because it's wrong, but because of whatever consequences it\nbeing wrong has. If somebody demonstrates that a deadlock occurs, or\nthat a painfully long stall can be constructed on a somewhat realistic\ntest case, then I think we should back-patch. If it's just something\nthat we look at and by visual inspection say \"wow, that looks awful,\"\nthat is not a sufficient reason to back-patch in my view. Such a\nback-patch would still have known risk, but no known reward.\n\nUntil just now, I hadn't quite absorbed the fact that this only\naffected the indexes == 0 case; that case is probably extremely rare\nin real life. It's possible that accounts for why this hasn't caused\nmore trouble. But there could also be reasons why, even if you do have\nthat use case, this tends not to be too serious.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 15:29:38 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-16 15:29:38 -0500, Robert Haas wrote:\n> On Wed, Nov 15, 2023 at 6:17 PM Andres Freund <[email protected]> wrote:\n> > Thoughts on whether to backpatch? It'd probably be at least a bit painful,\n> > there have been a lot of changes in the surrounding code in the last 5 years.\n> \n> I guess the main point that I'd make here is that we shouldn't\n> back-patch because it's wrong, but because of whatever consequences it\n> being wrong has. If somebody demonstrates that a deadlock occurs, or\n> that a painfully long stall can be constructed on a somewhat realistic\n> test case, then I think we should back-patch.\n\nYea, that'd make it easy :)\n\n\n> If it's just something that we look at and by visual inspection say \"wow,\n> that looks awful,\" that is not a sufficient reason to back-patch in my\n> view. Such a back-patch would still have known risk, but no known reward.\n\n\n> Until just now, I hadn't quite absorbed the fact that this only\n> affected the indexes == 0 case; that case is probably extremely rare\n> in real life. It's possible that accounts for why this hasn't caused\n> more trouble. But there could also be reasons why, even if you do have\n> that use case, this tends not to be too serious.\n\nI think the main reason it's not all that bad, even when hitting this path, is\nthat one stall every 8GB just isn't that much and that the stalls aren't that\nlong - the leaf page fsm updates don't use the strategy, so they're still\nsomewhat likely to be in s_b, and there's \"just\" ~2MB of FSM to read. I tried\nto reproduce it here, and it was a few ms, even though I dropped filesystem\ncaches in a loop.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Nov 2023 12:49:15 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 3:49 PM Andres Freund <[email protected]> wrote:\n> I think the main reason it's not all that bad, even when hitting this path, is\n> that one stall every 8GB just isn't that much and that the stalls aren't that\n> long - the leaf page fsm updates don't use the strategy, so they're still\n> somewhat likely to be in s_b, and there's \"just\" ~2MB of FMS to read. I tried\n> to reproduce it here, and it was a few ms, even though I dropped filesystem\n> caches in a loop.\n\nSo just fix it in master then. If it turns out later there are worse\nscenarios, you can back-patch then.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 15:52:10 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 3:29 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Nov 15, 2023 at 6:17 PM Andres Freund <[email protected]> wrote:\n> > Thoughts on whether to backpatch? It'd probably be at least a bit painful,\n> > there have been a lot of changes in the surrounding code in the last 5 years.\n>\n> I guess the main point that I'd make here is that we shouldn't\n> back-patch because it's wrong, but because of whatever consequences it\n> being wrong has. If somebody demonstrates that a deadlock occurs, or\n> that a painfully long stall can be constructed on a somewhat realistic\n> test case, then I think we should back-patch. If it's just something\n> that we look at and by visual inspection say \"wow, that looks awful,\"\n> that is not a sufficient reason to back-patch in my view. Such a\n> back-patch would still have known risk, but no known reward.\n\nThis reasoning makes sense, and I hadn't really thought of it that way.\nI was going to offer to take a stab at producing some back patch sets,\nbut, given this rationale and Andres' downthread agreement and\nanalysis, it sounds like there is no reason to do so. Thanks for\nthinking about my bug report!\n\n- Melanie\n\n\n",
"msg_date": "Thu, 16 Nov 2023 19:43:43 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-15 16:21:45 -0500, Melanie Plageman wrote:\n> On Tue, Nov 14, 2023 at 7:15 PM Andres Freund <[email protected]> wrote:\n> > On 2023-11-14 07:46:10 -0500, Melanie Plageman wrote:\n> > > > FreeSpaceMapVacuumRange()'s comment says:\n> > > > * As above, but assume that only heap pages between start and end-1 inclusive\n> > > > * have new free-space information, so update only the upper-level slots\n> > > > * covering that block range. end == InvalidBlockNumber is equivalent to\n> > > > * \"all the rest of the relation\".\n> > > >\n> > > > So FreeSpaceMapVacuumRange(..., blkno) will not actually process the \"effects\"\n> > > > of the RecordPageWithFreeSpace() above it - which seems confusing.\n> > >\n> > > Ah, so shall I pass blkno + 1 as end?\n> >\n> > I think there's no actual overflow danger, because MaxBlockNumber + 1 is\n> > InvalidBlockNumber, which scans the rest of the relation (i.e. exactly the\n> > intended block). Perhaps worth noting?\n> \n> Attached\n\nAnd pushed! Thanks for the report and fix!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Nov 2023 13:02:05 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() should release lock on buffer before vacuuming\n FSM"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen there are no indexes on the relation, we can set would-be dead\nitems LP_UNUSED and remove them during pruning. This saves us a vacuum\nWAL record, reducing WAL volume (and time spent writing and syncing\nWAL).\n\nSee this example:\n\n drop table if exists foo;\n create table foo(a int) with (autovacuum_enabled=false);\n insert into foo select i from generate_series(1,10000000)i;\n update foo set a = 10;\n \\timing on\n vacuum foo;\n\nOn my machine, the attached patch set provides a 10% speedup for vacuum\nfor this example -- and a 40% decrease in WAL bytes emitted.\n\nAdmittedly, this case is probably unusual in the real world. On-access\npruning would preclude it. Throw a SELECT * FROM foo before the vacuum\nand the patch has no performance benefit.\n\nHowever, it has no downside as far as I can tell. And, IMHO, it is a\ncode clarity improvement. This change means that lazy_vacuum_heap_page()\nis only called when we are actually doing a second pass and reaping dead\nitems. I found it quite confusing that lazy_vacuum_heap_page() was\ncalled by lazy_scan_heap() to set dead items unused in a block that we\njust pruned.\n\nI think it also makes it clear that we should update the VM in\nlazy_scan_prune(). All callers of lazy_scan_prune() will now consider\nupdating the VM after returning. And most of the state communicated back\nto lazy_scan_heap() from lazy_scan_prune() is to inform it whether or\nnot to update the VM. I didn't do that in this patch set because I would\nneed to pass all_visible_according_to_vm to lazy_scan_prune() and that\nchange didn't seem worth the improvement in code clarity in\nlazy_scan_heap().\n\nI am planning to add a VM update into the freeze record, at which point\nI will move the VM update code into lazy_scan_prune(). 
This will then\nallow us to consolidate the freespace map update code for the prune and\nnoprune cases and make lazy_scan_heap() short and sweet.\n\nNote that (on principle) this patch set is on top of the bug fix I\nproposed in [1].\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_YiL%3D44GvGnt1dpYouDSSoV7wzxVoXs8m3p311rp-TVQQ%40mail.gmail.com",
"msg_date": "Mon, 13 Nov 2023 17:28:50 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Emit fewer vacuum records by reaping removable tuples during pruning"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 2:29 PM Melanie Plageman\n<[email protected]> wrote:\n> I think it also makes it clear that we should update the VM in\n> lazy_scan_prune(). All callers of lazy_scan_prune() will now consider\n> updating the VM after returning. And most of the state communicated back\n> to lazy_scan_heap() from lazy_scan_prune() is to inform it whether or\n> not to update the VM.\n\nThat makes sense.\n\n> I didn't do that in this patch set because I would\n> need to pass all_visible_according_to_vm to lazy_scan_prune() and that\n> change didn't seem worth the improvement in code clarity in\n> lazy_scan_heap().\n\nHave you thought about finding a way to get rid of\nall_visible_according_to_vm? (Not necessarily in the scope of the\nongoing work, just in general.)\n\nall_visible_according_to_vm is inherently prone to races -- it tells\nus what the VM used to say about the page, back when we looked. It is\nvery likely that the page isn't visible in this sense, anyway, because\nVACUUM is after all choosing to scan the page in the first place when\nwe end up in lazy_scan_prune. (Granted, this is much less true than it\nshould be due to the influence of SKIP_PAGES_THRESHOLD, which\nimplements a weird and inefficient form of prefetching/readahead.)\n\nWhy should we necessarily care what all_visible_according_to_vm says\nor would say at this point? We're looking at the heap page itself,\nwhich is more authoritative than the VM (theoretically they're equally\nauthoritative, but not really, not when push comes to shove). The best\nanswer that I can think of is that all_visible_according_to_vm gives\nus a way of noticing and then logging any inconsistencies between VM\nand heap that we might end up \"repairing\" in passing (once we've\nrechecked the VM). But maybe that could happen elsewhere.\n\nPerhaps that cross-check could be pushed into visibilitymap.c, when we\ngo to set a !PageIsAllVisible(page) page all-visible in the VM. 
If\nwe're setting it and find that it's already set from earlier on, then\nremember and complain/LOG it. No all_visible_according_to_vm\nrequired, plus this seems like it might be more thorough.\n\nJust a thought.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 13 Nov 2023 14:58:44 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 5:59 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Mon, Nov 13, 2023 at 2:29 PM Melanie Plageman\n> <[email protected]> wrote:\n> > I think it also makes it clear that we should update the VM in\n> > lazy_scan_prune(). All callers of lazy_scan_prune() will now consider\n> > updating the VM after returning. And most of the state communicated back\n> > to lazy_scan_heap() from lazy_scan_prune() is to inform it whether or\n> > not to update the VM.\n>\n> That makes sense.\n>\n> > I didn't do that in this patch set because I would\n> > need to pass all_visible_according_to_vm to lazy_scan_prune() and that\n> > change didn't seem worth the improvement in code clarity in\n> > lazy_scan_heap().\n>\n> Have you thought about finding a way to get rid of\n> all_visible_according_to_vm? (Not necessarily in the scope of the\n> ongoing work, just in general.)\n>\n> all_visible_according_to_vm is inherently prone to races -- it tells\n> us what the VM used to say about the page, back when we looked. It is\n> very likely that the page isn't visible in this sense, anyway, because\n> VACUUM is after all choosing to scan the page in the first place when\n> we end up in lazy_scan_prune. (Granted, this is much less true than it\n> should be due to the influence of SKIP_PAGES_THRESHOLD, which\n> implements a weird and inefficient form of prefetching/readahead.)\n>\n> Why should we necessarily care what all_visible_according_to_vm says\n> or would say at this point? We're looking at the heap page itself,\n> which is more authoritative than the VM (theoretically they're equally\n> authoritative, but not really, not when push comes to shove). The best\n> answer that I can think of is that all_visible_according_to_vm gives\n> us a way of noticing and then logging any inconsistencies between VM\n> and heap that we might end up \"repairing\" in passing (once we've\n> rechecked the VM). 
But maybe that could happen elsewhere.\n>\n> Perhaps that cross-check could be pushed into visibilitymap.c, when we\n> go to set a !PageIsAllVisible(page) page all-visible in the VM. If\n> we're setting it and find that it's already set from earlier on, then\n> remember and complaing/LOG it. No all_visible_according_to_vm\n> required, plus this seems like it might be more thorough.\n\nSetting aside the data corruption cases (the repairing you mentioned), I\nthink the primary case that all_visible_according_to_vm seeks to protect\nus from is if the page was already marked all visible in the VM and\npruning did not change that. So, it avoids a call to visibilitymap_set()\nwhen the page was known to already be set all visible (as long as we\ndidn't newly set all frozen).\n\nAs you say, this does not seem like the common case. Pages we vacuum\nwill most often not be all visible.\n\nI actually wonder if it is worse to rely on that old value of all\nvisible and then call visibilitymap_set(), which takes an exclusive lock\non the VM page, than it would be to just call visibilitymap_get_status()\nanew. This is what we do with all frozen.\n\nThe problem is that it is kind of hard to tell because the whole thing\nis a bit of a tangled knot.\n\nI've ripped this code apart and put it back together again six ways from\nSunday trying to get rid of all_visible_according_to_vm and\nnext_unskippable_allvis/skipping_current_range's bootleg prefetching\nwithout incurring any additional calls to visibilitymap_get_status().\n\nAs best I can tell, our best case scenario is Thomas' streaming read API\ngoes in, we add vacuum as a user, and we can likely remove the skip\nrange logic.\n\nThen, the question remains about how useful it is to save the visibility\nmap statuses we got when deciding whether or not to skip a block.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 13 Nov 2023 19:06:15 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 5:28 PM Melanie Plageman\n<[email protected]> wrote:\n> When there are no indexes on the relation, we can set would-be dead\n> items LP_UNUSED and remove them during pruning. This saves us a vacuum\n> WAL record, reducing WAL volume (and time spent writing and syncing\n> WAL).\n...\n> Note that (on principle) this patch set is on top of the bug fix I\n> proposed in [1].\n>\n> [1] https://www.postgresql.org/message-id/CAAKRu_YiL%3D44GvGnt1dpYouDSSoV7wzxVoXs8m3p311rp-TVQQ%40mail.gmail.com\n\nRebased on top of fix in b2e237afddc56a and registered for the january fest\nhttps://commitfest.postgresql.org/46/4665/\n\n- Melanie",
"msg_date": "Fri, 17 Nov 2023 18:12:08 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 6:12 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Mon, Nov 13, 2023 at 5:28 PM Melanie Plageman\n> <[email protected]> wrote:\n> > When there are no indexes on the relation, we can set would-be dead\n> > items LP_UNUSED and remove them during pruning. This saves us a vacuum\n> > WAL record, reducing WAL volume (and time spent writing and syncing\n> > WAL).\n> ...\n> > Note that (on principle) this patch set is on top of the bug fix I\n> > proposed in [1].\n> >\n> > [1] https://www.postgresql.org/message-id/CAAKRu_YiL%3D44GvGnt1dpYouDSSoV7wzxVoXs8m3p311rp-TVQQ%40mail.gmail.com\n>\n> Rebased on top of fix in b2e237afddc56a and registered for the january fest\n> https://commitfest.postgresql.org/46/4665/\n\nI got an off-list question about whether or not this codepath is\nexercised in existing regression tests. It is -- vacuum.sql tests\ninclude those which vacuum a table with no indexes and tuples that can\nbe deleted.\n\nI also looked through [1] to see if there were any user-facing docs\nwhich happened to mention the exact implementation details of how and\nwhen tuples are deleted by vacuum. I didn't see anything like that, so\nI don't think there are user-facing docs which need updating.\n\n- Melanie\n\n[1] https://www.postgresql.org/docs/devel/routine-vacuuming.html\n\n\n",
"msg_date": "Thu, 21 Dec 2023 16:36:12 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 07:06:15PM -0500, Melanie Plageman wrote:\n> As best I can tell, our best case scenario is Thomas' streaming read API\n> goes in, we add vacuum as a user, and we can likely remove the skip\n> range logic.\n\nThis does not prevent the work you've been doing in 0001 and 0002\nposted upthread, right? Some progress is always better than no\nprogress, and I can see the appeal behind doing 0001 actually to keep\nthe updates of the block numbers closer to where we determine if\nrelation truncation is safe or not rather than use an intermediate\nstate in LVPagePruneState.\n\n0002 is much, much, much trickier..\n--\nMichael",
"msg_date": "Sun, 24 Dec 2023 11:14:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 9:14 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Nov 13, 2023 at 07:06:15PM -0500, Melanie Plageman wrote:\n> > As best I can tell, our best case scenario is Thomas' streaming read API\n> > goes in, we add vacuum as a user, and we can likely remove the skip\n> > range logic.\n>\n> This does not prevent the work you've been doing in 0001 and 0002\n> posted upthread, right? Some progress is always better than no\n> progress\n\nCorrect. Peter and I were mainly discussing next refactoring steps as\nwe move toward combining the prune, freeze, and VM records. This\nthread's patches stand alone.\n\n> I can see the appeal behind doing 0001 actually to keep\n> the updates of the block numbers closer to where we determine if\n> relation truncation is safe of not rather than use an intermediate\n> state in LVPagePruneState.\n\nExactly.\n\n> 0002 is much, much, much trickier..\n\nDo you have specific concerns about its correctness? I understand it\nis an area where we have to be sure we are correct. But, to be fair,\nthat is true of all the pruning and vacuuming code.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 27 Dec 2023 11:26:52 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 11:27 AM Melanie Plageman\n<[email protected]> wrote:\n> Do you have specific concerns about its correctness? I understand it\n> is an area where we have to be sure we are correct. But, to be fair,\n> that is true of all the pruning and vacuuming code.\n\nI'm kind of concerned that 0002 might be a performance regression. It\npushes more branches down into the heap-pruning code, which I think\ncan sometimes be quite hot, for the sake of a case that rarely occurs\nin practice. I take your point about it improving things when there\nare no indexes, but what about when there are? And even if there are\nno adverse performance consequences, is it really worth complicating\nthe logic at such a low level?\n\nAlso, I find \"pronto_reap\" to be a poor choice of name. \"pronto\" is an\ninformal word that seems to have no advantage over something like\n\"immediate\" or \"now,\" and I don't think \"reap\" has a precise,\nuniversally-understood meaning. You could call this \"mark_unused_now\"\nor \"immediately_mark_unused\" or something and it would be far more\nself-documenting, IMHO.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Jan 2024 12:31:36 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-04 12:31:36 -0500, Robert Haas wrote:\n> On Wed, Dec 27, 2023 at 11:27 AM Melanie Plageman\n> <[email protected]> wrote:\n> > Do you have specific concerns about its correctness? I understand it\n> > is an area where we have to be sure we are correct. But, to be fair,\n> > that is true of all the pruning and vacuuming code.\n> \n> I'm kind of concerned that 0002 might be a performance regression. It\n> pushes more branches down into the heap-pruning code, which I think\n> can sometimes be quite hot, for the sake of a case that rarely occurs\n> in practice.\n\nI was wondering the same when talking about this with Melanie. But I guess\nthere are some scenarios that aren't unrealistic, consider e.g. bulk data\nloading with COPY with an UPDATE to massage the data afterwards, before\ncreating the indexes.\n\nWhere I could see this becoming more interesting / powerful is if we were able\nto do 'pronto reaping' not just from vacuum but also during on-access\npruning. For ETL workloads VACUUM might often not run between modifications of\nthe same page, but on-access pruning will. Being able to reclaim dead items at\nthat point seems like it could be a pretty sizable improvement?\n\n\n> I take your point about it improving things when there are no indexes, but\n> what about when there are?\n\nI suspect that branch prediction should take care of the additional\nbranches. heap_prune_chain() indeed can be very hot, but I think we're\nprimarily bottlenecked on data cache misses.\n\nFor a single VACUUM, whether we'd do pronto reaping would be constant -> very\nwell predictable. We could add an unlikely() to make sure the\nbranch-predictor-is-empty case optimizes for the much more common case of\nhaving indexes. 
Falsely assuming we'd not pronto reap wouldn't be\nparticularly bad, as the wins for the case are so much bigger.\n\nIf we were to use pronto reaping for on-access pruning, it's perhaps a bit\nless predictable, as pruning for pages of a relation with indexes could be\ninterleaved with pruning for relations without. But even there I suspect it'd\nnot be the primary bottleneck: We call heap_page_prune_chain() in a loop for\nevery tuple on a page, the branch predictor should quickly learn whether we're\nusing pronto reaping. Whereas we're not becoming less cache-miss heavy when\nlooking at subsequent tuples.\n\n\n> And even if there are no adverse performance consequences, is it really\n> worth complicating the logic at such a low level?\n\nYes, I think this is the main question here. It's not clear though that the\nstate after the patch is meaningfully more complicated? It removes nontrivial\ncode from lazy_scan_heap() and pays for that with a bit more complexity in\nheap_prune_chain().\n\n\n> Also, I find \"pronto_reap\" to be a poor choice of name. \"pronto\" is an\n> informal word that seems to have no advantage over something like\n> \"immediate\" or \"now,\"\n\nI didn't like that either :)\n\n\n> and I don't think \"reap\" has a precise, universally-understood meaning.\n\nLess concerned about that.\n\n\n> You could call this \"mark_unused_now\" or \"immediately_mark_unused\" or\n> something and it would be far more self-documenting, IMHO.\n\nHow about 'no_indexes' or such?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Jan 2024 11:24:20 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 18:12:08 -0500, Melanie Plageman wrote:\n> diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> index 14de8158d49..b578c32eeb6 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -8803,8 +8803,13 @@ heap_xlog_prune(XLogReaderState *record)\n> \t\tnunused = (end - nowunused);\n> \t\tAssert(nunused >= 0);\n>\n> -\t\t/* Update all line pointers per the record, and repair fragmentation */\n> -\t\theap_page_prune_execute(buffer,\n> +\t\t/*\n> +\t\t * Update all line pointers per the record, and repair fragmentation.\n> +\t\t * We always pass pronto_reap as true, because we don't know whether\n> +\t\t * or not this option was used when pruning. This reduces the\n> +\t\t * validation done on replay in an assert build.\n> +\t\t */\n\nHm, that seems not great. Both because we lose validation and because it\nseems to invite problems down the line, due to pronto_reap falsely being set\nto true in heap_page_prune_execute().\n\n\n> @@ -581,7 +589,17 @@ heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum,\n> \t\t * function.)\n> \t\t */\n> \t\tif (ItemIdIsDead(lp))\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * If the relation has no indexes, we can set dead line pointers\n> +\t\t\t * LP_UNUSED now. We don't increment ndeleted here since the LP\n> +\t\t\t * was already marked dead.\n> +\t\t\t */\n> +\t\t\tif (prstate->pronto_reap)\n> +\t\t\t\theap_prune_record_unused(prstate, offnum);\n> +\n> \t\t\tbreak;\n> +\t\t}\n\nI wasn't immediately sure whether this is reachable - but it is, e.g. 
after\non-access pruning (which currently doesn't yet use pronto reaping), after\npg_upgrade or dropping an index.\n\n\n> \t\tAssert(ItemIdIsNormal(lp));\n> \t\thtup = (HeapTupleHeader) PageGetItem(dp, lp);\n> @@ -715,7 +733,17 @@ heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum,\n> \t\t * redirect the root to the correct chain member.\n> \t\t */\n> \t\tif (i >= nchain)\n> -\t\t\theap_prune_record_dead(prstate, rootoffnum);\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * If the relation has no indexes, we can remove dead tuples\n> +\t\t\t * during pruning instead of marking their line pointers dead. Set\n> +\t\t\t * this tuple's line pointer LP_UNUSED.\n> +\t\t\t */\n> +\t\t\tif (prstate->pronto_reap)\n> +\t\t\t\theap_prune_record_unused(prstate, rootoffnum);\n> +\t\t\telse\n> +\t\t\t\theap_prune_record_dead(prstate, rootoffnum);\n> +\t\t}\n> \t\telse\n> \t\t\theap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);\n> \t}\n> @@ -726,9 +754,12 @@ heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum,\n> \t\t * item. This can happen if the loop in heap_page_prune caused us to\n> \t\t * visit the dead successor of a redirect item before visiting the\n> \t\t * redirect item. We can clean up by setting the redirect item to\n> -\t\t * DEAD state.\n> +\t\t * DEAD state. If pronto_reap is true, we can set it LP_UNUSED now.\n> \t\t */\n> -\t\theap_prune_record_dead(prstate, rootoffnum);\n> +\t\tif (prstate->pronto_reap)\n> +\t\t\theap_prune_record_unused(prstate, rootoffnum);\n> +\t\telse\n> +\t\t\theap_prune_record_dead(prstate, rootoffnum);\n> \t}\n>\n> \treturn ndeleted;\n\nThere's three new calls to heap_prune_record_unused() and the logic got more\nnested. 
Is there a way to get to a nicer end result?\n\n\n> From 608658f2cbc0acde55aac815c0fdb523ec24c452 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Mon, 13 Nov 2023 16:47:08 -0500\n> Subject: [PATCH v2 1/2] Indicate rel truncation unsafe in lazy_scan[no]prune\n>\n> Both lazy_scan_prune() and lazy_scan_noprune() must determine whether or\n> not there are tuples on the page making rel truncation unsafe.\n> LVRelState->nonempty_pages is updated to reflect this. Previously, both\n> functions set an output parameter or output parameter member, hastup, to\n> indicate that nonempty_pages should be updated to reflect the latest\n> non-removable page. There doesn't seem to be any reason to wait until\n> lazy_scan_[no]prune() returns to update nonempty_pages. Plenty of other\n> counters in the LVRelState are updated in lazy_scan_[no]prune().\n> This allows us to get rid of the output parameter hastup.\n\n\n> @@ -972,20 +970,21 @@ lazy_scan_heap(LVRelState *vacrel)\n> \t\t\t\tcontinue;\n> \t\t\t}\n>\n> -\t\t\t/* Collect LP_DEAD items in dead_items array, count tuples */\n> -\t\t\tif (lazy_scan_noprune(vacrel, buf, blkno, page, &hastup,\n> +\t\t\t/*\n> +\t\t\t * Collect LP_DEAD items in dead_items array, count tuples,\n> +\t\t\t * determine if rel truncation is safe\n> +\t\t\t */\n> +\t\t\tif (lazy_scan_noprune(vacrel, buf, blkno, page,\n> \t\t\t\t\t\t\t\t &recordfreespace))\n> \t\t\t{\n> \t\t\t\tSize\t\tfreespace = 0;\n>\n> \t\t\t\t/*\n> \t\t\t\t * Processed page successfully (without cleanup lock) -- just\n> -\t\t\t\t * need to perform rel truncation and FSM steps, much like the\n> -\t\t\t\t * lazy_scan_prune case. 
Don't bother trying to match its\n> -\t\t\t\t * visibility map setting steps, though.\n> +\t\t\t\t * need to update the FSM, much like the lazy_scan_prune case.\n> +\t\t\t\t * Don't bother trying to match its visibility map setting\n> +\t\t\t\t * steps, though.\n> \t\t\t\t */\n> -\t\t\t\tif (hastup)\n> -\t\t\t\t\tvacrel->nonempty_pages = blkno + 1;\n> \t\t\t\tif (recordfreespace)\n> \t\t\t\t\tfreespace = PageGetHeapFreeSpace(page);\n> \t\t\t\tUnlockReleaseBuffer(buf);\n\nThe comment continues to say that we \"determine if rel truncation is safe\" -\nbut I don't see that? Oh, I see, it's done inside lazy_scan_noprune(). This\ndoesn't seem like a clear improvement to me. Particularly because it's only\nset if lazy_scan_noprune() actually does something.\n\n\nI don't like the existing code in lazy_scan_heap(). But this kinda seems like\ntinkering around the edges, without getting to the heart of the issue. I think\nwe should\n\n1) Move everything after ReadBufferExtended() and the end of the loop into its\n own function\n\n2) All the code in the loop body after the call to lazy_scan_prune() is\n specific to the lazy_scan_prune() path, it doesn't make sense that it's at\n the same level as the calls to lazy_scan_noprune(),\n lazy_scan_new_or_empty() or lazy_scan_prune(). Either it should be in\n lazy_scan_prune() or a new wrapper function.\n\n3) It's imo wrong that we have UnlockReleaseBuffer() (there are 6 different\n places unlocking if I didn't miscount!) and RecordPageWithFreeSpace() calls\n in this many places. I think this is largely a consequence of the previous\n points. Once those are addressed, we can have one common place.\n\nBut I digress.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Jan 2024 12:03:31 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during pruning"
},
{
"msg_contents": "Thanks for the review!\n\nOn Thu, Jan 4, 2024 at 3:03 PM Andres Freund <[email protected]> wrote:\n>\n> On 2023-11-17 18:12:08 -0500, Melanie Plageman wrote:\n> > Assert(ItemIdIsNormal(lp));\n> > htup = (HeapTupleHeader) PageGetItem(dp, lp);\n> > @@ -715,7 +733,17 @@ heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum,\n> > * redirect the root to the correct chain member.\n> > */\n> > if (i >= nchain)\n> > - heap_prune_record_dead(prstate, rootoffnum);\n> > + {\n> > + /*\n> > + * If the relation has no indexes, we can remove dead tuples\n> > + * during pruning instead of marking their line pointers dead. Set\n> > + * this tuple's line pointer LP_UNUSED.\n> > + */\n> > + if (prstate->pronto_reap)\n> > + heap_prune_record_unused(prstate, rootoffnum);\n> > + else\n> > + heap_prune_record_dead(prstate, rootoffnum);\n> > + }\n> > else\n> > heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);\n> > }\n> > @@ -726,9 +754,12 @@ heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum,\n> > * item. This can happen if the loop in heap_page_prune caused us to\n> > * visit the dead successor of a redirect item before visiting the\n> > * redirect item. We can clean up by setting the redirect item to\n> > - * DEAD state.\n> > + * DEAD state. If pronto_reap is true, we can set it LP_UNUSED now.\n> > */\n> > - heap_prune_record_dead(prstate, rootoffnum);\n> > + if (prstate->pronto_reap)\n> > + heap_prune_record_unused(prstate, rootoffnum);\n> > + else\n> > + heap_prune_record_dead(prstate, rootoffnum);\n> > }\n> >\n> > return ndeleted;\n>\n> There's three new calls to heap_prune_record_unused() and the logic got more\n> nested. Is there a way to get to a nicer end result?\n\nSo, I could do another loop through the line pointers in\nheap_page_prune() (after the loop calling heap_prune_chain()) and, if\npronto_reap is true, set dead line pointers LP_UNUSED. 
Then, when\nconstructing the WAL record, I would just not add the prstate.nowdead\nthat I saved from heap_prune_chain() to the prune WAL record.\n\nThis would eliminate the extra if statements from heap_prune_chain().\nIt will be more performant than sticking with the original (master)\ncall to lazy_vacuum_heap_page(). However, I'm not convinced that the\nextra loop to set line pointers LP_DEAD -> LP_UNUSED is less confusing\nthan keeping the if pronto_reap test in heap_prune_chain().\nheap_prune_chain() is where line pointers' new values are decided. It\nseems weird to pick one new value for a line pointer in\nheap_prune_chain() and then pick another, different new value in a\nloop after heap_prune_chain(). I don't see any way to eliminate the if\npronto_reap tests without a separate loop setting LP_DEAD->LP_UNUSED,\nthough.\n\n> > From 608658f2cbc0acde55aac815c0fdb523ec24c452 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Mon, 13 Nov 2023 16:47:08 -0500\n> > Subject: [PATCH v2 1/2] Indicate rel truncation unsafe in lazy_scan[no]prune\n> >\n> > Both lazy_scan_prune() and lazy_scan_noprune() must determine whether or\n> > not there are tuples on the page making rel truncation unsafe.\n> > LVRelState->nonempty_pages is updated to reflect this. Previously, both\n> > functions set an output parameter or output parameter member, hastup, to\n> > indicate that nonempty_pages should be updated to reflect the latest\n> > non-removable page. There doesn't seem to be any reason to wait until\n> > lazy_scan_[no]prune() returns to update nonempty_pages. 
Plenty of other\n> > counters in the LVRelState are updated in lazy_scan_[no]prune().\n> > This allows us to get rid of the output parameter hastup.\n>\n>\n> > @@ -972,20 +970,21 @@ lazy_scan_heap(LVRelState *vacrel)\n> > continue;\n> > }\n> >\n> > - /* Collect LP_DEAD items in dead_items array, count tuples */\n> > - if (lazy_scan_noprune(vacrel, buf, blkno, page, &hastup,\n> > + /*\n> > + * Collect LP_DEAD items in dead_items array, count tuples,\n> > + * determine if rel truncation is safe\n> > + */\n> > + if (lazy_scan_noprune(vacrel, buf, blkno, page,\n> > &recordfreespace))\n> > {\n> > Size freespace = 0;\n> >\n> > /*\n> > * Processed page successfully (without cleanup lock) -- just\n> > - * need to perform rel truncation and FSM steps, much like the\n> > - * lazy_scan_prune case. Don't bother trying to match its\n> > - * visibility map setting steps, though.\n> > + * need to update the FSM, much like the lazy_scan_prune case.\n> > + * Don't bother trying to match its visibility map setting\n> > + * steps, though.\n> > */\n> > - if (hastup)\n> > - vacrel->nonempty_pages = blkno + 1;\n> > if (recordfreespace)\n> > freespace = PageGetHeapFreeSpace(page);\n> > UnlockReleaseBuffer(buf);\n>\n> The comment continues to say that we \"determine if rel truncation is safe\" -\n> but I don't see that? Oh, I see, it's done inside lazy_scan_noprune(). This\n> doesn't seem like a clear improvement to me. Particularly because it's only\n> set if lazy_scan_noprune() actually does something.\n\nI don't get what the last sentence means (\"Particularly because...\").\nThe new location of the hastup test in lazy_scan_noprune() is above an\nunconditional return true, so it is also only set if\nlazy_scan_noprune() actually does something. I think the\nlazy_scan[]prune() functions shouldn't try to export the hastup\ninformation to lazy_scan_heap(). It's confusing. 
We should be moving\nall of the page-specific processing into the individual functions\ninstead of in the body of lazy_scan_heap().\n\n> I don't like the existing code in lazy_scan_heap(). But this kinda seems like\n> tinkering around the edges, without getting to the heart of the issue. I think\n> we should\n>\n> 1) Move everything after ReadBufferExtended() and the end of the loop into its\n> own function\n>\n> 2) All the code in the loop body after the call to lazy_scan_prune() is\n> specific to the lazy_scan_prune() path, it doesn't make sense that it's at\n> the same level as the calls to lazy_scan_noprune(),\n> lazy_scan_new_or_empty() or lazy_scan_prune(). Either it should be in\n> lazy_scan_prune() or a new wrapper function.\n>\n> 3) It's imo wrong that we have UnlockReleaseBuffer() (there are 6 different\n> places unlocking if I didn't miscount!) and RecordPageWithFreeSpace() calls\n> in this many places. I think this is largely a consequence of the previous\n> points. Once those are addressed, we can have one common place.\n\nI have other patches that do versions of all of the above, but they\ndidn't seem to really fit with this patch set. I am taking a step to\nmove code out of lazy_scan_heap() that doesn't belong there. The fact\nthat other code should also be moved from there seems more like a \"yes\nand\" than a \"no but\". That being said, do you think I should introduce\npatches doing further refactoring of lazy_scan_heap() (like what you\nsuggest above) into this thread?\n\n- Melanie\n\n\n",
"msg_date": "Thu, 4 Jan 2024 17:37:27 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during pruning"
},
{
"msg_contents": "On Thu, Jan 4, 2024 at 12:31 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Dec 27, 2023 at 11:27 AM Melanie Plageman\n> <[email protected]> wrote:\n> > Do you have specific concerns about its correctness? I understand it\n> > is an area where we have to be sure we are correct. But, to be fair,\n> > that is true of all the pruning and vacuuming code.\n>\n> I'm kind of concerned that 0002 might be a performance regression. It\n> pushes more branches down into the heap-pruning code, which I think\n> can sometimes be quite hot, for the sake of a case that rarely occurs\n> in practice. I take your point about it improving things when there\n> are no indexes, but what about when there are? And even if there are\n> no adverse performance consequences, is it really worth complicating\n> the logic at such a low level?\n\nRegarding the additional code complexity, I think the extra call to\nlazy_vacuum_heap_page() in lazy_scan_heap() actually represents a fair\namount of code complexity. It is a special case of page-level\nprocessing that should be handled by heap_page_prune() and not\nlazy_scan_heap().\n\nlazy_scan_heap() is responsible for three main things -- loop through\nthe blocks in a relation and process each page (pruning, freezing,\netc), invoke index vacuuming, invoke functions to loop through\ndead_items and vacuum pages. The logic to do the per-page processing\nis spread across three places, though.\n\nWhen a single page is being processed, page pruning happens in\nheap_page_prune(). Freezing, dead items recording, and visibility\nchecks happen in lazy_scan_prune(). Visibility map updates and\nfreespace map updates happen back in lazy_scan_heap(). Except, if the\ntable has no indexes, in which case, lazy_scan_heap() also invokes\nlazy_vacuum_heap_page() to set dead line pointers unused and do\nanother separate visibility check and VM update. 
I maintain that all\npage-level processing should be done in the page-level processing\nfunctions (like lazy_scan_prune()). And lazy_scan_heap() shouldn't be\ndirectly responsible for special case page-level processing.\n\n> Also, I find \"pronto_reap\" to be a poor choice of name. \"pronto\" is an\n> informal word that seems to have no advantage over something like\n> \"immediate\" or \"now,\" and I don't think \"reap\" has a precise,\n> universally-understood meaning. You could call this \"mark_unused_now\"\n> or \"immediately_mark_unused\" or something and it would be far more\n> self-documenting, IMHO.\n\nYes, I see how pronto is unnecessarily informal. If there are no cases\nother than when the table has no indexes that we would consider\nimmediately marking LPs unused, then perhaps it is better to call it\n\"no_indexes\" (per andres' suggestion)?\n\n- Melanie\n\n\n",
"msg_date": "Thu, 4 Jan 2024 18:03:25 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during pruning"
},
{
"msg_contents": "On Thu, Jan 4, 2024 at 6:03 PM Melanie Plageman\n<[email protected]> wrote:\n> When a single page is being processed, page pruning happens in\n> heap_page_prune(). Freezing, dead items recording, and visibility\n> checks happen in lazy_scan_prune(). Visibility map updates and\n> freespace map updates happen back in lazy_scan_heap(). Except, if the\n> table has no indexes, in which case, lazy_scan_heap() also invokes\n> lazy_vacuum_heap_page() to set dead line pointers unused and do\n> another separate visibility check and VM update. I maintain that all\n> page-level processing should be done in the page-level processing\n> functions (like lazy_scan_prune()). And lazy_scan_heap() shouldn't be\n> directly responsible for special case page-level processing.\n\nBut you can just as easily turn this argument on its head, can't you?\nIn general, except for HOT tuples, line pointers are marked dead by\npruning and unused by vacuum. Here you want to turn it on its head and\nmake pruning do what would normally be vacuum's responsibility.\n\nI mean, that's not to say that your argument is \"wrong\" ... but what I\njust said really is how I think about it, too.\n\n> > Also, I find \"pronto_reap\" to be a poor choice of name. \"pronto\" is an\n> > informal word that seems to have no advantage over something like\n> > \"immediate\" or \"now,\" and I don't think \"reap\" has a precise,\n> > universally-understood meaning. You could call this \"mark_unused_now\"\n> > or \"immediately_mark_unused\" or something and it would be far more\n> > self-documenting, IMHO.\n>\n> Yes, I see how pronto is unnecessarily informal. If there are no cases\n> other than when the table has no indexes that we would consider\n> immediately marking LPs unused, then perhaps it is better to call it\n> \"no_indexes\" (per andres' suggestion)?\n\nwfm.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 08:59:41 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during pruning"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 8:59 AM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Jan 4, 2024 at 6:03 PM Melanie Plageman\n> <[email protected]> wrote:\n> > When a single page is being processed, page pruning happens in\n> > heap_page_prune(). Freezing, dead items recording, and visibility\n> > checks happen in lazy_scan_prune(). Visibility map updates and\n> > freespace map updates happen back in lazy_scan_heap(). Except, if the\n> > table has no indexes, in which case, lazy_scan_heap() also invokes\n> > lazy_vacuum_heap_page() to set dead line pointers unused and do\n> > another separate visibility check and VM update. I maintain that all\n> > page-level processing should be done in the page-level processing\n> > functions (like lazy_scan_prune()). And lazy_scan_heap() shouldn't be\n> > directly responsible for special case page-level processing.\n>\n> But you can just as easily turn this argument on its head, can't you?\n> In general, except for HOT tuples, line pointers are marked dead by\n> pruning and unused by vacuum. Here you want to turn it on its head and\n> make pruning do what would normally be vacuum's responsibility.\n\nI actually think we are going to want to stop referring to these steps\nas pruning and vacuuming. It is confusing because vacuuming refers to\nthe whole process of doing garbage collection on the table and also to\nthe specific step of setting dead line pointers unused. If we called\nthese steps say, pruning and reaping, that may be more clear.\n\nVacuuming consists of three phases -- the first pass, index vacuuming,\nand the second pass. I don't think we should dictate what happens in\neach pass. That is, we shouldn't expect only pruning to happen in the\nfirst pass and only reaping to happen in the second pass. For example,\nI think Andres has previously proposed doing another round of pruning\nafter index vacuuming. 
The second pass/third phase is distinguished\nprimarily by being after index vacuuming.\n\nIf we think about it this way, that frees us up to set dead line\npointers unused in the first pass when the table has no indexes. For\nclarity, I could add a block comment explaining that doing this is an\noptimization and not a logical requirement. One way to make this even\nmore clear would be to set the dead line pointers unused in a separate\nloop after heap_prune_chain() as I proposed upthread.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:59:34 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during pruning"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 12:59 PM Melanie Plageman\n<[email protected]> wrote:\n> I actually think we are going to want to stop referring to these steps\n> as pruning and vacuuming. It is confusing because vacuuming refers to\n> the whole process of doing garbage collection on the table and also to\n> the specific step of setting dead line pointers unused. If we called\n> these steps say, pruning and reaping, that may be more clear.\n\nI agree that there's some terminological confusion here, but if we\nleave all of the current naming unchanged and start using new naming\nconventions in new code, I don't think that's going to clear things\nup. Getting rid of the weird naming with pruning, \"scanning\" (that is\npart of vacuum), and \"vacuuming\" (that is another part of vacuum) is\ngonna take some work.\n\n> Vacuuming consists of three phases -- the first pass, index vacuuming,\n> and the second pass. I don't think we should dictate what happens in\n> each pass. That is, we shouldn't expect only pruning to happen in the\n> first pass and only reaping to happen in the second pass. For example,\n> I think Andres has previously proposed doing another round of pruning\n> after index vacuuming. The second pass/third phase is distinguished\n> primarily by being after index vacuuming.\n>\n> If we think about it this way, that frees us up to set dead line\n> pointers unused in the first pass when the table has no indexes. For\n> clarity, I could add a block comment explaining that doing this is an\n> optimization and not a logical requirement. One way to make this even\n> more clear would be to set the dead line pointers unused in a separate\n> loop after heap_prune_chain() as I proposed upthread.\n\nI don't really disagree with any of this, but I think the question is\nwhether the code is cleaner with mark-LP-as-unused pushed down into\npruning or whether it's better the way it is. 
Andres seems confident\nthat the change won't suck for performance, which is good, but I'm not\nquite convinced that it's the right direction to go with the code, and\nhe doesn't seem to be either. Perhaps this all turns on this point:\n\nMP> I am planning to add a VM update into the freeze record, at which point\nMP> I will move the VM update code into lazy_scan_prune(). This will then\nMP> allow us to consolidate the freespace map update code for the prune and\nMP> noprune cases and make lazy_scan_heap() short and sweet.\n\nCan we see what that looks like on top of this change?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 13:47:43 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during pruning"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-04 17:37:27 -0500, Melanie Plageman wrote:\n> On Thu, Jan 4, 2024 at 3:03 PM Andres Freund <[email protected]> wrote:\n> >\n> > On 2023-11-17 18:12:08 -0500, Melanie Plageman wrote:\n> > > Assert(ItemIdIsNormal(lp));\n> > > htup = (HeapTupleHeader) PageGetItem(dp, lp);\n> > > @@ -715,7 +733,17 @@ heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum,\n> > > * redirect the root to the correct chain member.\n> > > */\n> > > if (i >= nchain)\n> > > - heap_prune_record_dead(prstate, rootoffnum);\n> > > + {\n> > > + /*\n> > > + * If the relation has no indexes, we can remove dead tuples\n> > > + * during pruning instead of marking their line pointers dead. Set\n> > > + * this tuple's line pointer LP_UNUSED.\n> > > + */\n> > > + if (prstate->pronto_reap)\n> > > + heap_prune_record_unused(prstate, rootoffnum);\n> > > + else\n> > > + heap_prune_record_dead(prstate, rootoffnum);\n> > > + }\n> > > else\n> > > heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);\n> > > }\n> > > @@ -726,9 +754,12 @@ heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum,\n> > > * item. This can happen if the loop in heap_page_prune caused us to\n> > > * visit the dead successor of a redirect item before visiting the\n> > > * redirect item. We can clean up by setting the redirect item to\n> > > - * DEAD state.\n> > > + * DEAD state. If pronto_reap is true, we can set it LP_UNUSED now.\n> > > */\n> > > - heap_prune_record_dead(prstate, rootoffnum);\n> > > + if (prstate->pronto_reap)\n> > > + heap_prune_record_unused(prstate, rootoffnum);\n> > > + else\n> > > + heap_prune_record_dead(prstate, rootoffnum);\n> > > }\n> > >\n> > > return ndeleted;\n> >\n> > There's three new calls to heap_prune_record_unused() and the logic got more\n> > nested. 
Is there a way to get to a nicer end result?\n> \n> So, I could do another loop through the line pointers in\n> heap_page_prune() (after the loop calling heap_prune_chain()) and, if\n> pronto_reap is true, set dead line pointers LP_UNUSED. Then, when\n> constructing the WAL record, I would just not add the prstate.nowdead\n> that I saved from heap_prune_chain() to the prune WAL record.\n\nHm, that seems a bit sad as well. I am wondering if we could move the\npronto_reap handling into heap_prune_record_dead() or a wrapper of it. I am\nmore concerned about the human reader than the CPU here...\n\n\n\n> > > @@ -972,20 +970,21 @@ lazy_scan_heap(LVRelState *vacrel)\n> > > continue;\n> > > }\n> > >\n> > > - /* Collect LP_DEAD items in dead_items array, count tuples */\n> > > - if (lazy_scan_noprune(vacrel, buf, blkno, page, &hastup,\n> > > + /*\n> > > + * Collect LP_DEAD items in dead_items array, count tuples,\n> > > + * determine if rel truncation is safe\n> > > + */\n> > > + if (lazy_scan_noprune(vacrel, buf, blkno, page,\n> > > &recordfreespace))\n> > > {\n> > > Size freespace = 0;\n> > >\n> > > /*\n> > > * Processed page successfully (without cleanup lock) -- just\n> > > - * need to perform rel truncation and FSM steps, much like the\n> > > - * lazy_scan_prune case. Don't bother trying to match its\n> > > - * visibility map setting steps, though.\n> > > + * need to update the FSM, much like the lazy_scan_prune case.\n> > > + * Don't bother trying to match its visibility map setting\n> > > + * steps, though.\n> > > */\n> > > - if (hastup)\n> > > - vacrel->nonempty_pages = blkno + 1;\n> > > if (recordfreespace)\n> > > freespace = PageGetHeapFreeSpace(page);\n> > > UnlockReleaseBuffer(buf);\n> >\n> > The comment continues to say that we \"determine if rel truncation is safe\" -\n> > but I don't see that? Oh, I see, it's done inside lazy_scan_noprune(). This\n> > doesn't seem like a clear improvement to me. 
Particularly because it's only\n> > set if lazy_scan_noprune() actually does something.\n> \n> I don't get what the last sentence means (\"Particularly because...\").\n\nTook me a second to understand myself again too, oops. What I think I meant is\nthat it seems error-prone that it's only set in some paths inside\nlazy_scan_noprune(). Previously it was at least a bit clearer in\nlazy_scan_heap() that it would be set for the different possible paths.\n\n\n> > I don't like the existing code in lazy_scan_heap(). But this kinda seems like\n> > tinkering around the edges, without getting to the heart of the issue. I think\n> > we should\n> >\n> > 1) Move everything after ReadBufferExtended() and the end of the loop into its\n> > own function\n> >\n> > 2) All the code in the loop body after the call to lazy_scan_prune() is\n> > specific to the lazy_scan_prune() path, it doesn't make sense that it's at\n> > the same level as the calls to lazy_scan_noprune(),\n> > lazy_scan_new_or_empty() or lazy_scan_prune(). Either it should be in\n> > lazy_scan_prune() or a new wrapper function.\n> >\n> > 3) It's imo wrong that we have UnlockReleaseBuffer() (there are 6 different\n> > places unlocking if I didn't miscount!) and RecordPageWithFreeSpace() calls\n> > in this many places. I think this is largely a consequence of the previous\n> > points. Once those are addressed, we can have one common place.\n> \n> I have other patches that do versions of all of the above, but they\n> didn't seem to really fit with this patch set. I am taking a step to\n> move code out of lazy_scan_heap() that doesn't belong there. The fact\n> that other code should also be moved from there seems more like a \"yes\n> and\" than a \"no but\". That being said, do you think I should introduce\n> patches doing further refactoring of lazy_scan_heap() (like what you\n> suggest above) into this thread?\n\nIt probably should not be part of this patchset. 
I probably shouldn't have\nwritten the above here, but after concluding that I didn't think your small\nrefactoring patch was quite right, I couldn't stop myself from thinking about\nwhat would be right.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jan 2024 11:47:09 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during pruning"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-05 08:59:41 -0500, Robert Haas wrote:\n> On Thu, Jan 4, 2024 at 6:03 PM Melanie Plageman\n> <[email protected]> wrote:\n> > When a single page is being processed, page pruning happens in\n> > heap_page_prune(). Freezing, dead items recording, and visibility\n> > checks happen in lazy_scan_prune(). Visibility map updates and\n> > freespace map updates happen back in lazy_scan_heap(). Except, if the\n> > table has no indexes, in which case, lazy_scan_heap() also invokes\n> > lazy_vacuum_heap_page() to set dead line pointers unused and do\n> > another separate visibility check and VM update. I maintain that all\n> > page-level processing should be done in the page-level processing\n> > functions (like lazy_scan_prune()). And lazy_scan_heap() shouldn't be\n> > directly responsible for special case page-level processing.\n> \n> But you can just as easily turn this argument on its head, can't you?\n> In general, except for HOT tuples, line pointers are marked dead by\n> pruning and unused by vacuum. Here you want to turn it on its head and\n> make pruning do what would normally be vacuum's responsibility.\n\nOTOH, the pruning logic, including its WAL record, already supports marking\nitems unused, all we need to do is to tell it to do so in a few more cases. If\nwe didn't already need to have support for this, I'd have a much harder time\narguing for doing this.\n\nOne important part of the larger project is to combine the WAL records for\npruning, freezing and setting the all-visible/all-frozen bit into one WAL\nrecord. We can't set all-frozen before we have removed the dead items. So\neither we need to combine pruning and setting items unused for no-index tables\nor we end up considerably less efficient in the no-indexes case.\n\n\nAn aside:\n\nAs I think we chatted about before, I eventually would like the option to\nremove index entries for a tuple during on-access pruning, for OLTP\nworkloads. I.e. 
before removing the tuple, construct the corresponding index\ntuple, use it to look up index entries pointing to the tuple. If all the index\nentries were found (they might not be, if they already were marked dead during\na lookup, or if an expression wasn't actually immutable), we can prune without\nthe full index scan. Obviously this would only be suitable for some\nworkloads, but it could be quite beneficial when you have huge indexes. The\nreason I mention this is that then we'd have another source of marking items\nunused during pruning.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:05:20 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during pruning"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 3:05 PM Andres Freund <[email protected]> wrote:\n> OTOH, the pruning logic, including its WAL record, already supports marking\n> items unused, all we need to do is to tell it to do so in a few more cases. If\n> we didn't already need to have support for this, I'd have a much harder time\n> arguing for doing this.\n>\n> One important part of the larger project is to combine the WAL records for\n> pruning, freezing and setting the all-visible/all-frozen bit into one WAL\n> record. We can't set all-frozen before we have removed the dead items. So\n> either we need to combine pruning and setting items unused for no-index tables\n> or we end up considerably less efficient in the no-indexes case.\n\nThose are fair arguments.\n\n> An aside:\n>\n> As I think we chatted about before, I eventually would like the option to\n> remove index entries for a tuple during on-access pruning, for OLTP\n> workloads. I.e. before removing the tuple, construct the corresponding index\n> tuple, use it to look up index entries pointing to the tuple. If all the index\n> entries were found (they might not be, if they already were marked dead during\n> a lookup, or if an expression wasn't actually immutable), we can prune without\n> the full index scan. Obviously this would only be suitable for some\n> workloads, but it could be quite beneficial when you have huge indexes. The\n> reason I mention this is that then we'd have another source of marking items\n> unused during pruning.\n\nI will be astonished if you can make this work well enough to avoid\nhuge regressions in plausible cases. There are plenty of cases where\nwe do a very thorough job opportunistically removing index tuples.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 15:23:12 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 1:47 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Jan 5, 2024 at 12:59 PM Melanie Plageman\n> <[email protected]> wrote:\n> MP> I am planning to add a VM update into the freeze record, at which point\n> MP> I will move the VM update code into lazy_scan_prune(). This will then\n> MP> allow us to consolidate the freespace map update code for the prune and\n> MP> noprune cases and make lazy_scan_heap() short and sweet.\n>\n> Can we see what that looks like on top of this change?\n\nYes, attached is a patch set which does this. My previous patchset\nalready reduced the number of places we unlock the buffer and update\nthe freespace map in lazy_scan_heap(). This patchset combines the\nlazy_scan_prune() and lazy_scan_noprune() FSM update cases. I also\nhave a version which moves the freespace map updates into\nlazy_scan_prune() and lazy_scan_noprune() -- eliminating all of these\nfrom lazy_scan_heap(). This is arguably more clear. But Andres\nmentioned he wanted fewer places unlocking the buffer and updating the\nFSM.\n\n- Melanie",
"msg_date": "Fri, 5 Jan 2024 15:34:22 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-05 15:23:12 -0500, Robert Haas wrote:\n> On Fri, Jan 5, 2024 at 3:05 PM Andres Freund <[email protected]> wrote:\n> > An aside:\n> >\n> > As I think we chatted about before, I eventually would like the option to\n> > remove index entries for a tuple during on-access pruning, for OLTP\n> > workloads. I.e. before removing the tuple, construct the corresponding index\n> > tuple, use it to look up index entries pointing to the tuple. If all the index\n> > entries were found (they might not be, if they already were marked dead during\n> > a lookup, or if an expression wasn't actually immutable), we can prune without\n> > the full index scan. Obviously this would only be suitable for some\n> > workloads, but it could be quite beneficial when you have huge indexes. The\n> > reason I mention this is that then we'd have another source of marking items\n> > unused during pruning.\n>\n> I will be astonished if you can make this work well enough to avoid\n> huge regressions in plausible cases. There are plenty of cases where\n> we do a very thorough job opportunistically removing index tuples.\n\nThese days the AM is often involved with that, via\ntable_index_delete_tuples()/heap_index_delete_tuples(). That IIRC has to\nhappen before physically removing the already-marked-killed index entries. We\ncan't rely on being able to actually prune the heap page at that point, there\nmight be other backends pinning it, but often we will be able to. If we were\nto prune below heap_index_delete_tuples(), we wouldn't need to recheck that\nindex again during \"individual tuple pruning\", if the to-be-marked-unused heap\ntuple is one of the tuples passed to heap_index_delete_tuples(). Which\npresumably will be very commonly the case.\n\nAt least for nbtree, we are much more aggressive about marking index entries\nas killed, than about actually removing the index entries. 
\"individual tuple\npruning\" would have to look for killed-but-still-present index entries, not\njust for \"live\" entries.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:57:30 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 2:47 PM Andres Freund <[email protected]> wrote:\n> On 2024-01-04 17:37:27 -0500, Melanie Plageman wrote:\n> > On Thu, Jan 4, 2024 at 3:03 PM Andres Freund <[email protected]> wrote:\n> > >\n> > > On 2023-11-17 18:12:08 -0500, Melanie Plageman wrote:\n> > > > @@ -972,20 +970,21 @@ lazy_scan_heap(LVRelState *vacrel)\n> > > > continue;\n> > > > }\n> > > >\n> > > > - /* Collect LP_DEAD items in dead_items array, count tuples */\n> > > > - if (lazy_scan_noprune(vacrel, buf, blkno, page, &hastup,\n> > > > + /*\n> > > > + * Collect LP_DEAD items in dead_items array, count tuples,\n> > > > + * determine if rel truncation is safe\n> > > > + */\n> > > > + if (lazy_scan_noprune(vacrel, buf, blkno, page,\n> > > > &recordfreespace))\n> > > > {\n> > > > Size freespace = 0;\n> > > >\n> > > > /*\n> > > > * Processed page successfully (without cleanup lock) -- just\n> > > > - * need to perform rel truncation and FSM steps, much like the\n> > > > - * lazy_scan_prune case. Don't bother trying to match its\n> > > > - * visibility map setting steps, though.\n> > > > + * need to update the FSM, much like the lazy_scan_prune case.\n> > > > + * Don't bother trying to match its visibility map setting\n> > > > + * steps, though.\n> > > > */\n> > > > - if (hastup)\n> > > > - vacrel->nonempty_pages = blkno + 1;\n> > > > if (recordfreespace)\n> > > > freespace = PageGetHeapFreeSpace(page);\n> > > > UnlockReleaseBuffer(buf);\n> > >\n> > > The comment continues to say that we \"determine if rel truncation is safe\" -\n> > > but I don't see that? Oh, I see, it's done inside lazy_scan_noprune(). This\n> > > doesn't seem like a clear improvement to me. Particularly because it's only\n> > > set if lazy_scan_noprune() actually does something.\n> >\n> > I don't get what the last sentence means (\"Particularly because...\").\n>\n> Took me a second to understand myself again too, oops. 
What I think I meant is\n> that it seems error-prone that it's only set in some paths inside\n> lazy_scan_noprune(). Previously it was at least a bit clearer in\n> lazy_scan_heap() that it would be set for the different possible paths.\n\nI see what you are saying. But if lazy_scan_noprune() doesn't do\nanything, then it calls lazy_scan_prune(), which does set hastup and\nupdate vacrel->nonempty_pages if needed.\n\nUsing hastup in lazy_scan_[no]prune() also means that they are\ndirectly updating LVRelState after determining how to update it.\nlazy_scan_heap() isn't responsible for that anymore. I don't see a reason\nto be passing information back to lazy_scan_heap() to update\nLVRelState when lazy_scan_[no]prune() has access to the LVRelState.\n\nImportantly, once I combine the prune and freeze records, hastup is\nset in heap_page_prune() instead of lazy_scan_prune() (that whole loop\nin lazy_scan_prune() is eliminated). And I don't like having to pass\nhastup back through lazy_scan_prune() and then to lazy_scan_heap() so\nthat lazy_scan_heap() can use it (and use it to update a data\nstructure available in lazy_scan_prune()).\n\n- Melanie\n\n\n",
"msg_date": "Fri, 5 Jan 2024 18:57:02 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "Patch 0001 in the attached set addresses the following review feedback:\n\n- pronto_reap renamed to no_indexes\n- reduce the number of callers of heap_prune_record_unused() by calling\n it from heap_prune_record_dead() when appropriate\n- add unlikely hint to no_indexes test\n\nI've also dropped the patch which moves the test of hastup into\nlazy_scan_[no]prune(). In the future, I plan to remove the loop from\nlazy_scan_prune() which sets hastup and set it instead in\nheap_page_prune(). Changes to hastup's usage could be done with that\nchange instead of in this set.\n\nThe one review point I did not address in the code is that which I've\nresponded to inline below.\n\nThough not required for the immediate reaping feature, I've included\npatches 0002-0004 which are purely refactoring. These patches simplify\nlazy_scan_heap() by moving the visibility map code into\nlazy_scan_prune() and consolidating the updates to the FSM and\nvacrel->nonempty_pages. I've proposed them in this thread because there\nis an interdependence between eliminating the lazy_vacuum_heap_page()\ncall, moving the VM code, and combining the three FSM updates\n(lazy_scan_prune() [index and no index] and lazy_scan_noprune()).\n\nThis illustrates the code clarity benefits of the change to mark line\npointers LP_UNUSED during pruning if the table has no indexes. 
I can\npropose them in another thread if 0001 is merged.\n\nOn Thu, Jan 4, 2024 at 3:03 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-11-17 18:12:08 -0500, Melanie Plageman wrote:\n> > diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> > index 14de8158d49..b578c32eeb6 100644\n> > --- a/src/backend/access/heap/heapam.c\n> > +++ b/src/backend/access/heap/heapam.c\n> > @@ -8803,8 +8803,13 @@ heap_xlog_prune(XLogReaderState *record)\n> > nunused = (end - nowunused);\n> > Assert(nunused >= 0);\n> >\n> > - /* Update all line pointers per the record, and repair fragmentation */\n> > - heap_page_prune_execute(buffer,\n> > + /*\n> > + * Update all line pointers per the record, and repair fragmentation.\n> > + * We always pass pronto_reap as true, because we don't know whether\n> > + * or not this option was used when pruning. This reduces the\n> > + * validation done on replay in an assert build.\n> > + */\n>\n> Hm, that seems not great. Both because we loose validation and because it\n> seems to invite problems down the line, due to pronto_reap falsely being set\n> to true in heap_page_prune_execute().\n\nI see what you are saying. With the name change to no_indexes, it\nwould be especially misleading to future programmers who might decide\nto use that parameter for something else. Are you thinking it would be\nokay to check the catalog to see if the table has indexes in\nheap_xlog_prune() or are you suggesting that I add some kind of flag\nto the prune WAL record?\n\n- Melanie",
"msg_date": "Sat, 6 Jan 2024 10:34:59 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 12:23 PM Robert Haas <[email protected]> wrote:\n> > As I think we chatted about before, I eventually would like the option to\n> > remove index entries for a tuple during on-access pruning, for OLTP\n> > workloads. I.e. before removing the tuple, construct the corresponding index\n> > tuple, use it to look up index entries pointing to the tuple. If all the index\n> > entries were found (they might not be, if they already were marked dead during\n> > a lookup, or if an expression wasn't actually immutable), we can prune without\n> > the full index scan. Obviously this would only be suitable for some\n> > workloads, but it could be quite beneficial when you have huge indexes. The\n> > reason I mention this is that then we'd have another source of marking items\n> > unused during pruning.\n>\n> I will be astonished if you can make this work well enough to avoid\n> huge regressions in plausible cases. There are plenty of cases where\n> we do a very thorough job opportunistically removing index tuples.\n\nRight. In particular, bottom-up index deletion works well because it\nadds a kind of natural backpressure to one important special case (the\ncase of non-HOT updates that don't \"logically change\" any indexed\ncolumn). It isn't really all that \"opportunistic\" in my understanding\nof the term -- the overall effect is to *systematically* control bloat\nin a way that is actually quite reliable. Like you, I have my doubts\nthat it would be valuable to be more proactive about deleting dead\nindex tuples that are just random dead tuples. There may be a great\nmany dead index tuples diffusely spread across an index -- these can\nbe quite harmless, and not worth proactively cleaning up (even at a\nfairly low cost). What we mostly need to worry about is *concentrated*\nbuild-up of dead index tuples in particular leaf pages.\n\nA natural question to ask is: what cases remain, where we could stand\nto add more backpressure? 
What other \"special case\" do we not yet\naddress? I think that retail index tuple deletion could work well as\npart of a limited form of \"transaction rollback\" that cleans up after\na just-aborted transaction, within the backend that executed the\ntransaction itself. I suspect that this case now has outsized\nimportance, precisely because it's the one remaining case where the\nsystem accumulates index bloat without any sort of natural\nbackpressure. Making the transaction/backend that creates bloat\ndirectly responsible for proactively cleaning it up tends to have a\nstabilizing effect over time. The system is made to live within its\nmeans.\n\nWe could even fully reverse heap page line pointer bloat under this\n\"transaction rollback\" scheme -- I bet that aborted xacts are a\ndisproportionate source of line pointer bloat. Barring a hard crash,\nor a very large transaction, we could \"undo\" the physical changes to\nrelations before permitting the backend to retry the transaction from\nscratch. This would just work as an optimization.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 6 Jan 2024 08:03:03 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 12:57 PM Andres Freund <[email protected]> wrote:\n> > I will be astonished if you can make this work well enough to avoid\n> > huge regressions in plausible cases. There are plenty of cases where\n> > we do a very thorough job opportunistically removing index tuples.\n>\n> These days the AM is often involved with that, via\n> table_index_delete_tuples()/heap_index_delete_tuples(). That IIRC has to\n> happen before physically removing the already-marked-killed index entries. We\n> can't rely on being able to actually prune the heap page at that point, there\n> might be other backends pinning it, but often we will be able to. If we were\n> to prune below heap_index_delete_tuples(), we wouldn't need to recheck that\n> index again during \"individual tuple pruning\", if the to-be-marked-unused heap\n> tuple is one of the tuples passed to heap_index_delete_tuples(). Which\n> presumably will be very commonly the case.\n\nI don't understand. Making heap_index_delete_tuples() prune heap pages\nin passing such that we can ultimately mark dead heap tuples LP_UNUSED\nnecessitates high level coordination -- it has to happen at a level\nmuch higher than heap_index_delete_tuples(). In other words, making it\nall work safely requires the same high level context that makes it\nsafe for VACUUM to set a stub LP_DEAD line pointer to LP_UNUSED (index\ntuples must never be allowed to point to TIDs/heap line pointers that\ncan be concurrently recycled).\n\nObviously my idea of \"a limited form of transaction rollback\" has the\nrequired high-level context available, which is the crucial factor\nthat allows it to safely reverse all bloat -- even line pointer bloat\n(which is traditionally something that only VACUUM can do safely). I\nhave a hard time imagining a scheme that can do that outside of VACUUM\nwithout directly targeting some special case, such as the case that\nI'm calling \"transaction rollback\". 
In other words, I have a hard time\nimagining how this would ever be practical as part of any truly\nopportunistic cleanup process. AFAICT the dependency between indexes\nand the heap is just too delicate for such a scheme to ever really be\npractical.\n\n> At least for nbtree, we are much more aggressive about marking index entries\n> as killed, than about actually removing the index entries. \"individual tuple\n> pruning\" would have to look for killed-but-still-present index entries, not\n> just for \"live\" entries.\n\nThese days having index tuples directly marked LP_DEAD is surprisingly\nunimportant to heap_index_delete_tuples(). The batching optimization\nimplemented by _bt_simpledel_pass() tends to be very effective in\npractice. We only need to have the right *general* idea about which\nheap pages to visit -- which heap pages will yield some number of\ndeletable index tuples.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 6 Jan 2024 08:34:25 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 12:59 PM Melanie Plageman\n<[email protected]> wrote:\n> > But you can just as easily turn this argument on its head, can't you?\n> > In general, except for HOT tuples, line pointers are marked dead by\n> > pruning and unused by vacuum. Here you want to turn it on its head and\n> > make pruning do what would normally be vacuum's responsibility.\n>\n> I actually think we are going to want to stop referring to these steps\n> as pruning and vacuuming. It is confusing because vacuuming refers to\n> the whole process of doing garbage collection on the table and also to\n> the specific step of setting dead line pointers unused. If we called\n> these steps say, pruning and reaping, that may be more clear.\n\nWhat about index VACUUM records? Should they be renamed to REAP records, too?\n\n> Vacuuming consists of three phases -- the first pass, index vacuuming,\n> and the second pass. I don't think we should dictate what happens in\n> each pass. That is, we shouldn't expect only pruning to happen in the\n> first pass and only reaping to happen in the second pass.\n\nWhy not? It's not self-evident that it matters much either way. I\ndon't see it being worth the complexity (which is not to say that I'm\nopposed to what you're trying to do here).\n\nNote that we only need a cleanup for the first heap pass right now\n(not the second heap pass). So if you're going to prune in the second\nheap pass, you're going to have to add a mechanism that makes it safe\n(by acquiring a cleanup lock once we decide that we actually want to\nprune, say). Or maybe you'd just give up on the fact that we don't\nneed cleanup locks for the second hea pass these days instead (which\nseems like a step backwards).\n\n> For example,\n> I think Andres has previously proposed doing another round of pruning\n> after index vacuuming. 
The second pass/third phase is distinguished\n> primarily by being after index vacuuming.\n\nI think of the second pass/third phase as being very similar to index vacuuming.\n\nBoth processes/phases don't require cleanup locks (actually nbtree\nVACUUM does require cleanup locks, but the reasons why are pretty\nesoteric, and not shared by every index AM). And, both\nprocesses/phases don't need to generate their own recovery conflicts.\nNeither type of WAL record requires a snapshotConflictHorizon field of\nits own, since we can safely assume that some PRUNE record must have\ntaken care of all that earlier on.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Jan 2024 12:27:16 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 3:57 PM Andres Freund <[email protected]> wrote:\n> > I will be astonished if you can make this work well enough to avoid\n> > huge regressions in plausible cases. There are plenty of cases where\n> > we do a very thorough job opportunistically removing index tuples.\n>\n> These days the AM is often involved with that, via\n> table_index_delete_tuples()/heap_index_delete_tuples(). That IIRC has to\n> happen before physically removing the already-marked-killed index entries. We\n> can't rely on being able to actually prune the heap page at that point, there\n> might be other backends pinning it, but often we will be able to. If we were\n> to prune below heap_index_delete_tuples(), we wouldn't need to recheck that\n> index again during \"individual tuple pruning\", if the to-be-marked-unused heap\n> tuple is one of the tuples passed to heap_index_delete_tuples(). Which\n> presumably will be very commonly the case.\n>\n> At least for nbtree, we are much more aggressive about marking index entries\n> as killed, than about actually removing the index entries. \"individual tuple\n> pruning\" would have to look for killed-but-still-present index entries, not\n> just for \"live\" entries.\n\nI don't want to derail this thread, but I don't really see what you\nhave in mind here. The first paragraph sounds like you're imagining\nthat while pruning the index entries we might jump over to the heap\nand clean things up there, too, but that seems like it wouldn't work\nif the table has more than one index. I thought you were talking about\nstarting with a heap tuple and bouncing around to every index to see\nif we can find index pointers to kill in every one of them. 
That\n*could* work out, but you only need one index to have been\nopportunistically cleaned up in order for it to fail to work out.\nThere might well be some workloads where that's often the case, but\nthe regressions in the workloads where it isn't the case seem like\nthey would be rather substantial, because doing an extra lookup in\nevery index for each heap tuple visited sounds pricey.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jan 2024 15:10:33 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 3:34 PM Melanie Plageman\n<[email protected]> wrote:\n> Yes, attached is a patch set which does this. My previous patchset\n> already reduced the number of places we unlock the buffer and update\n> the freespace map in lazy_scan_heap(). This patchset combines the\n> lazy_scan_prune() and lazy_scan_noprune() FSM update cases. I also\n> have a version which moves the freespace map updates into\n> lazy_scan_prune() and lazy_scan_noprune() -- eliminating all of these\n> from lazy_scan_heap(). This is arguably more clear. But Andres\n> mentioned he wanted fewer places unlocking the buffer and updating the\n> FSM.\n\nHmm, interesting. I haven't had time to study this fully today, but I\nthink 0001 looks fine and could just be committed. Hooray for killing\nuseless variables with dumb names.\n\nThis part of 0002 makes me very, very uncomfortable:\n\n+ /*\n+ * Update all line pointers per the record, and repair\nfragmentation.\n+ * We always pass no_indexes as true, because we don't\nknow whether or\n+ * not this option was used when pruning. This reduces\nthe validation\n+ * done on replay in an assert build.\n+ */\n+ heap_page_prune_execute(buffer, true,\n\nredirected, nredirected,\n nowdead, ndead,\n\nnowunused, nunused);\n\nThe problem that I have with this is that we're essentially saying\nthat it's ok to lie to heap_page_prune_execute because we know how\nit's going to use the information, and therefore we know that the lie\nis harmless. But that's not how things are supposed to work. We should\neither find a way to tell the truth, or change the name of the\nparameter so that it's not a lie, or change the function so that it\ndoesn't need this parameter in the first place, or something. I can\noccasionally stomach this sort of lying as a last resort when there's\nno realistic way of adapting the code being called, but that's surely\nnot the case here -- this is a newborn parameter, and it shouldn't be\na lie on day 1. 
Just imagine if some future developer thought that the\nno_indexes parameter meant that the relation actually didn't have\nindexes (the nerve of them!).\n\nI took a VERY fast look through the rest of the patch set and I think\nthat the overall direction looks like it's probably reasonable, but\nthat's a very preliminary conclusion which I reserve the right to\nrevise after studying further. @Andres: Are you planning to\nreview/commit any of this? Are you hoping that I'm going to do that?\nSomebody else want to jump in here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jan 2024 15:50:47 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Mon, Jan 08, 2024 at 03:50:47PM -0500, Robert Haas wrote:\n> Hmm, interesting. I haven't had time to study this fully today, but I\n> think 0001 looks fine and could just be committed. Hooray for killing\n> useless variables with dumb names.\n\nI've been looking at 0001 a couple of weeks ago and thought that it\nwas fine because there's only one caller of lazy_scan_prune() and one\ncaller of lazy_scan_noprune() so all the code paths were covered.\n\n+ /* rel truncation is unsafe */\n+ if (hastup)\n+ vacrel->nonempty_pages = blkno + 1;\n\nExcept for this comment that I found misleading because this is not\nabout the fact that truncation is unsafe, it's about correctly\ntracking the the last block where we have tuples to ensure a correct\ntruncation. Perhaps this could just reuse \"Remember the location of \nthe last page with nonremovable tuples\"? If people object to that,\nfeel free.\n--\nMichael",
"msg_date": "Tue, 9 Jan 2024 14:56:34 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 12:56 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Jan 08, 2024 at 03:50:47PM -0500, Robert Haas wrote:\n> > Hmm, interesting. I haven't had time to study this fully today, but I\n> > think 0001 looks fine and could just be committed. Hooray for killing\n> > useless variables with dumb names.\n>\n> I've been looking at 0001 a couple of weeks ago and thought that it\n> was fine because there's only one caller of lazy_scan_prune() and one\n> caller of lazy_scan_noprune() so all the code paths were covered.\n>\n> + /* rel truncation is unsafe */\n> + if (hastup)\n> + vacrel->nonempty_pages = blkno + 1;\n\nAndres had actually said that he didn't like pushing the update of\nnonempty_pages into lazy_scan_[no]prune(). So, my v4 patch set\neliminates this.\n\nI can see an argument for doing both the update of\nvacrel->nonempty_pages and the FSM updates in lazy_scan_[no]prune()\nbecause it eliminates some of the back-and-forth between the\nblock-specific functions and lazy_scan_heap().\nlazy_scan_new_or_empty() has special logic for deciding how to update\nthe FSM -- so that remains in lazy_scan_new_or_empty() either way.\n\nOn the other hand, the comment above lazy_scan_new_or_empty() says we\ncan get rid of this special handling if we make relation extension\ncrash safe. Then it would make more sense to have a consolidated FSM\nupdate in lazy_scan_heap(). However it does still mean that we repeat\nthe \"UnlockReleaseBuffer()\" and FSM update code in even more places.\n\nUltimately I can see arguments for and against. Is it better to avoid\nhaving the same few lines of code in two places or avoid unneeded\ncommunication between page-level functions and lazy_scan_heap()?\n\n> Except for this comment that I found misleading because this is not\n> about the fact that truncation is unsafe, it's about correctly\n> tracking the the last block where we have tuples to ensure a correct\n> truncation. 
Perhaps this could just reuse \"Remember the location of\n> the last page with nonremovable tuples\"? If people object to that,\n> feel free.\n\nI agree the comment could be better. But, simply saying that it tracks\nthe last page with non-removable tuples makes it less clear how\nimportant this is. It makes it sound like it could be simply for stats\npurposes. I'll update the comment to something that includes that\nsentiment but is more exact than \"rel truncation is unsafe\".\n\n- Melanie\n\n\n",
"msg_date": "Tue, 9 Jan 2024 10:56:46 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Mon, Jan 8, 2024 at 3:51 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Jan 5, 2024 at 3:34 PM Melanie Plageman\n> <[email protected]> wrote:\n>\n> This part of 0002 makes me very, very uncomfortable:\n>\n> + /*\n> + * Update all line pointers per the record, and repair\n> fragmentation.\n> + * We always pass no_indexes as true, because we don't\n> know whether or\n> + * not this option was used when pruning. This reduces\n> the validation\n> + * done on replay in an assert build.\n> + */\n> + heap_page_prune_execute(buffer, true,\n>\n> redirected, nredirected,\n> nowdead, ndead,\n>\n> nowunused, nunused);\n>\n> The problem that I have with this is that we're essentially saying\n> that it's ok to lie to heap_page_prune_execute because we know how\n> it's going to use the information, and therefore we know that the lie\n> is harmless. But that's not how things are supposed to work. We should\n> either find a way to tell the truth, or change the name of the\n> parameter so that it's not a lie, or change the function so that it\n> doesn't need this parameter in the first place, or something. I can\n> occasionally stomach this sort of lying as a last resort when there's\n> no realistic way of adapting the code being called, but that's surely\n> not the case here -- this is a newborn parameter, and it shouldn't be\n> a lie on day 1. Just imagine if some future developer thought that the\n> no_indexes parameter meant that the relation actually didn't have\n> indexes (the nerve of them!).\n\nI agree that this is an issue.\n\nThe easiest solution would be to change the name of the parameter to\nheap_page_prune_execute()'s from \"no_indexes\" to something like\n\"validate_unused\", since it is only used in assert builds for\nvalidation.\n\nHowever, though I wish a name change was the right way to solve this\nproblem, my gut feeling is that it is not. It seems like we should\nrely only on the WAL record itself in recovery. 
Right now the\nparameter is used exclusively for validation, so it isn't so bad. But\nwhat if someone uses this parameter in the future in heap_xlog_prune()\nto decide how to modify the page?\n\nIt seems like the right solution would be to add a flag into the prune\nrecord indicating what to pass to heap_page_prune_execute(). In the\nfuture, I'd like to add flags for updating the VM to each of the prune\nand vacuum records (eliminating the separate VM update record). Thus,\na new flags member of the prune record could have future use. However,\nthis would add a uint8 to the record. I can go and look for some\npadding if you think this is the right direction?\n\n- Melanie\n\n\n",
"msg_date": "Tue, 9 Jan 2024 11:35:34 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 10:56 AM Melanie Plageman\n<[email protected]> wrote:\n> Andres had actually said that he didn't like pushing the update of\n> nonempty_pages into lazy_scan_[no]prune(). So, my v4 patch set\n> eliminates this.\n\nMmph - but I was so looking forward to killing hastup!\n\n> On the other hand, the comment above lazy_scan_new_or_empty() says we\n> can get rid of this special handling if we make relation extension\n> crash safe. Then it would make more sense to have a consolidated FSM\n> update in lazy_scan_heap(). However it does still mean that we repeat\n> the \"UnlockReleaseBuffer()\" and FSM update code in even more places.\n\nI wouldn't hold my breath waiting for relation extension to become\ncrash-safe. Even if you were a whale, you'd be about four orders of\nmagnitude short of holding it long enough.\n\n> Ultimately I can see arguments for and against. Is it better to avoid\n> having the same few lines of code in two places or avoid unneeded\n> communication between page-level functions and lazy_scan_heap()?\n\nTo me, the conceptual complexity of an extra structure member is a\nbigger cost than duplicating TWO lines of code. If we were talking\nabout 20 lines of code, I'd say rename it to something less dumb.\n\n> > Except for this comment that I found misleading because this is not\n> > about the fact that truncation is unsafe, it's about correctly\n> > tracking the the last block where we have tuples to ensure a correct\n> > truncation. Perhaps this could just reuse \"Remember the location of\n> > the last page with nonremovable tuples\"? If people object to that,\n> > feel free.\n>\n> I agree the comment could be better. But, simply saying that it tracks\n> the last page with non-removable tuples makes it less clear how\n> important this is. It makes it sound like it could be simply for stats\n> purposes. 
I'll update the comment to something that includes that\n> sentiment but is more exact than \"rel truncation is unsafe\".\n\nI agree that \"rel truncation is unsafe\" is less clear than desirable,\nbut I'm not sure that I agree that tracking the last page with\nnon-removable tuples makes it sound unimportant. However, the comments\nalso need to make everybody happy, not just me. Maybe something like\n\"can't truncate away this page\" or similar. A long-form comment that\nreally spells it out is fine too, but I don't know if we really need\nit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jan 2024 13:31:30 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 11:35 AM Melanie Plageman\n<[email protected]> wrote:\n> The easiest solution would be to change the name of the parameter to\n> heap_page_prune_execute()'s from \"no_indexes\" to something like\n> \"validate_unused\", since it is only used in assert builds for\n> validation.\n\nRight.\n\n> However, though I wish a name change was the right way to solve this\n> problem, my gut feeling is that it is not. It seems like we should\n> rely only on the WAL record itself in recovery. Right now the\n> parameter is used exclusively for validation, so it isn't so bad. But\n> what if someone uses this parameter in the future in heap_xlog_prune()\n> to decide how to modify the page?\n\nExactly.\n\n> It seems like the right solution would be to add a flag into the prune\n> record indicating what to pass to heap_page_prune_execute(). In the\n> future, I'd like to add flags for updating the VM to each of the prune\n> and vacuum records (eliminating the separate VM update record). Thus,\n> a new flags member of the prune record could have future use. However,\n> this would add a uint8 to the record. I can go and look for some\n> padding if you think this is the right direction?\n\nI thought about this approach and it might be OK but I don't love it,\nbecause it means making the WAL record bigger on production systems\nfor the sake of assertion that only fires for developers. Sure, it's\npossible that there might be another use in the future, but there\nmight also not be another use in the future.\n\nHow about changing if (no_indexes) to if (ndead == 0) and adding a\ncomment like this: /* If there are any tuples being marked LP_DEAD,\nthen the relation must have indexes, so every item being marked unused\nmust be a heap-only tuple. But if there are no tuples being marked\nLP_DEAD, then it's possible that the relation has no indexes, in which\ncase all we know is that the line pointer shouldn't already be\nLP_UNUSED. 
*/\n\nBTW:\n\n+ * LP_REDIRECT, or LP_DEAD items to LP_UNUSED\nduring pruning. We\n+ * can't check much here except that, if the\nitem is LP_NORMAL, it\n+ * should have storage before it is set LP_UNUSED.\n\nIs it really helpful to check this here, or just confusing/grasping at\nstraws? I mean, the requirement that LP_NORMAL items have storage is a\ngeneral one, IIUC, not something that's specific to this situation. It\nfeels like the equivalent of checking that your children don't set\nfire to the couch on Tuesdays.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jan 2024 14:00:27 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 1:31 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jan 9, 2024 at 10:56 AM Melanie Plageman\n> <[email protected]> wrote:\n> > Andres had actually said that he didn't like pushing the update of\n> > nonempty_pages into lazy_scan_[no]prune(). So, my v4 patch set\n> > eliminates this.\n>\n> Mmph - but I was so looking forward to killing hastup!\n>\n> > Ultimately I can see arguments for and against. Is it better to avoid\n> > having the same few lines of code in two places or avoid unneeded\n> > communication between page-level functions and lazy_scan_heap()?\n>\n> To me, the conceptual complexity of an extra structure member is a\n> bigger cost than duplicating TWO lines of code. If we were talking\n> about 20 lines of code, I'd say rename it to something less dumb.\n\nYes, I agree. I thought about it more, and I prefer updating the FSM\nand setting nonempty_pages into lazy_scan_[no]prune(). Originally, I\nhad ordered the patch set with that first (before the patch to do\nimmediate reaping), but there is no reason for it to be so. Using\nhastup can be done in a subsequent commit on top of the immediate\nreaping patch. I will post a new version of the immediate reaping\npatch which addresses your feedback. Then, separately, I will post a\nrevised version of the lazy_scan_heap() refactoring patches.\n\n- Melanie\n\n\n",
"msg_date": "Tue, 9 Jan 2024 14:23:24 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 2:23 PM Melanie Plageman\n<[email protected]> wrote:\n> Yes, I agree. I thought about it more, and I prefer updating the FSM\n> and setting nonempty_pages into lazy_scan_[no]prune(). Originally, I\n> had ordered the patch set with that first (before the patch to do\n> immediate reaping), but there is no reason for it to be so. Using\n> hastup can be done in a subsequent commit on top of the immediate\n> reaping patch. I will post a new version of the immediate reaping\n> patch which addresses your feedback. Then, separately, I will post a\n> revised version of the lazy_scan_heap() refactoring patches.\n\nI kind of liked it first, because I thought we could just do it and\nget it out of the way, but if Andres doesn't agree with the idea, it\nprobably does make sense to push it later, as you say here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jan 2024 14:33:29 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "Hi, \n\nOn January 9, 2024 11:33:29 AM PST, Robert Haas <[email protected]> wrote:\n>On Tue, Jan 9, 2024 at 2:23 PM Melanie Plageman\n><[email protected]> wrote:\n>> Yes, I agree. I thought about it more, and I prefer updating the FSM\n>> and setting nonempty_pages into lazy_scan_[no]prune(). Originally, I\n>> had ordered the patch set with that first (before the patch to do\n>> immediate reaping), but there is no reason for it to be so. Using\n>> hastup can be done in a subsequent commit on top of the immediate\n>> reaping patch. I will post a new version of the immediate reaping\n>> patch which addresses your feedback. Then, separately, I will post a\n>> revised version of the lazy_scan_heap() refactoring patches.\n>\n>I kind of liked it first, because I thought we could just do it and\n>get it out of the way, but if Andres doesn't agree with the idea, it\n>probably does make sense to push it later, as you say here.\n\n\nI don't have that strong feelings about it. If both of you think it looks good, go ahead... \n\n\nAndres \n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 09 Jan 2024 11:35:36 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?US-ASCII?Q?Re=3A_Emit_fewer_vacuum_records_by_rea?=\n =?US-ASCII?Q?ping_removable_tuples_during_pruning?="
},
{
"msg_contents": "I had already written the patch for immediate reaping addressing the\nbelow feedback before I saw the emails that said everyone is happy\nwith using hastup in lazy_scan_[no]prune() in a preliminary patch. Let\nme know if you have a strong preference for reordering. Otherwise, I\nwill write the three subsequent patches on top of this one.\n\nOn Tue, Jan 9, 2024 at 2:00 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jan 9, 2024 at 11:35 AM Melanie Plageman\n> <[email protected]> wrote:\n> > The easiest solution would be to change the name of the parameter to\n> > heap_page_prune_execute()'s from \"no_indexes\" to something like\n> > \"validate_unused\", since it is only used in assert builds for\n> > validation.\n>\n> Right.\n>\n> > However, though I wish a name change was the right way to solve this\n> > problem, my gut feeling is that it is not. It seems like we should\n> > rely only on the WAL record itself in recovery. Right now the\n> > parameter is used exclusively for validation, so it isn't so bad. But\n> > what if someone uses this parameter in the future in heap_xlog_prune()\n> > to decide how to modify the page?\n>\n> Exactly.\n>\n> > It seems like the right solution would be to add a flag into the prune\n> > record indicating what to pass to heap_page_prune_execute(). In the\n> > future, I'd like to add flags for updating the VM to each of the prune\n> > and vacuum records (eliminating the separate VM update record). Thus,\n> > a new flags member of the prune record could have future use. However,\n> > this would add a uint8 to the record. I can go and look for some\n> > padding if you think this is the right direction?\n>\n> I thought about this approach and it might be OK but I don't love it,\n> because it means making the WAL record bigger on production systems\n> for the sake of assertion that only fires for developers. 
Sure, it's\n> possible that there might be another use in the future, but there\n> might also not be another use in the future.\n>\n> How about changing if (no_indexes) to if (ndead == 0) and adding a\n> comment like this: /* If there are any tuples being marked LP_DEAD,\n> then the relation must have indexes, so every item being marked unused\n> must be a heap-only tuple. But if there are no tuples being marked\n> LP_DEAD, then it's possible that the relation has no indexes, in which\n> case all we know is that the line pointer shouldn't already be\n> LP_UNUSED. */\n\nAh, I like this a lot. Attached patch does this. I've added a modified\nversion of the comment you suggested. My only question is if we are\nlosing something without this sentence (from the old comment):\n\n- * ... They don't need to be left in place as LP_DEAD items\nuntil VACUUM gets\n- * around to doing index vacuuming.\n\nI don't feel like it adds a lot, but it is absent from the new\ncomment, so thought I would check.\n\n> BTW:\n>\n> + * LP_REDIRECT, or LP_DEAD items to LP_UNUSED\n> during pruning. We\n> + * can't check much here except that, if the\n> item is LP_NORMAL, it\n> + * should have storage before it is set LP_UNUSED.\n>\n> Is it really helpful to check this here, or just confusing/grasping at\n> straws? I mean, the requirement that LP_NORMAL items have storage is a\n> general one, IIUC, not something that's specific to this situation. It\n> feels like the equivalent of checking that your children don't set\n> fire to the couch on Tuesdays.\n\nHmm. Yes. I suppose I was trying to find something to validate. Is it\nworth checking that the line pointer is not already LP_UNUSED? Or is\nthat a bit ridiculous?\n\n- Melanie",
"msg_date": "Tue, 9 Jan 2024 15:13:30 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 3:13 PM Melanie Plageman\n<[email protected]> wrote:\n> I had already written the patch for immediate reaping addressing the\n> below feedback before I saw the emails that said everyone is happy\n> with using hastup in lazy_scan_[no]prune() in a preliminary patch. Let\n> me know if you have a strong preference for reordering. Otherwise, I\n> will write the three subsequent patches on top of this one.\n\nI don't know if it rises to the level of a strong preference. It's\njust a preference.\n\n> Ah, I like this a lot. Attached patch does this. I've added a modified\n> version of the comment you suggested. My only question is if we are\n> losing something without this sentence (from the old comment):\n>\n> - * ... They don't need to be left in place as LP_DEAD items\n> until VACUUM gets\n> - * around to doing index vacuuming.\n>\n> I don't feel like it adds a lot, but it is absent from the new\n> comment, so thought I would check.\n\nI agree that we can leave that out. It wouldn't be bad to include it\nif someone had a nice way of doing that, but it doesn't seem critical,\nand if forcing it in there makes the comment less clear overall, it's\na net loss IMHO.\n\n> Hmm. Yes. I suppose I was trying to find something to validate. Is it\n> worth checking that the line pointer is not already LP_UNUSED? Or is\n> that a bit ridiculous?\n\nI think that's worthwhile (hence my proposed wording).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jan 2024 15:40:36 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On 1/8/24 2:10 PM, Robert Haas wrote:\n> On Fri, Jan 5, 2024 at 3:57 PM Andres Freund <[email protected]> wrote:\n>>> I will be astonished if you can make this work well enough to avoid\n>>> huge regressions in plausible cases. There are plenty of cases where\n>>> we do a very thorough job opportunistically removing index tuples.\n>>\n>> These days the AM is often involved with that, via\n>> table_index_delete_tuples()/heap_index_delete_tuples(). That IIRC has to\n>> happen before physically removing the already-marked-killed index entries. We\n>> can't rely on being able to actually prune the heap page at that point, there\n>> might be other backends pinning it, but often we will be able to. If we were\n>> to prune below heap_index_delete_tuples(), we wouldn't need to recheck that\n>> index again during \"individual tuple pruning\", if the to-be-marked-unused heap\n>> tuple is one of the tuples passed to heap_index_delete_tuples(). Which\n>> presumably will be very commonly the case.\n>>\n>> At least for nbtree, we are much more aggressive about marking index entries\n>> as killed, than about actually removing the index entries. \"individual tuple\n>> pruning\" would have to look for killed-but-still-present index entries, not\n>> just for \"live\" entries.\n> \n> I don't want to derail this thread, but I don't really see what you\n> have in mind here. The first paragraph sounds like you're imagining\n> that while pruning the index entries we might jump over to the heap\n> and clean things up there, too, but that seems like it wouldn't work\n> if the table has more than one index. I thought you were talking about\n> starting with a heap tuple and bouncing around to every index to see\n> if we can find index pointers to kill in every one of them. 
    "msg_contents": "On Tue, Jan 9, 2024 at 10:56 AM Melanie Plageman\n<[email protected]> wrote:\n> Andres had actually said that he didn't like pushing the update of\n> nonempty_pages into lazy_scan_[no]prune(). So, my v4 patch set\n> eliminates this.\n\nMmph - but I was so looking forward to killing hastup!\n\n> On the other hand, the comment above lazy_scan_new_or_empty() says we\n> can get rid of this special handling if we make relation extension\n> crash safe. Then it would make more sense to have a consolidated FSM\n> update in lazy_scan_heap(). However it does still mean that we repeat\n> the \"UnlockReleaseBuffer()\" and FSM update code in even more places.\n\nI wouldn't hold my breath waiting for relation extension to become\ncrash-safe. Even if you were a whale, you'd be about four orders of\nmagnitude short of holding it long enough.\n\n> Ultimately I can see arguments for and against. Is it better to avoid\n> having the same few lines of code in two places or avoid unneeded\n> communication between page-level functions and lazy_scan_heap()?\n\nTo me, the conceptual complexity of an extra structure member is a\nbigger cost than duplicating TWO lines of code. If we were talking\nabout 20 lines of code, I'd say rename it to something less dumb.\n\n> > Except for this comment that I found misleading because this is not\n> > about the fact that truncation is unsafe, it's about correctly\n> > tracking the last block where we have tuples to ensure a correct\n> > truncation. Perhaps this could just reuse \"Remember the location of\n> > the last page with nonremovable tuples\"? If people object to that,\n> > feel free.\n>\n> I agree the comment could be better. But, simply saying that it tracks\n> the last page with non-removable tuples makes it less clear how\n> important this is. It makes it sound like it could be simply for stats\n> purposes. 
"msg_date": "Tue, 9 Jan 2024 15:20:01 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 3:40 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jan 9, 2024 at 3:13 PM Melanie Plageman\n> <[email protected]> wrote:\n> > I had already written the patch for immediate reaping addressing the\n> > below feedback before I saw the emails that said everyone is happy\n> > with using hastup in lazy_scan_[no]prune() in a preliminary patch. Let\n> > me know if you have a strong preference for reordering. Otherwise, I\n> > will write the three subsequent patches on top of this one.\n>\n> I don't know if it rises to the level of a strong preference. It's\n> just a preference.\n\nAttached v6 has the immediate reaping patch first followed by the code\nto use hastup in lazy_scan_[no]prune(). 0003 and 0004 move the VM\nupdate code into lazy_scan_prune() and eliminate LVPagePruneState\nentirely. 0005 moves the FSM update into lazy_scan_[no]prune(),\nsubstantially simplifying lazy_scan_heap().\n\n> I agree that we can leave that out. It wouldn't be bad to include it\n> if someone had a nice way of doing that, but it doesn't seem critical,\n> and if forcing it in there makes the comment less clear overall, it's\n> a net loss IMHO.\n>\n> > Hmm. Yes. I suppose I was trying to find something to validate. Is it\n> > worth checking that the line pointer is not already LP_UNUSED? Or is\n> > that a bit ridiculous?\n>\n> I think that's worthwhile (hence my proposed wording).\n\nDone in attached v6.\n\n- Melanie",
"msg_date": "Tue, 9 Jan 2024 17:42:38 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 5:42 PM Melanie Plageman\n<[email protected]> wrote:\n> Done in attached v6.\n\nDon't kill me, but:\n\n+ /*\n+ * For now, pass no_indexes == false\nregardless of whether or not\n+ * the relation has indexes. In the future we\nmay enable immediate\n+ * reaping for on access pruning.\n+ */\n+ heap_page_prune(relation, buffer, vistest, false,\n+ &presult, NULL);\n\nMy previous comments about lying seem equally applicable here.\n\n- if (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))\n+ if (!ItemIdIsUsed(itemid))\n continue;\n\nThere is a bit of overhead here for the !no_indexes case. I assume it\ndoesn't matter.\n\n static void\n heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum)\n {\n+ /*\n+ * If the relation has no indexes, we can remove dead tuples during\n+ * pruning instead of marking their line pointers dead. Set this tuple's\n+ * line pointer LP_UNUSED. We hint that tables with indexes are more\n+ * likely.\n+ */\n+ if (unlikely(prstate->no_indexes))\n+ {\n+ heap_prune_record_unused(prstate, offnum);\n+ return;\n+ }\n\nI think this should be pushed to the callers. Else you make the\nexisting function name into another lie.\n\n+ bool recordfreespace;\n\nNot sure if it's necessary to move this to an outer scope like this?\nThe whole handling of this looks kind of confusing. If we're going to\ndo it this way, then I think lazy_scan_prune() definitely needs to\ndocument how it handles this function (i.e. might set true to false,\nwon't set false to true, also meaning). But are we sure we want to let\na local variable with a weird name \"leak out\" like this?\n\n+ Assert(vacrel->do_index_vacuuming);\n\nIs this related?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jan 2024 15:54:40 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 10, 2024 at 3:54 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jan 9, 2024 at 5:42 PM Melanie Plageman\n> <[email protected]> wrote:\n> > Done in attached v6.\n>\n> Don't kill me, but:\n>\n> + /*\n> + * For now, pass no_indexes == false\n> regardless of whether or not\n> + * the relation has indexes. In the future we\n> may enable immediate\n> + * reaping for on access pruning.\n> + */\n> + heap_page_prune(relation, buffer, vistest, false,\n> + &presult, NULL);\n>\n> My previous comments about lying seem equally applicable here.\n\nYes, the options I can think of are:\n\n1) rename the parameter to \"immed_reap\" or similar and make very clear\nin heap_page_prune_opt() that you are not to pass true.\n2) make immediate reaping work for on-access pruning. I would need a\nlow cost way to find out if there are any indexes on the table. Do you\nthink this is possible? Should we just rename the parameter for now\nand think about that later?\n\n>\n> - if (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))\n> + if (!ItemIdIsUsed(itemid))\n> continue;\n>\n> There is a bit of overhead here for the !no_indexes case. I assume it\n> doesn't matter.\n\nRight. Should be okay. Alternatively, and I'm not saying this is a\ngood idea, but we could throw this into the loop in heap_page_prune()\nwhich calls heap_prune_chain():\n\n+ if (ItemIdIsDead(itemid) && prstate.no_indexes)\n+ {\n+ heap_prune_record_unused(&prstate, offnum);\n+ continue;\n+ }\n\nI think that is correct?\nBut, from a consistency perspective, we don't call the\nheap_prune_record* functions directly from heap_page_prune() in any\nother cases. And it seems like it would be nice to have all of those\nin one place?\n\n> static void\n> heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum)\n> {\n> + /*\n> + * If the relation has no indexes, we can remove dead tuples during\n> + * pruning instead of marking their line pointers dead. 
Set this tuple's\n> + * line pointer LP_UNUSED. We hint that tables with indexes are more\n> + * likely.\n> + */\n> + if (unlikely(prstate->no_indexes))\n> + {\n> + heap_prune_record_unused(prstate, offnum);\n> + return;\n> + }\n>\n> I think this should be pushed to the callers. Else you make the\n> existing function name into another lie.\n\nYes, so I preferred it in the body of heap_prune_chain() (the caller).\nAndres didn't like the extra level of indentation. I could wrap\nheap_record_dead() and heap_record_unused(), but I couldn't really\nthink of a good wrapper name. heap_record_dead_or_unused() seems long\nand literal. But, it might be the best I can do. I can't think of a\ngeneral word which encompasses both the transition to death and to\ndisposal.\n\n> + bool recordfreespace;\n>\n> Not sure if it's necessary to move this to an outer scope like this?\n> The whole handling of this looks kind of confusing. If we're going to\n> do it this way, then I think lazy_scan_prune() definitely needs to\n> document how it handles this function (i.e. might set true to false,\n> won't set false to true, also meaning). But are we sure we want to let\n> a local variable with a weird name \"leak out\" like this?\n\nWhich function do you mean when you say \"document how\nlazy_scan_prune() handles this function\". And no we definitely don't\nwant a variable like this to be hanging out in lazy_scan_heap(), IMHO.\nThe other patches in the stack move the FSM updates into\nlazy_scan_[no]prune() and eliminate this parameter. I moved up the\nscope because lazy_scan_noprune() already had recordfreespace as an\noutput parameter and initialized it unconditionally inside. I\ninitialize it unconditionally in lazy_scan_prune() as well. I mention\nin the commit message that this is temporary because we plan to\neliminate recordfreespace as an output parameter by updating the FSM\nin lazy_scan_[no]prune(). 
I could have stuck recordfreespace into the\nLVPagePruneState with the other output parameters. But, leaving it as\na separate output parameter made the diffs lovely for the rest of the\npatches in the stack.\n\n> + Assert(vacrel->do_index_vacuuming);\n>\n> Is this related?\n\nYes, previously the assert was:\nAssert(vacrel->nindexes == 0 || vacrel->do_index_vacuuming);\nAnd we eliminated the caller of lazy_vacuum_heap_page() with\nvacrel->nindexes == 0. Now it should only be called after doing index\nvacuuming (thus index vacuuming should definitely be enabled).\n\n- Melanie\n\n\n",
"msg_date": "Wed, 10 Jan 2024 17:27:57 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 10, 2024 at 5:28 PM Melanie Plageman\n<[email protected]> wrote:\n> Yes, the options I can think of are:\n>\n> 1) rename the parameter to \"immed_reap\" or similar and make very clear\n> in heap_page_prune_opt() that you are not to pass true.\n> 2) make immediate reaping work for on-access pruning. I would need a\n> low cost way to find out if there are any indexes on the table. Do you\n> think this is possible? Should we just rename the parameter for now\n> and think about that later?\n\nI don't think we can implement (2), because:\n\nrobert.haas=# create table test (a int);\nCREATE TABLE\nrobert.haas=# begin;\nBEGIN\nrobert.haas=*# select * from test;\n a\n---\n(0 rows)\n\n<in another window>\n\nrobert.haas=# create index on test (a);\nCREATE INDEX\n\nIn English, the lock we hold during regular table access isn't strong\nenough to foreclose concurrent addition of an index.\n\nSo renaming the parameter seems like the way to go. How about \"mark_unused_now\"?\n\n> > - if (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))\n> > + if (!ItemIdIsUsed(itemid))\n> > continue;\n> >\n> > There is a bit of overhead here for the !no_indexes case. I assume it\n> > doesn't matter.\n>\n> Right. Should be okay. Alternatively, and I'm not saying this is a\n> good idea, but we could throw this into the loop in heap_page_prune()\n> which calls heap_prune_chain():\n>\n> + if (ItemIdIsDead(itemid) && prstate.no_indexes)\n> + {\n> + heap_prune_record_unused(&prstate, offnum);\n> + continue;\n> + }\n>\n> I think that is correct?\n\nWouldn't that be wrong in any case where heap_prune_chain() calls\nheap_prune_record_dead()?\n\n> Yes, so I preferred it in the body of heap_prune_chain() (the caller).\n> Andres didn't like the extra level of indentation. I could wrap\n> heap_record_dead() and heap_record_unused(), but I couldn't really\n> think of a good wrapper name. heap_record_dead_or_unused() seems long\n> and literal. But, it might be the best I can do. 
I can't think of a\n> general word which encompasses both the transition to death and to\n> disposal.\n\nI'm sure we could solve the wordsmithing problem but I think it's\nclearer if the heap_prune_record_* functions don't get clever.\n\n> > + bool recordfreespace;\n> >\n> > Not sure if it's necessary to move this to an outer scope like this?\n> > The whole handling of this looks kind of confusing. If we're going to\n> > do it this way, then I think lazy_scan_prune() definitely needs to\n> > document how it handles this function (i.e. might set true to false,\n> > won't set false to true, also meaning). But are we sure we want to let\n> > a local variable with a weird name \"leak out\" like this?\n>\n> Which function do you mean when you say \"document how\n> lazy_scan_prune() handles this function\".\n\n\"function\" was a thinko for \"variable\".\n\n> And no we definitely don't\n> want a variable like this to be hanging out in lazy_scan_heap(), IMHO.\n> The other patches in the stack move the FSM updates into\n> lazy_scan_[no]prune() and eliminate this parameter. I moved up the\n> scope because lazy_scan_noprune() already had recordfreespace as an\n> output parameter and initialized it unconditionally inside. I\n> initialize it unconditionally in lazy_scan_prune() as well. I mention\n> in the commit message that this is temporary because we plan to\n> eliminate recordfreespace as an output parameter by updating the FSM\n> in lazy_scan_[no]prune(). I could have stuck recordfreespace into the\n> LVPagePruneState with the other output parameters. But, leaving it as\n> a separate output parameter made the diffs lovely for the rest of the\n> patches in the stack.\n\nI guess I'll have to look at more of the patches to see what I think\nof this. 
Looking at this patch in isolation, it's ugly, IMHO, but it\nseems we agree on that.\n\n> > + Assert(vacrel->do_index_vacuuming);\n> >\n> > Is this related?\n>\n> Yes, previously the assert was:\n> Assert(vacrel->nindexes == 0 || vacrel->do_index_vacuuming);\n> And we eliminated the caller of lazy_vacuum_heap_page() with\n> vacrel->nindexes == 0. Now it should only be called after doing index\n> vacuuming (thus index vacuuming should definitely be enabled).\n\nAh.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Jan 2024 11:54:32 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 2:35 PM Andres Freund <[email protected]> wrote:\n> I don't have that strong feelings about it. If both of you think it looks good, go ahead...\n\nDone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Jan 2024 13:43:27 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "Attached v7 is rebased over the commit you just made to remove\nLVPagePruneState->hastup.\n\nOn Thu, Jan 11, 2024 at 11:54 AM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Jan 10, 2024 at 5:28 PM Melanie Plageman\n> <[email protected]> wrote:\n> > Yes, the options I can think of are:\n> >\n> > 1) rename the parameter to \"immed_reap\" or similar and make very clear\n> > in heap_page_prune_opt() that you are not to pass true.\n> > 2) make immediate reaping work for on-access pruning. I would need a\n> > low cost way to find out if there are any indexes on the table. Do you\n> > think this is possible? Should we just rename the parameter for now\n> > and think about that later?\n>\n> I don't think we can implement (2), because:\n>\n> robert.haas=# create table test (a int);\n> CREATE TABLE\n> robert.haas=# begin;\n> BEGIN\n> robert.haas=*# select * from test;\n> a\n> ---\n> (0 rows)\n>\n> <in another window>\n>\n> robert.haas=# create index on test (a);\n> CREATE INDEX\n>\n> In English, the lock we hold during regular table access isn't strong\n> enough to foreclose concurrent addition of an index.\n\nAh, I see.\n\n> So renaming the parameter seems like the way to go. How about \"mark_unused_now\"?\n\nI've done this in attached v7.\n\n> > > - if (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))\n> > > + if (!ItemIdIsUsed(itemid))\n> > > continue;\n> > >\n> > > There is a bit of overhead here for the !no_indexes case. I assume it\n> > > doesn't matter.\n> >\n> > Right. Should be okay. Alternatively, and I'm not saying this is a\n> > good idea, but we could throw this into the loop in heap_page_prune()\n> > which calls heap_prune_chain():\n> >\n> > + if (ItemIdIsDead(itemid) && prstate.no_indexes)\n> > + {\n> > + heap_prune_record_unused(&prstate, offnum);\n> > + continue;\n> > + }\n> >\n> > I think that is correct?\n>\n> Wouldn't that be wrong in any case where heap_prune_chain() calls\n> heap_prune_record_dead()?\n\nHmm. 
But previously, we skipped heap_prune_chain() for already dead\nline pointers. In my patch, I rely on the test in the loop in\nheap_prune_chain() to set those LP_UNUSED if mark_unused_now is true\n(previously the below code just broke out of the loop).\n\n /*\n * Likewise, a dead line pointer can't be part of the chain. (We\n * already eliminated the case of dead root tuple outside this\n * function.)\n */\n if (ItemIdIsDead(lp))\n {\n /*\n * If the caller set mark_unused_now true, we can set dead line\n * pointers LP_UNUSED now. We don't increment ndeleted here since\n * the LP was already marked dead.\n */\n if (unlikely(prstate->mark_unused_now))\n heap_prune_record_unused(prstate, offnum);\n\n break;\n }\n\nso wouldn't what I suggested simply set the item LP_UNUSED before\ninvoking heap_prune_chain()? Thereby achieving the same result without\ninvoking heap_prune_chain() for already dead line pointers? I could be\nmissing something. That heap_prune_chain() logic sometimes gets me\nturned around.\n\n> > Yes, so I preferred it in the body of heap_prune_chain() (the caller).\n> > Andres didn't like the extra level of indentation. I could wrap\n> > heap_record_dead() and heap_record_unused(), but I couldn't really\n> > think of a good wrapper name. heap_record_dead_or_unused() seems long\n> > and literal. But, it might be the best I can do. I can't think of a\n> > general word which encompasses both the transition to death and to\n> > disposal.\n>\n> I'm sure we could solve the wordsmithing problem but I think it's\n> clearer if the heap_prune_record_* functions don't get clever.\n\nI've gone with my heap_record_dead_or_unused() suggestion.\n\n- Melanie",
"msg_date": "Thu, 11 Jan 2024 14:30:07 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 2:30 PM Melanie Plageman\n<[email protected]> wrote:\n> Attached v7 is rebased over the commit you just made to remove\n> LVPagePruneState->hastup.\n\nApologies for making you rebase but I was tired of thinking about that patch.\n\nI'm still kind of hung up on the changes that 0001 makes to vacuumlazy.c.\n\nSay we didn't add the recordfreespace parameter to lazy_scan_prune().\nCouldn't the caller just compute it? lpdead_items goes out of scope,\nbut there's also prstate.has_lpdead_items.\n\nPressing that gripe a bit more, I just figured out that \"Wait until\nlazy_vacuum_heap_rel() to save free space\" gets turned into \"If we\nwill likely do index vacuuming, wait until lazy_vacuum_heap_rel() to\nsave free space.\" That follows the good practice of phrasing the\ncomment conditionally when the comment is outside the if-statement.\nBut the if-statement says merely \"if (recordfreespace)\", which is not\nobviously related to \"If we will likely do index vacuuming\" but under\nthe hood actually is. But it seems like an unpleasant amount of action\nat a distance. If that condition said if (vacrel->nindexes > 0 &&\nvacrel->do_index_vacuuming && prstate.has_lpdead_items)\nUnlockReleaseBuffer(); else { PageGetHeapFreeSpace;\nUnlockReleaseBuffer; RecordPageWithFreeSpace } it would be a lot more\nobvious that the code was doing what the comment says.\n\nThat's a refactoring question, but I'm also wondering if there's a\nfunctional issue. Pre-patch, if the table has no indexes and some\nitems are left LP_DEAD, then we mark them unused, forget them,\nRecordPageWithFreeSpace(), and if enough pages have been visited,\nFreeSpaceMapVacuumRange(). Post-patch, if the table has no indexes, no\nitems will be left LP_DEAD, and *recordfreespace will be set to true,\nso we'll RecordPageWithFreeSpace(). 
But this seems to me to mean that\npost-patch we'll RecordPageWithFreeSpace() in more cases than we do now.\nSpecifically, imagine a case where the current code wouldn't mark any\nitems LP_DEAD and the page also had no pre-existing items that were\nLP_DEAD and the table also has no indexes. Currently, I believe we\nwouldn't RecordPageWithFreeSpace(), but with the patch, I think we would.\nAm I wrong?\n\nNote that the call to FreeSpaceMapVacuumRange() seems to try to guard\nagainst this problem by testing for vacrel->tuples_deleted >\ntuples_already_deleted. I haven't tried to verify whether that guard\nis correct, but the fact that FreeSpaceMapVacuumRange() has such a\nguard and RecordPageWithFreeSpace() does not have one makes me\nsuspicious.\n\nAnother interesting effect of the patch is that instead of getting\nlazy_vacuum_heap_page()'s handling of the all-visible status of the\npage, we get the somewhat more complex handling done by\nlazy_scan_heap(). I haven't fully thought through the consequences of\nthat, but if you have, I'd be interested to hear your analysis.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Jan 2024 16:49:01 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 4:49 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Jan 11, 2024 at 2:30 PM Melanie Plageman\n> <[email protected]> wrote:\n>\n> I'm still kind of hung up on the changes that 0001 makes to vacuumlazy.c.\n>\n> Say we didn't add the recordfreespace parameter to lazy_scan_prune().\n> Couldn't the caller just compute it? lpdead_items goes out of scope,\n> but there's also prstate.has_lpdead_items.\n>\n> Pressing that gripe a bit more, I just figured out that \"Wait until\n> lazy_vacuum_heap_rel() to save free space\" gets turned into \"If we\n> will likely do index vacuuming, wait until lazy_vacuum_heap_rel() to\n> save free space.\" That follows the good practice of phrasing the\n> comment conditionally when the comment is outside the if-statement.\n> But the if-statement says merely \"if (recordfreespace)\", which is not\n> obviously related to \"If we will likely do index vacuuming\" but under\n> the hood actually is. But it seems like an unpleasant amount of action\n> at a distance. If that condition said if (vacrel->nindexes > 0 &&\n> vacrel->do_index_vacuuming && prstate.has_lpdead_items)\n> UnlockReleaseBuffer(); else { PageGetHeapFreeSpace;\n> UnlockReleaseBuffer; RecordPageWithFreeSpace } it would be a lot more\n> obvious that the code was doing what the comment says.\n\nYes, this is a good point. Seems like writing maintainable code is\nreally about never lying and then figuring out when hiding the truth\nis also lying. Darn!\n\nMy only excuse is that lazy_scan_noprune() has a similarly confusing\noutput parameter, recordfreespace, both of which I removed in a later\npatch in the set. I think we should change it as you say.\n\n> That's a refactoring question, but I'm also wondering if there's a\n> functional issue. 
Pre-patch, if the table has no indexes and some\n> items are left LP_DEAD, then we mark them unused, forget them,\n> RecordPageWithFreeSpace(), and if enough pages have been visited,\n> FreeSpaceMapVacuumRange(). Post-patch, if the table has no indexes, no\n> items will be left LP_DEAD, and *recordfreespace will be set to true,\n> so we'll PageRecordFreeSpace(). But this seems to me to mean that\n> post-patch we'll PageRecordFreeSpace() in more cases than we do now.\n> Specifically, imagine a case where the current code wouldn't mark any\n> items LP_DEAD and the page also had no pre-existing items that were\n> LP_DEAD and the table also has no indexes. Currently, I believe we\n> wouldn't PageRecordFreeSpace(), but with the patch, I think we would.\n> Am I wrong?\n\nAh! I think you are right. Good catch. I could fix this with logic like this:\n\nbool space_freed = vacrel->tuples_deleted > tuples_already_deleted;\nif ((vacrel->nindexes == 0 && space_freed) ||\n (vacrel->nindexes > 0 && (space_freed || !vacrel->do_index_vacuuming)))\n\nI think I made this mistake when working on a different version of\nthis that combined the prune and no prune cases. I noticed that\nlazy_scan_noprune() updates the FSM whenever there are no indexes. I\nwonder why this is (and why we don't do it in the prune case).\n\n> Note that the call to FreeSpaceMapVacuumRange() seems to try to guard\n> against this problem by testing for vacrel->tuples_deleted >\n> tuples_already_deleted. I haven't tried to verify whether that guard\n> is correct, but the fact that FreeSpaceMapVacuumRange() has such a\n> guard and RecordPageWithFreeSpace() does not have one makes me\n> suspicious.\n\nFreeSpaceMapVacuumRange() is not called for the no prune case, so I\nthink this is right.\n\n> Another interesting effect of the patch is that instead of getting\n> lazy_vacuum_heap_page()'s handling of the all-visible status of the\n> page, we get the somewhat more complex handling done by\n> lazy_scan_heap(). 
I haven't fully through the consequences of that,\n> but if you have, I'd be interested to hear your analysis.\n\nlazy_vacuum_heap_page() calls heap_page_is_all_visible() which does\nanother HeapTupleSatisfiesVacuum() call -- which is definitely going\nto be more expensive than not doing that. In one case, in\nlazy_scan_heap(), we might do a visibilitymap_get_status() (via\nVM_ALL_FROZEN()) to avoid calling visibilitymap_set() if the page is\nalready marked all frozen in the VM. But that would pale in comparison\nto another HeapTupleSatisfiesVacuum() (I think).\n\nThe VM update code in lazy_scan_heap() looks complicated but two out\nof four cases are basically to deal with uncommon data corruption.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 11 Jan 2024 21:04:59 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 01:43:27PM -0500, Robert Haas wrote:\n> On Tue, Jan 9, 2024 at 2:35 PM Andres Freund <[email protected]> wrote:\n>> I don't have that strong feelings about it. If both of you think it\n>> looks good, go ahead...\n> \n> Done.\n\nThanks for e2d5b3b9b643.\n--\nMichael",
"msg_date": "Fri, 12 Jan 2024 13:35:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 9:05 PM Melanie Plageman\n<[email protected]> wrote:\n> Yes, this is a good point. Seems like writing maintainable code is\n> really about never lying and then figuring out when hiding the truth\n> is also lying. Darn!\n\nI think that's pretty true. In this particular case, this code has a\nfair number of preexisting problems of this type, IMHO. It's been\ntouched by many hands, each person with their own design ideas, and\nthe result isn't as coherent as if it were written by a single mind\nfrom scratch. But, because this code is also absolutely critical to\nthe system not eating user data, any changes have to be approached\nwith the highest level of caution. I think it's good to be really\ncareful about this sort of refactoring anywhere, because finding out a\nyear later that you broke something and having to go back and fix it\nis no fun even if the consequences are minor ... here, they might not\nbe.\n\n> Ah! I think you are right. Good catch. I could fix this with logic like this:\n>\n> bool space_freed = vacrel->tuples_deleted > tuples_already_deleted;\n> if ((vacrel->nindexes == 0 && space_freed) ||\n> (vacrel->nindexes > 0 && (space_freed || !vacrel->do_index_vacuuming)))\n\nPerhaps that would be better written as space_freed ||\n(vacrel->nindexes > 0 && !vacrel->do_index_vacuuming), at which point\nyou might not need to introduce the variable.\n\n> I think I made this mistake when working on a different version of\n> this that combined the prune and no prune cases. I noticed that\n> lazy_scan_noprune() updates the FSM whenever there are no indexes. I\n> wonder why this is (and why we don't do it in the prune case).\n\nYeah, this all seems distinctly under-commented. 
As noted above, this\ncode has grown organically, and some things just never got written\ndown.\n\nLooking through the git history, I see that this behavior seems to\ndate back to 44fa84881fff4529d68e2437a58ad2c906af5805 which introduced\nlazy_scan_noprune(). The comments don't explain the reasoning, but my\nguess is that it was just an accident. It's not entirely evident to me\nwhether there might ever be good reasons to update the freespace map\nfor a page where we haven't freed up any space -- after all, the free\nspace map isn't crash-safe, so you could always postulate that\nupdating it will correct an existing inaccuracy. But I really doubt\nthat there's any good reason for lazy_scan_prune() and\nlazy_scan_noprune() to have different ideas about whether to update\nthe FSM or not, especially in an obscure corner case like this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 09:43:53 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 9:44 AM Robert Haas <[email protected]> wrote:\n> Looking through the git history, I see that this behavior seems to\n> date back to 44fa84881fff4529d68e2437a58ad2c906af5805 which introduced\n> lazy_scan_noprune(). The comments don't explain the reasoning, but my\n> guess is that it was just an accident. It's not entirely evident to me\n> whether there might ever be good reasons to update the freespace map\n> for a page where we haven't freed up any space -- after all, the free\n> space map isn't crash-safe, so you could always postulate that\n> updating it will correct an existing inaccuracy. But I really doubt\n> that there's any good reason for lazy_scan_prune() and\n> lazy_scan_noprune() to have different ideas about whether to update\n> the FSM or not, especially in an obscure corner case like this.\n\nWhy do you think that lazy_scan_prune() and lazy_scan_noprune() have\ndifferent ideas about whether to update the FSM or not?\n\nBarring certain failsafe edge cases, we always call\nPageGetHeapFreeSpace() exactly once for each scanned page. While it's\ntrue that we won't always do that in the first heap pass (in cases\nwhere VACUUM has indexes), that's only because we expect to do it in\nthe second heap pass instead -- since we only want to do it once.\n\nIt's true that lazy_scan_noprune unconditionally calls\nPageGetHeapFreeSpace() when vacrel->nindexes == 0. But that's the same\nbehavior as lazy_scan_prune when vacrel->nindexes == 0. In both cases\nwe know that there won't be any second heap pass, and so in both cases\nwe always call PageGetHeapFreeSpace() in the first heap pass. It's\njust that it's a bit harder to see that in the lazy_scan_prune case.\nNo?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 12 Jan 2024 11:50:00 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 9:44 AM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Jan 11, 2024 at 9:05 PM Melanie Plageman\n> <[email protected]> wrote:\n> > Ah! I think you are right. Good catch. I could fix this with logic like this:\n> >\n> > bool space_freed = vacrel->tuples_deleted > tuples_already_deleted;\n> > if ((vacrel->nindexes == 0 && space_freed) ||\n> > (vacrel->nindexes > 0 && (space_freed || !vacrel->do_index_vacuuming)))\n>\n> Perhaps that would be better written as space_freed ||\n> (vacrel->nindexes > 0 && !vacrel->do_index_vacuuming), at which point\n> you might not need to introduce the variable.\n\nAs I revisited this, I realized why I had written this logic for\nupdating the FSM:\n\n if (vacrel->nindexes == 0 ||\n !vacrel->do_index_vacuuming || lpdead_items == 0)\n\nIt is because in master, in lazy_scan_heap(), when there are no\nindexes and no space has been freed, we actually keep going and hit\nthe other logic to update the FSM at the bottom of the loop:\n\n if (prunestate.has_lpdead_items && vacrel->do_index_vacuuming)\n\nThe effect is that if there are no indexes and no space is freed we\nalways update the FSM. When combined, that means that if there are no\nindexes, we always update the FSM. With the logic you suggested, we\nfail a pageinspect test which expects the FSM to exist when it\ndoesn't. So, adding the logic:\n\n if (space_freed ||\n (vacrel->nindexes > 0 && !vacrel->do_index_vacuuming) ||\n vacrel->nindexes == 0)\n\nWe pass that pageinspect test. But, we fail a freespacemap test which\nexpects some freespace reported which isn't:\n\n id | blkno | is_avail\n -----------------+-------+----------\n- freespace_tab | 0 | t\n+ freespace_tab | 0 | f\n\nwhich I presume is because space_freed is not the same as !has_lpdead_items.\n\nI can't really see what logic would be right here.\n\n> > I think I made this mistake when working on a different version of\n> > this that combined the prune and no prune cases. 
I noticed that\n> > lazy_scan_noprune() updates the FSM whenever there are no indexes. I\n> > wonder why this is (and why we don't do it in the prune case).\n>\n> Yeah, this all seems distinctly under-commented. As noted above, this\n> code has grown organically, and some things just never got written\n> down.\n>\n> Looking through the git history, I see that this behavior seems to\n> date back to 44fa84881fff4529d68e2437a58ad2c906af5805 which introduced\n> lazy_scan_noprune(). The comments don't explain the reasoning, but my\n> guess is that it was just an accident. It's not entirely evident to me\n> whether there might ever be good reasons to update the freespace map\n> for a page where we haven't freed up any space -- after all, the free\n> space map isn't crash-safe, so you could always postulate that\n> updating it will correct an existing inaccuracy. But I really doubt\n> that there's any good reason for lazy_scan_prune() and\n> lazy_scan_noprune() to have different ideas about whether to update\n> the FSM or not, especially in an obscure corner case like this.\n\nAll of this makes me wonder why we would update the FSM in vacuum when\nno space was freed. I thought that RelationAddBlocks() and\nRelationGetBufferForTuple() would handle making sure the FSM gets\ncreated and updated if the reason there is freespace is because the\npage hasn't been filled yet.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 12 Jan 2024 12:03:05 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 11:50 AM Peter Geoghegan <[email protected]> wrote:\n>\n> On Fri, Jan 12, 2024 at 9:44 AM Robert Haas <[email protected]> wrote:\n> > Looking through the git history, I see that this behavior seems to\n> > date back to 44fa84881fff4529d68e2437a58ad2c906af5805 which introduced\n> > lazy_scan_noprune(). The comments don't explain the reasoning, but my\n> > guess is that it was just an accident. It's not entirely evident to me\n> > whether there might ever be good reasons to update the freespace map\n> > for a page where we haven't freed up any space -- after all, the free\n> > space map isn't crash-safe, so you could always postulate that\n> > updating it will correct an existing inaccuracy. But I really doubt\n> > that there's any good reason for lazy_scan_prune() and\n> > lazy_scan_noprune() to have different ideas about whether to update\n> > the FSM or not, especially in an obscure corner case like this.\n>\n> Why do you think that lazy_scan_prune() and lazy_scan_noprune() have\n> different ideas about whether to update the FSM or not?\n>\n> Barring certain failsafe edge cases, we always call\n> PageGetHeapFreeSpace() exactly once for each scanned page. While it's\n> true that we won't always do that in the first heap pass (in cases\n> where VACUUM has indexes), that's only because we expect to do it in\n> the second heap pass instead -- since we only want to do it once.\n>\n> It's true that lazy_scan_noprune unconditionally calls\n> PageGetHeapFreeSpace() when vacrel->nindexes == 0. But that's the same\n> behavior as lazy_scan_prune when vacrel-> nindexes == 0. In both cases\n> we know that there won't be any second heap pass, and so in both cases\n> we always call PageGetHeapFreeSpace() in the first heap pass. 
It's\n> just that it's a bit harder to see that in the lazy_scan_prune case.\n> No?\n\nSo, I think this is the logic in master:\n\nPrune case, first pass\n\n- indexes == 0 && space_freed -> update FSM\n- indexes == 0 && (!space_freed || !index_vacuuming) -> update FSM\n- indexes > 0 && (!space_freed || !index_vacuuming) -> update FSM\n\nwhich reduces to:\n\n- indexes == 0 || !space_freed || !index_vacuuming\n\nNo Prune (except aggressive vacuum), first pass:\n\n- indexes == 0 -> update FSM\n- indexes > 0 && !space_freed -> update FSM\n\nwhich reduces to:\n\n- indexes == 0 || !space_freed\n\nwhich is, during the first pass, if we will not do index vacuuming,\neither because we have no indexes, because there is nothing to vacuum,\nor because do_index_vacuuming is false, make sure we update the\nfreespace map now.\n\nbut what about no prune when do_index_vacuuming is false?\n\nI still don't understand why vacuum is responsible for updating the\nFSM per page when no line pointers have been set unused. That is how\nPageGetFreeSpace() figures out if there is free space, right?\n\n- Melanie\n\n\n",
"msg_date": "Fri, 12 Jan 2024 12:33:28 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 12:33 PM Melanie Plageman\n<[email protected]> wrote:\n> So, I think this is the logic in master:\n>\n> Prune case, first pass\n>\n> ...\n> - indexes > 0 && (!space_freed || !index_vacuuming) -> update FSM\n\nWhat is \"space_freed\"? Isn't that something from your uncommitted patch?\n\nAs I said, the aim is to call PageGetHeapFreeSpace() (*not*\nPageGetFreeSpace(), which is only used for index pages) exactly once\nper heap page scanned. This is supposed to happen independently of\nwhatever specific work was/will be required for the heap page. In\ngeneral, we don't ever trust that the FSM is already up-to-date.\nPresumably because the FSM isn't crash safe.\n\nOn master, prunestate.has_lpdead_items may be set true when our VACUUM\nwasn't actually the thing that performed pruning that freed tuple\nstorage -- typically when some other backend was the one that did all\nrequired pruning at some earlier point in time, often via\nopportunistic pruning. For better or worse, the only thing that VACUUM\naims to do is make sure that PageGetHeapFreeSpace() gets called\nexactly once per scanned page.\n\n> which is, during the first pass, if we will not do index vacuuming,\n> either because we have no indexes, because there is nothing to vacuum,\n> or because do_index_vacuuming is false, make sure we update the\n> freespace map now.\n>\n> but what about no prune when do_index_vacuuming is false?\n\nNote that do_index_vacuuming cannot actually affect the \"nindexes ==\n0\" case. We have an assertion that's kinda relevant to this point:\n\n if (prunestate.has_lpdead_items && vacrel->do_index_vacuuming)\n {\n /*\n * Wait until lazy_vacuum_heap_rel() to save free space.\n * ....\n */\n Assert(vacrel->nindexes > 0);\n UnlockReleaseBuffer(buf);\n }\n else\n {\n ....\n }\n\nWe should never get this far down in the lazy_scan_heap() loop when\n\"nindexes == 0 && prunestate.has_lpdead_items\". 
That's handled in the\nspecial \"single heap pass\" branch after lazy_scan_prune().\n\nAs for the case where we use lazy_scan_noprune() for a \"nindexes == 0\"\nVACUUM's heap page, we also can't get this far down. (Actually,\nlazy_scan_noprune lies if it has to in order to make sure that\nlazy_scan_heap never has to deal with a \"nindexes == 0\" VACUUM with\nLP_DEAD items from a no-cleanup-lock page. This is a kludge -- see\ncomments about how this is \"dishonest\" inside lazy_scan_noprune().)\n\n> I still don't understand why vacuum is responsible for updating the\n> FSM per page when no line pointers have been set unused. That is how\n> PageGetFreeSpace() figures out if there is free space, right?\n\nYou mean PageGetHeapFreeSpace? Not really. (Though even pruning can\nset line pointers unused, or heap-only tuples.)\n\nEven if pruning doesn't happen in VACUUM, that doesn't mean that the\nFSM is up-to-date.\n\nIn short, we do these things with the free space map because it is a\nmap of free space (which isn't crash safe) -- nothing more. I happen\nto agree that that general design has a lot of problems, but those\nseem out of scope here.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 12 Jan 2024 13:07:30 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 11:50 AM Peter Geoghegan <[email protected]> wrote:\n> Why do you think that lazy_scan_prune() and lazy_scan_noprune() have\n> different ideas about whether to update the FSM or not?\n\nWhen lazy_scan_prune() is called, we call RecordPageWithFreeSpace if\nvacrel->nindexes == 0 && prunestate.has_lpdead_items. See the code\nnear the comment that begins \"Consider the need to do page-at-a-time\nheap vacuuming\".\n\nWhen lazy_scan_noprune() is called, whether we call\nRecordPageWithFreeSpace depends on the output parameter\nrecordfreespace. That is set to true whenever vacrel->nindexes == 0 ||\nlpdead_items == 0. See the code near the comment that begins \"Save any\nLP_DEAD items\".\n\nThe case where I thought there was a behavior difference is when\nvacrel->nindexes == 0 and lpdead_items == 0 and thus\nprunestate.has_lpdead_items is false. But now I see (from Melanie's\nemail) that this isn't really true, because in that case we fall\nthrough to the logic that we use when indexes are present, giving us a\nsecond chance to call RecordPageWithFreeSpace(), which we take when\n(prunestate.has_lpdead_items && vacrel->do_index_vacuuming) comes out\nfalse, as it always does in the scenario that I postulated.\n\nP.S. to Melanie: I'll respond to your further emails next but I wanted\nto respond to this one from Peter first so he didn't think I was\nrudely ignoring him. :-)\n\nP.P.S. to everyone: Yikes, this logic is really confusing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 13:45:53 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 1:07 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Fri, Jan 12, 2024 at 12:33 PM Melanie Plageman\n> <[email protected]> wrote:\n> > So, I think this is the logic in master:\n> >\n> > Prune case, first pass\n> >\n> > ...\n> > - indexes > 0 && (!space_freed || !index_vacuuming) -> update FSM\n>\n> What is \"space_freed\"? Isn't that something from your uncommitted patch?\n\nYes, I was mixing the two together.\n\nI just want to make sure that we agree that, on master, when\nlazy_scan_prune() is called, the logic for whether or not to update\nthe FSM after the first pass is:\n\nindexes == 0 || !has_lpdead_items || !index_vacuuming\n\nand when lazy_scan_noprune() is called, the logic for whether or not\nto update the FSM after the first pass is:\n\nindexes == 0 || !has_lpdead_items\n\nThose seem different to me.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 12 Jan 2024 13:52:14 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 1:07 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Fri, Jan 12, 2024 at 12:33 PM Melanie Plageman\n> <[email protected]> wrote:\n> > So, I think this is the logic in master:\n> >\n> > Prune case, first pass\n> >\n> > ...\n> > - indexes > 0 && (!space_freed || !index_vacuuming) -> update FSM\n>\n> What is \"space_freed\"? Isn't that something from your uncommitted patch?\n>\n> As I said, the aim is to call PageGetHeapFreeSpace() (*not*\n> PageGetFreeSpace(), which is only used for index pages) exactly once\n> per heap page scanned. This is supposed to happen independently of\n> whatever specific work was/will be required for the heap page. In\n> general, we don't ever trust that the FSM is already up-to-date.\n> Presumably because the FSM isn't crash safe.\n>\n> On master, prunestate.has_lpdead_items may be set true when our VACUUM\n> wasn't actually the thing that performed pruning that freed tuple\n> storage -- typically when some other backend was the one that did all\n> required pruning at some earlier point in time, often via\n> opportunistic pruning. For better or worse, the only thing that VACUUM\n> aims to do is make sure that PageGetHeapFreeSpace() gets called\n> exactly once per scanned page.\n...\n> > I still don't understand why vacuum is responsible for updating the\n> > FSM per page when no line pointers have been set unused. That is how\n> > PageGetFreeSpace() figures out if there is free space, right?\n>\n> You mean PageGetHeapFreeSpace? Not really. (Though even pruning can\n> set line pointers unused, or heap-only tuples.)\n>\n> Even if pruning doesn't happen in VACUUM, that doesn't mean that the\n> FSM is up-to-date.\n>\n> In short, we do these things with the free space map because it is a\n> map of free space (which isn't crash safe) -- nothing more. 
I happen\n> to agree that that general design has a lot of problems, but those\n> seem out of scope here.\n\nSo, there are 3 issues I am trying to understand:\n\n1) How often should vacuum update the FSM (not vacuum as in the second\npass but vacuum as in the whole thing that is happening in\nlazy_scan_heap())?\n2) What is the exact logic in master that ensures that vacuum\nimplements the cadence in 1)?\n3) How can the logic in 2) be replicated exactly in my patch that sets\nwould-be dead items LP_UNUSED during pruning?\n\nFrom what Peter is saying, I think 1) is decided and is once per page\n(across all passes).\nFor 2), see my previous email. And for 3), TBD until 2) is agreed upon.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 12 Jan 2024 14:02:21 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 1:52 PM Melanie Plageman\n<[email protected]> wrote:\n> Yes, I was mixing the two together.\n>\n> I just want to make sure that we agree that, on master, when\n> lazy_scan_prune() is called, the logic for whether or not to update\n> the FSM after the first pass is:\n>\n> indexes == 0 || !has_lpdead_items || !index_vacuuming\n>\n> and when lazy_scan_noprune() is called, the logic for whether or not\n> to update the FSM after the first pass is:\n>\n> indexes == 0 || !has_lpdead_items\n>\n> Those seem different to me.\n\nThis analysis seems correct to me, except that \"when\nlazy_scan_noprune() is called\" should really say \"when\nlazy_scan_noprune() is called (and returns true)\", because when it\nreturns false we fall through and call lazy_scan_prune() afterwards.\n\nHere's a draft patch to clean up the inconsistency here. It also gets\nrid of recordfreespace, because ISTM that recordfreespace is adding to\nthe confusion here rather than helping anything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 12 Jan 2024 14:32:13 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 2:32 PM Robert Haas <[email protected]> wrote:\n> On Fri, Jan 12, 2024 at 1:52 PM Melanie Plageman\n> <[email protected]> wrote:\n> This analysis seems correct to me, except that \"when\n> lazy_scan_noprune() is called\" should really say \"when\n> lazy_scan_noprune() is called (and returns true)\", because when it\n> returns false we fall through and call lazy_scan_prune() afterwards.\n\nNow that I see your patch, I understand what Melanie must have meant.\nI agree that there is a small inconsistency here, that we could well\ndo without.\n\nIn general I am in favor of religiously eliminating such\ninconsistencies (between lazy_scan_prune and lazy_scan_noprune),\nunless there is a reason not to. Not because it's necessarily\nimportant. More because it's just too hard to be sure whether it might\nmatter. It's usually easier to defensively assume that it matters.\n\n> Here's a draft patch to clean up the inconsistency here. It also gets\n> rid of recordfreespace, because ISTM that recordfreespace is adding to\n> the confusion here rather than helping anything.\n\nYou're using \"!prunestate.has_lpdead_items\" as part of your test that\nsets \"recordfreespace\". But lazy_scan_noprune doesn't get passed a\npointer to prunestate, so clearly you'll need to detect the same\ncondition some other way.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 12 Jan 2024 14:43:09 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 2:43 PM Peter Geoghegan <[email protected]> wrote:\n> You're using \"!prunestate.has_lpdead_items\" as part of your test that\n> sets \"recordfreespace\". But lazy_scan_noprune doesn't get passed a\n> pointer to prunestate, so clearly you'll need to detect the same\n> condition some other way.\n\nOOPS. Thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 14:47:11 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 1:52 PM Melanie Plageman\n<[email protected]> wrote:\n> On Fri, Jan 12, 2024 at 1:07 PM Peter Geoghegan <[email protected]> wrote:\n> > What is \"space_freed\"? Isn't that something from your uncommitted patch?\n>\n> Yes, I was mixing the two together.\n\nAn understandable mistake.\n\n> I just want to make sure that we agree that, on master, when\n> lazy_scan_prune() is called, the logic for whether or not to update\n> the FSM after the first pass is:\n>\n> indexes == 0 || !has_lpdead_items || !index_vacuuming\n>\n> and when lazy_scan_noprune() is called, the logic for whether or not\n> to update the FSM after the first pass is:\n>\n> indexes == 0 || !has_lpdead_items\n>\n> Those seem different to me.\n\nRight. As I said to Robert just now, I can now see that they're\nslightly different conditions.\n\nFWIW my brain was just ignoring \" || !index_vacuuming\". I dismissed it\nas an edge-case, only relevant when the failsafe has kicked in. Which\nit is. But that's still no reason to allow an inconsistency that we\ncan easily just avoid.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 12 Jan 2024 15:01:59 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 2:47 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Jan 12, 2024 at 2:43 PM Peter Geoghegan <[email protected]> wrote:\n> > You're using \"!prunestate.has_lpdead_items\" as part of your test that\n> > sets \"recordfreespace\". But lazy_scan_noprune doesn't get passed a\n> > pointer to prunestate, so clearly you'll need to detect the same\n> > condition some other way.\n>\n> OOPS. Thanks.\n\nAlso, I think you should combine these in lazy_scan_noprune() now\n\n /* Save any LP_DEAD items found on the page in dead_items array */\n if (vacrel->nindexes == 0)\n {\n /* Using one-pass strategy (since table has no indexes) */\n if (lpdead_items > 0)\n {\n\nSince we don't set recordfreespace in the outer if statement anymore\n\nAnd I noticed you missed a reference to recordfreespace output\nparameter in the function comment above lazy_scan_noprune().\n\n- Melanie\n\n\n",
"msg_date": "Fri, 12 Jan 2024 15:04:19 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 3:04 PM Melanie Plageman\n<[email protected]> wrote:\n> Also, I think you should combine these in lazy_scan_noprune() now\n>\n> /* Save any LP_DEAD items found on the page in dead_items array */\n> if (vacrel->nindexes == 0)\n> {\n> /* Using one-pass strategy (since table has no indexes) */\n> if (lpdead_items > 0)\n> {\n>\n> Since we don't set recordfreespace in the outer if statement anymore\n\nWell, maybe, but there's an else clause attached to the outer \"if\", so\nyou have to be a bit careful. I didn't think it was critical to\nfurther rejigger this.\n\n> And I noticed you missed a reference to recordfreespace output\n> parameter in the function comment above lazy_scan_noprune().\n\nOK.\n\nSo what's the best way to solve the problem that Peter pointed out?\nShould we pass in the prunestate? Maybe just replace bool\n*recordfreespace with bool *has_lpdead_items?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 15:22:16 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 3:22 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Jan 12, 2024 at 3:04 PM Melanie Plageman\n> <[email protected]> wrote:\n>\n> So what's the best way to solve the problem that Peter pointed out?\n> Should we pass in the prunestate? Maybe just replace bool\n> *recordfreespace with bool *has_lpdead_items?\n\nYea, that works for now. I mean, I think the way we should do it is\nupdate the FSM in lazy_scan_noprune(), but, for the purposes of this\npatch, yes. has_lpdead_items output parameter seems fine to me.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 12 Jan 2024 16:05:35 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 4:05 PM Melanie Plageman\n<[email protected]> wrote:\n> Yea, that works for now. I mean, I think the way we should do it is\n> update the FSM in lazy_scan_noprune(), but, for the purposes of this\n> patch, yes. has_lpdead_items output parameter seems fine to me.\n\nHere's v2.\n\nIt's not exactly clear to me why you want to update the FSM in\nlazy_scan_[no]prune(). When I first looked at v7-0004, I said to\nmyself \"well, this is dumb, because now we're just duplicating\nsomething that is common to both cases\". But then I realized that the\nlogic was *already* duplicated in lazy_scan_heap(), and that by\npushing the duplication down to lazy_scan_[no]prune(), you made the\ntwo copies identical rather than, as at present, having two copies\nthat differ from each other. Perhaps that's a sufficient reason to\nmake the change all by itself, but it seems like what would be really\ngood is if we only needed one copy of the logic. I don't know if\nthat's achievable, though.\n\nMore generally, I somewhat question whether it's really right to push\nthings from lazy_scan_heap() and into lazy_scan_[no]prune(), mostly\nbecause of the risk of having to duplicate logic. I'm not even really\nconvinced that it's good for us to have both of those functions.\nThere's an awful lot of code duplication between them already. Each\nhas a loop that tests the status of each item and then, for LP_USED,\nswitches on htsv_get_valid_status or HeapTupleSatisfiesVacuum. It\ndoesn't seem trivial to unify all that code, but it doesn't seem very\nnice to have two copies of it, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 15 Jan 2024 12:29:57 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Mon, Jan 15, 2024 at 12:29:57PM -0500, Robert Haas wrote:\n> On Fri, Jan 12, 2024 at 4:05 PM Melanie Plageman\n> <[email protected]> wrote:\n> > Yea, that works for now. I mean, I think the way we should do it is\n> > update the FSM in lazy_scan_noprune(), but, for the purposes of this\n> > patch, yes. has_lpdead_items output parameter seems fine to me.\n> \n> Here's v2.\n> \n> It's not exactly clear to me why you want to update the FSM in\n> lazy_scan_[no]prune(). When I first looked at v7-0004, I said to\n> myself \"well, this is dumb, because now we're just duplicating\n> something that is common to both cases\". But then I realized that the\n> logic was *already* duplicated in lazy_scan_heap(), and that by\n> pushing the duplication down to lazy_scan_[no]prune(), you made the\n> two copies identical rather than, as at present, having two copies\n> that differ from each other. Perhaps that's a sufficient reason to\n> make the change all by itself, but it seems like what would be really\n> good is if we only needed one copy of the logic. I don't know if\n> that's achievable, though.\n\nUpthread in v4-0004, I do have a version which combines the two FSM\nupdates for lazy_scan_prune() and lazy_scan_noprune().\n\nIf you move the VM updates from lazy_scan_heap() into lazy_scan_prune(),\nthen there is little difference between the prune and no prune cases in\nlazy_scan_heap(). The main difference is that, when lazy_scan_noprune()\nreturns true, you are supposed to avoid calling lazy_scan_new_or_empty()\nagain and have to avoid calling lazy_scan_prune(). I solved this with a\nlocal variable \"do_prune\" and checked it before calling\nlazy_scan_new_or_empty() and lazy_scan_prune().\n\nI moved away from this approach because it felt odd to test \"do_prune\"\nbefore calling lazy_scan_new_or_empty(). Though, perhaps it doesn't\nmatter if we just call lazy_scan_new_or_empty() again. 
We do that when\nlazy_scan_noprune() returns false anyway.\n\nI thought perhaps we could circumvent this issue by moving\nlazy_scan_new_or_empty() into lazy_scan_prune(). But, this seemed wrong\nto me because, as it stands now, if lazy_scan_new_or_empty() returns\ntrue, we would want to skip the VM update code and FSM update code in\nlazy_scan_heap() after lazy_scan_prune(). That would mean\nlazy_scan_prune() would have to return something to indicate that.\n\n> More generally, I somewhat question whether it's really right to push\n> things from lazy_scan_heap() and into lazy_scan_[no]prune(), mostly\n> because of the risk of having to duplicate logic. I'm not even really\n> convinced that it's good for us to have both of those functions.\n> There's an awful lot of code duplication between them already. Each\n> has a loop that tests the status of each item and then, for LP_USED,\n> switches on htsv_get_valid_status or HeapTupleSatisfiesVacuum. It\n> doesn't seem trivial to unify all that code, but it doesn't seem very\n> nice to have two copies of it, either.\n\nI agree that the duplicated logic in both places is undesirable.\nI think the biggest issue with combining them would be that when\nlazy_scan_noprune() returns false, it needs to go get a cleanup lock and\nthen invoke the logic of lazy_scan_prune() for all the tuples on the\npage. 
This seems a little hard to get right in a single function.\n\nThen there are more trivial-to-solve differences like invoking\nheap_tuple_should_freeze() and not bothering with the whole\nheap_prepare_freeze_tuple() if we only have the share lock.\n\nI am willing to try and write a version of this to see if it is better.\nI will say, though, my agenda was to eventually push the actions taken\nin the loop in lazy_scan_prune() into heap_page_prune() and\nheap_prune_chain().\n\n> From 32684f41d1dd50f726aa0dfe8a5d816aa5c42d64 Mon Sep 17 00:00:00 2001\n> From: Robert Haas <[email protected]>\n> Date: Mon, 15 Jan 2024 12:05:52 -0500\n> Subject: [PATCH v2] Be more consistent about whether to update the FSM while\n> vacuuming.\n\nFew small review comments below, but, overall, LGTM.\n\n> Previously, when lazy_scan_noprune() was called and returned true, we would\n> update the FSM immediately if the relation had no indexes or if the page\n> contained no dead items. On the other hand, when lazy_scan_prune() was\n> called, we would update the FSM if either of those things was true or\n> if index vacuuming was disabled. Eliminate that behavioral difference by\n> considering vacrel->do_index_vacuuming in both cases.\n> \n> Also, instead of having lazy_scan_noprune() make the decision\n> internally and pass it back to the caller via *recordfreespace, just\n> have it pass the number of LP_DEAD items back to the caller, and then\n\nIt doesn't pass the number of LP_DEAD items back to the caller. It\npasses a boolean.\n\n> let the caller make the decision. That seems less confusing, since\n> the caller also decides in the lazy_scan_prune() case; moreover, this\n> way, the whole test is in one place, instead of spread out.\n\nPerhaps it isn't important, but I find this wording confusing. 
You\nmention lazy_scan_prune() and then mention that \"the whole test is in\none place instead of spread out\" -- which kind of makes it sound like\nyou are consolidating FSM updates for both the lazy_scan_noprune() and\nlazy_scan_prune() cases. Perhaps simply flipping the order of the \"since\nthe caller\" and \"moreover, this way\" conjunctions would solve it. I\ndefer to your judgment.\n\n> ---\n> src/backend/access/heap/vacuumlazy.c | 58 ++++++++++++++--------------\n> 1 file changed, 29 insertions(+), 29 deletions(-)\n> \n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index b63cad1335..f17816b81d 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -1960,7 +1974,6 @@ lazy_scan_noprune(LVRelState *vacrel,\n> \tAssert(BufferGetBlockNumber(buf) == blkno);\n> \n> \thastup = false;\t\t\t\t/* for now */\n> -\t*recordfreespace = false;\t/* for now */\n> \n> \tlpdead_items = 0;\n> \tlive_tuples = 0;\n> @@ -2102,18 +2115,8 @@ lazy_scan_noprune(LVRelState *vacrel,\n> \t\t\thastup = true;\n> \t\t\tmissed_dead_tuples += lpdead_items;\n> \t\t}\n> -\n> -\t\t*recordfreespace = true;\n> -\t}\n> -\telse if (lpdead_items == 0)\n> -\t{\n> -\t\t/*\n> -\t\t * Won't be vacuuming this page later, so record page's freespace in\n> -\t\t * the FSM now\n> -\t\t */\n> -\t\t*recordfreespace = true;\n> \t}\n> -\telse\n> +\telse if (lpdead_items != 0)\n\nIt stuck out to me a bit that this test is if lpdead_items != 0 and the\none above it:\n\n\t/* Save any LP_DEAD items found on the page in dead_items array */\n\tif (vacrel->nindexes == 0)\n\t{\n\t\t/* Using one-pass strategy (since table has no indexes) */\n\t\tif (lpdead_items > 0)\n\t\t{\n\ntests if lpdead_items > 0. 
It is more the inconsistency that bothered me\nthan the fact that lpdead_items is signed.\n\n> @@ -2159,6 +2156,9 @@ lazy_scan_noprune(LVRelState *vacrel,\n> \tif (hastup)\n> \t\tvacrel->nonempty_pages = blkno + 1;\n> \n> +\t/* Did we find LP_DEAD items? */\n> +\t*has_lpdead_items = (lpdead_items > 0);\n\nI would drop this comment. The code doesn't really need it, and the\nreason we care if there are LP_DEAD items is not because they are dead\nbut because we want to know if we'll touch this page again. You don't\nneed to rehash all that here, so I think omitting the comment is enough.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 15 Jan 2024 16:03:24 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Mon, Jan 15, 2024 at 4:03 PM Melanie Plageman\n<[email protected]> wrote:\n> It doesn't pass the number of LP_DEAD items back to the caller. It\n> passes a boolean.\n\nOops.\n\n> Perhaps it isn't important, but I find this wording confusing. You\n> mention lazy_scan_prune() and then mention that \"the whole test is in\n> one place instead of spread out\" -- which kind of makes it sound like\n> you are consolidating FSM updates for both the lazy_scan_noprune() and\n> lazy_scan_prune() cases. Perhaps simply flipping the order of the \"since\n> the caller\" and \"moreover, this way\" conjunctions would solve it. I\n> defer to your judgment.\n\nI rewrote the commit message a bit. See what you think of this version.\n\n> tests if lpdead_items > 0. It is more the inconsistency that bothered me\n> than the fact that lpdead_items is signed.\n\nChanged.\n\n> > @@ -2159,6 +2156,9 @@ lazy_scan_noprune(LVRelState *vacrel,\n> > if (hastup)\n> > vacrel->nonempty_pages = blkno + 1;\n> >\n> > + /* Did we find LP_DEAD items? */\n> > + *has_lpdead_items = (lpdead_items > 0);\n>\n> I would drop this comment. The code doesn't really need it, and the\n> reason we care if there are LP_DEAD items is not because they are dead\n> but because we want to know if we'll touch this page again. You don't\n> need to rehash all that here, so I think omitting the comment is enough.\n\nI want to keep the comment. I guess it's a pet peeve of mine, but I\nhate it when people do this:\n\n/* some comment */\nsome_code();\n\nsome_more_code();\n\n/* some other comment */\neven_more_code();\n\nIMV, this makes it unclear whether /* some comment */ is describing\nboth some_code() and some_more_code(), or just the former. To be fair,\nthere is often no practical confusion, because if the comment is good\nand the code is nothing too complicated then you understand which way\nthe author meant it. 
But sometimes the comment is bad or out of date\nand sometimes the code is difficult to understand and then you're left\nscratching your head as to what the author meant. I prefer to insert a\ncomment above some_more_code() in such cases, even if it's a bit\nperfunctory. I think it makes the code easier to read.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 16 Jan 2024 10:24:27 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 10:24 AM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Jan 15, 2024 at 4:03 PM Melanie Plageman\n> <[email protected]> wrote:\n>\n> > Perhaps it isn't important, but I find this wording confusing. You\n> > mention lazy_scan_prune() and then mention that \"the whole test is in\n> > one place instead of spread out\" -- which kind of makes it sound like\n> > you are consolidating FSM updates for both the lazy_scan_noprune() and\n> > lazy_scan_prune() cases. Perhaps simply flipping the order of the \"since\n> > the caller\" and \"moreover, this way\" conjunctions would solve it. I\n> > defer to your judgment.\n>\n> I rewrote the commit message a bit. See what you think of this version.\n\nAll LGTM.\n\n\n",
"msg_date": "Tue, 16 Jan 2024 11:28:46 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 11:28 AM Melanie Plageman\n<[email protected]> wrote:\n> All LGTM.\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jan 2024 14:23:07 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On 1/12/24 12:45 PM, Robert Haas wrote:\n> P.P.S. to everyone: Yikes, this logic is really confusing.\n\nHaving studied all this code several years ago when it was even simpler \n- it was *still* very hard to grok even back then. I *greatly \nappreciate* the effort that y'all are putting into increasing the \nclarity here.\n\nBTW, back in the day the whole \"no indexes\" optimization was a really \ntiny amount of code... I think it amounted to 2 or 3 if statements. I \nhaven't yet attempted to grok this patchset, but I'm definitely \nwondering how much it's worth continuing to optimize that case. Clearly \nit'd be very expensive to memoize dead tuples just to trawl that list a \nsingle time to clean the heap, but outside of that I'm not sure other \noptimazations are worth it given the amount of code \ncomplexity/duplication they seem to require - especially for code where \ncorrectness is so crucial.\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n\n",
"msg_date": "Tue, 16 Jan 2024 15:28:32 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 4:28 PM Jim Nasby <[email protected]> wrote:\n> On 1/12/24 12:45 PM, Robert Haas wrote:\n> > P.P.S. to everyone: Yikes, this logic is really confusing.\n>\n> Having studied all this code several years ago when it was even simpler\n> - it was *still* very hard to grok even back then. I *greatly\n> appreciate* the effort that y'all are putting into increasing the\n> clarity here.\n\nThanks. And yeah, I agree.\n\n> BTW, back in the day the whole \"no indexes\" optimization was a really\n> tiny amount of code... I think it amounted to 2 or 3 if statements. I\n> haven't yet attempted to grok this patchset, but I'm definitely\n> wondering how much it's worth continuing to optimize that case. Clearly\n> it'd be very expensive to memoize dead tuples just to trawl that list a\n> single time to clean the heap, but outside of that I'm not sure other\n> optimazations are worth it given the amount of code\n> complexity/duplication they seem to require - especially for code where\n> correctness is so crucial.\n\nPersonally, I don't think throwing away that optimization is the way\nto go. The idea isn't intrinsically complicated, I believe. It's just\nthat the code has become messy because of too many hands touching it.\nAt least, that's my read.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jan 2024 16:59:17 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 2:23 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jan 16, 2024 at 11:28 AM Melanie Plageman\n> <[email protected]> wrote:\n> > All LGTM.\n>\n> Committed.\n\nAttached v8 patch set is rebased over this.\n\nIn 0004, I've taken the approach you seem to favor and combined the FSM\nupdates from the prune and no prune cases in lazy_scan_heap() instead\nof pushing the FSM updates into lazy_scan_prune() and\nlazy_scan_noprune().\n\nI did not guard against calling lazy_scan_new_or_empty() a second time\nin the case that lazy_scan_noprune() was called. I can do this. I\nmentioned upthread I found it confusing for lazy_scan_new_or_empty()\nto be guarded by if (do_prune). The overhead of calling it wouldn't be\nterribly high. I can change that based on your opinion of what is\nbetter.\n\nThe big open question/functional change is when we consider vacuuming\nthe FSM. Previously, we only considered vacuuming the FSM in the no\nindexes, has dead items case. After combining the FSM updates from\nlazy_scan_prune()'s no indexes/has lpdead items case,\nlazy_scan_prune()'s regular case, and lazy_scan_noprune(), all of them\nconsider vacuuming the FSM. I could guard against this, but I wasn't\nsure why we would want to only vacuum the FSM in the no indexes/has\ndead items case.\n\nI also noticed while rebasing something I missed while reviewing\n45d395cd75ffc5b -- has_lpdead_items is set in a slightly different\nplace in lazy_scan_noprune() than lazy_scan_prune() (with otherwise\nidentical surrounding code). Both are correct, but it would have been\nnice for them to be the same. If the patches below are committed, we\ncould standardize on the location in lazy_scan_noprune().\n\n\n- Melanie",
"msg_date": "Tue, 16 Jan 2024 18:07:24 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 6:07 PM Melanie Plageman\n<[email protected]> wrote:\n> Attached v8 patch set is rebased over this.\n\nReviewing 0001, consider the case where a table has no indexes.\nPre-patch, PageTruncateLinePointerArray() will happen when\nlazy_vacuum_heap_page() is called; post-patch, it will not occur.\nThat's a loss. Should we care? On the plus side, visibility map\nrepair, if required, can now take place. That's a gain.\n\nI'm otherwise satisfied with this patch now, except for some extremely\nminor nitpicking:\n\n+ * For now, pass mark_unused_now == false regardless of whether or\n\nPersonally, i would write \"pass mark_unused_now as false\" here,\nbecause we're not testing equality. Or else \"pass mark_unused_now =\nfalse\". This is not an equality test.\n\n+ * During pruning, the caller may have passed mark_unused_now == true,\n\nAgain here, but also, this is referring to the name of a parameter to\na function whose name is not given. I think this this should either\ntalk fully in terms of code (\"When heap_page_prune was called,\nmark_unused_now may have been passed as true, which allows would-be\nLP_DEAD items to be made LP_USED instead.\") or fully in English\n(\"During pruning, we may have decided to mark would-be dead items as\nunused.\").\n\n> In 0004, I've taken the approach you seem to favor and combined the FSM\n> updates from the prune and no prune cases in lazy_scan_heap() instead\n> of pushing the FSM updates into lazy_scan_prune() and\n> lazy_scan_noprune().\n\nI do like that approach.\n\nI think do_prune can be declared one scope inward, in the per-block\nfor loop. 
I would probably initialize it to true so I could drop the\nstubby else block:\n\n+ /* If we do get a cleanup lock, we will definitely prune */\n+ else\n+ do_prune = true;\n\nAnd then I'd probably write the call as if (!lazy_scan_noprune())\ndo_prune = true.\n\nIf I wanted to stick with not initializing do_prune, I'd put the\ncomment inside as /* We got the cleanup lock, so we will definitely\nprune */ and add braces since that makes it a two-line block.\n\n> I did not guard against calling lazy_scan_new_or_empty() a second time\n> in the case that lazy_scan_noprune() was called. I can do this. I\n> mentioned upthread I found it confusing for lazy_scan_new_or_empty()\n> to be guarded by if (do_prune). The overhead of calling it wouldn't be\n> terribly high. I can change that based on your opinion of what is\n> better.\n\nTo me, the relevant question here isn't reader confusion, because that\ncan be cleared up with a comment explaining why we do or do not test\ndo_prune. Indeed, I'd say it's a textbook example of when you should\ncomment a test: when it might look wrong to the reader but you have a\ngood reason for doing it.\n\nBut that brings me to what I think the real question is here: do we,\nuh, have a good reason for doing it? At first blush the structure\nlooks a bit odd here. lazy_scan_new_or_empty() is intended to handle\nPageIsNew() and PageIsEmpty() cases, lazy_scan_noprune() the cases\nwhere we process the page without a cleanup lock, and\nlazy_scan_prune() the regular case. So you might think that\nlazy_scan_new_or_empty() would always be applied *first*, that we\nwould then conditionally apply lazy_scan_noprune(), and finally\nconditionally apply lazy_scan_prune(). 
Like this:\n\nbool got_cleanup_lock = ConditionalLockBufferForCleanup(buf);\nif (lazy_scan_new_or_empty())\n continue;\nif (!got_cleanup_lock && !lazy_scan_noprune())\n{\n LockBuffer(buf, BUFFER_LOCK_UNLOCK);\n LockBufferForCleanup(buf);\n got_cleanup_lock = true;\n}\nif (got_cleanup_lock)\n lazy_scan_prune();\n\nThe current organization of the code seems to imply that we don't need\nto worry about the PageIsNew() and PageIsEmpty() cases before calling\nlazy_scan_noprune(), and at the moment I'm not understanding why that\nshould be the case. I wonder if this is a bug, or if I'm just\nconfused.\n\n> The big open question/functional change is when we consider vacuuming\n> the FSM. Previously, we only considered vacuuming the FSM in the no\n> indexes, has dead items case. After combining the FSM updates from\n> lazy_scan_prune()'s no indexes/has lpdead items case,\n> lazy_scan_prune()'s regular case, and lazy_scan_noprune(), all of them\n> consider vacuuming the FSM. I could guard against this, but I wasn't\n> sure why we would want to only vacuum the FSM in the no indexes/has\n> dead items case.\n\nI don't get it. Conceptually, I agree that we don't want to be\ninconsistent here without some good reason. One of the big advantages\nof unifying different code paths is that you avoid being accidentally\ninconsistent. If different things are different it shows up as a test\nin the code instead of just having different code paths in different\nplaces that may or may not match.\n\nBut I thought the whole point of\n45d395cd75ffc5b4c824467140127a5d11696d4c was to iron out the existing\ninconsistencies so that we could unify this code without having to\nchange any more behavior. In particular, I thought we just made it\nconsistently adhere to the principle Peter articulated, where we\nrecord free space when we're touching the page for presumptively the\nlast time. I gather that you think it's still not consistent, but I\ndon't understand what the remaining inconsistency is. 
Can you explain\nfurther?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jan 2024 12:17:09 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 12:17 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jan 16, 2024 at 6:07 PM Melanie Plageman\n> <[email protected]> wrote:\n> > Attached v8 patch set is rebased over this.\n>\n> Reviewing 0001, consider the case where a table has no indexes.\n> Pre-patch, PageTruncateLinePointerArray() will happen when\n> lazy_vacuum_heap_page() is called; post-patch, it will not occur.\n> That's a loss. Should we care? On the plus side, visibility map\n> repair, if required, can now take place. That's a gain.\n\nI thought that this wasn't an issue because heap_page_prune_execute()\ncalls PageRepairFragmentation() which similarly modifies pd_lower and\nsets the hint bit about free line pointers.\n\n> I'm otherwise satisfied with this patch now, except for some extremely\n> minor nitpicking:\n>\n> + * For now, pass mark_unused_now == false regardless of whether or\n>\n> Personally, i would write \"pass mark_unused_now as false\" here,\n> because we're not testing equality. Or else \"pass mark_unused_now =\n> false\". This is not an equality test.\n>\n> + * During pruning, the caller may have passed mark_unused_now == true,\n>\n> Again here, but also, this is referring to the name of a parameter to\n> a function whose name is not given. 
I think this should either\n> talk fully in terms of code (\"When heap_page_prune was called,\n> mark_unused_now may have been passed as true, which allows would-be\n> LP_DEAD items to be made LP_USED instead.\") or fully in English\n> (\"During pruning, we may have decided to mark would-be dead items as\n> unused.\").\n\nFixed both of the above issues as suggested in attached v9.\n\n> > In 0004, I've taken the approach you seem to favor and combined the FSM\n> > updates from the prune and no prune cases in lazy_scan_heap() instead\n> > of pushing the FSM updates into lazy_scan_prune() and\n> > lazy_scan_noprune().\n>\n> I do like that approach.\n>\n> I think do_prune can be declared one scope inward, in the per-block\n> for loop. I would probably initialize it to true so I could drop the\n> stubby else block:\n>\n> + /* If we do get a cleanup lock, we will definitely prune */\n> + else\n> + do_prune = true;\n>\n> And then I'd probably write the call as if (!lazy_scan_noprune())\n> do_prune = true.\n>\n> If I wanted to stick with not initializing do_prune, I'd put the\n> comment inside as /* We got the cleanup lock, so we will definitely\n> prune */ and add braces since that makes it a two-line block.\n\nIf we don't unconditionally set do_prune using the result of\nlazy_scan_noprune(), then we cannot leave do_prune uninitialized. I\npreferred having it uninitialized, as it didn't imply that doing\npruning was the default. Also, it made it simpler to have that comment\nabout always pruning when we get the cleanup lock.\n\nHowever, with the changes you mentioned below (got_cleanup_lock), this\ndiscussion is moot.\n\n> > I did not guard against calling lazy_scan_new_or_empty() a second time\n> > in the case that lazy_scan_noprune() was called. I can do this. I\n> > mentioned upthread I found it confusing for lazy_scan_new_or_empty()\n> > to be guarded by if (do_prune). The overhead of calling it wouldn't be\n> > terribly high. 
I can change that based on your opinion of what is\n> > better.\n>\n> To me, the relevant question here isn't reader confusion, because that\n> can be cleared up with a comment explaining why we do or do not test\n> do_prune. Indeed, I'd say it's a textbook example of when you should\n> comment a test: when it might look wrong to the reader but you have a\n> good reason for doing it.\n>\n> But that brings me to what I think the real question is here: do we,\n> uh, have a good reason for doing it? At first blush the structure\n> looks a bit odd here. lazy_scan_new_or_empty() is intended to handle\n> PageIsNew() and PageIsEmpty() cases, lazy_scan_noprune() the cases\n> where we process the page without a cleanup lock, and\n> lazy_scan_prune() the regular case. So you might think that\n> lazy_scan_new_or_empty() would always be applied *first*, that we\n> would then conditionally apply lazy_scan_noprune(), and finally\n> conditionally apply lazy_scan_prune(). Like this:\n>\n> bool got_cleanup_lock = ConditionalLockBufferForCleanup(buf);\n> if (lazy_scan_new_or_empty())\n> continue;\n> if (!got_cleanup_lock && !lazy_scan_noprune())\n> {\n> LockBuffer(buf, BUFFER_LOCK_UNLOCK);\n> LockBufferForCleanup(buf);\n> got_cleanup_lock = true;\n> }\n> if (got_cleanup_lock)\n> lazy_scan_prune();\n>\n> The current organization of the code seems to imply that we don't need\n> to worry about the PageIsNew() and PageIsEmpty() cases before calling\n> lazy_scan_noprune(), and at the moment I'm not understanding why that\n> should be the case. I wonder if this is a bug, or if I'm just\n> confused.\n\nYes, I also spent some time thinking about this. In master, we do\nalways call lazy_scan_new_or_empty() before calling\nlazy_scan_noprune(). The code is aiming to ensure we call\nlazy_scan_new_or_empty() once before calling either of\nlazy_scan_noprune() or lazy_scan_prune(). I think it didn't call\nlazy_scan_new_or_empty() unconditionally first because of the\ndifferent lock types expected. 
But, your structure has solved that.\nI've used a version of your example code above in attached v9. It is\nmuch nicer.\n\n> > The big open question/functional change is when we consider vacuuming\n> > the FSM. Previously, we only considered vacuuming the FSM in the no\n> > indexes, has dead items case. After combining the FSM updates from\n> > lazy_scan_prune()'s no indexes/has lpdead items case,\n> > lazy_scan_prune()'s regular case, and lazy_scan_noprune(), all of them\n> > consider vacuuming the FSM. I could guard against this, but I wasn't\n> > sure why we would want to only vacuum the FSM in the no indexes/has\n> > dead items case.\n>\n> I don't get it. Conceptually, I agree that we don't want to be\n> inconsistent here without some good reason. One of the big advantages\n> of unifying different code paths is that you avoid being accidentally\n> inconsistent. If different things are different it shows up as a test\n> in the code instead of just having different code paths in different\n> places that may or may not match.\n>\n> But I thought the whole point of\n> 45d395cd75ffc5b4c824467140127a5d11696d4c was to iron out the existing\n> inconsistencies so that we could unify this code without having to\n> change any more behavior. In particular, I thought we just made it\n> consistently adhere to the principle Peter articulated, where we\n> record free space when we're touching the page for presumptively the\n> last time. I gather that you think it's still not consistent, but I\n> don't understand what the remaining inconsistency is. Can you explain\n> further?\n\nAh, I realize I was not clear. I am now talking about inconsistencies\nin vacuuming the FSM itself. FreeSpaceMapVacuumRange(). Not updating\nthe freespace map during the course of vacuuming the heap relation.\n\n- Melanie",
"msg_date": "Wed, 17 Jan 2024 15:11:55 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 3:12 PM Melanie Plageman\n<[email protected]> wrote:\n> > Reviewing 0001, consider the case where a table has no indexes.\n> > Pre-patch, PageTruncateLinePointerArray() will happen when\n> > lazy_vacuum_heap_page() is called; post-patch, it will not occur.\n> > That's a loss. Should we care? On the plus side, visibility map\n> > repair, if required, can now take place. That's a gain.\n>\n> I thought that this wasn't an issue because heap_page_prune_execute()\n> calls PageRepairFragmentation() which similarly modifies pd_lower and\n> sets the hint bit about free line pointers.\n\nAh, OK, I didn't understand that PageRepairFragmentation() does what\nis also done by PageTruncateLinePointerArray().\n\n> Yes, I also spent some time thinking about this. In master, we do\n> always call lazy_scan_new_or_empty() before calling\n> lazy_scan_noprune(). The code is aiming to ensure we call\n> lazy_scan_new_or_empty() once before calling either of\n> lazy_scan_noprune() or lazy_scan_prune(). I think it didn't call\n> lazy_scan_new_or_empty() unconditionally first because of the\n> different lock types expected. But, your structure has solved that.\n> I've used a version of your example code above in attached v9. It is\n> much nicer.\n\nOh, OK, I see it now. I missed that lazy_scan_new_or_empty() was\ncalled either way. Glad that my proposed restructuring managed to be\nhelpful despite that confusion, though. :-)\n\nAt a quick glance, I also like the way this looks. I'll review it more\nthoroughly later. Does this patch require 0002 and 0003 or could it\nequally well go first? I confess that I don't entirely understand why\nwe want 0002 and 0003.\n\n> Ah, I realize I was not clear. I am now talking about inconsistencies\n> in vacuuming the FSM itself. FreeSpaceMapVacuumRange(). Not updating\n> the freespace map during the course of vacuuming the heap relation.\n\nFair enough, but I'm still not quite sure exactly what the question\nis. 
It looks to me like the current code, when there are indexes,\nvacuums the FSM after each round of index vacuuming. When there are no\nindexes, doing it after each round of index vacuuming would mean never\ndoing it, so instead we vacuum the FSM every ~8GB. I assume what\nhappened here is that somebody decided doing it after each round of\nindex vacuuming was the \"right thing,\" and then realized that was not\ngoing to work if no index vacuuming was happening, and so inserted the\n8GB threshold to cover that case. I don't really know what to make of\nall of this. On a happy PostgreSQL system, doing anything after each\nround of index vacuuming means doing it once, because multiple rounds\nof index vacuuming are extremely expensive and we hope that it won't\never occur. From that point of view, the 8GB threshold is better,\nbecause it means that when we vacuum a large relation, space should\nbecome visible to the rest of the system incrementally without needing\nto wait for the entire vacuum to finish. On the other hand, we also\nhave this idea that we want to record free space in the FSM once,\nafter the last time we touch the page. Given that behavior, vacuuming\nthe FSM every 8GB when we haven't yet done index vacuuming wouldn't\naccomplish much of anything, because we haven't updated it for the\npages we just touched. On the third hand, the current behavior seems\nslightly ridiculous, because pruning the page is where we're mostly\ngoing to free up space, so we might be better off just updating the\nFSM then instead of waiting. 
That free space could be mighty useful\nduring the long wait between pruning and marking line pointers unused.\nOn the fourth hand, that seems like a significant behavior change that\nwe might not want to undertake without a bunch of research that we\nmight not want to do right now -- and if we did do it, should we then\nupdate the FSM a second time after marking line pointers unused?\n\nI'm not sure if any of this is answering your actual question, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jan 2024 15:58:10 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 3:58 PM Robert Haas <[email protected]> wrote:\n> > Ah, I realize I was not clear. I am now talking about inconsistencies\n> > in vacuuming the FSM itself. FreeSpaceMapVacuumRange(). Not updating\n> > the freespace map during the course of vacuuming the heap relation.\n>\n> Fair enough, but I'm still not quite sure exactly what the question\n> is. It looks to me like the current code, when there are indexes,\n> vacuums the FSM after each round of index vacuuming. When there are no\n> indexes, doing it after each round of index vacuuming would mean never\n> doing it, so instead we vacuum the FSM every ~8GB. I assume what\n> happened here is that somebody decided doing it after each round of\n> index vacuuming was the \"right thing,\" and then realized that was not\n> going to work if no index vacuuming was happening, and so inserted the\n> 8GB threshold to cover that case.\n\nNote that VACUUM_FSM_EVERY_PAGES is applied against the number of\nrel_pages \"processed\" so far -- *including* any pages that were\nskipped using the visibility map. It would make a bit more sense if it\nwas applied against scanned_pages instead (just like\nFAILSAFE_EVERY_PAGES has been since commit 07eef53955). In other\nwords, VACUUM_FSM_EVERY_PAGES is applied against a thing that has only\na very loose relationship with physical work performed/time elapsed.\n\nI tend to suspect that VACUUM_FSM_EVERY_PAGES is fundamentally the\nwrong idea. If it's such a good idea then why not apply it all the\ntime? That is, why not apply it independently of whether nindexes==0\nin the current VACUUM operation? (You know, just like with\nFAILSAFE_EVERY_PAGES.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 17 Jan 2024 16:25:02 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 4:25 PM Peter Geoghegan <[email protected]> wrote:\n> I tend to suspect that VACUUM_FSM_EVERY_PAGES is fundamentally the\n> wrong idea. If it's such a good idea then why not apply it all the\n> time? That is, why not apply it independently of whether nindexes==0\n> in the current VACUUM operation? (You know, just like with\n> FAILSAFE_EVERY_PAGES.)\n\nActually, I suppose that we couldn't apply it independently of\nnindexes==0. Then we'd call FreeSpaceMapVacuumRange() before our\nsecond pass over the heap takes place for those LP_DEAD-containing\nheap pages scanned since the last round of index/heap vacuuming took\nplace (or since VACUUM began). We need to make sure that the FSM has\nthe most recent possible information known to VACUUM, which would\nbreak if we applied VACUUM_FSM_EVERY_PAGES rules when nindexes > 0.\n\nEven still, the design of VACUUM_FSM_EVERY_PAGES seems questionable to me.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 17 Jan 2024 16:31:12 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 3:58 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Jan 17, 2024 at 3:12 PM Melanie Plageman\n> <[email protected]> wrote:\n>\n> > Yes, I also spent some time thinking about this. In master, we do\n> > always call lazy_scan_new_or_empty() before calling\n> > lazy_scan_noprune(). The code is aiming to ensure we call\n> > lazy_scan_new_or_empty() once before calling either of\n> > lazy_scan_noprune() or lazy_scan_prune(). I think it didn't call\n> > lazy_scan_new_or_empty() unconditionally first because of the\n> > different lock types expected. But, your structure has solved that.\n> > I've used a version of your example code above in attached v9. It is\n> > much nicer.\n>\n> Oh, OK, I see it now. I missed that lazy_scan_new_or_empty() was\n> called either way. Glad that my proposed restructuring managed to be\n> helpful despite that confusion, though. :-)\n>\n> At a quick glance, I also like the way this looks. I'll review it more\n> thoroughly later. Does this patch require 0002 and 0003 or could it\n> equally well go first? I confess that I don't entirely understand why\n> we want 0002 and 0003.\n\nWell, 0002 and 0003 move the updates to the visibility map into\nlazy_scan_prune(). We only want to update the VM if we called\nlazy_scan_prune() (i.e. not if lazy_scan_noprune() returned true). We\nalso need the lock on the heap page when updating the visibility map\nbut we want to have released the lock before updating the FSM, so we\nneed to first update the VM then the FSM.\n\nThe VM update code, in my opinion, belongs in lazy_scan_prune() --\nsince it is only done when lazy_scan_prune() is called. To keep the VM\nupdate code in lazy_scan_heap() and still consolidate the FSM update\ncode, we would have to surround all of the VM update code in a test\n(if got_cleanup_lock, I suppose). I don't see any advantage in doing\nthat.\n\n> > Ah, I realize I was not clear. 
I am now talking about inconsistencies\n> > in vacuuming the FSM itself. FreeSpaceMapVacuumRange(). Not updating\n> > the freespace map during the course of vacuuming the heap relation.\n>\n> Fair enough, but I'm still not quite sure exactly what the question\n> is. It looks to me like the current code, when there are indexes,\n> vacuums the FSM after each round of index vacuuming. When there are no\n> indexes, doing it after each round of index vacuuming would mean never\n> doing it, so instead we vacuum the FSM every ~8GB. I assume what\n> happened here is that somebody decided doing it after each round of\n> index vacuuming was the \"right thing,\" and then realized that was not\n> going to work if no index vacuuming was happening, and so inserted the\n> 8GB threshold to cover that case.\n\nAh, I see. I understood that we want to update the FSM every 8GB, but\nI didn't understand that we wanted to check if we were at that 8GB\nonly after a round of index vacuuming. That would explain why we also\nhad to do it in the no indexes case -- because, as you say, there\nwouldn't be a round of index vacuuming.\n\nThis does mean that something is not quite right with 0001 as well as\n0004. We'd end up checking if we are at 8GB much more often. I should\nprobably find a way to replicate the cadence on master.\n\n> I don't really know what to make of\n> all of this. On a happy PostgreSQL system, doing anything after each\n> round of index vacuuming means doing it once, because multiple rounds\n> of indexing vacuum are extremely expensive and we hope that it won't\n> ever occur. From that point of view, the 8GB threshold is better,\n> because it means that when we vacuum a large relation, space should\n> become visible to the rest of the system incrementally without needing\n> to wait for the entire vacuum to finish. On the other hand, we also\n> have this idea that we want to record free space in the FSM once,\n> after the last time we touch the page. 
Given that behavior, vacuuming\n> the FSM every 8GB when we haven't yet done index vacuuming wouldn't\n> accomplish much of anything, because we haven't updated it for the\n> pages we just touched. On the third hand, the current behavior seems\n> slightly ridiculous, because pruning the page is where we're mostly\n> going to free up space, so we might be better off just updating the\n> FSM then instead of waiting. That free space could be mighty useful\n> during the long wait between pruning and marking line pointers unused.\n> On the fourth hand, that seems like a significant behavior change that\n> we might not want to undertake without a bunch of research that we\n> might not want to do right now -- and if we did do it, should we then\n> update the FSM a second time after marking line pointers unused?\n\nI suspect we'd need to do some testing of various scenarios to justify\nsuch a change.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 17 Jan 2024 17:33:27 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 4:25 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Wed, Jan 17, 2024 at 3:58 PM Robert Haas <[email protected]> wrote:\n> > > Ah, I realize I was not clear. I am now talking about inconsistencies\n> > > in vacuuming the FSM itself. FreeSpaceMapVacuumRange(). Not updating\n> > > the freespace map during the course of vacuuming the heap relation.\n> >\n> > Fair enough, but I'm still not quite sure exactly what the question\n> > is. It looks to me like the current code, when there are indexes,\n> > vacuums the FSM after each round of index vacuuming. When there are no\n> > indexes, doing it after each round of index vacuuming would mean never\n> > doing it, so instead we vacuum the FSM every ~8GB. I assume what\n> > happened here is that somebody decided doing it after each round of\n> > index vacuuming was the \"right thing,\" and then realized that was not\n> > going to work if no index vacuuming was happening, and so inserted the\n> > 8GB threshold to cover that case.\n>\n> Note that VACUUM_FSM_EVERY_PAGES is applied against the number of\n> rel_pages \"processed\" so far -- *including* any pages that were\n> skipped using the visibility map. It would make a bit more sense if it\n> was applied against scanned_pages instead (just like\n> FAILSAFE_EVERY_PAGES has been since commit 07eef53955). In other\n> words, VACUUM_FSM_EVERY_PAGES is applied against a thing that has only\n> a very loose relationship with physical work performed/time elapsed.\n\nThis is a good point. Seems like a very reasonable change to make, as\nI would think that was the original intent.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 17 Jan 2024 17:38:52 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 4:31 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Wed, Jan 17, 2024 at 4:25 PM Peter Geoghegan <[email protected]> wrote:\n> > I tend to suspect that VACUUM_FSM_EVERY_PAGES is fundamentally the\n> > wrong idea. If it's such a good idea then why not apply it all the\n> > time? That is, why not apply it independently of whether nindexes==0\n> > in the current VACUUM operation? (You know, just like with\n> > FAILSAFE_EVERY_PAGES.)\n>\n> Actually, I suppose that we couldn't apply it independently of\n> nindexes==0. Then we'd call FreeSpaceMapVacuumRange() before our\n> second pass over the heap takes place for those LP_DEAD-containing\n> heap pages scanned since the last round of index/heap vacuuming took\n> place (or since VACUUM began). We need to make sure that the FSM has\n> the most recent possible information known to VACUUM, which would\n> break if we applied VACUUM_FSM_EVERY_PAGES rules when nindexes > 0.\n>\n> Even still, the design of VACUUM_FSM_EVERY_PAGES seems questionable to me.\n\nI now see I misunderstood and my earlier email was wrong. I didn't\nnotice that we only use VACUUM_FSM_EVERY_PAGES if nindexes ==0.\nSo, in master, we call FreeSpaceMapVacuumRange() always after a round\nof index vacuuming and periodically if there are no indexes.\n\nIt seems like you are asking whether not we should vacuum the FSM at a\ndifferent cadence for the no indexes case (and potentially count\nblocks actually vacuumed instead of blocks considered).\n\nAnd it seems like Robert is asking whether or not we should\nFreeSpaceMapVacuumRange() more frequently than after index vacuuming\nin the nindexes > 0 case.\n\nOther than the overhead of the actual vacuuming of the FSM, what are\nthe potential downsides of knowing about freespace sooner? It could\nchange what pages are inserted to. What are the possible undesirable\nside effects?\n\n- Melanie\n\n\n",
"msg_date": "Wed, 17 Jan 2024 17:47:37 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 5:47 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Wed, Jan 17, 2024 at 4:31 PM Peter Geoghegan <[email protected]> wrote:\n> >\n> > On Wed, Jan 17, 2024 at 4:25 PM Peter Geoghegan <[email protected]> wrote:\n> > > I tend to suspect that VACUUM_FSM_EVERY_PAGES is fundamentally the\n> > > wrong idea. If it's such a good idea then why not apply it all the\n> > > time? That is, why not apply it independently of whether nindexes==0\n> > > in the current VACUUM operation? (You know, just like with\n> > > FAILSAFE_EVERY_PAGES.)\n> >\n> > Actually, I suppose that we couldn't apply it independently of\n> > nindexes==0. Then we'd call FreeSpaceMapVacuumRange() before our\n> > second pass over the heap takes place for those LP_DEAD-containing\n> > heap pages scanned since the last round of index/heap vacuuming took\n> > place (or since VACUUM began). We need to make sure that the FSM has\n> > the most recent possible information known to VACUUM, which would\n> > break if we applied VACUUM_FSM_EVERY_PAGES rules when nindexes > 0.\n> >\n> > Even still, the design of VACUUM_FSM_EVERY_PAGES seems questionable to me.\n>\n> I now see I misunderstood and my earlier email was wrong. I didn't\n> notice that we only use VACUUM_FSM_EVERY_PAGES if nindexes ==0.\n> So, in master, we call FreeSpaceMapVacuumRange() always after a round\n> of index vacuuming and periodically if there are no indexes.\n\nThe \"nindexes == 0\" if() that comes just after our call to\nlazy_scan_prune() is \"the one-pass equivalent of a call to\nlazy_vacuum()\". 
Though this includes the call to\nFreeSpaceMapVacuumRange() that immediately follows the two-pass case\ncalling lazy_vacuum(), too.\n\n> It seems like you are asking whether not we should vacuum the FSM at a\n> different cadence for the no indexes case (and potentially count\n> blocks actually vacuumed instead of blocks considered).\n>\n> And it seems like Robert is asking whether or not we should\n> FreeSpaceMapVacuumRange() more frequently than after index vacuuming\n> in the nindexes > 0 case.\n\nThere is no particular reason for the nindexes==0 case to care about\nhow often we'd call FreeSpaceMapVacuumRange() in the counterfactual\nworld where the same VACUUM ran on the same table, except that it was\nnindexes>1 instead. At least I don't see any.\n\n> Other than the overhead of the actual vacuuming of the FSM, what are\n> the potential downsides of knowing about freespace sooner? It could\n> change what pages are inserted to. What are the possible undesirable\n> side effects?\n\nThe whole VACUUM_FSM_EVERY_PAGES thing comes from commit 851a26e266.\nThe commit message of that work seems to suppose that calling\nFreeSpaceMapVacuumRange() more frequently is pretty much strictly\nbetter than calling it less frequently, at least up to the point where\ncertain more-or-less fixed costs paid once per\nFreeSpaceMapVacuumRange() start to become a problem. I think that\nthat's probably about right.\n\nThe commit message also says that we \"arbitrarily update upper FSM\npages after each 8GB of heap\" (in the nindexes==0 case). So\nVACUUM_FSM_EVERY_PAGES is only very approximately analogous to what we\ndo in the nindexes>1 case. That seems reasonable because these two\ncases really aren't so comparable in terms of the FSM vacuuming\nrequirements -- the nindexes==0 case legitimately doesn't have the\nsame dependency on heap vacuuming (and index vacuuming) that we have\nto consider when nindexes>1.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 17 Jan 2024 18:08:51 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 4:31 PM Peter Geoghegan <[email protected]> wrote:\n> Actually, I suppose that we couldn't apply it independently of\n> nindexes==0. Then we'd call FreeSpaceMapVacuumRange() before our\n> second pass over the heap takes place for those LP_DEAD-containing\n> heap pages scanned since the last round of index/heap vacuuming took\n> place (or since VACUUM began). We need to make sure that the FSM has\n> the most recent possible information known to VACUUM, which would\n> break if we applied VACUUM_FSM_EVERY_PAGES rules when nindexes > 0.\n>\n> Even still, the design of VACUUM_FSM_EVERY_PAGES seems questionable to me.\n\nI agree with all of this. I thought I'd said all of this, actually, in\nmy prior email, but perhaps it wasn't as clear as it needed to be.\n\nBut I also said one more thing that I'd still like to hear your\nthoughts about, which is: why is it right to update the FSM after the\nsecond heap pass rather than the first one? I can't help but suspect\nthis is an algorithmic holdover from pre-HOT days, when VACUUM's first\nheap pass was read-only and all the work happened in the second pass.\nNow, nearly all of the free space that will ever become free becomes\nfree in the first pass, so why not advertise it then, instead of\nwaiting?\n\nAdmittedly, HOT is not yet 15 years old, so maybe it's too soon to\nadapt our VACUUM algorithm for it. *wink*\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jan 2024 08:51:46 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 5:33 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> This does mean that something is not quite right with 0001 as well as\n> 0004. We'd end up checking if we are at 8GB much more often. I should\n> probably find a way to replicate the cadence on master.\n\nI believe I've done this in attached v10.\n\n- Melanie",
"msg_date": "Thu, 18 Jan 2024 09:53:12 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 8:52 AM Robert Haas <[email protected]> wrote:\n> But I also said one more thing that I'd still like to hear your\n> thoughts about, which is: why is it right to update the FSM after the\n> second heap pass rather than the first one? I can't help but suspect\n> this is an algorithmic holdover from pre-HOT days, when VACUUM's first\n> heap pass was read-only and all the work happened in the second pass.\n> Now, nearly all of the free space that will ever become free becomes\n> free in the first pass, so why not advertise it then, instead of\n> waiting?\n\nI don't think that doing everything FSM-related in the first heap pass\nis a bad idea -- especially not if it buys you something elsewhere.\n\nThe problem with your justification for moving things in that\ndirection (if any) is that it is occasionally not quite true: there\nare at least some cases where line pointer truncation after making a\npage's LP_DEAD items -> LP_UNUSED will actually matter. Plus\nPageGetHeapFreeSpace() will return 0 if and when\n\"PageGetMaxOffsetNumber(page) > MaxHeapTuplesPerPage &&\n!PageHasFreeLinePointers(page)\". Of course, nothing stops you from\ncompensating for this by anticipating what will happen later on, and\nassuming that the page already has that much free space.\n\nIt might even be okay to just not try to compensate for anything,\nPageGetHeapFreeSpace-wise -- just do all FSM stuff in the first heap\npass, and ignore all this. I happen to believe that a FSM_CATEGORIES\nof 256 is way too much granularity to be useful in practice -- I just\ndon't have any faith in the idea that that kind of granularity is\nuseful (it's quite the opposite).\n\nA further justification might be what we already do in the heapam.c\nREDO routines: the way that we use XLogRecordPageWithFreeSpace already\noperates with far less precision that corresponding code from\nvacuumlazy.c. 
heap_xlog_prune() already has recovery do what you\npropose to do during original execution; it doesn't try to avoid\nduplicating an anticipated call to XLogRecordPageWithFreeSpace that'll\ntake place when heap_xlog_vacuum() runs against the same page a bit\nlater on.\n\nYou'd likely prefer a simpler argument for doing this -- an argument\nthat doesn't require abandoning/discrediting the idea that a high\ndegree of FSM_CATEGORIES-wise precision is a valuable thing. Not sure\nthat that's possible -- the current design is at least correct on its\nown terms. And what you propose to do will probably be less correct on\nthose same terms, silly though they are.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 18 Jan 2024 10:09:11 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 9:53 AM Melanie Plageman\n<[email protected]> wrote:\n> I believe I've done this in attached v10.\n\nOh, I see. Good catch.\n\nI've now committed 0001.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jan 2024 10:09:28 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 10:09 AM Peter Geoghegan <[email protected]> wrote:\n> The problem with your justification for moving things in that\n> direction (if any) is that it is occasionally not quite true: there\n> are at least some cases where line pointer truncation after making a\n> page's LP_DEAD items -> LP_UNUSED will actually matter. Plus\n> PageGetHeapFreeSpace() will return 0 if and when\n> \"PageGetMaxOffsetNumber(page) > MaxHeapTuplesPerPage &&\n> !PageHasFreeLinePointers(page)\". Of course, nothing stops you from\n> compensating for this by anticipating what will happen later on, and\n> assuming that the page already has that much free space.\n\nI think we're agreeing but I want to be sure. If we only set LP_DEAD\nitems to LP_UNUSED, that frees no space. But if doing so allows us to\ntruncate the line pointer array, that that frees a little bit of\nspace. Right?\n\nOne problem with using this as a justification for the status quo is\nthat truncating the line pointer array is a relatively recent\nbehavior. It's certainly much newer than the choice to have VACUUM\ntouch the FSM in the second page than the first page.\n\nAnother problem is that the amount of space that we're freeing up in\nthe second pass is really quite minimal even when it's >0. Any tuple\nthat actually contains any data at all is at least 32 bytes, and most\nof them are quite a bit larger. Item pointers are 2 bytes. To save\nenough space to fit even one additional tuple, we'd have to free *at\nleast* 16 line pointers. That's going to be really rare.\n\nAnd even if it happens, is it even useful to advertise that free\nspace? Do we want to cram one more tuple into a page that has a\nhistory of extremely heavy updates? Could it be that it's smarter to\njust forget about that free space? You've written before about the\nstupidity of cramming tuples of different generations into the same\npage, and that concept seems to apply here. 
When we heap_page_prune(),\nwe don't know how much time has elapsed since the page was last\nmodified - but if we're lucky, it might not be very much. Updating the\nFSM at that time gives us some shot of filling up the page with data\ncreated around the same time as the existing page contents. By the\ntime we vacuum the indexes and come back, that temporal locality is\ndefinitely lost.\n\n> You'd likely prefer a simpler argument for doing this -- an argument\n> that doesn't require abandoning/discrediting the idea that a high\n> degree of FSM_CATEGORIES-wise precision is a valuable thing. Not sure\n> that that's possible -- the current design is at least correct on its\n> own terms. And what you propose to do will probably be less correct on\n> those same terms, silly though they are.\n\nI've never really understood why you think that the number of\nFSM_CATEGORIES is the problem. I believe I recall you endorsing a\nsystem where pages are open or closed, to try to achieve temporal\nlocality of data. I agree that such a system could work better than\nwhat we have now. I think there's a risk that such a system could\ncreate pathological cases where the behavior is much worse than what\nwe have today, and I think we'd need to consider carefully what such\ncases might exist and what mitigation strategies might make sense.\nHowever, I don't see a reason why such a system should intrinsically\nwant to reduce FSM_CATEGORIES. 
If we have two open pages and one of\nthem has enough space for the tuple we're now trying to insert and the\nother doesn't, we'd still like to avoid having the FSM hand us the one\nthat doesn't.\n\nNow, that said, I suspect that we actually could reduce FSM_CATEGORIES\nsomewhat without causing any real problems, because many tables are\ngoing to have tuples that are all about the same size, and even in a\ntable where the sizes vary more than is typical, a single tuple can't\nconsume more than a quarter of the page, so granularity above that\npoint seems completely useless. So if we needed some bitspace to track\nthe open/closed status of pages or similar, I suspect we could find\nthat in the existing FSM byte per page without losing anything. But\nall of that is just an argument that reducing the number of\nFSM_CATEGORIES is *acceptable*; it doesn't amount to an argument that\nit's better. My current belief is that it isn't better, just a vehicle\nto do something else that maybe is better, like squeezing open/closed\ntracking or similar into the existing bit space. My understanding is\nthat you think it would be better on its own terms, but I have not yet\nbeen able to grasp why that would be so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jan 2024 10:42:53 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 10:42 AM Robert Haas <[email protected]> wrote:\n> Now, that said, I suspect that we actually could reduce FSM_CATEGORIES\n> somewhat without causing any real problems, because many tables are\n> going to have tuples that are all about the same size, and even in a\n> table where the sizes vary more than is typical, a single tuple can't\n> consume more than a quarter of the page,\n\nActually, I think that's a soft limit, not a hard limit. But the\ngranularity above that level probably doesn't need to be very high, at\nleat.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jan 2024 10:50:13 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 10:43 AM Robert Haas <[email protected]> wrote:\n> I think we're agreeing but I want to be sure. If we only set LP_DEAD\n> items to LP_UNUSED, that frees no space. But if doing so allows us to\n> truncate the line pointer array, that that frees a little bit of\n> space. Right?\n\nThat's part of it, yes.\n\n> One problem with using this as a justification for the status quo is\n> that truncating the line pointer array is a relatively recent\n> behavior. It's certainly much newer than the choice to have VACUUM\n> touch the FSM in the second page than the first page.\n\nTrue. But the way that PageGetHeapFreeSpace() returns 0 for a page\nwith 291 LP_DEAD stubs is a much older behavior. When that happens it\nis literally true that the page has lots of free space. And yet it's\nnot free space we can actually use. Not until those LP_DEAD items are\nmarked LP_UNUSED.\n\n> Another problem is that the amount of space that we're freeing up in\n> the second pass is really quite minimal even when it's >0. Any tuple\n> that actually contains any data at all is at least 32 bytes, and most\n> of them are quite a bit larger. Item pointers are 2 bytes. To save\n> enough space to fit even one additional tuple, we'd have to free *at\n> least* 16 line pointers. That's going to be really rare.\n\nI basically agree with this. I would still worry about the \"291\nLP_DEAD stubs makes PageGetHeapFreeSpace return 0\" thing specifically,\nthough. It's sort of a special case.\n\n> And even if it happens, is it even useful to advertise that free\n> space? Do we want to cram one more tuple into a page that has a\n> history of extremely heavy updates? Could it be that it's smarter to\n> just forget about that free space?\n\nI think so, yes.\n\nAnother big source of inaccuracies here is that we don't credit\nRECENTLY_DEAD tuple space with being free space. 
Maybe that isn't a\nhuge problem, but it makes it even harder to believe that precision in\nFSM accounting is an intrinsic good.\n\n> > You'd likely prefer a simpler argument for doing this -- an argument\n> > that doesn't require abandoning/discrediting the idea that a high\n> > degree of FSM_CATEGORIES-wise precision is a valuable thing. Not sure\n> > that that's possible -- the current design is at least correct on its\n> > own terms. And what you propose to do will probably be less correct on\n> > those same terms, silly though they are.\n>\n> I've never really understood why you think that the number of\n> FSM_CATEGORIES is the problem. I believe I recall you endorsing a\n> system where pages are open or closed, to try to achieve temporal\n> locality of data.\n\nMy remarks about \"FSM_CATEGORIES-wise precision\" were basically\nremarks about the fundamental problem with the free space map. Which\nis really that it's just a map of free space, that gives exactly zero\nthought to various high level things that *obviously* matter. I wasn't\nparticularly planning on getting into the specifics of that with you\nnow, on this thread.\n\nA brief recap might be useful: other systems with a heap table AM free\nspace management structure typically represent the free space\navailable on each page using a far more coarse grained counter.\nUsually one with less than 10 distinct increments. The immediate\nproblem with FSM_CATEGORIES having such a fine granularity is that it\nincreases contention/competition among backends that need to find some\nfree space for a new tuple. They'll all diligently try to find the\npage with the least free space that still satisfies their immediate\nneeds -- there is no thought for the second-order effects, which are\nreally important in practice.\n\n> But all of that is just an argument that reducing the number of\n> FSM_CATEGORIES is *acceptable*; it doesn't amount to an argument that\n> it's better. 
My current belief is that it isn't better, just a vehicle\n> to do something else that maybe is better, like squeezing open/closed\n> tracking or similar into the existing bit space. My understanding is\n> that you think it would be better on its own terms, but I have not yet\n> been able to grasp why that would be so.\n\nI'm not really arguing that reducing FSM_CATEGORIES and changing\nnothing else would be better on its own (it might be, but that's not\nwhat I meant to convey).\n\nWhat I really wanted to convey is this: if you're going to go the\nroute of ignoring LP_DEAD free space during vacuuming, you're\nconceding that having a high degree of precision about available free\nspace isn't actually useful (or wouldn't be useful if it was actually\npossible at all). Which is something that I generally agree with. I'd\njust like it to be clear that you/Melanie are in fact taking one small\nstep in that direction. We don't need to discuss possible later steps\nbeyond that first step. Not right now.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 18 Jan 2024 11:17:21 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 11:17 AM Peter Geoghegan <[email protected]> wrote:\n> True. But the way that PageGetHeapFreeSpace() returns 0 for a page\n> with 291 LP_DEAD stubs is a much older behavior. When that happens it\n> is literally true that the page has lots of free space. And yet it's\n> not free space we can actually use. Not until those LP_DEAD items are\n> marked LP_UNUSED.\n\nTo me, this is just accurate reporting. What we care about in this\ncontext is the amount of free space on the page that can be used to\nstore a new tuple. When there are no line pointers available to be\nallocated, that amount is 0.\n\n> Another big source of inaccuracies here is that we don't credit\n> RECENTLY_DEAD tuple space with being free space. Maybe that isn't a\n> huge problem, but it makes it even harder to believe that precision in\n> FSM accounting is an intrinsic good.\n\nThe difficulty here is that we don't know how long it will be before\nthat space can be reused. Those recently dead tuples could become dead\nwithin a few milliseconds or stick around for hours. I've wondered\nabout the merits of some FSM that had built-in visibility awareness,\ni.e. the capability to record something like \"page X currently has Y\nspace free and after XID Z is all-visible it will have Y' space free\".\nThat seems complex, but without it, we either have to bet that the\nspace will actually become free before anyone tries to use it, or that\nit won't. If whatever guess we make is wrong, bad things happen.\n\n> My remarks about \"FSM_CATEGORIES-wise precision\" were basically\n> remarks about the fundamental problem with the free space map. Which\n> is really that it's just a map of free space, that gives exactly zero\n> thought to various high level things that *obviously* matter. 
I wasn't\n> particularly planning on getting into the specifics of that with you\n> now, on this thread.\n\nFair.\n\n> A brief recap might be useful: other systems with a heap table AM free\n> space management structure typically represent the free space\n> available on each page using a far more coarse grained counter.\n> Usually one with less than 10 distinct increments. The immediate\n> problem with FSM_CATEGORIES having such a fine granularity is that it\n> increases contention/competition among backends that need to find some\n> free space for a new tuple. They'll all diligently try to find the\n> page with the least free space that still satisfies their immediate\n> needs -- there is no thought for the second-order effects, which are\n> really important in practice.\n\nI think that the completely deterministic nature of the computation is\na mistake regardless of anything else. That serves to focus contention\nrather than spreading it out, which is dumb, and would still be dumb\nwith any other number of FSM_CATEGORIES.\n\n> What I really wanted to convey is this: if you're going to go the\n> route of ignoring LP_DEAD free space during vacuuming, you're\n> conceding that having a high degree of precision about available free\n> space isn't actually useful (or wouldn't be useful if it was actually\n> possible at all). Which is something that I generally agree with. I'd\n> just like it to be clear that you/Melanie are in fact taking one small\n> step in that direction. We don't need to discuss possible later steps\n> beyond that first step. Not right now.\n\nYeah. I'm not sure we're actually going to change that right now, but\nI agree with the high-level point regardless, which I would summarize\nlike this: The current system provides more precision about available\nfree space than we actually need, while failing to provide some other\nthings that we really do need. 
We need not agree today on exactly what\nthose other things are or how best to get them in order to agree that\nthe current system has significant flaws, and we do agree that it\ndoes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jan 2024 11:45:53 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 11:46 AM Robert Haas <[email protected]> wrote:\n> On Thu, Jan 18, 2024 at 11:17 AM Peter Geoghegan <[email protected]> wrote:\n> > True. But the way that PageGetHeapFreeSpace() returns 0 for a page\n> > with 291 LP_DEAD stubs is a much older behavior. When that happens it\n> > is literally true that the page has lots of free space. And yet it's\n> > not free space we can actually use. Not until those LP_DEAD items are\n> > marked LP_UNUSED.\n>\n> To me, this is just accurate reporting. What we care about in this\n> context is the amount of free space on the page that can be used to\n> store a new tuple. When there are no line pointers available to be\n> allocated, that amount is 0.\n\nI agree. All I'm saying is this (can't imagine you'll disagree):\n\nIt's not okay if you fail to update the FSM a second time in the\nsecond heap pass -- at least in some cases. It's reasonably frequent\nfor a page that has 0 usable free space when lazy_scan_prune returns\nto go on to have almost BLCKSZ free space once lazy_vacuum_heap_page()\nis done with it.\n\nWhile I am sympathetic to the argument that LP_DEAD item space just\nisn't that important in general, that doesn't apply with this one\nspecial case. This is a \"step function\" behavior, and is seen whenever\nVACUUM runs following bulk deletes of tuples -- a rather common case.\nClearly the FSM shouldn't show that pages that are actually completely\nempty at the end of VACUUM as having no available free space after a\nVACUUM finishes (on account of how they looked immediately after\nlazy_scan_prune ran). That'd just be wrong.\n\n> > Another big source of inaccuracies here is that we don't credit\n> > RECENTLY_DEAD tuple space with being free space. 
Maybe that isn't a\n> > huge problem, but it makes it even harder to believe that precision in\n> > FSM accounting is an intrinsic good.\n>\n> The difficulty here is that we don't know how long it will be before\n> that space can be reused. Those recently dead tuples could become dead\n> within a few milliseconds or stick around for hours. I've wondered\n> about the merits of some FSM that had built-in visibility awareness,\n> i.e. the capability to record something like \"page X currently has Y\n> space free and after XID Z is all-visible it will have Y' space free\".\n> That seems complex, but without it, we either have to bet that the\n> space will actually become free before anyone tries to use it, or that\n> it won't. If whatever guess we make is wrong, bad things happen.\n\nAll true -- it is rather complex.\n\nOther systems with a heap table access method based on a foundation of\n2PL (Oracle, DB2) literally need a transactionally consistent FSM\nstructure. In fact I believe that Oracle literally requires the\nequivalent of an MVCC snapshot read (a \"consistent get\") to be able to\naccess what seems like it ought to be strictly a physical data\nstructure correctly. Everything needs to work in the rollback path,\nindependent of whatever else may happen to the page before an xact\nrolls back (i.e. independently of what other xacts might end up doing\nwith the page). This requires very tight coordination to avoid bugs\nwhere a transaction cannot roll back due to not having enough free\nspace to restore the original tuple during UNDO.\n\nI don't think it's desirable to have anything as delicate as that\nhere. But some rudimentary understanding of free space being\nallocated/leased to certain transactions and/or backends does seem\nlike a good idea. 
There is some intrinsic value to these sorts of\nbehaviors, even in a system without any UNDO segments, where it is\nnever strictly necessary.\n\n> I think that the completely deterministic nature of the computation is\n> a mistake regardless of anything else. That serves to focus contention\n> rather than spreading it out, which is dumb, and would still be dumb\n> with any other number of FSM_CATEGORIES.\n\nThat's a part of the problem too, I guess.\n\nThe actual available free space on each page is literally changing all\nthe time, when measured at FSM_CATEGORIES-wise granularity -- which\nleads to a mad dash among backends that all need the same amount of\nfree space for their new tuple. One reason why other systems pretty\nmuch require coarse-grained increments of free space is the need to\nmanage the WAL overhead for a crash-safe FSM/free list structure.\n\n> Yeah. I'm not sure we're actually going to change that right now, but\n> I agree with the high-level point regardless, which I would summarize\n> like this: The current system provides more precision about available\n> free space than we actually need, while failing to provide some other\n> things that we really do need. We need not agree today on exactly what\n> those other things are or how best to get them in order to agree that\n> the current system has significant flaws, and we do agree that it\n> does.\n\nI agree with this.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 18 Jan 2024 12:14:55 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 12:15 PM Peter Geoghegan <[email protected]> wrote:\n> It's not okay if you fail to update the FSM a second time in the\n> second heap pass -- at least in some cases. It's reasonably frequent\n> for a page that has 0 usable free space when lazy_scan_prune returns\n> to go on to have almost BLCKSZ free space once lazy_vacuum_heap_page()\n> is done with it.\n\nOh, good point. That's an important subtlety I had missed.\n\n> That's a part of the problem too, I guess.\n>\n> The actual available free space on each page is literally changing all\n> the time, when measured at FSM_CATEGORIES-wise granularity -- which\n> leads to a mad dash among backends that all need the same amount of\n> free space for their new tuple. One reason why other systems pretty\n> much require coarse-grained increments of free space is the need to\n> manage the WAL overhead for a crash-safe FSM/free list structure.\n\nInteresting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jan 2024 12:38:13 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 10:09 AM Robert Haas <[email protected]> wrote:\n> Oh, I see. Good catch.\n>\n> I've now committed 0001.\n\nI have now also committed 0002 and 0003. I made some modifications to\n0003. Specifically:\n\n- I moved has_lpdead_items inside the loop over blocks instead of\nputting it at the function toplevel.\n- I adjusted the comments in lazy_scan_prune() because it seemed to me\nthat the comment about \"Now scan the page...\" had gotten too far\nseparated from the loop where that happens.\n- I combined two lines in an if-test because one of them was kinda short.\n\nHope that's OK with you.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jan 2024 15:20:38 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 3:20 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Jan 18, 2024 at 10:09 AM Robert Haas <[email protected]> wrote:\n> > Oh, I see. Good catch.\n> >\n> > I've now committed 0001.\n>\n> I have now also committed 0002 and 0003. I made some modifications to\n> 0003. Specifically:\n>\n> - I moved has_lpdead_items inside the loop over blocks instead of\n> putting it at the function toplevel.\n> - I adjusted the comments in lazy_scan_prune() because it seemed to me\n> that the comment about \"Now scan the page...\" had gotten too far\n> separated from the loop where that happens.\n> - I combined two lines in an if-test because one of them was kinda short.\n>\n> Hope that's OK with you.\n\nAwesome, thanks!\n\nI have attached a rebased version of the former 0004 as v11-0001.\n\n- Melanie",
"msg_date": "Thu, 18 Jan 2024 21:23:44 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 9:23 PM Melanie Plageman\n<[email protected]> wrote:\n> I have attached a rebased version of the former 0004 as v11-0001.\n\nThis looks correct to me, although I wouldn't mind some more eyes on\nit. However, I think that the comments still need more work.\n\nSpecifically:\n\n /*\n * Prune, freeze, and count tuples.\n *\n * Accumulates details of remaining LP_DEAD line pointers on page in\n * dead_items array. This includes LP_DEAD line pointers that we\n * pruned ourselves, as well as existing LP_DEAD line pointers that\n * were pruned some time earlier. Also considers freezing XIDs in the\n * tuple headers of remaining items with storage. It also determines\n * if truncating this block is safe.\n */\n- lazy_scan_prune(vacrel, buf, blkno, page,\n- vmbuffer, all_visible_according_to_vm,\n- &has_lpdead_items);\n+ if (got_cleanup_lock)\n+ lazy_scan_prune(vacrel, buf, blkno, page,\n+ vmbuffer, all_visible_according_to_vm,\n+ &has_lpdead_items);\n\nI think this comment needs adjusting. Possibly, the preceding calls to\nlazy_scan_noprune() and lazy_scan_new_or_empty() could even use a bit\nbetter comments, but in those cases, you're basically keeping the same\ncode with the same comment, so it's kinda defensible. Here, though,\nyou're making the call conditional without any comment update.\n\n /*\n * Final steps for block: drop cleanup lock, record free space in the\n * FSM.\n *\n * If we will likely do index vacuuming, wait until\n * lazy_vacuum_heap_rel() to save free space. This doesn't just save\n * us some cycles; it also allows us to record any additional free\n * space that lazy_vacuum_heap_page() will make available in cases\n * where it's possible to truncate the page's line pointer array.\n *\n+ * Our goal is to update the freespace map the last time we touch the\n+ * page. 
If the relation has no indexes, or if index vacuuming is\n+ * disabled, there will be no second heap pass; if this particular\n+ * page has no dead items, the second heap pass will not touch this\n+ * page. So, in those cases, update the FSM now.\n+ *\n * Note: It's not in fact 100% certain that we really will call\n * lazy_vacuum_heap_rel() -- lazy_vacuum() might yet opt to skip index\n * vacuuming (and so must skip heap vacuuming). This is deemed okay\n * because it only happens in emergencies, or when there is very\n * little free space anyway. (Besides, we start recording free space\n * in the FSM once index vacuuming has been abandoned.)\n */\n\nI think this comment needs a rewrite, not just sticking the other\ncomment in the middle of it. There's some duplication between these\ntwo comments, and merging it all together should iron that out.\nPersonally, I think my comment (which was there before, this commit\nonly moves it here) is clearer than what's already here about the\nintent, but it's lacking some details that are captured in the other\ntwo paragraphs, and we probably don't want to lose those details.\n\nIf you'd like, I can try rewriting these comments to my satisfaction\nand you can reverse-review the result. Or you can rewrite them and\nI'll re-review the result. But I think this needs to be a little less\nmechanical. It's not just about shuffling all the comments around so\nthat all the text ends up somewhere -- we also need to consider the\ndegree to which the meaning becomes duplicated when it all gets merged\ntogether.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jan 2024 14:59:44 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 2:59 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Jan 18, 2024 at 9:23 PM Melanie Plageman\n> <[email protected]> wrote:\n> > I have attached a rebased version of the former 0004 as v11-0001.\n>\n> This looks correct to me, although I wouldn't mind some more eyes on\n> it. However, I think that the comments still need more work.\n>\n> Specifically:\n>\n> /*\n> * Prune, freeze, and count tuples.\n> *\n> * Accumulates details of remaining LP_DEAD line pointers on page in\n> * dead_items array. This includes LP_DEAD line pointers that we\n> * pruned ourselves, as well as existing LP_DEAD line pointers that\n> * were pruned some time earlier. Also considers freezing XIDs in the\n> * tuple headers of remaining items with storage. It also determines\n> * if truncating this block is safe.\n> */\n> - lazy_scan_prune(vacrel, buf, blkno, page,\n> - vmbuffer, all_visible_according_to_vm,\n> - &has_lpdead_items);\n> + if (got_cleanup_lock)\n> + lazy_scan_prune(vacrel, buf, blkno, page,\n> + vmbuffer, all_visible_according_to_vm,\n> + &has_lpdead_items);\n>\n> I think this comment needs adjusting. Possibly, the preceding calls to\n> lazy_scan_noprune() and lazy_scan_new_or_empty() could even use a bit\n> better comments, but in those cases, you're basically keeping the same\n> code with the same comment, so it's kinda defensible. Here, though,\n> you're making the call conditional without any comment update.\n>\n> /*\n> * Final steps for block: drop cleanup lock, record free space in the\n> * FSM.\n> *\n> * If we will likely do index vacuuming, wait until\n> * lazy_vacuum_heap_rel() to save free space. This doesn't just save\n> * us some cycles; it also allows us to record any additional free\n> * space that lazy_vacuum_heap_page() will make available in cases\n> * where it's possible to truncate the page's line pointer array.\n> *\n> + * Our goal is to update the freespace map the last time we touch the\n> + * page. 
If the relation has no indexes, or if index vacuuming is\n> + * disabled, there will be no second heap pass; if this particular\n> + * page has no dead items, the second heap pass will not touch this\n> + * page. So, in those cases, update the FSM now.\n> + *\n> * Note: It's not in fact 100% certain that we really will call\n> * lazy_vacuum_heap_rel() -- lazy_vacuum() might yet opt to skip index\n> * vacuuming (and so must skip heap vacuuming). This is deemed okay\n> * because it only happens in emergencies, or when there is very\n> * little free space anyway. (Besides, we start recording free space\n> * in the FSM once index vacuuming has been abandoned.)\n> */\n>\n> I think this comment needs a rewrite, not just sticking the other\n> comment in the middle of it. There's some duplication between these\n> two comments, and merging it all together should iron that out.\n> Personally, I think my comment (which was there before, this commit\n> only moves it here) is clearer than what's already here about the\n> intent, but it's lacking some details that are captured in the other\n> two paragraphs, and we probably don't want to lose those details.\n>\n> If you'd like, I can try rewriting these comments to my satisfaction\n> and you can reverse-review the result. Or you can rewrite them and\n> I'll re-review the result. But I think this needs to be a little less\n> mechanical. It's not just about shuffling all the comments around so\n> that all the text ends up somewhere -- we also need to consider the\n> degree to which the meaning becomes duplicated when it all gets merged\n> together.\n\nI will take a stab at rewriting the comments myself first. Usually, I\ntry to avoid changing comments if the code isn't functionally\ndifferent because I know it adds additional review overhead and I try\nto reduce that to an absolute minimum. 
However, I see what you are\nsaying and agree that it would be better to have actually good\ncomments instead of frankenstein comments made up of parts that were\npreviously considered acceptable. I'll have a new version ready by\ntomorrow.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 24 Jan 2024 16:34:11 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 4:34 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Wed, Jan 24, 2024 at 2:59 PM Robert Haas <[email protected]> wrote:\n...\n> > If you'd like, I can try rewriting these comments to my satisfaction\n> > and you can reverse-review the result. Or you can rewrite them and\n> > I'll re-review the result. But I think this needs to be a little less\n> > mechanical. It's not just about shuffling all the comments around so\n> > that all the text ends up somewhere -- we also need to consider the\n> > degree to which the meaning becomes duplicated when it all gets merged\n> > together.\n>\n> I will take a stab at rewriting the comments myself first. Usually, I\n> try to avoid changing comments if the code isn't functionally\n> different because I know it adds additional review overhead and I try\n> to reduce that to an absolute minimum. However, I see what you are\n> saying and agree that it would be better to have actually good\n> comments instead of frankenstein comments made up of parts that were\n> previously considered acceptable. I'll have a new version ready by\n> tomorrow.\n\nv12 attached has my attempt at writing better comments for this\nsection of lazy_scan_heap().\n\nAbove the combined FSM update code, I have written a comment that is a\nrevised version of your comment from above the lazy_scan_noprune() FSM\nupdate code but with some of the additional details from the previous\ncomment above the lazy_scan_pruen() FSM update code.\n\nThe one part that I did not incorporate was the point about how\nsometimes we think we'll do a second pass on the block so we don't\nupdate the FSM but then we end up not doing it but it's all okay.\n\n* Note: It's not in fact 100% certain that we really will call\n* lazy_vacuum_heap_rel() -- lazy_vacuum() might yet opt to skip index\n* vacuuming (and so must skip heap vacuuming). 
This is deemed okay\n* because it only happens in emergencies, or when there is very\n* little free space anyway. (Besides, we start recording free space\n* in the FSM once index vacuuming has been abandoned.)\n\nI didn't incorporate it because I wasn't sure I understood the\nsituation. I can imagine us skipping updating the FSM after\nlazy_scan_prune() because there are indexes on the relation and dead\nitems on the page and we think we'll do a second pass. Later, we end\nup triggering a failsafe vacuum or, somehow, there are still too few\nTIDs for the second pass, so we update do_index_vacuuming to false.\nThen we wouldn't ever record this block's free space in the FSM. That\nseems fine (which is what the comment says). So, what does the last\nsentence mean? \"Besides, we start recording...\"\n\n- Melanie",
"msg_date": "Wed, 24 Jan 2024 21:13:09 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 9:13 PM Melanie Plageman\n<[email protected]> wrote:\n> I didn't incorporate it because I wasn't sure I understood the\n> situation. I can imagine us skipping updating the FSM after\n> lazy_scan_prune() because there are indexes on the relation and dead\n> items on the page and we think we'll do a second pass. Later, we end\n> up triggering a failsafe vacuum or, somehow, there are still too few\n> TIDs for the second pass, so we update do_index_vacuuming to false.\n> Then we wouldn't ever record this block's free space in the FSM. That\n> seems fine (which is what the comment says). So, what does the last\n> sentence mean? \"Besides, we start recording...\"\n\nIt means: when the failsafe kicks in, from that point on we won't do\nany more heap vacuuming. Clearly any pages that still need to be\nscanned at that point won't ever be processed by\nlazy_vacuum_heap_rel(). So from that point on we should record the\nfree space in every scanned heap page in the \"first heap pass\" --\nincluding pages that have LP_DEAD stubs that aren't going to be made\nLP_UNUSED in the ongoing VACUUM.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 24 Jan 2024 21:26:12 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 9:13 PM Melanie Plageman\n<[email protected]> wrote:\n> v12 attached has my attempt at writing better comments for this\n> section of lazy_scan_heap().\n\n+ /*\n+ * If we didn't get the cleanup lock and the page is not new or empty,\n+ * we can still collect LP_DEAD items in the dead_items array for\n+ * later vacuuming, count live and recently dead tuples for vacuum\n+ * logging, and determine if this block could later be truncated. If\n+ * we encounter any xid/mxids that require advancing the\n+ * relfrozenxid/relminxid, we'll have to wait for a cleanup lock and\n+ * call lazy_scan_prune().\n+ */\n\nI like this comment. I would probably drop \"and the page is not new or\nempty\" from it since that's really speaking to the previous bit of\ncode, but it wouldn't be the end of the world to keep it, either.\n\n /*\n- * Prune, freeze, and count tuples.\n+ * If we got a cleanup lock, we can prune and freeze tuples and\n+ * defragment the page. If we didn't get a cleanup lock, we will still\n+ * consider whether or not to update the FSM.\n *\n- * Accumulates details of remaining LP_DEAD line pointers on page in\n- * dead_items array. This includes LP_DEAD line pointers that we\n- * pruned ourselves, as well as existing LP_DEAD line pointers that\n- * were pruned some time earlier. Also considers freezing XIDs in the\n- * tuple headers of remaining items with storage. It also determines\n- * if truncating this block is safe.\n+ * Like lazy_scan_noprune(), lazy_scan_prune() will count\n+ * recently_dead_tuples and live tuples for vacuum logging, determine\n+ * if the block can later be truncated, and accumulate the details of\n+ * remaining LP_DEAD line pointers on the page in the dead_items\n+ * array. These dead items include those pruned by lazy_scan_prune()\n+ * as well we line pointers previously marked LP_DEAD.\n */\n\nTo me, the first paragraph of this one misses the mark. 
What I thought\nwe should be saying here was something like \"If we don't have a\ncleanup lock, the code above has already processed this page to the\nextent that is possible. Otherwise, we either got the cleanup lock\ninitially and have not processed the page yet, or we didn't get it\ninitially, attempted to process it without the cleanup lock, and\ndecided we needed one after all. Either way, if we now have the lock,\nwe must prune, freeze, and count tuples.\"\n\nThe second paragraph seems fine.\n\n- * Note: It's not in fact 100% certain that we really will call\n- * lazy_vacuum_heap_rel() -- lazy_vacuum() might yet opt to skip index\n- * vacuuming (and so must skip heap vacuuming). This is deemed okay\n- * because it only happens in emergencies, or when there is very\n- * little free space anyway. (Besides, we start recording free space\n- * in the FSM once index vacuuming has been abandoned.)\n\nHere's a suggestion from me:\n\nNote: In corner cases, it's possible to miss updating the FSM\nentirely. If index vacuuming is currently enabled, we'll skip the FSM\nupdate now. But if failsafe mode is later activated, disabling index\nvacuuming, there will also be no opportunity to update the FSM later,\nbecause we'll never revisit this page. Since updating the FSM is\ndesirable but not absolutely required, that's OK.\n\nI think this expresses the same sentiment as the current comment, but\nIMHO more clearly. The one part of the current comment that I don't\nunderstand at all is the remark about \"when there is very little\nfreespace anyway\". I get that if the failsafe activates we won't come\nback to the page, which is the \"only happens in emergencies\" part of\nthe existing comment. But the current phrasing makes it sound like\nthere is a second case where it can happen -- \"when there is very\nlittle free space anyway\" -- and I don't know what that is talking\nabout. 
If it's important, we should try to make it clearer.\n\nWe could also just decide to keep this entire paragraph as it is for\npurposes of the present patch. The part I really thought needed\nadjusting was \"Prune, freeze, and count tuples.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jan 2024 08:56:54 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 8:57 AM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Jan 24, 2024 at 9:13 PM Melanie Plageman\n> <[email protected]> wrote:\n> > v12 attached has my attempt at writing better comments for this\n> > section of lazy_scan_heap().\n>\n> + /*\n> + * If we didn't get the cleanup lock and the page is not new or empty,\n> + * we can still collect LP_DEAD items in the dead_items array for\n> + * later vacuuming, count live and recently dead tuples for vacuum\n> + * logging, and determine if this block could later be truncated. If\n> + * we encounter any xid/mxids that require advancing the\n> + * relfrozenxid/relminxid, we'll have to wait for a cleanup lock and\n> + * call lazy_scan_prune().\n> + */\n>\n> I like this comment. I would probably drop \"and the page is not new or\n> empty\" from it since that's really speaking to the previous bit of\n> code, but it wouldn't be the end of the world to keep it, either.\n\nYes, probably best to get rid of the part about new or empty.\n\n> /*\n> - * Prune, freeze, and count tuples.\n> + * If we got a cleanup lock, we can prune and freeze tuples and\n> + * defragment the page. If we didn't get a cleanup lock, we will still\n> + * consider whether or not to update the FSM.\n> *\n> - * Accumulates details of remaining LP_DEAD line pointers on page in\n> - * dead_items array. This includes LP_DEAD line pointers that we\n> - * pruned ourselves, as well as existing LP_DEAD line pointers that\n> - * were pruned some time earlier. Also considers freezing XIDs in the\n> - * tuple headers of remaining items with storage. It also determines\n> - * if truncating this block is safe.\n> + * Like lazy_scan_noprune(), lazy_scan_prune() will count\n> + * recently_dead_tuples and live tuples for vacuum logging, determine\n> + * if the block can later be truncated, and accumulate the details of\n> + * remaining LP_DEAD line pointers on the page in the dead_items\n> + * array. 
These dead items include those pruned by lazy_scan_prune()\n> + * as well we line pointers previously marked LP_DEAD.\n> */\n>\n> To me, the first paragraph of this one misses the mark. What I thought\n> we should be saying here was something like \"If we don't have a\n> cleanup lock, the code above has already processed this page to the\n> extent that is possible. Otherwise, we either got the cleanup lock\n> initially and have not processed the page yet, or we didn't get it\n> initially, attempted to process it without the cleanup lock, and\n> decided we needed one after all. Either way, if we now have the lock,\n> we must prune, freeze, and count tuples.\"\n\nI see. Your suggestion makes sense. The sentence starting with\n\"Otherwise\" is a bit long. I started to lose the thread at \"decided we\nneeded one after all\". You previously referred to the cleanup lock as\n\"it\" -- once you start referring to it as \"one\", I as the future\ndeveloper am no longer sure we are talking about the cleanup lock (as\nopposed to the page or something else).\n\n> - * Note: It's not in fact 100% certain that we really will call\n> - * lazy_vacuum_heap_rel() -- lazy_vacuum() might yet opt to skip index\n> - * vacuuming (and so must skip heap vacuuming). This is deemed okay\n> - * because it only happens in emergencies, or when there is very\n> - * little free space anyway. (Besides, we start recording free space\n> - * in the FSM once index vacuuming has been abandoned.)\n>\n> Here's a suggestion from me:\n>\n> Note: In corner cases, it's possible to miss updating the FSM\n> entirely. If index vacuuming is currently enabled, we'll skip the FSM\n> update now. But if failsafe mode is later activated, disabling index\n> vacuuming, there will also be no opportunity to update the FSM later,\n> because we'll never revisit this page. 
Since updating the FSM is\n> desirable but not absolutely required, that's OK.\n>\n> I think this expresses the same sentiment as the current comment, but\n> IMHO more clearly. The one part of the current comment that I don't\n> understand at all is the remark about \"when there is very little\n> freespace anyway\". I get that if the failsafe activates we won't come\n> back to the page, which is the \"only happens in emergencies\" part of\n> the existing comment. But the current phrasing makes it sound like\n> there is a second case where it can happen -- \"when there is very\n> little free space anyway\" -- and I don't know what that is talking\n> about. If it's important, we should try to make it clearer.\n>\n> We could also just decide to keep this entire paragraph as it is for\n> purposes of the present patch. The part I really thought needed\n> adjusting was \"Prune, freeze, and count tuples.\"\n\nI think it would be nice to clarify this comment. I think the \"when\nthere is little free space anyway\" is referring to the case in\nlazy_vacuum() where we set do_index_vacuuming to false because \"there\nare almost zero TIDs\". I initially thought it was saying that in the\nfailsafe vacuum case the pages whose free space we wouldn't record in\nthe FSM have little free space anyway -- which I didn't get. But then\nI looked at where we set do_index_vacuuming to false.\n\nAs for the last sentence starting with \"Besides\", even with Peter's\nexplanation I still am not sure what it should say. There are blocks\nwhose free space we don't record in the first heap pass. Then, due to\nskipping index vacuuming and the second heap pass, we also don't\nrecord their free space in the second heap pass. I think he is saying\nthat once we set do_index_vacuuming to false, we will stop skipping\nupdating the FSM after the first pass for future blocks. So, future\nblocks will have their free space recorded in the FSM. But that feels\nself-evident. 
The more salient point is that there are some blocks\nwhose free space is not recorded (those whose first pass happened\nbefore unsetting do_index_vacuuming and whose second pass did not\nhappen before do_index_vacuuming is unset). The extra sentence made me\nthink there was some way we might go back and record free space for\nthose blocks, but that is not true.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 25 Jan 2024 09:17:47 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 9:18 AM Melanie Plageman\n<[email protected]> wrote:\n> > To me, the first paragraph of this one misses the mark. What I thought\n> > we should be saying here was something like \"If we don't have a\n> > cleanup lock, the code above has already processed this page to the\n> > extent that is possible. Otherwise, we either got the cleanup lock\n> > initially and have not processed the page yet, or we didn't get it\n> > initially, attempted to process it without the cleanup lock, and\n> > decided we needed one after all. Either way, if we now have the lock,\n> > we must prune, freeze, and count tuples.\"\n>\n> I see. Your suggestion makes sense. The sentence starting with\n> \"Otherwise\" is a bit long. I started to lose the thread at \"decided we\n> needed one after all\". You previously referred to the cleanup lock as\n> \"it\" -- once you start referring to it as \"one\", I as the future\n> developer am no longer sure we are talking about the cleanup lock (as\n> opposed to the page or something else).\n\nOk... trying again:\n\nIf we have a cleanup lock, we must now prune, freeze, and count\ntuples. We may have acquired the cleanup lock originally, or we may\nhave gone back and acquired it after lazy_scan_noprune() returned\nfalse. Either way, the page hasn't been processed yet.\n\n> I think it would be nice to clarify this comment. I think the \"when\n> there is little free space anyway\" is referring to the case in\n> lazy_vacuum() where we set do_index_vacuuming to false because \"there\n> are almost zero TIDs\". I initially thought it was saying that in the\n> failsafe vacuum case the pages whose free space we wouldn't record in\n> the FSM have little free space anyway -- which I didn't get. But then\n> I looked at where we set do_index_vacuuming to false.\n\nOh... wow. 
That's kind of confusing; somehow I was thinking we were\ntalking about free space on the disk, rather than newly free space in\npages that could be added to the FSM. And it seems really questionable\nwhether that case is OK. I mean, in the emergency case, fine,\nwhatever, we should do whatever it takes to get the system back up,\nand it should barely ever happen on a well-configured system. But this\ncase could happen regularly, and losing track of free space could\neasily cause bloat.\n\nThis might be another argument for moving FSM updates to the first\nheap pass, but that's a separate task from fixing the comment.\n\n> As for the last sentence starting with \"Besides\", even with Peter's\n> explanation I still am not sure what it should say. There are blocks\n> whose free space we don't record in the first heap pass. Then, due to\n> skipping index vacuuming and the second heap pass, we also don't\n> record their free space in the second heap pass. I think he is saying\n> that once we set do_index_vacuuming to false, we will stop skipping\n> updating the FSM after the first pass for future blocks. So, future\n> blocks will have their free space recorded in the FSM. But that feels\n> self-evident.\n\nYes, I don't think that necessarily needs to be mentioned here.\n\n> The more salient point is that there are some blocks\n> whose free space is not recorded (those whose first pass happened\n> before unsetting do_index_vacuuming and whose second pass did not\n> happen before do_index_vacuuming is unset). The extra sentence made me\n> think there was some way we might go back and record free space for\n> those blocks, but that is not true.\n\nI don't really see why that sentence made you think that, but it's not\nimportant. I agree with you about what point we need to emphasize\nhere.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jan 2024 10:19:28 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 10:19 AM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Jan 25, 2024 at 9:18 AM Melanie Plageman\n> <[email protected]> wrote:\n> > > To me, the first paragraph of this one misses the mark. What I thought\n> > > we should be saying here was something like \"If we don't have a\n> > > cleanup lock, the code above has already processed this page to the\n> > > extent that is possible. Otherwise, we either got the cleanup lock\n> > > initially and have not processed the page yet, or we didn't get it\n> > > initially, attempted to process it without the cleanup lock, and\n> > > decided we needed one after all. Either way, if we now have the lock,\n> > > we must prune, freeze, and count tuples.\"\n> >\n> > I see. Your suggestion makes sense. The sentence starting with\n> > \"Otherwise\" is a bit long. I started to lose the thread at \"decided we\n> > needed one after all\". You previously referred to the cleanup lock as\n> > \"it\" -- once you start referring to it as \"one\", I as the future\n> > developer am no longer sure we are talking about the cleanup lock (as\n> > opposed to the page or something else).\n>\n> Ok... trying again:\n>\n> If we have a cleanup lock, we must now prune, freeze, and count\n> tuples. We may have acquired the cleanup lock originally, or we may\n> have gone back and acquired it after lazy_scan_noprune() returned\n> false. Either way, the page hasn't been processed yet.\n\nCool. I might add \"successfully\" or \"fully\" to \"Either way, the page\nhasn't been processed yet\"\n\n> > I think it would be nice to clarify this comment. I think the \"when\n> > there is little free space anyway\" is referring to the case in\n> > lazy_vacuum() where we set do_index_vacuuming to false because \"there\n> > are almost zero TIDs\". 
I initially thought it was saying that in the\n> > failsafe vacuum case the pages whose free space we wouldn't record in\n> > the FSM have little free space anyway -- which I didn't get. But then\n> > I looked at where we set do_index_vacuuming to false.\n>\n> Oh... wow. That's kind of confusing; somehow I was thinking we were\n> talking about free space on the disk, rather than newly free space in\n> pages that could be added to the FSM.\n\nPerhaps I misunderstood. I interpreted it to refer to the bypass optimization.\n\n> And it seems really questionable\n> whether that case is OK. I mean, in the emergency case, fine,\n> whatever, we should do whatever it takes to get the system back up,\n> and it should barely ever happen on a well-configured system. But this\n> case could happen regularly, and losing track of free space could\n> easily cause bloat.\n>\n> This might be another argument for moving FSM updates to the first\n> heap pass, but that's a separate task from fixing the comment.\n\nYes, it seems we could miss recording space freed in the first pass if\nwe never end up doing a second pass. consider_bypass_optimization is\nset to false only if index cleanup is explicitly enabled or there are\ndead items accumulated for vacuum's second pass at some point.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 25 Jan 2024 11:18:55 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 11:19 AM Melanie Plageman\n<[email protected]> wrote:\n> Cool. I might add \"successfully\" or \"fully\" to \"Either way, the page\n> hasn't been processed yet\"\n\nI'm OK with that.\n\n> > > I think it would be nice to clarify this comment. I think the \"when\n> > > there is little free space anyway\" is referring to the case in\n> > > lazy_vacuum() where we set do_index_vacuuming to false because \"there\n> > > are almost zero TIDs\". I initially thought it was saying that in the\n> > > failsafe vacuum case the pages whose free space we wouldn't record in\n> > > the FSM have little free space anyway -- which I didn't get. But then\n> > > I looked at where we set do_index_vacuuming to false.\n> >\n> > Oh... wow. That's kind of confusing; somehow I was thinking we were\n> > talking about free space on the disk, rather than newly free space in\n> > pages that could be added to the FSM.\n>\n> Perhaps I misunderstood. I interpreted it to refer to the bypass optimization.\n\nI think you're probably correct. I just didn't realize what was meant.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jan 2024 12:25:46 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 12:25 PM Robert Haas <[email protected]> wrote:\n> I think you're probably correct. I just didn't realize what was meant.\n\nI tweaked your v12 based on this discussion and committed the result.\n\nThanks to you for the patches, and to Peter for participating in the\ndiscussion which, IMHO, was very helpful in clarifying things.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jan 2024 11:44:01 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 11:44 AM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Jan 25, 2024 at 12:25 PM Robert Haas <[email protected]> wrote:\n> > I think you're probably correct. I just didn't realize what was meant.\n>\n> I tweaked your v12 based on this discussion and committed the result.\n>\n> Thanks to you for the patches, and to Peter for participating in the\n> discussion which, IMHO, was very helpful in clarifying things.\n\nThanks! I've marked the CF entry as committed.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 26 Jan 2024 12:06:30 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 11:44 AM Robert Haas <[email protected]> wrote:\n> Thanks to you for the patches, and to Peter for participating in the\n> discussion which, IMHO, was very helpful in clarifying things.\n\nGlad I could help.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 26 Jan 2024 13:05:48 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Emit fewer vacuum records by reaping removable tuples during\n pruning"
}
]
[
{
"msg_contents": "I noticed that the fallback pg_atomic_test_set_flag_impl() implementation\nthat uses atomic-exchange is giving pg_atomic_exchange_u32_impl() an extra\nargument. This appears to be copy/pasted from the atomic-compare-exchange\nversion a few lines down. It looks like it's been this way since this code\nwas introduced in commit b64d92f (2014). Patch attached.\n\nI'd ordinarily suggest removing this section of code since it doesn't seem\nto have gotten much coverage, but I'm actually looking into adding some\nfaster atomic-exchange implementations that may activate this code for\ncertain compiler/architecture combinations.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Nov 2023 21:54:39 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "typo in fallback implementation for pg_atomic_test_set_flag()"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-13 21:54:39 -0600, Nathan Bossart wrote:\n> I noticed that the fallback pg_atomic_test_set_flag_impl() implementation\n> that uses atomic-exchange is giving pg_atomic_exchange_u32_impl() an extra\n> argument. This appears to be copy/pasted from the atomic-compare-exchange\n> version a few lines down. It looks like it's been this way since this code\n> was introduced in commit b64d92f (2014). Patch attached.\n\nOops.\n\nI guess it's not too surprising this wasn't required - if the compiler has any\natomic intrinsics it's going to have support for the flag stuff. And there's\npractically no compiler that\n\nAre you planning to apply the fix?\n\n\n> I'd ordinarily suggest removing this section of code since it doesn't seem\n> to have gotten much coverage\n\nWhich section precisely?\n\n\n> but I'm actually looking into adding some faster atomic-exchange\n> implementations that may activate this code for certain\n> compiler/architecture combinations.\n\nHm. I don't really see how adding a faster atomic-exchange implementation\ncould trigger this implementation being used?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Nov 2023 19:17:32 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: typo in fallback implementation for pg_atomic_test_set_flag()"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 07:17:32PM -0800, Andres Freund wrote:\n> Are you planning to apply the fix?\n\nYes, I'll take care of it.\n\n>> I'd ordinarily suggest removing this section of code since it doesn't seem\n>> to have gotten much coverage\n> \n> Which section precisely?\n\nThe lines below this:\n\n\t/*\n\t * provide fallback for test_and_set using atomic_exchange if available\n\t */\n\t#if !defined(PG_HAVE_ATOMIC_TEST_SET_FLAG) && defined(PG_HAVE_ATOMIC_EXCHANGE_U32)\n\nbut above this:\n\n\t/*\n\t * provide fallback for test_and_set using atomic_compare_exchange if\n\t * available.\n\t */\n\t#elif !defined(PG_HAVE_ATOMIC_TEST_SET_FLAG) && defined(PG_HAVE_ATOMIC_COMPARE_EXCHANGE_U32)\n\n>> but I'm actually looking into adding some faster atomic-exchange\n>> implementations that may activate this code for certain\n>> compiler/architecture combinations.\n> \n> Hm. I don't really see how adding a faster atomic-exchange implementation\n> could trigger this implementation being used?\n\nThat'd define PG_HAVE_ATOMIC_EXCHANGE_U32, so this fallback might be used\nif PG_HAVE_ATOMIC_TEST_SET_FLAG is not defined. I haven't traced through\nall the #ifdefs that lead to this point exhaustively, though, so perhaps\nthis is still unlikely.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Nov 2023 09:52:34 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: typo in fallback implementation for pg_atomic_test_set_flag()"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 09:52:34AM -0600, Nathan Bossart wrote:\n> On Tue, Nov 14, 2023 at 07:17:32PM -0800, Andres Freund wrote:\n>> Are you planning to apply the fix?\n> \n> Yes, I'll take care of it.\n\nCommitted and back-patched. I probably could've skipped back-patching this\none since it doesn't seem to be causing any problems yet, but I didn't see\nany reason not to back-patch, either.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Nov 2023 15:13:29 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: typo in fallback implementation for pg_atomic_test_set_flag()"
}
] |
[
{
"msg_contents": "While working on BUG #18187 [1], I noticed that we also have issues with\nhow SJE replaces join clauses involving the removed rel. As an example,\nconsider the query below, which would trigger an Assert.\n\ncreate table t (a int primary key, b int);\n\nexplain (costs off)\nselect * from t t1\n inner join t t2 on t1.a = t2.a\n left join t t3 on t1.b > 1 and t1.b < 2;\nserver closed the connection unexpectedly\n\nThe Assert failure happens in remove_self_join_rel() when we're trying\nto remove t1. The two join clauses of t1, 't1.b > 1' and 't1.b < 2',\nshare the same pointer of 'required_relids', which is {t1, t3} at first.\nAfter we've performed replace_varno for the first clause, the\nrequired_relids becomes {t2, t3}, which is no problem. However, the\nsecond clause's required_relids also becomes {t2, t3}, because they are\nactually the same pointer. So when we proceed with replace_varno on the\nsecond clause, we'd trigger the Assert.\n\nOff the top of my head I'm thinking that we can fix this kind of issue\nby bms_copying the bitmapset first before we make a substitution in\nreplace_relid(), like attached.\n\nAlternatively, we can revise distribute_qual_to_rels() as below so that\ndifferent RestrictInfos don't share the same pointer of required_relids.\n\n--- a/src/backend/optimizer/plan/initsplan.c\n+++ b/src/backend/optimizer/plan/initsplan.c\n@@ -2385,7 +2385,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node\n*clause,\n * nonnullable-side rows failing the qual.\n */\n Assert(ojscope);\n- relids = ojscope;\n+ relids = bms_copy(ojscope);\n Assert(!pseudoconstant);\n }\n else\n\nWith this way, I'm worrying that there are other places where we should\navoid sharing the same pointer to Bitmapset structure. I'm not sure how\nto discover all these places.\n\nAny thoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/18187-831da249cbd2ff8e%40postgresql.org\n\nThanks\nRichard",
"msg_date": "Tue, 14 Nov 2023 19:14:57 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "Hi!\n\nThank you for spotting this and dealing with this.\n\nOn Tue, Nov 14, 2023 at 1:15 PM Richard Guo <[email protected]> wrote:\n> While working on BUG #18187 [1], I noticed that we also have issues with\n> how SJE replaces join clauses involving the removed rel. As an example,\n> consider the query below, which would trigger an Assert.\n>\n> create table t (a int primary key, b int);\n>\n> explain (costs off)\n> select * from t t1\n> inner join t t2 on t1.a = t2.a\n> left join t t3 on t1.b > 1 and t1.b < 2;\n> server closed the connection unexpectedly\n>\n> The Assert failure happens in remove_self_join_rel() when we're trying\n> to remove t1. The two join clauses of t1, 't1.b > 1' and 't1.b < 2',\n> share the same pointer of 'required_relids', which is {t1, t3} at first.\n> After we've performed replace_varno for the first clause, the\n> required_relids becomes {t2, t3}, which is no problem. However, the\n> second clause's required_relids also becomes {t2, t3}, because they are\n> actually the same pointer. So when we proceed with replace_varno on the\n> second clause, we'd trigger the Assert.\n>\n> Off the top of my head I'm thinking that we can fix this kind of issue\n> by bms_copying the bitmapset first before we make a substitution in\n> replace_relid(), like attached.\n\nI remember, I've removed bms_copy() from here. Now I understand why\nthat was needed. But I'm still not particularly happy about it. The\nreason is that logic of replace_relid() becomes cumbersome. 
In some\ncases it performs modification in-place, while in other cases it\ncopies.\n\n> Alternatively, we can revise distribute_qual_to_rels() as below so that\n> different RestrictInfos don't share the same pointer of required_relids.\n>\n> --- a/src/backend/optimizer/plan/initsplan.c\n> +++ b/src/backend/optimizer/plan/initsplan.c\n> @@ -2385,7 +2385,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node *clause,\n> * nonnullable-side rows failing the qual.\n> */\n> Assert(ojscope);\n> - relids = ojscope;\n> + relids = bms_copy(ojscope);\n> Assert(!pseudoconstant);\n> }\n> else\n>\n> With this way, I'm worrying that there are other places where we should\n> avoid sharing the same pointer to Bitmapset structure. I'm not sure how\n> to discover all these places.\n\nThis looks better to me. However, I'm not sure what the overhead\nwould be? How much would it increase the memory footprint?\n\nIt's possibly dumb option, but what about just removing the assert?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 14 Nov 2023 14:42:13 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-14 19:14:57 +0800, Richard Guo wrote:\n> While working on BUG #18187 [1], I noticed that we also have issues with\n> how SJE replaces join clauses involving the removed rel. As an example,\n> consider the query below, which would trigger an Assert.\n>\n> create table t (a int primary key, b int);\n>\n> explain (costs off)\n> select * from t t1\n> inner join t t2 on t1.a = t2.a\n> left join t t3 on t1.b > 1 and t1.b < 2;\n> server closed the connection unexpectedly\n>\n> The Assert failure happens in remove_self_join_rel() when we're trying\n> to remove t1. The two join clauses of t1, 't1.b > 1' and 't1.b < 2',\n> share the same pointer of 'required_relids', which is {t1, t3} at first.\n> After we've performed replace_varno for the first clause, the\n> required_relids becomes {t2, t3}, which is no problem. However, the\n> second clause's required_relids also becomes {t2, t3}, because they are\n> actually the same pointer. So when we proceed with replace_varno on the\n> second clause, we'd trigger the Assert.\n\nGood catch.\n\n\n> Off the top of my head I'm thinking that we can fix this kind of issue\n> by bms_copying the bitmapset first before we make a substitution in\n> replace_relid(), like attached.\n>\n> Alternatively, we can revise distribute_qual_to_rels() as below so that\n> different RestrictInfos don't share the same pointer of required_relids.\n\n> --- a/src/backend/optimizer/plan/initsplan.c\n> +++ b/src/backend/optimizer/plan/initsplan.c\n> @@ -2385,7 +2385,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node\n> *clause,\n> * nonnullable-side rows failing the qual.\n> */\n> Assert(ojscope);\n> - relids = ojscope;\n> + relids = bms_copy(ojscope);\n> Assert(!pseudoconstant);\n> }\n> else\n>\n> With this way, I'm worrying that there are other places where we should\n> avoid sharing the same pointer to Bitmapset structure.\n\nIndeed.\n\n\n> I'm not sure how to discover all these places. 
Any thoughts?\n\nAt the very least I think we should add a mode to bitmapset.c where\nevery modification of a bitmapset reallocates, rather than just when the size\nactually changes. Because we only reallocate and free in relatively uncommon\ncases, particularly on 64bit systems, it's very easy to not find spots that\ncontinue to use the input pointer to one of the modifying bms functions.\n\nA very hacky implementation of that indeed catches this bug with the existing\nregression tests.\n\nThe tests do *not* pass with just the attached applied, as the \"Delete relid\nwithout substitution\" path has the same issue. With that also copying and all\nthe \"reusing\" bms* functions always reallocating, the tests pass - kinda.\n\n\nThe \"kinda\" is because there are callers to bms_(add|del)_members() that pass the\nsame bms as a and b, which only works if the reallocation happens \"late\".\n\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Tue, 14 Nov 2023 22:02:35 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-14 14:42:13 +0200, Alexander Korotkov wrote:\n> It's possibly dumb option, but what about just removing the assert?\n\nThat's not at all an option - the in-place bms_* functions can free their\ninput. So a dangling pointer to the \"old\" version is a use-after-free waiting\nto happen - you just need a query that actually gets to bitmapsets that are a\nbit larger.\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Tue, 14 Nov 2023 22:04:21 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 8:04 AM Andres Freund <[email protected]> wrote:\n>\n> On 2023-11-14 14:42:13 +0200, Alexander Korotkov wrote:\n> > It's possibly dumb option, but what about just removing the assert?\n>\n> That's not at all an option - the in-place bms_* functions can free their\n> input. So a dangling pointer to the \"old\" version is a use-after-free waiting\n> to happen - you just need a query that actually gets to bitmapsets that are a\n> bit larger.\n\nYeah, now I got it, thank you. I was under the wrong impression that\nbitmapset has the level of indirection, so the pointer remains valid.\nNow, I see that bitmapset manipulation functions can do free/repalloc\nmaking the previous bitmapset pointer invalid.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 15 Nov 2023 17:06:32 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 8:02 AM Andres Freund <[email protected]> wrote:\n> On 2023-11-14 19:14:57 +0800, Richard Guo wrote:\n> > While working on BUG #18187 [1], I noticed that we also have issues with\n> > how SJE replaces join clauses involving the removed rel. As an example,\n> > consider the query below, which would trigger an Assert.\n> >\n> > create table t (a int primary key, b int);\n> >\n> > explain (costs off)\n> > select * from t t1\n> > inner join t t2 on t1.a = t2.a\n> > left join t t3 on t1.b > 1 and t1.b < 2;\n> > server closed the connection unexpectedly\n> >\n> > The Assert failure happens in remove_self_join_rel() when we're trying\n> > to remove t1. The two join clauses of t1, 't1.b > 1' and 't1.b < 2',\n> > share the same pointer of 'required_relids', which is {t1, t3} at first.\n> > After we've performed replace_varno for the first clause, the\n> > required_relids becomes {t2, t3}, which is no problem. However, the\n> > second clause's required_relids also becomes {t2, t3}, because they are\n> > actually the same pointer. 
So when we proceed with replace_varno on the\n> > second clause, we'd trigger the Assert.\n>\n> Good catch.\n>\n>\n> > Off the top of my head I'm thinking that we can fix this kind of issue\n> > by bms_copying the bitmapset first before we make a substitution in\n> > replace_relid(), like attached.\n> >\n> > Alternatively, we can revise distribute_qual_to_rels() as below so that\n> > different RestrictInfos don't share the same pointer of required_relids.\n>\n> > --- a/src/backend/optimizer/plan/initsplan.c\n> > +++ b/src/backend/optimizer/plan/initsplan.c\n> > @@ -2385,7 +2385,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node\n> > *clause,\n> > * nonnullable-side rows failing the qual.\n> > */\n> > Assert(ojscope);\n> > - relids = ojscope;\n> > + relids = bms_copy(ojscope);\n> > Assert(!pseudoconstant);\n> > }\n> > else\n> >\n> > With this way, I'm worrying that there are other places where we should\n> > avoid sharing the same pointer to Bitmapset structure.\n>\n> Indeed.\n>\n>\n> > I'm not sure how to discover all these places. Any thoughts?\n>\n> At the very least I think we should add a mode to bitmapset.c mode where\n> every modification of a bitmapset reallocates, rather than just when the size\n> actually changes. Because we only reallocte and free in relatively uncommon\n> cases, particularly on 64bit systems, it's very easy to not find spots that\n> continue to use the input pointer to one of the modifying bms functions.\n>\n> A very hacky implementation of that indeed catches this bug with the existing\n> regression tests.\n>\n> The tests do *not* pass with just the attached applied, as the \"Delete relid\n> without substitution\" path has the same issue. With that also copying and all\n> the \"reusing\" bms* functions always reallocating, the tests pass - kinda.\n>\n>\n> The kinda because there are callers to bms_(add|del)_members() that pass the\n> same bms as a and b, which only works if the reallocation happens \"late\".\n\n+1,\nNeat idea. 
I'm willing to work on this. Will propose the patch soon.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 15 Nov 2023 17:07:15 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 5:07 PM Alexander Korotkov <[email protected]> wrote:\n>\n> On Wed, Nov 15, 2023 at 8:02 AM Andres Freund <[email protected]> wrote:\n> > The kinda because there are callers to bms_(add|del)_members() that pass the\n> > same bms as a and b, which only works if the reallocation happens \"late\".\n>\n> +1,\n> Neat idea. I'm willing to work on this. Will propose the patch soon.\n\n\nIt's here. New REALLOCATE_BITMAPSETS forces bitmapset reallocation on\neach modification. I also find it useful to add assert to all\nbitmapset functions on argument NodeTag. This allows you to find\naccess to hanging pointers earlier.\n\nI had the feeling of falling into a rabbit hole while debugging all\nthe cases of failure with this new option. With the second patch\nregressions tests pass.\n\nAny thoughts?\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sun, 19 Nov 2023 03:17:29 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Sun, Nov 19, 2023 at 6:48 AM Alexander Korotkov <[email protected]> wrote:\n>\n> On Wed, Nov 15, 2023 at 5:07 PM Alexander Korotkov <[email protected]> wrote:\n> >\n> > On Wed, Nov 15, 2023 at 8:02 AM Andres Freund <[email protected]> wrote:\n> > > The kinda because there are callers to bms_(add|del)_members() that pass the\n> > > same bms as a and b, which only works if the reallocation happens \"late\".\n> >\n> > +1,\n> > Neat idea. I'm willing to work on this. Will propose the patch soon.\n>\n>\n> It's here. New REALLOCATE_BITMAPSETS forces bitmapset reallocation on\n> each modification. I also find it useful to add assert to all\n> bitmapset functions on argument NodeTag. This allows you to find\n> access to hanging pointers earlier.\n\nCreating separate patches for REALLOCATE_BITMAPSETs and\nAssert(ISA(Bitmapset)) will be easier to review. We will be able to\ncheck whether all the places that require either of the fixes have\nbeen indeed fixed and correctly. I kept switching back and forth.\n\n>\n> I had the feeling of falling into a rabbit hole while debugging all\n> the cases of failure with this new option. With the second patch\n> regressions tests pass.\n\nI think this will increase memory consumption when planning queries\nwith partitioned tables (100s or 1000s of partitions). Have you tried\nmeasuring the impact?\n\n We should take hit on memory consumption when there is correctness\ninvolved but not all these cases look correctness problems. For\nexample. RelOptInfo::left_relids or SpecialJoinInfo::syn_lefthand may\nnot get modified after they are set. But just because\nRelOptInfo::relids of a lower relation was assigned somewhere which\ngot modified, these two get modified. bms_copy() in\nmake_specialjoininfo may not be necessary. I haven't tried that myself\nso I may be wrong.\n\nWhat might be useful is to mark a bitmap as \"final\" once it's know\nthat it can not change. e.g. RelOptInfo->relids once set never\nchanges. 
Each operation that modifies a Bitmapset throws an\nerror/Asserts if it's marked as \"final\", thus catching the places\nwhere we expect a Bitmapset being modified when not intended. This\nwill catch shared bitmapsets as well. We could apply bms_copy in only\nthose cases then.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 20 Nov 2023 15:12:10 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Sun, Nov 19, 2023 at 9:17 AM Alexander Korotkov <[email protected]>\nwrote:\n\n> It's here. New REALLOCATE_BITMAPSETS forces bitmapset reallocation on\n> each modification.\n\n\n+1 to the idea of introducing a reallocation mode to Bitmapset.\n\n\n> I had the feeling of falling into a rabbit hole while debugging all\n> the cases of failure with this new option. With the second patch\n> regressions tests pass.\n\n\nIt seems to me that we have always had situations where we share the\nsame pointer to a Bitmapset structure across different places. I do not\nthink this is a problem as long as we do not modify the Bitmapsets in a\nway that requires reallocation or impact the locations sharing the same\npointer.\n\nSo I'm wondering, instead of attempting to avoid sharing pointer to\nBitmapset in all locations that have problems, can we simply bms_copy\nthe original Bitmapset within replace_relid() before making any\nmodifications, as I proposed previously? Of course, as Andres pointed\nout, we need to do so also for the \"Delete relid without substitution\"\npath. Please see the attached.\n\nThanks\nRichard",
"msg_date": "Thu, 23 Nov 2023 10:33:46 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 4:33 AM Richard Guo <[email protected]> wrote:\n>\n> On Sun, Nov 19, 2023 at 9:17 AM Alexander Korotkov <[email protected]> wrote:\n>>\n>> It's here. New REALLOCATE_BITMAPSETS forces bitmapset reallocation on\n>> each modification.\n>\n>\n> +1 to the idea of introducing a reallocation mode to Bitmapset.\n>\n>>\n>> I had the feeling of falling into a rabbit hole while debugging all\n>> the cases of failure with this new option. With the second patch\n>> regressions tests pass.\n>\n>\n> It seems to me that we have always had situations where we share the\n> same pointer to a Bitmapset structure across different places. I do not\n> think this is a problem as long as we do not modify the Bitmapsets in a\n> way that requires reallocation or impact the locations sharing the same\n> pointer.\n>\n> So I'm wondering, instead of attempting to avoid sharing pointer to\n> Bitmapset in all locations that have problems, can we simply bms_copy\n> the original Bitmapset within replace_relid() before making any\n> modifications, as I proposed previously? Of course, as Andres pointed\n> out, we need to do so also for the \"Delete relid without substitution\"\n> path. Please see the attached.\n\n\nYes, this makes sense. Thank you for the patch. My initial point was\nthat replace_relid() should either do in-place in all cases or make a\ncopy in all cases. Now I see that it should make a copy in all cases.\nNote, that without making a copy in delete case, regression tests fail\nwith REALLOCATE_BITMAPSETS on.\n\nPlease, find the revised patchset. As Ashutosh Bapat asked, asserts\nare split into separate patch.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Fri, 24 Nov 2023 15:54:27 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 3:54 PM Alexander Korotkov <[email protected]> wrote:\n>\n> On Thu, Nov 23, 2023 at 4:33 AM Richard Guo <[email protected]> wrote:\n> >\n> > On Sun, Nov 19, 2023 at 9:17 AM Alexander Korotkov <[email protected]> wrote:\n> >>\n> >> It's here. New REALLOCATE_BITMAPSETS forces bitmapset reallocation on\n> >> each modification.\n> >\n> >\n> > +1 to the idea of introducing a reallocation mode to Bitmapset.\n> >\n> >>\n> >> I had the feeling of falling into a rabbit hole while debugging all\n> >> the cases of failure with this new option. With the second patch\n> >> regressions tests pass.\n> >\n> >\n> > It seems to me that we have always had situations where we share the\n> > same pointer to a Bitmapset structure across different places. I do not\n> > think this is a problem as long as we do not modify the Bitmapsets in a\n> > way that requires reallocation or impact the locations sharing the same\n> > pointer.\n> >\n> > So I'm wondering, instead of attempting to avoid sharing pointer to\n> > Bitmapset in all locations that have problems, can we simply bms_copy\n> > the original Bitmapset within replace_relid() before making any\n> > modifications, as I proposed previously? Of course, as Andres pointed\n> > out, we need to do so also for the \"Delete relid without substitution\"\n> > path. Please see the attached.\n>\n>\n> Yes, this makes sense. Thank you for the patch. My initial point was\n> that replace_relid() should either do in-place in all cases or make a\n> copy in all cases. Now I see that it should make a copy in all cases.\n> Note, that without making a copy in delete case, regression tests fail\n> with REALLOCATE_BITMAPSETS on.\n>\n> Please, find the revised patchset. As Ashutosh Bapat asked, asserts\n> are split into separate patch.\n\nAny objections to pushing this?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 27 Nov 2023 03:04:38 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 6:35 AM Alexander Korotkov <[email protected]> wrote:\n>\n> On Fri, Nov 24, 2023 at 3:54 PM Alexander Korotkov <[email protected]> wrote:\n> >\n> > On Thu, Nov 23, 2023 at 4:33 AM Richard Guo <[email protected]> wrote:\n> > >\n> > > On Sun, Nov 19, 2023 at 9:17 AM Alexander Korotkov <[email protected]> wrote:\n> > >>\n> > >> It's here. New REALLOCATE_BITMAPSETS forces bitmapset reallocation on\n> > >> each modification.\n> > >\n> > >\n> > > +1 to the idea of introducing a reallocation mode to Bitmapset.\n> > >\n> > >>\n> > >> I had the feeling of falling into a rabbit hole while debugging all\n> > >> the cases of failure with this new option. With the second patch\n> > >> regressions tests pass.\n> > >\n> > >\n> > > It seems to me that we have always had situations where we share the\n> > > same pointer to a Bitmapset structure across different places. I do not\n> > > think this is a problem as long as we do not modify the Bitmapsets in a\n> > > way that requires reallocation or impact the locations sharing the same\n> > > pointer.\n> > >\n> > > So I'm wondering, instead of attempting to avoid sharing pointer to\n> > > Bitmapset in all locations that have problems, can we simply bms_copy\n> > > the original Bitmapset within replace_relid() before making any\n> > > modifications, as I proposed previously? Of course, as Andres pointed\n> > > out, we need to do so also for the \"Delete relid without substitution\"\n> > > path. Please see the attached.\n> >\n> >\n> > Yes, this makes sense. Thank you for the patch. My initial point was\n> > that replace_relid() should either do in-place in all cases or make a\n> > copy in all cases. Now I see that it should make a copy in all cases.\n> > Note, that without making a copy in delete case, regression tests fail\n> > with REALLOCATE_BITMAPSETS on.\n> >\n> > Please, find the revised patchset. 
As Ashutosh Bapat asked, asserts\n> > are split into separate patch.\n>\n> Any objections to pushing this?\n>\n\nDid we at least measure the memory impact?\n\nHow do we ensure that we are not making unnecessary copies of Bitmapsets?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 27 Nov 2023 11:29:48 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On 2023-11-27 11:29:48 +0530, Ashutosh Bapat wrote:\n> How do we ensure that we are not making unnecessary copies of Bitmapsets?\n\nWe don't - but that's not specific to this patch. Bitmapsets typically aren't\nvery large, I doubt that it's a significant proportion of the memory\nusage. Adding refcounts or such would likely add more overhead than it'd save,\nboth in time and memory.\n\nI am a bit worried about the maintainability of remove_rel_from_query() et\nal. Is there any infrastructure for detecting that some PlannerInfo field that\nneeds updating wasn't updated? There's not even a note in PlannerInfo that\ndocuments that that needs to happen.\n\n\n",
"msg_date": "Mon, 27 Nov 2023 10:07:05 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 8:07 PM Andres Freund <[email protected]> wrote:\n>\n> On 2023-11-27 11:29:48 +0530, Ashutosh Bapat wrote:\n> > How do we ensure that we are not making unnecessary copies of Bitmapsets?\n>\n> We don't - but that's not specific to this patch. Bitmapsets typically aren't\n> very large, I doubt that it's a significant proportion of the memory\n> usage. Adding refcounts or such would likely add more overhead than it'd save,\n> both in time and memory.\n>\n> I am a bit worried about the maintainability of remove_rel_from_query() et\n> al. Is there any infrastructure for detecting that some PlannerInfo field that\n> needs updating wasn't updated? There's not even a note in PlannerInfo that\n> documents that that needs to happen.\n\nThat makes sense, thank you. We need at least a comment about this.\nI'll write a patch adding this comment.\n\nBTW, what do you think about the patches upthread [1].\n\nLinks\n1. https://www.postgresql.org/message-id/CAPpHfdtLgCryACcrmLv=Koq9rAB3=tr5y9D84dGgvUhSCvjzjg@mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 27 Nov 2023 20:37:45 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On 28/11/2023 01:37, Alexander Korotkov wrote:\n> On Mon, Nov 27, 2023 at 8:07 PM Andres Freund <[email protected]> wrote:\nSorry for the late answer, I missed this thread because of vacation.\n>> On 2023-11-27 11:29:48 +0530, Ashutosh Bapat wrote:\n>>> How do we ensure that we are not making unnecessary copies of Bitmapsets?\n>>\n>> We don't - but that's not specific to this patch. Bitmapsets typically aren't\n>> very large, I doubt that it's a significant proportion of the memory\n>> usage. Adding refcounts or such would likely add more overhead than it'd save,\n>> both in time and memory.\n\nI'd already clashed with Tom on copying the required_relids field and \nvoluntarily made unnecessary copies in the project [1].\nAnd ... stuck into huge memory consumption. The reason was in Bitmapsets:\nWhen we have 1E3-1E4 partitions and try to reparameterize a join, one \nbitmapset field can have a size of about 1kB. Having bitmapset \nreferencing Relation with a large index value, we had a lot of (for \nexample, 1E4 * 1kB) copies on each reparametrization of such a field. \nAlexander Pyhalov should remember that case.\nI don't claim we will certainly catch such an issue here, but it is a \nreason why we should look at this option carefully.\n\n>> I am a bit worried about the maintainability of remove_rel_from_query() et\n>> al. Is there any infrastructure for detecting that some PlannerInfo field that\n>> needs updating wasn't updated? There's not even a note in PlannerInfo that\n>> documents that that needs to happen.\nThanks you for highlighting this issue.> That makes sense, thank you. \nWe need at least a comment about this.\n> I'll write a patch adding this comment.\n> \n> BTW, what do you think about the patches upthread [1].\n> \n> Links\n> 1. 
https://www.postgresql.org/message-id/CAPpHfdtLgCryACcrmLv=Koq9rAB3=tr5y9D84dGgvUhSCvjzjg@mail.gmail.com\n\n0001 - Looks good and can be applied.\n0002 - I am afraid the problems with expanded range table entries are \nlikewise described above. The patch makes sense, but it requires time to \nreproduce corner cases. Maybe we can do it separately from the current \nhotfix?\n0003 - I think it is really what we need right now: SJE is quite a rare \noptimization and executes before the entries expansion procedure. So it \nlooks less risky.\n\n[1] Asymmetric partition-wise JOIN\nhttps://www.postgresql.org/message-id/flat/CAOP8fzaVL_2SCJayLL9kj5pCA46PJOXXjuei6-3aFUV45j4LJQ%40mail.gmail.com\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Fri, 8 Dec 2023 11:37:27 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "Andrei Lepikhov писал(а) 2023-12-08 07:37:\n> On 28/11/2023 01:37, Alexander Korotkov wrote:\n>> On Mon, Nov 27, 2023 at 8:07 PM Andres Freund <[email protected]> \n>> wrote:\n> Sorry for the late answer, I missed this thread because of vacation.\n>>> On 2023-11-27 11:29:48 +0530, Ashutosh Bapat wrote:\n>>>> How do we ensure that we are not making unnecessary copies of \n>>>> Bitmapsets?\n>>> \n>>> We don't - but that's not specific to this patch. Bitmapsets \n>>> typically aren't\n>>> very large, I doubt that it's a significant proportion of the memory\n>>> usage. Adding refcounts or such would likely add more overhead than \n>>> it'd save,\n>>> both in time and memory.\n> \n> I'd already clashed with Tom on copying the required_relids field and \n> voluntarily made unnecessary copies in the project [1].\n> And ... stuck into huge memory consumption. The reason was in \n> Bitmapsets:\n> When we have 1E3-1E4 partitions and try to reparameterize a join, one \n> bitmapset field can have a size of about 1kB. Having bitmapset \n> referencing Relation with a large index value, we had a lot of (for \n> example, 1E4 * 1kB) copies on each reparametrization of such a field. \n> Alexander Pyhalov should remember that case.\n\nYes. If it matters, this happened during reparametrization when 2 \npartitioned tables with 1000 partitions each were joined. Then \nasymmetric pw join managed to eat lots of memory for bitmapsets (by \nlots of memory I mean all available on the test VM).\n\n> [1] Asymmetric partition-wise JOIN\n> https://www.postgresql.org/message-id/flat/CAOP8fzaVL_2SCJayLL9kj5pCA46PJOXXjuei6-3aFUV45j4LJQ%40mail.gmail.com\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Fri, 08 Dec 2023 10:13:36 +0300",
"msg_from": "Alexander Pyhalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 12:43 PM Alexander Pyhalov\n<[email protected]> wrote:\n>\n> Andrei Lepikhov писал(а) 2023-12-08 07:37:\n> > On 28/11/2023 01:37, Alexander Korotkov wrote:\n> >> On Mon, Nov 27, 2023 at 8:07 PM Andres Freund <[email protected]>\n> >> wrote:\n> > Sorry for the late answer, I missed this thread because of vacation.\n> >>> On 2023-11-27 11:29:48 +0530, Ashutosh Bapat wrote:\n> >>>> How do we ensure that we are not making unnecessary copies of\n> >>>> Bitmapsets?\n> >>>\n> >>> We don't - but that's not specific to this patch. Bitmapsets\n> >>> typically aren't\n> >>> very large, I doubt that it's a significant proportion of the memory\n> >>> usage. Adding refcounts or such would likely add more overhead than\n> >>> it'd save,\n> >>> both in time and memory.\n> >\n> > I'd already clashed with Tom on copying the required_relids field and\n> > voluntarily made unnecessary copies in the project [1].\n> > And ... stuck into huge memory consumption. The reason was in\n> > Bitmapsets:\n> > When we have 1E3-1E4 partitions and try to reparameterize a join, one\n> > bitmapset field can have a size of about 1kB. Having bitmapset\n> > referencing Relation with a large index value, we had a lot of (for\n> > example, 1E4 * 1kB) copies on each reparametrization of such a field.\n> > Alexander Pyhalov should remember that case.\n>\n> Yes. If it matters, this happened during reparametrization when 2\n> partitioned tables with 1000 partitions each were joined. Then\n> asymmetric pw join managed to eat lots of memory for bitmapsets (by\n> lots of memory I mean all available on the test VM).\n\nI did some analysis of memory consumption by bitmapsets in such cases.\n[1] contains slides with the result of this analysis. The slides are\ncrude and quite WIP. But they will give some idea.\n\n[1] https://docs.google.com/presentation/d/1S9BiAADhX-Fv9tDbx5R5Izq4blAofhZMhHcO1c-wzfI/edit?usp=sharing\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 8 Dec 2023 18:57:54 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "Hi, Ashutosh!\n\nOn Fri, Dec 8, 2023 at 3:28 PM Ashutosh Bapat\n<[email protected]> wrote:\n> I did some analysis of memory consumption by bitmapsets in such cases.\n> [1] contains slides with the result of this analysis. The slides are\n> crude and quite WIP. But they will give some idea.\n>\n> [1] https://docs.google.com/presentation/d/1S9BiAADhX-Fv9tDbx5R5Izq4blAofhZMhHcO1c-wzfI/edit?usp=sharing\n\nThank you for sharing your analysis. I understand that usage of a\nplain bitmap becomes a problem with a large number of partitions. But\nI wonder what does \"post proposed fixes\" mean? Is it the fixes posted\nin [1]. If so it's very surprising for me they are reducing the\nmemory footprint size.\n\nLinks.\n1. https://www.postgresql.org/message-id/CAPpHfdtLgCryACcrmLv=Koq9rAB3=tr5y9D84dGgvUhSCvjzjg@mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 8 Dec 2023 19:54:38 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 3:13 PM Alexander Pyhalov <[email protected]>\nwrote:\n\n> Andrei Lepikhov писал(а) 2023-12-08 07:37:\n> > I'd already clashed with Tom on copying the required_relids field and\n> > voluntarily made unnecessary copies in the project [1].\n> > And ... stuck into huge memory consumption. The reason was in\n> > Bitmapsets:\n> > When we have 1E3-1E4 partitions and try to reparameterize a join, one\n> > bitmapset field can have a size of about 1kB. Having bitmapset\n> > referencing Relation with a large index value, we had a lot of (for\n> > example, 1E4 * 1kB) copies on each reparametrization of such a field.\n> > Alexander Pyhalov should remember that case.\n>\n> Yes. If it matters, this happened during reparametrization when 2\n> partitioned tables with 1000 partitions each were joined. Then\n> asymmetric pw join managed to eat lots of memory for bitmapsets (by\n> lots of memory I mean all available on the test VM).\n\n\nBy reparametrization did you mean the work done in\nreparameterize_path_by_child()? If so maybe you'd be interested in the\npatch [1] which postpones reparameterization of paths until createplan.c\nand thus can help avoid unnecessary reparametrization work.\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs48PBwe1YadzgKGW_ES%3DV9BZhq00BaZTOTM6Oye8n_cDNg%40mail.gmail.com\n\nThanks\nRichard\n\nOn Fri, Dec 8, 2023 at 3:13 PM Alexander Pyhalov <[email protected]> wrote:Andrei Lepikhov писал(а) 2023-12-08 07:37:\n> I'd already clashed with Tom on copying the required_relids field and \n> voluntarily made unnecessary copies in the project [1].\n> And ... stuck into huge memory consumption. The reason was in \n> Bitmapsets:\n> When we have 1E3-1E4 partitions and try to reparameterize a join, one \n> bitmapset field can have a size of about 1kB. Having bitmapset \n> referencing Relation with a large index value, we had a lot of (for \n> example, 1E4 * 1kB) copies on each reparametrization of such a field. 
\n> Alexander Pyhalov should remember that case.\n\nYes. If it matters, this happened during reparametrization when 2 \npartitioned tables with 1000 partitions each were joined. Then \nasymmetric pw join managed to eat lots of memory for bitmapsets (by \nlots of memory I mean all available on the test VM).By reparametrization did you mean the work done inreparameterize_path_by_child()? If so maybe you'd be interested in thepatch [1] which postpones reparameterization of paths until createplan.cand thus can help avoid unnecessary reparametrization work.[1] https://www.postgresql.org/message-id/CAMbWs48PBwe1YadzgKGW_ES%3DV9BZhq00BaZTOTM6Oye8n_cDNg%40mail.gmail.comThanksRichard",
"msg_date": "Mon, 11 Dec 2023 10:31:14 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On 11/12/2023 09:31, Richard Guo wrote:\n> On Fri, Dec 8, 2023 at 3:13 PM Alexander Pyhalov \n> <[email protected] <mailto:[email protected]>> wrote:\n> Andrei Lepikhov писал(а) 2023-12-08 07:37:\n> > I'd already clashed with Tom on copying the required_relids field\n> and\n> > voluntarily made unnecessary copies in the project [1].\n> > And ... stuck into huge memory consumption. The reason was in\n> > Bitmapsets:\n> > When we have 1E3-1E4 partitions and try to reparameterize a join,\n> one\n> > bitmapset field can have a size of about 1kB. Having bitmapset\n> > referencing Relation with a large index value, we had a lot of (for\n> > example, 1E4 * 1kB) copies on each reparametrization of such a\n> field.\n> > Alexander Pyhalov should remember that case.\n> Yes. If it matters, this happened during reparametrization when 2\n> partitioned tables with 1000 partitions each were joined. Then\n> asymmetric pw join managed to eat lots of memory for bitmapsets (by\n> lots of memory I mean all available on the test VM).\n> By reparametrization did you mean the work done in\n> reparameterize_path_by_child()? If so maybe you'd be interested in the\n> patch [1] which postpones reparameterization of paths until createplan.c\n> and thus can help avoid unnecessary reparametrization work.\n\nYeah, I have discovered it already. It is a promising solution and only \nneeds a bit more review. But here, I embraced some corner cases with the \nidea that we may not see other cases right now. And also, sometimes the \nBitmapset field is significant - it is not a corner case.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 11 Dec 2023 09:40:55 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 11:24 PM Alexander Korotkov <[email protected]> wrote:\n>\n> Hi, Ashutosh!\n>\n> On Fri, Dec 8, 2023 at 3:28 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> > I did some analysis of memory consumption by bitmapsets in such cases.\n> > [1] contains slides with the result of this analysis. The slides are\n> > crude and quite WIP. But they will give some idea.\n> >\n> > [1] https://docs.google.com/presentation/d/1S9BiAADhX-Fv9tDbx5R5Izq4blAofhZMhHcO1c-wzfI/edit?usp=sharing\n>\n> Thank you for sharing your analysis. I understand that usage of a\n> plain bitmap becomes a problem with a large number of partitions. But\n> I wonder what does \"post proposed fixes\" mean? Is it the fixes posted\n> in [1]. If so it's very surprising for me they are reducing the\n> memory footprint size.\n\nNo. These are fixes in various threads all listed together in [1]. I\nhad started investigating memory consumption by Bitmapsets around the\nsame time. The slides are result of that investigation. I have updated\nslides with this reference.\n\n[1] https://www.postgresql.org/message-id/CAExHW5s_KwB0Rb9L3TuRJxsvO5UCtEpdskkAeMb5X1EtssMjgg@mail.gmail.com\n\nThey reduce the memory footprint by Bitmapset because they reduce the\nobjects that contain the bitmapsets, thus reducing the total number of\nbitmapsets produced.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 11 Dec 2023 18:55:34 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "Hi!\n\nOn Mon, Dec 11, 2023 at 3:25 PM Ashutosh Bapat <[email protected]>\nwrote:\n\n> On Fri, Dec 8, 2023 at 11:24 PM Alexander Korotkov <[email protected]>\n> wrote:\n> > On Fri, Dec 8, 2023 at 3:28 PM Ashutosh Bapat\n> > <[email protected]> wrote:\n> > > I did some analysis of memory consumption by bitmapsets in such cases.\n> > > [1] contains slides with the result of this analysis. The slides are\n> > > crude and quite WIP. But they will give some idea.\n> > >\n> > > [1]\n> https://docs.google.com/presentation/d/1S9BiAADhX-Fv9tDbx5R5Izq4blAofhZMhHcO1c-wzfI/edit?usp=sharing\n> >\n> > Thank you for sharing your analysis. I understand that usage of a\n> > plain bitmap becomes a problem with a large number of partitions. But\n> > I wonder what does \"post proposed fixes\" mean? Is it the fixes posted\n> > in [1]. If so it's very surprising for me they are reducing the\n> > memory footprint size.\n>\n> No. These are fixes in various threads all listed together in [1]. I\n> had started investigating memory consumption by Bitmapsets around the\n> same time. The slides are result of that investigation. I have updated\n> slides with this reference.\n>\n> [1]\n> https://www.postgresql.org/message-id/CAExHW5s_KwB0Rb9L3TuRJxsvO5UCtEpdskkAeMb5X1EtssMjgg@mail.gmail.com\n>\n> They reduce the memory footprint by Bitmapset because they reduce the\n> objects that contain the bitmapsets, thus reducing the total number of\n> bitmapsets produced.\n>\n\nThank you Ashutosh for your work on this matter. With a large number of\npartitions, it definitely makes sense to reduce both Bitmapset's size as\nwell as the number of Bitmapsets.\n\nI've checked the patchset [1] with your test suite to check the memory\nconsumption. 
The results are in the table below.\n\nquery | no patch | patch | no self-join\nremoval\n----------------------------------------------------------------------------------\n2-way join, non partitioned | 14792 | 15208 | 29152\n2-way join, no partitionwise join | 19519576 | 19519576 | 19519576\n2-way join, partitionwise join | 40851968 | 40851968 | 40851968\n3-way join, non partitioned | 20632 | 21784 | 79376\n3-way join, no partitionwise join | 45227224 | 45227224 | 45227224\n3-way join, partitionwise join | 151655144 | 151655144 | 151655144\n4-way join, non partitioned | 25816 | 27736 | 209128\n4-way join, no partitionwise join | 83540712 | 83540712 | 83540712\n4-way join, partitionwise join | 463960088 | 463960088 | 463960088\n5-way join, non partitioned | 31000 | 33720 | 562552\n5-way join, no partitionwise join | 149284376 | 149284376 | 149284376\n5-way join, partitionwise join | 1663896608 | 1663896608 | 1663896608\n\n\nThe most noticeable thing for me is that self-join removal doesn't work\nwith partitioned tables. I think this is the direction for future work on\nthis subject. In non-partitioned cases, patchset gives a small memory\noverhead. However, the memory consumption is still much less than it is\nwithout the self-join removal. So, removing the join still lowers memory\nconsumption even if it copies some Bitmapsets. 
Given that patchset [1] is\nrequired for the correctness of memory manipulations in Bitmapsets during\njoin removals, I'm going to push it if there are no objections.\n\nLinks.\n1.\nhttps://www.postgresql.org/message-id/CAPpHfdtLgCryACcrmLv%3DKoq9rAB3%3Dtr5y9D84dGgvUhSCvjzjg%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sun, 24 Dec 2023 14:02:45 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "Hi Alexander,\n\nOn Sun, Dec 24, 2023 at 5:32 PM Alexander Korotkov <[email protected]> wrote:\n>\n>\n> Thank you Ashutosh for your work on this matter. With a large number of partitions, it definitely makes sense to reduce both Bitmapset's size as well as the number of Bitmapsets.\n>\n> I've checked the patchset [1] with your test suite to check the memory consumption. The results are in the table below.\n>\n> query | no patch | patch | no self-join removal\n> ----------------------------------------------------------------------------------\n> 2-way join, non partitioned | 14792 | 15208 | 29152\n> 2-way join, no partitionwise join | 19519576 | 19519576 | 19519576\n> 2-way join, partitionwise join | 40851968 | 40851968 | 40851968\n> 3-way join, non partitioned | 20632 | 21784 | 79376\n> 3-way join, no partitionwise join | 45227224 | 45227224 | 45227224\n> 3-way join, partitionwise join | 151655144 | 151655144 | 151655144\n> 4-way join, non partitioned | 25816 | 27736 | 209128\n> 4-way join, no partitionwise join | 83540712 | 83540712 | 83540712\n> 4-way join, partitionwise join | 463960088 | 463960088 | 463960088\n> 5-way join, non partitioned | 31000 | 33720 | 562552\n> 5-way join, no partitionwise join | 149284376 | 149284376 | 149284376\n> 5-way join, partitionwise join | 1663896608 | 1663896608 | 1663896608\n>\n>\n> The most noticeable thing for me is that self-join removal doesn't work with partitioned tables. I think this is the direction for future work on this subject. In non-partitioned cases, patchset gives a small memory overhead. However, the memory consumption is still much less than it is without the self-join removal. So, removing the join still lowers memory consumption even if it copies some Bitmapsets. Given that patchset [1] is required for the correctness of memory manipulations in Bitmapsets during join removals, I'm going to push it if there are no objections.\n\nI am missing the link between this work and the self join work. 
Can\nyou please provide me relevant pointers?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 25 Dec 2023 06:26:20 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Mon, Dec 25, 2023 at 2:56 AM Ashutosh Bapat\n<[email protected]> wrote:\n> On Sun, Dec 24, 2023 at 5:32 PM Alexander Korotkov <[email protected]> wrote:\n> >\n> >\n> > Thank you Ashutosh for your work on this matter. With a large number of partitions, it definitely makes sense to reduce both Bitmapset's size as well as the number of Bitmapsets.\n> >\n> > I've checked the patchset [1] with your test suite to check the memory consumption. The results are in the table below.\n> >\n> > query | no patch | patch | no self-join removal\n> > ----------------------------------------------------------------------------------\n> > 2-way join, non partitioned | 14792 | 15208 | 29152\n> > 2-way join, no partitionwise join | 19519576 | 19519576 | 19519576\n> > 2-way join, partitionwise join | 40851968 | 40851968 | 40851968\n> > 3-way join, non partitioned | 20632 | 21784 | 79376\n> > 3-way join, no partitionwise join | 45227224 | 45227224 | 45227224\n> > 3-way join, partitionwise join | 151655144 | 151655144 | 151655144\n> > 4-way join, non partitioned | 25816 | 27736 | 209128\n> > 4-way join, no partitionwise join | 83540712 | 83540712 | 83540712\n> > 4-way join, partitionwise join | 463960088 | 463960088 | 463960088\n> > 5-way join, non partitioned | 31000 | 33720 | 562552\n> > 5-way join, no partitionwise join | 149284376 | 149284376 | 149284376\n> > 5-way join, partitionwise join | 1663896608 | 1663896608 | 1663896608\n> >\n> >\n> > The most noticeable thing for me is that self-join removal doesn't work with partitioned tables. I think this is the direction for future work on this subject. In non-partitioned cases, patchset gives a small memory overhead. However, the memory consumption is still much less than it is without the self-join removal. So, removing the join still lowers memory consumption even if it copies some Bitmapsets. 
Given that patchset [1] is required for the correctness of memory manipulations in Bitmapsets during join removals, I'm going to push it if there are no objections.\n>\n> I am missing the link between this work and the self join work. Can\n> you please provide me relevant pointers?\n\nThis thread was started from the bug in self-join removal [1]. The\nfix under consideration [2] makes replace_relid() leave the argument\nunmodified. I've used your test set [3] to check the memory overhead\nof this solution.\n\nLinks.\n1. https://www.postgresql.org/message-id/CAMbWs4_wJthNtYBL%2BSsebpgF-5L2r5zFFk6xYbS0A78GKOTFHw%40mail.gmail.com\n2. https://www.postgresql.org/message-id/CAPpHfdtLgCryACcrmLv%3DKoq9rAB3%3Dtr5y9D84dGgvUhSCvjzjg%40mail.gmail.com\n3. https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph%2BPvo5dNpdrVCsBgXEzDQ%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 25 Dec 2023 03:04:36 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
},
{
"msg_contents": "On Sun, Dec 24, 2023 at 2:02 PM Alexander Korotkov <[email protected]> wrote:\n> The most noticeable thing for me is that self-join removal doesn't work with partitioned tables. I think this is the direction for future work on this subject. In non-partitioned cases, patchset gives a small memory overhead. However, the memory consumption is still much less than it is without the self-join removal. So, removing the join still lowers memory consumption even if it copies some Bitmapsets. Given that patchset [1] is required for the correctness of memory manipulations in Bitmapsets during join removals, I'm going to push it if there are no objections.\n\nPushed!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 27 Dec 2023 04:00:27 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure on 'list_member_ptr(rel->joininfo, restrictinfo)'"
}
] |
[
{
"msg_contents": "Hi Team,\n\nGood day! I'm not able to launch Postgres PGAdmin 4 in my MAC OS, I have\ntried with both versions 15 and 16, but nothing is working. It says that it\nhas quit unexpectedly (screenshot attached). I have attached the bug report\nas well along with my system specifications below. Kindly help me with this\nASAP.\n\n-------------------------------------\nTranslated Report (Full Report Below)\n-------------------------------------\n\nProcess: pgAdmin 4 [3505]\nPath: /Library/PostgreSQL/16/pgAdmin\n4.app/Contents/MacOS/pgAdmin 4\nIdentifier: org.pgadmin.pgadmin4\nVersion: 7.8 (4280.88)\nCode Type: X86-64 (Translated)\nParent Process: launchd [1]\nUser ID: 501\n\nDate/Time: 2023-11-14 11:47:14.7065 -0500\nOS Version: macOS 14.1.1 (23B81)\nReport Version: 12\nAnonymous UUID: A4518538-B2A9-0B93-C540-A9DCCCD929EF\n\nSleep/Wake UUID: E31F7EEF-42B9-4E61-88DC-9C0571A2F4E3\n\nTime Awake Since Boot: 2800 seconds\nTime Since Wake: 920 seconds\n\nSystem Integrity Protection: enabled\n\nNotes:\nPC register does not match crashing frame (0x0 vs 0x100812560)\n\nCrashed Thread: 0 CrBrowserMain Dispatch queue:\ncom.apple.main-thread\n\nException Type: EXC_BAD_ACCESS (SIGSEGV)\nException Codes: KERN_INVALID_ADDRESS at 0x0000000000000020\nException Codes: 0x0000000000000001, 0x0000000000000020\n\nTermination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11\nTerminating Process: exc handler [3505]\n\nVM Region Info: 0x20 is not in any region. Bytes before following region:\n140723014549472\n REGION TYPE START - END [ VSIZE] PRT/MAX\nSHRMOD REGION DETAIL\n UNUSED SPACE AT START\n--->\n mapped file 7ffca14b4000-7ffcc6b5c000 [598.7M] r-x/r-x\nSM=COW ...t_id=cccf3f63\n\nError Formulating Crash Report:\nPC register does not match crashing frame (0x0 vs 0x100812560)\n\nThread 0 Crashed:: CrBrowserMain Dispatch queue: com.apple.main-thread\n0 ??? 
0x100812560 ???\n1 libsystem_platform.dylib 0x7ff80ce5a393 _sigtramp + 51\n2 nwjs Framework 0x11eb31522 0x1151ad000 + 160974114\n3 nwjs Framework 0x11ed4e1f0 0x1151ad000 + 163189232\n4 nwjs Framework 0x11edbe0db 0x1151ad000 + 163647707\n5 nwjs Framework 0x11b6ad51a 0x1151ad000 + 105907482\n6 nwjs Framework 0x11b6b58d9 0x1151ad000 + 105941209\n7 nwjs Framework 0x11c6444bd 0x1151ad000 + 122254525\n8 nwjs Framework 0x11c642c46 0x1151ad000 + 122248262\n9 nwjs Framework 0x11ed4fde5 0x1151ad000 + 163196389\n10 nwjs Framework 0x11edcac97 0x1151ad000 + 163699863\n11 nwjs Framework 0x11eaffcd5 0x1151ad000 + 160771285\n12 nwjs Framework 0x11eafef17 0x1151ad000 + 160767767\n13 nwjs Framework 0x11c1a7259 0x1151ad000 + 117416537\n14 nwjs Framework 0x118619cb0 0x1151ad000 + 54971568\n15 nwjs Framework 0x11861c494 0x1151ad000 + 54981780\n16 nwjs Framework 0x11861c927 0x1151ad000 + 54982951\n17 nwjs Framework 0x118618a63 0x1151ad000 + 54966883\n18 nwjs Framework 0x1181bb179 0x1151ad000 + 50389369\n19 nwjs Framework 0x1186191c6 0x1151ad000 + 54968774\n20 nwjs Framework 0x11a46bf90 0x1151ad000 + 86765456\n21 nwjs Framework 0x11a471131 0x1151ad000 + 86786353\n22 nwjs Framework 0x11a46d6d0 0x1151ad000 + 86771408\n23 nwjs Framework 0x11a55f1da 0x1151ad000 + 87761370\n24 nwjs Framework 0x11a46f799 0x1151ad000 + 86779801\n25 nwjs Framework 0x11990f4d2 0x1151ad000 + 74851538\n26 nwjs Framework 0x119926a6e 0x1151ad000 + 74947182\n27 nwjs Framework 0x1199264a9 0x1151ad000 + 74945705\n28 nwjs Framework 0x1199270d5 0x1151ad000 + 74948821\n29 nwjs Framework 0x119980ec3 0x1151ad000 + 75316931\n30 nwjs Framework 0x11997da22 0x1151ad000 + 75303458\n31 nwjs Framework 0x1199806df 0x1151ad000 + 75314911\n32 CoreFoundation 0x7ff80cf07a16\n__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17\n33 CoreFoundation 0x7ff80cf079b9 __CFRunLoopDoSource0 +\n157\n34 CoreFoundation 0x7ff80cf07788 __CFRunLoopDoSources0\n+ 215\n35 CoreFoundation 0x7ff80cf063f8 __CFRunLoopRun + 919\n36 CoreFoundation 
0x7ff80cf05a99 CFRunLoopRunSpecific +\n557\n37 HIToolbox 0x7ff817c6d9d9\nRunCurrentEventLoopInMode + 292\n38 HIToolbox 0x7ff817c6d7e6 ReceiveNextEventCommon\n+ 665\n39 HIToolbox 0x7ff817c6d531\n_BlockUntilNextEventMatchingListInModeWithFilter + 66\n40 AppKit 0x7ff810477885 _DPSNextEvent + 880\n41 AppKit 0x7ff810d6b348\n-[NSApplication(NSEventRouting)\n_nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1304\n42 nwjs Framework 0x1193b83f0 0x1151ad000 + 69252080\n43 nwjs Framework 0x11997da22 0x1151ad000 + 75303458\n44 nwjs Framework 0x1193b8369 0x1151ad000 + 69251945\n45 AppKit 0x7ff810468dfa -[NSApplication run] +\n603\n46 nwjs Framework 0x1199814ac 0x1151ad000 + 75318444\n47 nwjs Framework 0x11998023c 0x1151ad000 + 75313724\n48 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n49 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n50 nwjs Framework 0x1177fc301 0x1151ad000 + 40170241\n51 nwjs Framework 0x1177fdc92 0x1151ad000 + 40176786\n52 nwjs Framework 0x1177f9a5a 0x1151ad000 + 40159834\n53 nwjs Framework 0x118d351c4 0x1151ad000 + 62423492\n54 nwjs Framework 0x118d36389 0x1151ad000 + 62428041\n55 nwjs Framework 0x118d3619d 0x1151ad000 + 62427549\n56 nwjs Framework 0x118d34867 0x1151ad000 + 62421095\n57 nwjs Framework 0x118d34b03 0x1151ad000 + 62421763\n58 nwjs Framework 0x1151b0930 ChromeMain + 560\n59 pgAdmin 4 0x10065187e main + 286\n60 dyld 0x200a803a6 start + 1942\n\nThread 1:: com.apple.rosetta.exceptionserver\n0 runtime 0x7ff7ffc58294 0x7ff7ffc54000 + 17044\n\nThread 2:: StackSamplingProfiler\n0 ??? 
0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal + 84\n3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n653\n4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n7 nwjs Framework 0x1198c3f98 0x1151ad000 + 74543000\n8 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n9 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n10 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n11 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n12 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n13 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n14 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 3:\n0 runtime 0x7ff7ffc7694c 0x7ff7ffc54000 + 141644\n\nThread 4:\n0 runtime 0x7ff7ffc7694c 0x7ff7ffc54000 + 141644\n\nThread 5:\n0 runtime 0x7ff7ffc7694c 0x7ff7ffc54000 + 141644\n\nThread 6:\n0 ??? 0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal + 84\n3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n653\n4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n5 nwjs Framework 0x11b85d23d 0x1151ad000 + 107676221\n6 nwjs Framework 0x11b85e656 0x1151ad000 + 107681366\n7 nwjs Framework 0x11b85e321 0x1151ad000 + 107680545\n8 nwjs Framework 0x11b860318 0x1151ad000 + 107688728\n9 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n10 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 7:: HangWatcher\n0 ??? 
0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal + 84\n3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n653\n4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n7 nwjs Framework 0x11993d944 0x1151ad000 + 75041092\n8 nwjs Framework 0x11993db03 0x1151ad000 + 75041539\n9 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n10 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n11 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 8:: ThreadPoolServiceThread\n0 ??? 0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdf7506 kevent64 + 10\n2 nwjs Framework 0x119973e31 0x1151ad000 + 75263537\n3 nwjs Framework 0x119973cee 0x1151ad000 + 75263214\n4 nwjs Framework 0x119973c65 0x1151ad000 + 75263077\n5 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n6 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n7 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n8 nwjs Framework 0x11993043d 0x1151ad000 + 74986557\n9 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n10 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n11 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n12 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 9:: ThreadPoolForegroundWorker\n0 ??? 
0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal + 84\n3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n653\n4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n9 nwjs Framework 0x11993b6ad 0x1151ad000 + 75032237\n10 nwjs Framework 0x11993b5ab 0x1151ad000 + 75031979\n11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 10:: ThreadPoolBackgroundWorker\n0 ??? 0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal + 84\n3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n653\n4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n9 nwjs Framework 0x11993b61d 0x1151ad000 + 75032093\n10 nwjs Framework 0x11993b5d0 0x1151ad000 + 75032016\n11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 11:: ThreadPoolBackgroundWorker\n0 ??? 
0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal + 84\n3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n653\n4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n9 nwjs Framework 0x11993b61d 0x1151ad000 + 75032093\n10 nwjs Framework 0x11993b5d0 0x1151ad000 + 75032016\n11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 12:: ThreadPoolForegroundWorker\n0 ??? 0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal + 84\n3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n653\n4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n9 nwjs Framework 0x11993b6ad 0x1151ad000 + 75032237\n10 nwjs Framework 0x11993b5ab 0x1151ad000 + 75031979\n11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 13:: Chrome_IOThread\n0 ??? 
0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdf7506 kevent64 + 10\n2 nwjs Framework 0x119973e31 0x1151ad000 + 75263537\n3 nwjs Framework 0x119973cee 0x1151ad000 + 75263214\n4 nwjs Framework 0x119973c65 0x1151ad000 + 75263077\n5 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n6 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n7 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n8 nwjs Framework 0x1177fef30 0x1151ad000 + 40181552\n9 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n10 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n11 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n12 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 14:: MemoryInfra\n0 ??? 0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal + 84\n3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n653\n4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n7 nwjs Framework 0x1198c3f98 0x1151ad000 + 74543000\n8 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n9 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n10 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n11 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n12 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n13 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n14 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 15:: NetworkConfigWatcher\n0 ??? 
0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal + 84\n3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n653\n4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n5 CoreFoundation 0x7ff80cf07b49\n__CFRunLoopServiceMachPort + 143\n6 CoreFoundation 0x7ff80cf065bc __CFRunLoopRun + 1371\n7 CoreFoundation 0x7ff80cf05a99 CFRunLoopRunSpecific +\n557\n8 Foundation 0x7ff80de01551 -[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:] + 216\n9 nwjs Framework 0x11998126e 0x1151ad000 + 75317870\n10 nwjs Framework 0x11998023c 0x1151ad000 + 75313724\n11 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n12 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n13 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n14 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n15 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n16 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n17 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 16:: CrShutdownDetector\n0 ??? 0x7ff89d2e2a78 ???\n1 libsystem_kernel.dylib 0x7ff80cdee4d2 read + 10\n2 nwjs Framework 0x11979c9de 0x1151ad000 + 73333214\n3 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n4 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n5 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n\nThread 17:: NetworkConfigWatcher\n0 ??? 
0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   Foundation                    0x7ff80de01551 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216
9   nwjs Framework                0x11998126e 0x1151ad000 + 75317870
10  nwjs Framework                0x11998023c 0x1151ad000 + 75313724
11  nwjs Framework                0x119927459 0x1151ad000 + 74949721
12  nwjs Framework                0x1198ee15f 0x1151ad000 + 74715487
13  nwjs Framework                0x119942c18 0x1151ad000 + 75062296
14  nwjs Framework                0x119942d6b 0x1151ad000 + 75062635
15  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
16  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
17  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 18:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702 0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c 0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a 0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4 0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad 0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab 0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 19:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702 0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c 0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a 0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4 0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad 0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab 0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 20:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702 0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c 0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a 0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4 0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad 0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab 0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 21:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702 0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c 0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a 0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4 0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad 0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab 0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 22:: NetworkNotificationThreadMac
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   Foundation                    0x7ff80de01551 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216
9   nwjs Framework                0x11998126e 0x1151ad000 + 75317870
10  nwjs Framework                0x11998023c 0x1151ad000 + 75313724
11  nwjs Framework                0x119927459 0x1151ad000 + 74949721
12  nwjs Framework                0x1198ee15f 0x1151ad000 + 74715487
13  nwjs Framework                0x119942c18 0x1151ad000 + 75062296
14  nwjs Framework                0x119942d6b 0x1151ad000 + 75062635
15  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
16  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
17  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 23:: CompositorTileWorker1
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdf060e __psynch_cvwait + 10
2   libsystem_pthread.dylib       0x7ff80ce2d76b _pthread_cond_wait + 1211
3   nwjs Framework                0x1199577cb 0x1151ad000 + 75147211
4   nwjs Framework                0x11ae82a55 0x1151ad000 + 97344085
5   nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
6   libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
7   libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 24:: ThreadPoolSingleThreadForegroundBlocking0
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702 0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c 0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a 0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4 0x1151ad000 + 75033284
9   nwjs Framework                0x11993b70d 0x1151ad000 + 75032333
10  nwjs Framework                0x11993b5da 0x1151ad000 + 75032026
11  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 25:: ThreadPoolSingleThreadSharedForeground1
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702 0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c 0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a 0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4 0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6dd 0x1151ad000 + 75032285
10  nwjs Framework                0x11993b5e4 0x1151ad000 + 75032036
11  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 26:: NetworkConfigWatcher
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   Foundation                    0x7ff80de01551 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216
9   nwjs Framework                0x11998126e 0x1151ad000 + 75317870
10  nwjs Framework                0x11998023c 0x1151ad000 + 75313724
11  nwjs Framework                0x119927459 0x1151ad000 + 74949721
12  nwjs Framework                0x1198ee15f 0x1151ad000 + 74715487
13  nwjs Framework                0x119942c18 0x1151ad000 + 75062296
14  nwjs Framework                0x119942d6b 0x1151ad000 + 75062635
15  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
16  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
17  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 27:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702 0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c 0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a 0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4 0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad 0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab 0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 28:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702 0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c 0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a 0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4 0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad 0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab 0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 29:: ThreadPoolSingleThreadSharedBackgroundBlocking2
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702 0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c 0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a 0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4 0x1151ad000 + 75033284
9   nwjs Framework                0x11993b64d 0x1151ad000 + 75032141
10  nwjs Framework                0x11993b5f8 0x1151ad000 + 75032056
11  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 30:: ThreadPoolSingleThreadSharedForegroundBlocking3
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702 0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c 0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a 0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4 0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6dd 0x1151ad000 + 75032285
10  nwjs Framework                0x11993b5e4 0x1151ad000 + 75032036
11  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 31:: CacheThread_BlockFile
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdf7506 kevent64 + 10
2   nwjs Framework                0x119973e31 0x1151ad000 + 75263537
3   nwjs Framework                0x119973cee 0x1151ad000 + 75263214
4   nwjs Framework                0x119973c65 0x1151ad000 + 75263077
5   nwjs Framework                0x119927459 0x1151ad000 + 74949721
6   nwjs Framework                0x1198ee15f 0x1151ad000 + 74715487
7   nwjs Framework                0x119942c18 0x1151ad000 + 75062296
8   nwjs Framework                0x119942d6b 0x1151ad000 + 75062635
9   nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
10  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
11  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 32:: com.apple.NSEventThread
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   AppKit                        0x7ff8105d4a00 _NSEventThread + 122
9   libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
10  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 33:: Service Discovery Thread
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   Foundation                    0x7ff80de01551 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216
9   nwjs Framework                0x11998126e 0x1151ad000 + 75317870
10  nwjs Framework                0x11998023c 0x1151ad000 + 75313724
11  nwjs Framework                0x119927459 0x1151ad000 + 74949721
12  nwjs Framework                0x1198ee15f 0x1151ad000 + 74715487
13  nwjs Framework                0x119942c18 0x1151ad000 + 75062296
14  nwjs Framework                0x119942d6b 0x1151ad000 + 75062635
15  nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
16  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
17  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 34:: com.apple.CFSocket.private
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdf694a __select + 10
2   CoreFoundation                0x7ff80cf2f6af __CFSocketManager + 637
3   libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
4   libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 35:
0   runtime                       0x7ff7ffc7694c 0x7ff7ffc54000 + 141644

Thread 36:
0   runtime                       0x7ff7ffc7694c 0x7ff7ffc54000 + 141644

Thread 37:: org.libusb.device-hotplug
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   CoreFoundation                0x7ff80cf80889 CFRunLoopRun + 40
9   nwjs Framework                0x11b7f4feb 0x1151ad000 + 107249643
10  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
11  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 38:: UsbEventHandler
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdf4876 poll + 10
2   nwjs Framework                0x11b7f1cb7 0x1151ad000 + 107236535
3   nwjs Framework                0x11b7f19db 0x1151ad000 + 107235803
4   nwjs Framework                0x11b7f1e40 0x1151ad000 + 107236928
5   nwjs Framework                0x11b7e35cf 0x1151ad000 + 107177423
6   nwjs Framework                0x119957ed9 0x1151ad000 + 75149017
7   libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
8   libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15


Thread 0 crashed with X86 Thread State (64-bit):
  rax: 0x00007fd519066801  rbx: 0x0000000000000008  rcx: 0x0000000120cdf478  rdx: 0x0000000000000400
  rdi: 0x0000000000000018  rsi: 0x000000000000031c  rbp: 0x00000003051ce690  rsp: 0x00000003051ce690
   r8: 0xc5bdffd50ea7b1b7   r9: 0x0000000000000400  r10: 0x0000000000000000  r11: 0x00007ff810494646
  r12: 0x00007fd50ea247d8  r13: 0x00007fd50ea24740  r14: 0x0000000000000008  r15: 0x00000003051ce720
  rip: <unavailable>  rfl: 0x0000000000000206
 tmp0: 0x0000000000000001 tmp1: 0x000000011ed4e1e0 tmp2: 0x000000011eb31522


Binary Images:
       0x200a7a000 -        0x200b19fff dyld (*) <d5406f23-6967-39c4-beb5-6ae3293c7753> /usr/lib/dyld
       0x113a08000 -        0x113a17fff libobjc-trampolines.dylib (*) <7e101877-a6ff-3331-99a3-4222cb254447> /usr/lib/libobjc-trampolines.dylib
       0x1151ad000 -        0x120616fff io.nwjs.nwjs.framework (115.0.5790.98) <4c4c447b-5555-3144-a1ec-62791bcf166d> /Library/PostgreSQL/16/pgAdmin 4.app/Contents/Frameworks/nwjs Framework.framework/Versions/115.0.5790.98/nwjs Framework
       0x108c52000 -        0x108c59fff com.apple.AutomaticAssessmentConfiguration (1.0) <b30252ae-24c6-3839-b779-661ef263b52d> /System/Library/Frameworks/AutomaticAssessmentConfiguration.framework/Versions/A/AutomaticAssessmentConfiguration
       0x109141000 -        0x1092e4fff libffmpeg.dylib (*) <4c4c4416-5555-3144-a164-70bbf0436f17> /Library/PostgreSQL/16/pgAdmin 4.app/Contents/Frameworks/nwjs Framework.framework/Versions/115.0.5790.98/libffmpeg.dylib
    0x7ff7ffc54000 -     0x7ff7ffc83fff runtime (*) <2c5acb8c-fbaf-31ab-aeb3-90905c3fa905> /usr/libexec/rosetta/runtime
       0x1086d8000 -        0x10872bfff libRosettaRuntime (*) <a61ec9e9-1174-3dc6-9cdb-0d31811f4850> /Library/Apple/*/libRosettaRuntime
       0x100651000 -        0x10067bfff org.pgadmin.pgadmin4 (7.8) <4c4c4402-5555-3144-a1c7-07729cda43c0> /Library/PostgreSQL/16/pgAdmin 4.app/Contents/MacOS/pgAdmin 4
               0x0 - 0xffffffffffffffff ???
(*) <00000000-0000-0000-0000-000000000000> ???
    0x7ff80ce57000 -     0x7ff80ce60fff libsystem_platform.dylib (*) <c94f952c-2787-30d2-ab77-ee474abd88d6> /usr/lib/system/libsystem_platform.dylib
    0x7ff80ce8c000 -     0x7ff80d324ffc com.apple.CoreFoundation (6.9) <4d842118-bb65-3f01-9087-ff1a2e3ab0d5> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
    0x7ff817c3d000 -     0x7ff817ed8ff4 com.apple.HIToolbox (2.1.1) <06bf0872-3b34-3c7b-ad5b-7a447d793405> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox
    0x7ff810439000 -     0x7ff81183effb com.apple.AppKit (6.9) <27fed5dd-d148-3238-bc95-1dac5dd57fa1> /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit
    0x7ff80cdec000 -     0x7ff80ce26ff7 libsystem_kernel.dylib (*) <4df0d732-7fc4-3200-8176-f1804c63f2c8> /usr/lib/system/libsystem_kernel.dylib
    0x7ff80ce27000 -     0x7ff80ce32fff libsystem_pthread.dylib (*) <c64722b0-e96a-3fa5-96c3-b4beaf0c494a> /usr/lib/system/libsystem_pthread.dylib
    0x7ff80dda5000 -     0x7ff80e9e3ffb com.apple.Foundation (6.9) <581d66fd-7cef-3a8c-8647-1d962624703b> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation

External Modification Summary:
  Calls made by other processes targeting this process:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0
  Calls made by this process:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0
  Calls made by all processes on this machine:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0


-----------
Full Report
-----------

{"app_name":"pgAdmin 4","timestamp":"2023-11-14 11:47:18.00 -0500","app_version":"7.8","slice_uuid":"4c4c4402-5555-3144-a1c7-07729cda43c0","build_version":"4280.88","platform":1,"bundleID":"org.pgadmin.pgadmin4","share_with_app_devs":1,"is_first_party":0,"bug_type":"309","os_version":"macOS 14.1.1 (23B81)","roots_installed":0,"name":"pgAdmin 4","incident_id":"1AF5B51F-D7DC-4AD5-8526-1C5B3A33AFA5"}
{
  "uptime" : 2800,
  "procRole" : "Foreground",
  "version" : 2,
  "userID" : 501,
  "deployVersion" : 210,
  "modelCode" : "Mac14,9",
  "coalitionID" : 2672,
  "osVersion" : {
    "train" : "macOS 14.1.1",
    "build" : "23B81",
    "releaseType" : "User"
  },
  "captureTime" : "2023-11-14 11:47:14.7065 -0500",
  "codeSigningMonitor" : 1,
  "incident" : "1AF5B51F-D7DC-4AD5-8526-1C5B3A33AFA5",
  "pid" : 3505,
  "translated" : true,
  "cpuType" : "X86-64",
  "roots_installed" : 0,
  "bug_type" : "309",
  "procLaunch" : "2023-11-14 11:47:06.3899 -0500",
  "procStartAbsTime" : 67472503520,
  "procExitAbsTime" : 67672052074,
  "procName" : "pgAdmin 4",
  "procPath" : "\/Library\/PostgreSQL\/16\/pgAdmin 4.app\/Contents\/MacOS\/pgAdmin 4",
  "bundleInfo" : {"CFBundleShortVersionString":"7.8","CFBundleVersion":"4280.88","CFBundleIdentifier":"org.pgadmin.pgadmin4"},
  "storeInfo" : {"deviceIdentifierForVendor":"F2A41A90-E8FF-58E0-AF26-5F17BFD205F1","thirdParty":true},
  "parentProc" : "launchd",
  "parentPid" : 1,
  "coalitionName" : "org.pgadmin.pgadmin4",
  "crashReporterKey" : "A4518538-B2A9-0B93-C540-A9DCCCD929EF",
  "codeSigningID" : "",
  "codeSigningTeamID" : "",
  "codeSigningValidationCategory" : 0,
  "codeSigningTrustLevel" : 4294967295,
  "wakeTime" : 920,
  "sleepWakeUUID" : "E31F7EEF-42B9-4E61-88DC-9C0571A2F4E3",
  "sip" : "enabled",
  "vmRegionInfo" : "0x20 is not in any region.  Bytes before following region: 140723014549472\n      REGION TYPE                    START - END         [ VSIZE] PRT\/MAX SHRMOD  REGION DETAIL\n      UNUSED SPACE AT START\n--->  \n      mapped file                 7ffca14b4000-7ffcc6b5c000 [598.7M] r-x\/r-x SM=COW  ...t_id=cccf3f63",
  "exception" : {"codes":"0x0000000000000001, 0x0000000000000020","rawCodes":[1,32],"type":"EXC_BAD_ACCESS","signal":"SIGSEGV","subtype":"KERN_INVALID_ADDRESS at 0x0000000000000020"},
  "termination" : {"flags":0,"code":11,"namespace":"SIGNAL","indicator":"Segmentation fault: 11","byProc":"exc handler","byPid":3505},
  "vmregioninfo" : "0x20 is not in any region.  Bytes before following region: 140723014549472\n      REGION TYPE                    START - END         [ VSIZE] PRT\/MAX SHRMOD  REGION DETAIL\n      UNUSED SPACE AT START\n--->  \n      mapped file                 7ffca14b4000-7ffcc6b5c000 [598.7M] r-x\/r-x SM=COW  ...t_id=cccf3f63",
  "extMods" : {"caller":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"system":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"targeted":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"warnings":0},
  "faultingThread" : 0,
olLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032093,\"imageIndex\":2},{\"imageOffset\":75032016,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57354,\"name\":\"ThreadPoolBackgroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":175934745346048},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":175934745346048},\"r8\":{\"value\":0},\"r15\":{\"value\":175934745346048},\"r10\":{\"value\":175934745346048},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":175934745346048},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032093,\"imageIndex\":2},{\"imageOf
fset\":75032016,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57355,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":155044024418304},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":155044024418304},\"r8\":{\"value\":0},\"r15\":{\"value\":155044024418304},\"r10\":{\"value\":155044024418304},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":155044024418304},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57357,\"name\":\"Chrome_IOThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STAT
E\",\"rbp\":{\"value\":0},\"r12\":{\"value\":13039979632},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140553050165472},\"r8\":{\"value\":140553008471776},\"r15\":{\"value\":0},\"r10\":{\"value\":140553050165472},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":8},\"r11\":{\"value\":13039980544},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553008516224},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"kevent64\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":40181552,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57358,\"name\":\"MemoryInfra\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":188029373251584},\"r12\":{\"value\":14641},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":188029373251584},\"r8\":{\"value\":0},\"r15\":{\"value\":188029373251584},\"r10\":{\"value\":188029373251584},\"rdx\":{\"value\":0},\"rdi\":{\"value\":14641},\"r9\":{\"value\":188029373251584},\"r13\":{\"value\":17179869442},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179
869442},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":74543000,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57364,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":274890791845888},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":274890791845888},\"r8\":{\"value\":0},\"r15\":{\"value\":274890791845888},\"r10\":{\"value\":274890791845888},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":274890791845888},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\
"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57365,\"name\":\"CrShutdownDetector\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344551112},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":13065134179},\"r8\":{\"value\":140552812261236},\"r15\":{\"value\":4},\"r10\":{\"value\":13065134179},\"rdx\":{\"value\":4},\"rdi\":{\"value\":7162258760691251055},\"r9\":{\"value\":18},\"r13\":{\"value\":0},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":0},\"r11\":{\"value\":4294967280},\"rcx\":{\"value\":0},\"r14\":{\"value\":13065133916},\"rsi\":{\"value\":7238539592028275492}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":9426,\"symbol\":\"read\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":73333214,\"
imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57432,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":264995187195904},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":264995187195904},\"r8\":{\"value\":0},\"r15\":{\"value\":264995187195904},\"r10\":{\"value\":264995187195904},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":264995187195904},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":7
4715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57433,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":200124001157120},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":200124001157120},\"r8\":{\"value\":0},\"r15\":{\"value\":200124001157120},\"r10\":{\"value\":200124001157120},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":200124001157120},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":
57434,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":199024489529344},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":199024489529344},\"r8\":{\"value\":0},\"r15\":{\"value\":199024489529344},\"r10\":{\"value\":199024489529344},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":199024489529344},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57435,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":201223512784896},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":201223512784896},\"r8\":{\"value\":0},\"r15\":{\"va
lue\":201223512784896},\"r10\":{\"value\":201223512784896},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":201223512784896},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57436,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":262796163940352},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":262796163940352},\"r8\":{\"value\":0},\"r15\":{\"value\":262796163940352},\"r10\":{\"value\":262796163940352},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":262796163940352},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"
frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57437,\"name\":\"NetworkNotificationThreadMac\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":205621559296000},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":205621559296000},\"r8\":{\"value\":0},\"r15\":{\"value\":205621559296000},\"r10\":{\"value\":205621559296000},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":205621559296000},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\"
:653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57438,\"name\":\"CompositorTileWorker1\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":161},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344559620},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":140703344816589,\"symbolLocation\":0,\"symbol\":\"_pthread_psynch_cond_cleanup\"},\"r15\":{\"value\":6912},\"r10\":{\"value\":0},\"rdx\":{\"value\":6912},\"rdi\":{\"value\":0},\"r9\":{\"value\":161},\"r13\":{\"value\":29691108924416},\"rflags\":{\"value\":658},\"rax\":{\"value\":260},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":13123825664},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":17934,\"symbol\":\"__psynch_cvwait\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":26475,\"symbol\":\"_pthread_cond_wait\",\"symbolLocation\":1211,\"
imageIndex\":14},{\"imageOffset\":75147211,\"imageIndex\":2},{\"imageOffset\":97344085,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57439,\"name\":\"ThreadPoolSingleThreadForegroundBlocking0\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":239706419757056},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":239706419757056},\"r8\":{\"value\":0},\"r15\":{\"value\":239706419757056},\"r10\":{\"value\":239706419757056},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":239706419757056},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032333,\"imageIndex\":2},{\"imageOffset\":75032026,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\
"id\":57440,\"name\":\"ThreadPoolSingleThreadSharedForeground1\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":223213745340416},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":223213745340416},\"r8\":{\"value\":0},\"r15\":{\"value\":223213745340416},\"r10\":{\"value\":223213745340416},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":223213745340416},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032285,\"imageIndex\":2},{\"imageOffset\":75032036,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57456,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":356254652301312},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":356254652301312},\"r8\":{\"va
lue\":0},\"r15\":{\"value\":356254652301312},\"r10\":{\"value\":356254652301312},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":356254652301312},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57459,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":296881024401408},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":1407057656656
40},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":296881024401408},\"r8\":{\"value\":0},\"r15\":{\"value\":296881024401408},\"r10\":{\"value\":296881024401408},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":296881024401408},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57460,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":297980536029184},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":297980536029184},\"r8\":{\"value\":0},\"r15\":{\"value\":297980536029184},\"r10\":{\"value\":297980536029184},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":297980536029184},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{
\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57461,\"name\":\"ThreadPoolSingleThreadSharedBackgroundBlocking2\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":301279070912512},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":301279070912512},\"r8\":{\"value\":0},\"r15\":{\"value\":301279070912512},\"r10\":{\"value\":301279070912512},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":301279070912512},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"sy
mbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032141,\"imageIndex\":2},{\"imageOffset\":75032056,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57463,\"name\":\"ThreadPoolSingleThreadSharedForegroundBlocking3\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":346359047651328},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":346359047651328},\"r8\":{\"value\":0},\"r15\":{\"value\":346359047651328},\"r10\":{\"value\":346359047651328},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":346359047651328},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":7503
0154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032285,\"imageIndex\":2},{\"imageOffset\":75032036,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57529,\"name\":\"CacheThread_BlockFile\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":13190900912},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140552997185664},\"r8\":{\"value\":140553053191392},\"r15\":{\"value\":0},\"r10\":{\"value\":140552997185664},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":2},\"r11\":{\"value\":13190901760},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553243401024},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"kevent64\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57530,\"name\":\"com.apple.NSEventThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":394771919011840},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":14070334460684
2},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":394771919011840},\"r8\":{\"value\":0},\"r15\":{\"value\":394771919011840},\"r10\":{\"value\":394771919011840},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":394771919011840},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":1686016,\"symbol\":\"_NSEventThread\",\"symbolLocation\":122,\"imageIndex\":12},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57561,\"name\":\"Service\nDiscovery\nThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":423324861595648},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":423324861595648},\"r8\":{\"value\":0},\"r15\":{\"value\":423324861595648},\"r10\":{\"value\":423324861595648},\"rdx\":{\"value\":85
89934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":423324861595648},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57562,\"name\":\"com.apple.CFSocket.private\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":3},\"rosetta\":{\"tmp2\":{\"value\":140703344585024},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":1405530
10214768},\"r10\":{\"value\":0},\"rdx\":{\"value\":140553010211280},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":140704414334832,\"symbolLocation\":0,\"symbol\":\"__kCFNull\"},\"rflags\":{\"value\":642},\"rax\":{\"value\":4},\"rsp\":{\"value\":0},\"r11\":{\"value\":140703345435675,\"symbolLocation\":0,\"symbol\":\"-[__NSCFArray\nobjectAtIndex:]\"},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553050490128},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":43338,\"symbol\":\"__select\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":669359,\"symbol\":\"__CFSocketManager\",\"symbolLocation\":637,\"imageIndex\":10},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57570,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":13200379904},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":13200916480},\"rsp\":{\"value\":409604},\"r11\":{\"value\":0},\"rcx\":{\"value\":172295},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57571,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":13200936960},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":515},\"rax\":{\"
value\":13201473536},\"rsp\":{\"value\":278532},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57600,\"name\":\"org.libusb.device-hotplug\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":545387832147968},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":545387832147968},\"r8\":{\"value\":0},\"r15\":{\"value\":545387832147968},\"r10\":{\"value\":545387832147968},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":545387832147968},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":1001609,\"symbol\":\"CFRunLoopRun\",\"symbolLocation\":40,\"imageIndex\":10},{\"imageOffset\":107249643,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57601,\"name\":\"UsbEventHandler\",\"threadSta
te\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":6837},\"r12\":{\"value\":140553247458160},\"rosetta\":{\"tmp2\":{\"value\":140703344576620},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":2147483},\"r8\":{\"value\":12297829382473034410},\"r15\":{\"value\":140553247458168},\"r10\":{\"value\":2147483},\"rdx\":{\"value\":60000},\"rdi\":{\"value\":140553247457824},\"r9\":{\"value\":6837},\"r13\":{\"value\":140553247458184},\"rflags\":{\"value\":658},\"rax\":{\"value\":4},\"rsp\":{\"value\":25997},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":2},\"rsi\":{\"value\":13210394000}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":34934,\"symbol\":\"poll\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":107236535,\"imageIndex\":2},{\"imageOffset\":107235803,\"imageIndex\":2},{\"imageOffset\":107236928,\"imageIndex\":2},{\"imageOffset\":107177423,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]}],\n \"usedImages\" : [\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 8600920064,\n \"size\" : 655360,\n \"uuid\" : \"d5406f23-6967-39c4-beb5-6ae3293c7753\",\n \"path\" : \"\\/usr\\/lib\\/dyld\",\n \"name\" : \"dyld\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 4624252928,\n \"size\" : 65536,\n \"uuid\" : \"7e101877-a6ff-3331-99a3-4222cb254447\",\n \"path\" : \"\\/usr\\/lib\\/libobjc-trampolines.dylib\",\n \"name\" : \"libobjc-trampolines.dylib\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 4649046016,\n \"CFBundleShortVersionString\" : \"115.0.5790.98\",\n \"CFBundleIdentifier\" : \"io.nwjs.nwjs.framework\",\n \"size\" : 189177856,\n \"uuid\" : \"4c4c447b-5555-3144-a1ec-62791bcf166d\",\n \"path\" : 
\"\\/Library\\/PostgreSQL\\/16\\/pgAdmin\n4.app\\/Contents\\/Frameworks\\/nwjs\nFramework.framework\\/Versions\\/115.0.5790.98\\/nwjs Framework\",\n \"name\" : \"nwjs Framework\",\n \"CFBundleVersion\" : \"5790.98\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 4442103808,\n \"CFBundleShortVersionString\" : \"1.0\",\n \"CFBundleIdentifier\" : \"com.apple.AutomaticAssessmentConfiguration\",\n \"size\" : 32768,\n \"uuid\" : \"b30252ae-24c6-3839-b779-661ef263b52d\",\n \"path\" :\n\"\\/System\\/Library\\/Frameworks\\/AutomaticAssessmentConfiguration.framework\\/Versions\\/A\\/AutomaticAssessmentConfiguration\",\n \"name\" : \"AutomaticAssessmentConfiguration\",\n \"CFBundleVersion\" : \"12.0.0\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 4447277056,\n \"size\" : 1720320,\n \"uuid\" : \"4c4c4416-5555-3144-a164-70bbf0436f17\",\n \"path\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin\n4.app\\/Contents\\/Frameworks\\/nwjs\nFramework.framework\\/Versions\\/115.0.5790.98\\/libffmpeg.dylib\",\n \"name\" : \"libffmpeg.dylib\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"arm64\",\n \"base\" : 140703124766720,\n \"size\" : 196608,\n \"uuid\" : \"2c5acb8c-fbaf-31ab-aeb3-90905c3fa905\",\n \"path\" : \"\\/usr\\/libexec\\/rosetta\\/runtime\",\n \"name\" : \"runtime\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"arm64\",\n \"base\" : 4436361216,\n \"size\" : 344064,\n \"uuid\" : \"a61ec9e9-1174-3dc6-9cdb-0d31811f4850\",\n \"path\" : \"\\/Library\\/Apple\\/*\\/libRosettaRuntime\",\n \"name\" : \"libRosettaRuntime\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 4301590528,\n \"CFBundleShortVersionString\" : \"7.8\",\n \"CFBundleIdentifier\" : \"org.pgadmin.pgadmin4\",\n \"size\" : 176128,\n \"uuid\" : \"4c4c4402-5555-3144-a1c7-07729cda43c0\",\n \"path\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin\n4.app\\/Contents\\/MacOS\\/pgAdmin\n4\",\n \"name\" : \"pgAdmin 4\",\n \"CFBundleVersion\" : \"4280.88\"\n },\n {\n 
\"size\" : 0,\n \"source\" : \"A\",\n \"base\" : 0,\n \"uuid\" : \"00000000-0000-0000-0000-000000000000\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703344979968,\n \"size\" : 40960,\n \"uuid\" : \"c94f952c-2787-30d2-ab77-ee474abd88d6\",\n \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_platform.dylib\",\n \"name\" : \"libsystem_platform.dylib\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703345197056,\n \"CFBundleShortVersionString\" : \"6.9\",\n \"CFBundleIdentifier\" : \"com.apple.CoreFoundation\",\n \"size\" : 4820989,\n \"uuid\" : \"4d842118-bb65-3f01-9087-ff1a2e3ab0d5\",\n \"path\" :\n\"\\/System\\/Library\\/Frameworks\\/CoreFoundation.framework\\/Versions\\/A\\/CoreFoundation\",\n \"name\" : \"CoreFoundation\",\n \"CFBundleVersion\" : \"2106\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703527325696,\n \"CFBundleShortVersionString\" : \"2.1.1\",\n \"CFBundleIdentifier\" : \"com.apple.HIToolbox\",\n \"size\" : 2736117,\n \"uuid\" : \"06bf0872-3b34-3c7b-ad5b-7a447d793405\",\n \"path\" :\n\"\\/System\\/Library\\/Frameworks\\/Carbon.framework\\/Versions\\/A\\/Frameworks\\/HIToolbox.framework\\/Versions\\/A\\/HIToolbox\",\n \"name\" : \"HIToolbox\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703401480192,\n \"CFBundleShortVersionString\" : \"6.9\",\n \"CFBundleIdentifier\" : \"com.apple.AppKit\",\n \"size\" : 20996092,\n \"uuid\" : \"27fed5dd-d148-3238-bc95-1dac5dd57fa1\",\n \"path\" :\n\"\\/System\\/Library\\/Frameworks\\/AppKit.framework\\/Versions\\/C\\/AppKit\",\n \"name\" : \"AppKit\",\n \"CFBundleVersion\" : \"2487.20.107\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703344541696,\n \"size\" : 241656,\n \"uuid\" : \"4df0d732-7fc4-3200-8176-f1804c63f2c8\",\n \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_kernel.dylib\",\n \"name\" : \"libsystem_kernel.dylib\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n 
\"base\" : 140703344783360,\n \"size\" : 49152,\n \"uuid\" : \"c64722b0-e96a-3fa5-96c3-b4beaf0c494a\",\n \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_pthread.dylib\",\n \"name\" : \"libsystem_pthread.dylib\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703361028096,\n \"CFBundleShortVersionString\" : \"6.9\",\n \"CFBundleIdentifier\" : \"com.apple.Foundation\",\n \"size\" : 12840956,\n \"uuid\" : \"581d66fd-7cef-3a8c-8647-1d962624703b\",\n \"path\" :\n\"\\/System\\/Library\\/Frameworks\\/Foundation.framework\\/Versions\\/C\\/Foundation\",\n \"name\" : \"Foundation\",\n \"CFBundleVersion\" : \"2106\"\n }\n],\n \"sharedCache\" : {\n \"base\" : 140703340380160,\n \"size\" : 21474836480,\n \"uuid\" : \"67c86f0b-dd40-3694-909d-52e210cbd5fa\"\n},\n \"legacyInfo\" : {\n \"threadTriggered\" : {\n \"name\" : \"CrBrowserMain\",\n \"queue\" : \"com.apple.main-thread\"\n }\n},\n \"logWritingSignature\" : \"8b321ae8a79f5edf7aad3381809b3fbd28f3768b\",\n \"trialInfo\" : {\n \"rollouts\" : [\n {\n \"rolloutId\" : \"60da5e84ab0ca017dace9abf\",\n \"factorPackIds\" : {\n\n },\n \"deploymentId\" : 240000008\n },\n {\n \"rolloutId\" : \"63f9578e238e7b23a1f3030a\",\n \"factorPackIds\" : {\n\n },\n \"deploymentId\" : 240000005\n }\n ],\n \"experiments\" : [\n {\n \"treatmentId\" : \"a092db1b-c401-44fa-9c54-518b7d69ca61\",\n \"experimentId\" : \"64a844035c85000c0f42398a\",\n \"deploymentId\" : 400000019\n }\n ]\n},\n \"reportNotes\" : [\n \"PC register does not match crashing frame (0x0 vs 0x100812560)\"\n]\n}\n\nModel: Mac14,9, BootROM 10151.41.12, proc 10:6:4 processors, 16 GB, SMC\nGraphics: Apple M2 Pro, Apple M2 Pro, Built-In\nDisplay: Color LCD, 3024 x 1964 Retina, Main, MirrorOff, Online\nMemory Module: LPDDR5, Micron\nAirPort: spairport_wireless_card_type_wifi (0x14E4, 0x4388), wl0: Sep 1\n2023 19:33:37 version 23.10.765.4.41.51.121 FWID 01-e2f09e46\nAirPort:\nBluetooth: Version (null), 0 services, 0 devices, 0 incoming serial ports\nNetwork Service: 
Wi-Fi, AirPort, en0\nUSB Device: USB31Bus\nUSB Device: USB31Bus\nUSB Device: USB31Bus\nThunderbolt Bus: MacBook Pro, Apple Inc.\nThunderbolt Bus: MacBook Pro, Apple Inc.\nThunderbolt Bus: MacBook Pro, Apple Inc.\n\nThanks & Regards,\nKanmani",
"msg_date": "Tue, 14 Nov 2023 12:13:36 -0500",
"msg_from": "Kanmani Thamizhanban <[email protected]>",
"msg_from_op": true,
"msg_subject": "Issue with launching PGAdmin 4 on Mac OC"
},
{
"msg_contents": "On 2023-11-14 18:13 +0100, Kanmani Thamizhanban wrote:\n> Good day! I'm not able to launch Postgres PGAdmin 4 in my MAC OS, I have\n> tried with both versions 15 and 16, but nothing is working. It says that it\n> has quit unexpectedly (screenshot attached). I have attached the bug report\n> as well along with my system specifications below. Kindly help me with this\n> ASAP.\n\nIt's very unlikely that you'll get any pgAdmin support on this list\nwhich is for Postgres development only. Unless your Postgres server\nalso crashed but the attached bug report doesn't show that. Please\ncreate an issue on GitHub or post to the pgadmin-support list instead.\n\nhttps://github.com/pgadmin-org/pgadmin4/issues\nhttps://www.postgresql.org/list/pgadmin-support/\n\n-- \nErik\n\n\n",
"msg_date": "Tue, 14 Nov 2023 18:40:31 +0100",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue with launching PGAdmin 4 on Mac OC"
},
{
"msg_contents": "Thanks for your email Erik! I’ll check it out.\n\nRegards,\nKanmani\n\nOn Tue, Nov 14, 2023 at 12:40 PM Erik Wienhold <[email protected]> wrote:\n\n> On 2023-11-14 18:13 +0100, Kanmani Thamizhanban wrote:\n> > Good day! I'm not able to launch Postgres PGAdmin 4 in my MAC OS, I have\n> > tried with both versions 15 and 16, but nothing is working. It says that\n> it\n> > has quit unexpectedly (screenshot attached). I have attached the bug\n> report\n> > as well along with my system specifications below. Kindly help me with\n> this\n> > ASAP.\n>\n> It's very unlikely that you'll get any pgAdmin support on this list\n> which is for Postgres development only. Unless your Postgres server\n> also crashed but the attached bug report doesn't show that. Please\n> create an issue on GitHub or post to the pgadmin-support list instead.\n>\n> https://github.com/pgadmin-org/pgadmin4/issues\n> https://www.postgresql.org/list/pgadmin-support/\n>\n> --\n> Erik\n>\n\nThanks for your email Erik! I’ll check it out. Regards,KanmaniOn Tue, Nov 14, 2023 at 12:40 PM Erik Wienhold <[email protected]> wrote:On 2023-11-14 18:13 +0100, Kanmani Thamizhanban wrote:\n> Good day! I'm not able to launch Postgres PGAdmin 4 in my MAC OS, I have\n> tried with both versions 15 and 16, but nothing is working. It says that it\n> has quit unexpectedly (screenshot attached). I have attached the bug report\n> as well along with my system specifications below. Kindly help me with this\n> ASAP.\n\nIt's very unlikely that you'll get any pgAdmin support on this list\nwhich is for Postgres development only. Unless your Postgres server\nalso crashed but the attached bug report doesn't show that. Please\ncreate an issue on GitHub or post to the pgadmin-support list instead.\n\nhttps://github.com/pgadmin-org/pgadmin4/issues\nhttps://www.postgresql.org/list/pgadmin-support/\n\n-- \nErik",
"msg_date": "Tue, 14 Nov 2023 17:59:46 -0500",
"msg_from": "Kanmani Thamizhanban <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Issue with launching PGAdmin 4 on Mac OC"
},
{
"msg_contents": "Hi Team,\n\nGood day! I need an urgent help with launching PGAdmin4 in my Mac OS, I\nhave tried with both the versions 15, 16 and almost every other possible\nway, but nothing is working. It says that it has quit unexpectedly\n(screenshot attached). I have attached the bug report as well along with my\nsystem specifications below. I really appreciate your help on resolving\nthis.\n\n-------------------------------------\nTranslated Report (Full Report Below)\n-------------------------------------\n\nProcess: pgAdmin 4 [3505]\nPath: /Library/PostgreSQL/16/pgAdmin\n4.app/Contents/MacOS/pgAdmin 4\nIdentifier: org.pgadmin.pgadmin4\nVersion: 7.8 (4280.88)\nCode Type: X86-64 (Translated)\nParent Process: launchd [1]\nUser ID: 501\n\nDate/Time: 2023-11-14 11:47:14.7065 -0500\nOS Version: macOS 14.1.1 (23B81)\nReport Version: 12\nAnonymous UUID: A4518538-B2A9-0B93-C540-A9DCCCD929EF\n\nSleep/Wake UUID: E31F7EEF-42B9-4E61-88DC-9C0571A2F4E3\n\nTime Awake Since Boot: 2800 seconds\nTime Since Wake: 920 seconds\n\nSystem Integrity Protection: enabled\n\nNotes:\nPC register does not match crashing frame (0x0 vs 0x100812560)\n\nCrashed Thread: 0 CrBrowserMain Dispatch queue:\ncom.apple.main-thread\n\nException Type: EXC_BAD_ACCESS (SIGSEGV)\nException Codes: KERN_INVALID_ADDRESS at 0x0000000000000020\nException Codes: 0x0000000000000001, 0x0000000000000020\n\nTermination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11\nTerminating Process: exc handler [3505]\n\nVM Region Info: 0x20 is not in any region. Bytes before following region:\n140723014549472\n REGION TYPE START - END [ VSIZE] PRT/MAX\nSHRMOD REGION DETAIL\n UNUSED SPACE AT START\n--->\n mapped file 7ffca14b4000-7ffcc6b5c000 [598.7M] r-x/r-x\nSM=COW ...t_id=cccf3f63\n\nError Formulating Crash Report:\nPC register does not match crashing frame (0x0 vs 0x100812560)\n\nThread 0 Crashed:: CrBrowserMain Dispatch queue: com.apple.main-thread\n0 ??? 
0x100812560 ???\n1 libsystem_platform.dylib 0x7ff80ce5a393 _sigtramp + 51\n2 nwjs Framework 0x11eb31522 0x1151ad000 + 160974114\n3 nwjs Framework 0x11ed4e1f0 0x1151ad000 + 163189232\n4 nwjs Framework 0x11edbe0db 0x1151ad000 + 163647707\n5 nwjs Framework 0x11b6ad51a 0x1151ad000 + 105907482\n6 nwjs Framework 0x11b6b58d9 0x1151ad000 + 105941209\n7 nwjs Framework 0x11c6444bd 0x1151ad000 + 122254525\n8 nwjs Framework 0x11c642c46 0x1151ad000 + 122248262\n9 nwjs Framework 0x11ed4fde5 0x1151ad000 + 163196389\n10 nwjs Framework 0x11edcac97 0x1151ad000 + 163699863\n11 nwjs Framework 0x11eaffcd5 0x1151ad000 + 160771285\n12 nwjs Framework 0x11eafef17 0x1151ad000 + 160767767\n13 nwjs Framework 0x11c1a7259 0x1151ad000 + 117416537\n14 nwjs Framework 0x118619cb0 0x1151ad000 + 54971568\n15 nwjs Framework 0x11861c494 0x1151ad000 + 54981780\n16 nwjs Framework 0x11861c927 0x1151ad000 + 54982951\n17 nwjs Framework 0x118618a63 0x1151ad000 + 54966883\n18 nwjs Framework 0x1181bb179 0x1151ad000 + 50389369\n19 nwjs Framework 0x1186191c6 0x1151ad000 + 54968774\n20 nwjs Framework 0x11a46bf90 0x1151ad000 + 86765456\n21 nwjs Framework 0x11a471131 0x1151ad000 + 86786353\n22 nwjs Framework 0x11a46d6d0 0x1151ad000 + 86771408\n23 nwjs Framework 0x11a55f1da 0x1151ad000 + 87761370\n24 nwjs Framework 0x11a46f799 0x1151ad000 + 86779801\n25 nwjs Framework 0x11990f4d2 0x1151ad000 + 74851538\n26 nwjs Framework 0x119926a6e 0x1151ad000 + 74947182\n27 nwjs Framework 0x1199264a9 0x1151ad000 + 74945705\n28 nwjs Framework 0x1199270d5 0x1151ad000 + 74948821\n29 nwjs Framework 0x119980ec3 0x1151ad000 + 75316931\n30 nwjs Framework 0x11997da22 0x1151ad000 + 75303458\n31 nwjs Framework 0x1199806df 0x1151ad000 + 75314911\n32 CoreFoundation 0x7ff80cf07a16\n__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17\n33 CoreFoundation 0x7ff80cf079b9 __CFRunLoopDoSource0 +\n157\n34 CoreFoundation 0x7ff80cf07788 __CFRunLoopDoSources0\n+ 215\n35 CoreFoundation 0x7ff80cf063f8 __CFRunLoopRun + 919\n36 CoreFoundation 
0x7ff80cf05a99 CFRunLoopRunSpecific + 557
37  HIToolbox                     0x7ff817c6d9d9 RunCurrentEventLoopInMode + 292
38  HIToolbox                     0x7ff817c6d7e6 ReceiveNextEventCommon + 665
39  HIToolbox                     0x7ff817c6d531 _BlockUntilNextEventMatchingListInModeWithFilter + 66
40  AppKit                        0x7ff810477885 _DPSNextEvent + 880
41  AppKit                        0x7ff810d6b348 -[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1304
42  nwjs Framework                0x1193b83f0    0x1151ad000 + 69252080
43  nwjs Framework                0x11997da22    0x1151ad000 + 75303458
44  nwjs Framework                0x1193b8369    0x1151ad000 + 69251945
45  AppKit                        0x7ff810468dfa -[NSApplication run] + 603
46  nwjs Framework                0x1199814ac    0x1151ad000 + 75318444
47  nwjs Framework                0x11998023c    0x1151ad000 + 75313724
48  nwjs Framework                0x119927459    0x1151ad000 + 74949721
49  nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
50  nwjs Framework                0x1177fc301    0x1151ad000 + 40170241
51  nwjs Framework                0x1177fdc92    0x1151ad000 + 40176786
52  nwjs Framework                0x1177f9a5a    0x1151ad000 + 40159834
53  nwjs Framework                0x118d351c4    0x1151ad000 + 62423492
54  nwjs Framework                0x118d36389    0x1151ad000 + 62428041
55  nwjs Framework                0x118d3619d    0x1151ad000 + 62427549
56  nwjs Framework                0x118d34867    0x1151ad000 + 62421095
57  nwjs Framework                0x118d34b03    0x1151ad000 + 62421763
58  nwjs Framework                0x1151b0930    ChromeMain + 560
59  pgAdmin 4                     0x10065187e    main + 286
60  dyld                          0x200a803a6    start + 1942

Thread 1:: com.apple.rosetta.exceptionserver
0   runtime                       0x7ff7ffc58294 0x7ff7ffc54000 + 17044

Thread 2:: StackSamplingProfiler
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x1198c3f98    0x1151ad000 + 74543000
8   nwjs Framework                0x119927459    0x1151ad000 + 74949721
9   nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
10  nwjs Framework                0x119942c18    0x1151ad000 + 75062296
11  nwjs Framework                0x119942d6b    0x1151ad000 + 75062635
12  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
13  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
14  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 3:
0   runtime                       0x7ff7ffc7694c 0x7ff7ffc54000 + 141644

Thread 4:
0   runtime                       0x7ff7ffc7694c 0x7ff7ffc54000 + 141644

Thread 5:
0   runtime                       0x7ff7ffc7694c 0x7ff7ffc54000 + 141644

Thread 6:
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x11b85d23d    0x1151ad000 + 107676221
6   nwjs Framework                0x11b85e656    0x1151ad000 + 107681366
7   nwjs Framework                0x11b85e321    0x1151ad000 + 107680545
8   nwjs Framework                0x11b860318    0x1151ad000 + 107688728
9   libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
10  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 7:: HangWatcher
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993d944    0x1151ad000 + 75041092
8   nwjs Framework                0x11993db03    0x1151ad000 + 75041539
9   nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
10  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
11  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 8:: ThreadPoolServiceThread
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdf7506 kevent64 + 10
2   nwjs Framework                0x119973e31    0x1151ad000 + 75263537
3   nwjs Framework                0x119973cee    0x1151ad000 + 75263214
4   nwjs Framework                0x119973c65    0x1151ad000 + 75263077
5   nwjs Framework                0x119927459    0x1151ad000 + 74949721
6   nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
7   nwjs Framework                0x119942c18    0x1151ad000 + 75062296
8   nwjs Framework                0x11993043d    0x1151ad000 + 74986557
9   nwjs Framework                0x119942d6b    0x1151ad000 + 75062635
10  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
11  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
12  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 9:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad    0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab    0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 10:: ThreadPoolBackgroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b61d    0x1151ad000 + 75032093
10  nwjs Framework                0x11993b5d0    0x1151ad000 + 75032016
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 11:: ThreadPoolBackgroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b61d    0x1151ad000 + 75032093
10  nwjs Framework                0x11993b5d0    0x1151ad000 + 75032016
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 12:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad    0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab    0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 13:: Chrome_IOThread
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdf7506 kevent64 + 10
2   nwjs Framework                0x119973e31    0x1151ad000 + 75263537
3   nwjs Framework                0x119973cee    0x1151ad000 + 75263214
4   nwjs Framework                0x119973c65    0x1151ad000 + 75263077
5   nwjs Framework                0x119927459    0x1151ad000 + 74949721
6   nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
7   nwjs Framework                0x119942c18    0x1151ad000 + 75062296
8   nwjs Framework                0x1177fef30    0x1151ad000 + 40181552
9   nwjs Framework                0x119942d6b    0x1151ad000 + 75062635
10  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
11  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
12  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 14:: MemoryInfra
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x1198c3f98    0x1151ad000 + 74543000
8   nwjs Framework                0x119927459    0x1151ad000 + 74949721
9   nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
10  nwjs Framework                0x119942c18    0x1151ad000 + 75062296
11  nwjs Framework                0x119942d6b    0x1151ad000 + 75062635
12  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
13  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
14  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 15:: NetworkConfigWatcher
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   Foundation                    0x7ff80de01551 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216
9   nwjs Framework                0x11998126e    0x1151ad000 + 75317870
10  nwjs Framework                0x11998023c    0x1151ad000 + 75313724
11  nwjs Framework                0x119927459    0x1151ad000 + 74949721
12  nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
13  nwjs Framework                0x119942c18    0x1151ad000 + 75062296
14  nwjs Framework                0x119942d6b    0x1151ad000 + 75062635
15  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
16  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
17  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 16:: CrShutdownDetector
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdee4d2 read + 10
2   nwjs Framework                0x11979c9de    0x1151ad000 + 73333214
3   nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
4   libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
5   libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 17:: NetworkConfigWatcher
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   Foundation                    0x7ff80de01551 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216
9   nwjs Framework                0x11998126e    0x1151ad000 + 75317870
10  nwjs Framework                0x11998023c    0x1151ad000 + 75313724
11  nwjs Framework                0x119927459    0x1151ad000 + 74949721
12  nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
13  nwjs Framework                0x119942c18    0x1151ad000 + 75062296
14  nwjs Framework                0x119942d6b    0x1151ad000 + 75062635
15  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
16  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
17  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 18:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad    0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab    0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 19:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad    0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab    0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 20:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad    0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab    0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 21:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad    0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab    0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 22:: NetworkNotificationThreadMac
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   Foundation                    0x7ff80de01551 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216
9   nwjs Framework                0x11998126e    0x1151ad000 + 75317870
10  nwjs Framework                0x11998023c    0x1151ad000 + 75313724
11  nwjs Framework                0x119927459    0x1151ad000 + 74949721
12  nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
13  nwjs Framework                0x119942c18    0x1151ad000 + 75062296
14  nwjs Framework                0x119942d6b    0x1151ad000 + 75062635
15  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
16  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
17  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 23:: CompositorTileWorker1
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdf060e __psynch_cvwait + 10
2   libsystem_pthread.dylib       0x7ff80ce2d76b _pthread_cond_wait + 1211
3   nwjs Framework                0x1199577cb    0x1151ad000 + 75147211
4   nwjs Framework                0x11ae82a55    0x1151ad000 + 97344085
5   nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
6   libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
7   libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 24:: ThreadPoolSingleThreadForegroundBlocking0
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b70d    0x1151ad000 + 75032333
10  nwjs Framework                0x11993b5da    0x1151ad000 + 75032026
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 25:: ThreadPoolSingleThreadSharedForeground1
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6dd    0x1151ad000 + 75032285
10  nwjs Framework                0x11993b5e4    0x1151ad000 + 75032036
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 26:: NetworkConfigWatcher
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   Foundation                    0x7ff80de01551 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216
9   nwjs Framework                0x11998126e    0x1151ad000 + 75317870
10  nwjs Framework                0x11998023c    0x1151ad000 + 75313724
11  nwjs Framework                0x119927459    0x1151ad000 + 74949721
12  nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
13  nwjs Framework                0x119942c18    0x1151ad000 + 75062296
14  nwjs Framework                0x119942d6b    0x1151ad000 + 75062635
15  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
16  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
17  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 27:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad    0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab    0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 28:: ThreadPoolForegroundWorker
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6ad    0x1151ad000 + 75032237
10  nwjs Framework                0x11993b5ab    0x1151ad000 + 75031979
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 29:: ThreadPoolSingleThreadSharedBackgroundBlocking2
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b64d    0x1151ad000 + 75032141
10  nwjs Framework                0x11993b5f8    0x1151ad000 + 75032056
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 30:: ThreadPoolSingleThreadSharedForegroundBlocking3
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   nwjs Framework                0x119984702    0x1151ad000 + 75331330
6   nwjs Framework                0x11990c91c    0x1151ad000 + 74840348
7   nwjs Framework                0x11993ae8a    0x1151ad000 + 75030154
8   nwjs Framework                0x11993bac4    0x1151ad000 + 75033284
9   nwjs Framework                0x11993b6dd    0x1151ad000 + 75032285
10  nwjs Framework                0x11993b5e4    0x1151ad000 + 75032036
11  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
12  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
13  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 31:: CacheThread_BlockFile
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdf7506 kevent64 + 10
2   nwjs Framework                0x119973e31    0x1151ad000 + 75263537
3   nwjs Framework                0x119973cee    0x1151ad000 + 75263214
4   nwjs Framework                0x119973c65    0x1151ad000 + 75263077
5   nwjs Framework                0x119927459    0x1151ad000 + 74949721
6   nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
7   nwjs Framework                0x119942c18    0x1151ad000 + 75062296
8   nwjs Framework                0x119942d6b    0x1151ad000 + 75062635
9   nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
10  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
11  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 32:: com.apple.NSEventThread
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   AppKit                        0x7ff8105d4a00 _NSEventThread + 122
9   libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
10  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 33:: Service Discovery Thread
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   Foundation                    0x7ff80de01551 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216
9   nwjs Framework                0x11998126e    0x1151ad000 + 75317870
10  nwjs Framework                0x11998023c    0x1151ad000 + 75313724
11  nwjs Framework                0x119927459    0x1151ad000 + 74949721
12  nwjs Framework                0x1198ee15f    0x1151ad000 + 74715487
13  nwjs Framework                0x119942c18    0x1151ad000 + 75062296
14  nwjs Framework                0x119942d6b    0x1151ad000 + 75062635
15  nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
16  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
17  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 34:: com.apple.CFSocket.private
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdf694a __select + 10
2   CoreFoundation                0x7ff80cf2f6af __CFSocketManager + 637
3   libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
4   libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 35:
0   runtime                       0x7ff7ffc7694c 0x7ff7ffc54000 + 141644

Thread 36:
0   runtime                       0x7ff7ffc7694c 0x7ff7ffc54000 + 141644

Thread 37:: org.libusb.device-hotplug
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdeda6e mach_msg2_trap + 10
2   libsystem_kernel.dylib        0x7ff80cdfbe7a mach_msg2_internal + 84
3   libsystem_kernel.dylib        0x7ff80cdf4b92 mach_msg_overwrite + 653
4   libsystem_kernel.dylib        0x7ff80cdedd5f mach_msg + 19
5   CoreFoundation                0x7ff80cf07b49 __CFRunLoopServiceMachPort + 143
6   CoreFoundation                0x7ff80cf065bc __CFRunLoopRun + 1371
7   CoreFoundation                0x7ff80cf05a99 CFRunLoopRunSpecific + 557
8   CoreFoundation                0x7ff80cf80889 CFRunLoopRun + 40
9   nwjs Framework                0x11b7f4feb    0x1151ad000 + 107249643
10  libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
11  libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15

Thread 38:: UsbEventHandler
0   ???                           0x7ff89d2e2a78 ???
1   libsystem_kernel.dylib        0x7ff80cdf4876 poll + 10
2   nwjs Framework                0x11b7f1cb7    0x1151ad000 + 107236535
3   nwjs Framework                0x11b7f19db    0x1151ad000 + 107235803
4   nwjs Framework                0x11b7f1e40    0x1151ad000 + 107236928
5   nwjs Framework                0x11b7e35cf    0x1151ad000 + 107177423
6   nwjs Framework                0x119957ed9    0x1151ad000 + 75149017
7   libsystem_pthread.dylib       0x7ff80ce2d202 _pthread_start + 99
8   libsystem_pthread.dylib       0x7ff80ce28bab thread_start + 15


Thread 0 crashed with X86 Thread State (64-bit):
  rax: 0x00007fd519066801  rbx: 0x0000000000000008  rcx: 0x0000000120cdf478  rdx: 0x0000000000000400
  rdi: 0x0000000000000018  rsi: 0x000000000000031c  rbp: 0x00000003051ce690  rsp: 0x00000003051ce690
   r8: 0xc5bdffd50ea7b1b7   r9: 0x0000000000000400  r10: 0x0000000000000000  r11: 0x00007ff810494646
  r12: 0x00007fd50ea247d8  r13: 0x00007fd50ea24740  r14: 0x0000000000000008  r15: 0x00000003051ce720
  rip: <unavailable>       rfl: 0x0000000000000206
 tmp0: 0x0000000000000001 tmp1: 0x000000011ed4e1e0 tmp2: 0x000000011eb31522


Binary Images:
       0x200a7a000 -        0x200b19fff dyld (*) <d5406f23-6967-39c4-beb5-6ae3293c7753> /usr/lib/dyld
       0x113a08000 -        0x113a17fff libobjc-trampolines.dylib (*) <7e101877-a6ff-3331-99a3-4222cb254447> /usr/lib/libobjc-trampolines.dylib
       0x1151ad000 -        0x120616fff io.nwjs.nwjs.framework (115.0.5790.98) <4c4c447b-5555-3144-a1ec-62791bcf166d> /Library/PostgreSQL/16/pgAdmin 4.app/Contents/Frameworks/nwjs Framework.framework/Versions/115.0.5790.98/nwjs Framework
       0x108c52000 -        0x108c59fff com.apple.AutomaticAssessmentConfiguration (1.0) <b30252ae-24c6-3839-b779-661ef263b52d> /System/Library/Frameworks/AutomaticAssessmentConfiguration.framework/Versions/A/AutomaticAssessmentConfiguration
       0x109141000 -        0x1092e4fff libffmpeg.dylib (*) <4c4c4416-5555-3144-a164-70bbf0436f17> /Library/PostgreSQL/16/pgAdmin 4.app/Contents/Frameworks/nwjs Framework.framework/Versions/115.0.5790.98/libffmpeg.dylib
    0x7ff7ffc54000 -     0x7ff7ffc83fff runtime (*) <2c5acb8c-fbaf-31ab-aeb3-90905c3fa905> /usr/libexec/rosetta/runtime
       0x1086d8000 -        0x10872bfff libRosettaRuntime (*) <a61ec9e9-1174-3dc6-9cdb-0d31811f4850> /Library/Apple/*/libRosettaRuntime
       0x100651000 -        0x10067bfff org.pgadmin.pgadmin4 (7.8) <4c4c4402-5555-3144-a1c7-07729cda43c0> /Library/PostgreSQL/16/pgAdmin 4.app/Contents/MacOS/pgAdmin 4
               0x0 - 0xffffffffffffffff ??? (*) <00000000-0000-0000-0000-000000000000> ???
    0x7ff80ce57000 -     0x7ff80ce60fff libsystem_platform.dylib (*) <c94f952c-2787-30d2-ab77-ee474abd88d6> /usr/lib/system/libsystem_platform.dylib
    0x7ff80ce8c000 -     0x7ff80d324ffc com.apple.CoreFoundation (6.9) <4d842118-bb65-3f01-9087-ff1a2e3ab0d5> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
    0x7ff817c3d000 -     0x7ff817ed8ff4 com.apple.HIToolbox (2.1.1) <06bf0872-3b34-3c7b-ad5b-7a447d793405> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox
    0x7ff810439000 -     0x7ff81183effb com.apple.AppKit (6.9) <27fed5dd-d148-3238-bc95-1dac5dd57fa1> /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit
    0x7ff80cdec000 -     0x7ff80ce26ff7 libsystem_kernel.dylib (*) <4df0d732-7fc4-3200-8176-f1804c63f2c8> /usr/lib/system/libsystem_kernel.dylib
    0x7ff80ce27000 -     0x7ff80ce32fff libsystem_pthread.dylib (*) <c64722b0-e96a-3fa5-96c3-b4beaf0c494a> /usr/lib/system/libsystem_pthread.dylib
    0x7ff80dda5000 -     0x7ff80e9e3ffb com.apple.Foundation (6.9) <581d66fd-7cef-3a8c-8647-1d962624703b> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation

External Modification Summary:
  Calls made by other processes targeting this process:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0
  Calls made by this process:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0
  Calls made by all processes on this machine:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0


-----------
Full Report
-----------

{"app_name":"pgAdmin 4","timestamp":"2023-11-14 11:47:18.00 -0500","app_version":"7.8","slice_uuid":"4c4c4402-5555-3144-a1c7-07729cda43c0","build_version":"4280.88","platform":1,"bundleID":"org.pgadmin.pgadmin4","share_with_app_devs":1,"is_first_party":0,"bug_type":"309","os_version":"macOS 14.1.1
(23B81)\",\"roots_installed\":0,\"name\":\"pgAdmin\n4\",\"incident_id\":\"1AF5B51F-D7DC-4AD5-8526-1C5B3A33AFA5\"}\n{\n \"uptime\" : 2800,\n \"procRole\" : \"Foreground\",\n \"version\" : 2,\n \"userID\" : 501,\n \"deployVersion\" : 210,\n \"modelCode\" : \"Mac14,9\",\n \"coalitionID\" : 2672,\n \"osVersion\" : {\n \"train\" : \"macOS 14.1.1\",\n \"build\" : \"23B81\",\n \"releaseType\" : \"User\"\n },\n \"captureTime\" : \"2023-11-14 11:47:14.7065 -0500\",\n \"codeSigningMonitor\" : 1,\n \"incident\" : \"1AF5B51F-D7DC-4AD5-8526-1C5B3A33AFA5\",\n \"pid\" : 3505,\n \"translated\" : true,\n \"cpuType\" : \"X86-64\",\n \"roots_installed\" : 0,\n \"bug_type\" : \"309\",\n \"procLaunch\" : \"2023-11-14 11:47:06.3899 -0500\",\n \"procStartAbsTime\" : 67472503520,\n \"procExitAbsTime\" : 67672052074,\n \"procName\" : \"pgAdmin 4\",\n \"procPath\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin\n4.app\\/Contents\\/MacOS\\/pgAdmin\n4\",\n \"bundleInfo\" :\n{\"CFBundleShortVersionString\":\"7.8\",\"CFBundleVersion\":\"4280.88\",\"CFBundleIdentifier\":\"org.pgadmin.pgadmin4\"},\n \"storeInfo\" :\n{\"deviceIdentifierForVendor\":\"F2A41A90-E8FF-58E0-AF26-5F17BFD205F1\",\"thirdParty\":true},\n \"parentProc\" : \"launchd\",\n \"parentPid\" : 1,\n \"coalitionName\" : \"org.pgadmin.pgadmin4\",\n \"crashReporterKey\" : \"A4518538-B2A9-0B93-C540-A9DCCCD929EF\",\n \"codeSigningID\" : \"\",\n \"codeSigningTeamID\" : \"\",\n \"codeSigningValidationCategory\" : 0,\n \"codeSigningTrustLevel\" : 4294967295,\n \"wakeTime\" : 920,\n \"sleepWakeUUID\" : \"E31F7EEF-42B9-4E61-88DC-9C0571A2F4E3\",\n \"sip\" : \"enabled\",\n \"vmRegionInfo\" : \"0x20 is not in any region. 
Bytes before following\nregion: 140723014549472\\n REGION TYPE START - END\n [ VSIZE] PRT\\/MAX SHRMOD REGION DETAIL\\n UNUSED SPACE AT\nSTART\\n---> \\n mapped file 7ffca14b4000-7ffcc6b5c000\n[598.7M] r-x\\/r-x SM=COW ...t_id=cccf3f63\",\n \"exception\" : {\"codes\":\"0x0000000000000001,\n0x0000000000000020\",\"rawCodes\":[1,32],\"type\":\"EXC_BAD_ACCESS\",\"signal\":\"SIGSEGV\",\"subtype\":\"KERN_INVALID_ADDRESS\nat 0x0000000000000020\"},\n \"termination\" :\n{\"flags\":0,\"code\":11,\"namespace\":\"SIGNAL\",\"indicator\":\"Segmentation fault:\n11\",\"byProc\":\"exc handler\",\"byPid\":3505},\n \"vmregioninfo\" : \"0x20 is not in any region. Bytes before following\nregion: 140723014549472\\n REGION TYPE START - END\n [ VSIZE] PRT\\/MAX SHRMOD REGION DETAIL\\n UNUSED SPACE AT\nSTART\\n---> \\n mapped file 7ffca14b4000-7ffcc6b5c000\n[598.7M] r-x\\/r-x SM=COW ...t_id=cccf3f63\",\n \"extMods\" :\n{\"caller\":{\"thread_create\":0,\"thread_set_state\":0,\"task_for_pid\":0},\"system\":{\"thread_create\":0,\"thread_set_state\":0,\"task_for_pid\":0},\"targeted\":{\"thread_create\":0,\"thread_set_state\":0,\"task_for_pid\":0},\"warnings\":0},\n \"faultingThread\" : 0,\n \"threads\" 
:\n[{\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":12970682000},\"r12\":{\"value\":140553050277848},\"rosetta\":{\"tmp2\":{\"value\":4810020130},\"tmp1\":{\"value\":4812235232},\"tmp0\":{\"value\":1}},\"rbx\":{\"value\":8},\"r8\":{\"value\":14248826086609105335},\"r15\":{\"value\":12970682144},\"r10\":{\"value\":0},\"rdx\":{\"value\":1024},\"rdi\":{\"value\":24},\"r9\":{\"value\":1024},\"r13\":{\"value\":140553050277696},\"rflags\":{\"value\":518},\"rax\":{\"value\":140553224611841},\"rsp\":{\"value\":12970682000},\"r11\":{\"value\":140703401854534,\"symbolLocation\":0,\"symbol\":\"-[NSView\nalphaValue]\"},\"rcx\":{\"value\":4845335672,\"symbolLocation\":5872920,\"symbol\":\"vtable\nfor\nv8::internal::SetupIsolateDelegate\"},\"r14\":{\"value\":8},\"rsi\":{\"value\":796}},\"id\":57285,\"triggered\":true,\"name\":\"CrBrowserMain\",\"queue\":\"com.apple.main-thread\",\"frames\":[{\"imageOffset\":4303431008,\"imageIndex\":8},{\"imageOffset\":13203,\"symbol\":\"_sigtramp\",\"symbolLocation\":51,\"imageIndex\":9},{\"imageOffset\":160974114,\"imageIndex\":2},{\"imageOffset\":163189232,\"imageIndex\":2},{\"imageOffset\":163647707,\"imageIndex\":2},{\"imageOffset\":105907482,\"imageIndex\":2},{\"imageOffset\":105941209,\"imageIndex\":2},{\"imageOffset\":122254525,\"imageIndex\":2},{\"imageOffset\":122248262,\"imageIndex\":2},{\"imageOffset\":163196389,\"imageIndex\":2},{\"imageOffset\":163699863,\"imageIndex\":2},{\"imageOffset\":160771285,\"imageIndex\":2},{\"imageOffset\":160767767,\"imageIndex\":2},{\"imageOffset\":117416537,\"imageIndex\":2},{\"imageOffset\":54971568,\"imageIndex\":2},{\"imageOffset\":54981780,\"imageIndex\":2},{\"imageOffset\":54982951,\"imageIndex\":2},{\"imageOffset\":54966883,\"imageIndex\":2},{\"imageOffset\":50389369,\"imageIndex\":2},{\"imageOffset\":54968774,\"imageIndex\":2},{\"imageOffset\":86765456,\"imageIndex\":2},{\"imageOffset\":86786353,\"imageIndex\":2},{\"imageOffset\":86771408,\"imageIndex\":2},{\"imageOffset\":87
761370,\"imageIndex\":2},{\"imageOffset\":86779801,\"imageIndex\":2},{\"imageOffset\":74851538,\"imageIndex\":2},{\"imageOffset\":74947182,\"imageIndex\":2},{\"imageOffset\":74945705,\"imageIndex\":2},{\"imageOffset\":74948821,\"imageIndex\":2},{\"imageOffset\":75316931,\"imageIndex\":2},{\"imageOffset\":75303458,\"imageIndex\":2},{\"imageOffset\":75314911,\"imageIndex\":2},{\"imageOffset\":506390,\"symbol\":\"__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__\",\"symbolLocation\":17,\"imageIndex\":10},{\"imageOffset\":506297,\"symbol\":\"__CFRunLoopDoSource0\",\"symbolLocation\":157,\"imageIndex\":10},{\"imageOffset\":505736,\"symbol\":\"__CFRunLoopDoSources0\",\"symbolLocation\":215,\"imageIndex\":10},{\"imageOffset\":500728,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":919,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":199129,\"symbol\":\"RunCurrentEventLoopInMode\",\"symbolLocation\":292,\"imageIndex\":11},{\"imageOffset\":198630,\"symbol\":\"ReceiveNextEventCommon\",\"symbolLocation\":665,\"imageIndex\":11},{\"imageOffset\":197937,\"symbol\":\"_BlockUntilNextEventMatchingListInModeWithFilter\",\"symbolLocation\":66,\"imageIndex\":11},{\"imageOffset\":256133,\"symbol\":\"_DPSNextEvent\",\"symbolLocation\":880,\"imageIndex\":12},{\"imageOffset\":9642824,\"symbol\":\"-[NSApplication(NSEventRouting)\n_nextEventMatchingEventMask:untilDate:inMode:dequeue:]\",\"symbolLocation\":1304,\"imageIndex\":12},{\"imageOffset\":69252080,\"imageIndex\":2},{\"imageOffset\":75303458,\"imageIndex\":2},{\"imageOffset\":69251945,\"imageIndex\":2},{\"imageOffset\":196090,\"symbol\":\"-[NSApplication\nrun]\",\"symbolLocation\":603,\"imageIndex\":12},{\"imageOffset\":75318444,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":40170241,\"imageIndex\":2},{\"imageOffset\":401
76786,\"imageIndex\":2},{\"imageOffset\":40159834,\"imageIndex\":2},{\"imageOffset\":62423492,\"imageIndex\":2},{\"imageOffset\":62428041,\"imageIndex\":2},{\"imageOffset\":62427549,\"imageIndex\":2},{\"imageOffset\":62421095,\"imageIndex\":2},{\"imageOffset\":62421763,\"imageIndex\":2},{\"imageOffset\":14640,\"symbol\":\"ChromeMain\",\"symbolLocation\":560,\"imageIndex\":2},{\"imageOffset\":2174,\"symbol\":\"main\",\"symbolLocation\":286,\"imageIndex\":7},{\"imageOffset\":25510,\"symbol\":\"start\",\"symbolLocation\":1942,\"imageIndex\":0}]},{\"id\":57293,\"name\":\"com.apple.rosetta.exceptionserver\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":34097745362944},\"r12\":{\"value\":5117060296},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":4462471109675},\"tmp0\":{\"value\":10337986281472}},\"rbx\":{\"value\":4462471109675},\"r8\":{\"value\":7939},\"r15\":{\"value\":4898951168},\"r10\":{\"value\":15586436317184},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":4436777856},\"rflags\":{\"value\":582},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":10337986281472},\"r11\":{\"value\":32},\"rcx\":{\"value\":17314086914},\"r14\":{\"value\":4303431008},\"rsi\":{\"value\":2616}},\"frames\":[{\"imageOffset\":17044,\"imageIndex\":5}]},{\"id\":57315,\"name\":\"StackSamplingProfiler\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":43993350012928},\"r12\":{\"value\":78},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":43993350012928},\"r8\":{\"value\":0},\"r15\":{\"value\":43993350012928},\"r10\":{\"value\":43993350012928},\"rdx\":{\"value\":0},\"rdi\":{\"value\":78},\"r9\":{\"value\":43993350012928},\"r13\":{\"value\":17179869442},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869442},\"r14\":{\"value\":32},\"rsi\":
{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":74543000,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57316,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":12979638272},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":12980174848},\"rsp\":{\"value\":409604},\"r11\":{\"value\":0},\"rcx\":{\"value\":8967},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57317,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":12980195
328},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":12980731904},\"rsp\":{\"value\":409602},\"r11\":{\"value\":0},\"rcx\":{\"value\":12035},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57318,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":12980752384},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":12981288960},\"rsp\":{\"value\":409604},\"r11\":{\"value\":0},\"rcx\":{\"value\":10503},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57332,\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":107676221,\"imageIndex\":2},{\"imageOffset\":107681366,\"imageIndex\":2},{\"imageOffset\":107680545,\"imageIndex\":2},{\"imageOffset\":107688728,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":53888954662912},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":5388
8954662912},\"r10\":{\"value\":0},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":53888954662912},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":48},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":48},\"rsi\":{\"value\":48}}},{\"id\":57350,\"name\":\"HangWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":149632365625344},\"r12\":{\"value\":10000},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":149632365625344},\"r8\":{\"value\":0},\"r15\":{\"value\":149632365625344},\"r10\":{\"value\":149632365625344},\"rdx\":{\"value\":0},\"rdi\":{\"value\":10000},\"r9\":{\"value\":149632365625344},\"r13\":{\"value\":17179869442},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869442},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75041092,\"imageIndex\":2},{\"imageOffset\":75041539,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57351,\"name\":\"ThreadPoolServiceThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},
\"r12\":{\"value\":12998057104},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140553009377648},\"r8\":{\"value\":140553008003200},\"r15\":{\"value\":0},\"r10\":{\"value\":140553009377648},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":5},\"r11\":{\"value\":12998057984},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553008497312},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"kevent64\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":74986557,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57352,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":180332791857152},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":180332791857152},\"r8\":{\"value\":0},\"r15\":{\"value\":180332791857152},\"r10\":{\"value\":180332791857152},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":180332791857152},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"
value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57353,\"name\":\"ThreadPoolBackgroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":152845001162752},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":152845001162752},\"r8\":{\"value\":0},\"r15\":{\"value\":152845001162752},\"r10\":{\"value\":152845001162752},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":152845001162752},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symb
olLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032093,\"imageIndex\":2},{\"imageOffset\":75032016,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57354,\"name\":\"ThreadPoolBackgroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":175934745346048},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":175934745346048},\"r8\":{\"value\":0},\"r15\":{\"value\":175934745346048},\"r10\":{\"value\":175934745346048},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":175934745346048},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032093,\"imageIndex\":2},{\"imageOf
fset\":75032016,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57355,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":155044024418304},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":155044024418304},\"r8\":{\"value\":0},\"r15\":{\"value\":155044024418304},\"r10\":{\"value\":155044024418304},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":155044024418304},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57357,\"name\":\"Chrome_IOThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STAT
E\",\"rbp\":{\"value\":0},\"r12\":{\"value\":13039979632},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140553050165472},\"r8\":{\"value\":140553008471776},\"r15\":{\"value\":0},\"r10\":{\"value\":140553050165472},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":8},\"r11\":{\"value\":13039980544},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553008516224},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"kevent64\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":40181552,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57358,\"name\":\"MemoryInfra\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":188029373251584},\"r12\":{\"value\":14641},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":188029373251584},\"r8\":{\"value\":0},\"r15\":{\"value\":188029373251584},\"r10\":{\"value\":188029373251584},\"rdx\":{\"value\":0},\"rdi\":{\"value\":14641},\"r9\":{\"value\":188029373251584},\"r13\":{\"value\":17179869442},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179
869442},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":74543000,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57364,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":274890791845888},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":274890791845888},\"r8\":{\"value\":0},\"r15\":{\"value\":274890791845888},\"r10\":{\"value\":274890791845888},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":274890791845888},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\
"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57365,\"name\":\"CrShutdownDetector\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344551112},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":13065134179},\"r8\":{\"value\":140552812261236},\"r15\":{\"value\":4},\"r10\":{\"value\":13065134179},\"rdx\":{\"value\":4},\"rdi\":{\"value\":7162258760691251055},\"r9\":{\"value\":18},\"r13\":{\"value\":0},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":0},\"r11\":{\"value\":4294967280},\"rcx\":{\"value\":0},\"r14\":{\"value\":13065133916},\"rsi\":{\"value\":7238539592028275492}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":9426,\"symbol\":\"read\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":73333214,\"
imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57432,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":264995187195904},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":264995187195904},\"r8\":{\"value\":0},\"r15\":{\"value\":264995187195904},\"r10\":{\"value\":264995187195904},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":264995187195904},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":7
4715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57433,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":200124001157120},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":200124001157120},\"r8\":{\"value\":0},\"r15\":{\"value\":200124001157120},\"r10\":{\"value\":200124001157120},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":200124001157120},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":
57434,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":199024489529344},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":199024489529344},\"r8\":{\"value\":0},\"r15\":{\"value\":199024489529344},\"r10\":{\"value\":199024489529344},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":199024489529344},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57435,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":201223512784896},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":201223512784896},\"r8\":{\"value\":0},\"r15\":{\"va
lue\":201223512784896},\"r10\":{\"value\":201223512784896},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":201223512784896},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57436,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":262796163940352},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":262796163940352},\"r8\":{\"value\":0},\"r15\":{\"value\":262796163940352},\"r10\":{\"value\":262796163940352},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":262796163940352},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"
frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57437,\"name\":\"NetworkNotificationThreadMac\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":205621559296000},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":205621559296000},\"r8\":{\"value\":0},\"r15\":{\"value\":205621559296000},\"r10\":{\"value\":205621559296000},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":205621559296000},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\"
:653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57438,\"name\":\"CompositorTileWorker1\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":161},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344559620},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":140703344816589,\"symbolLocation\":0,\"symbol\":\"_pthread_psynch_cond_cleanup\"},\"r15\":{\"value\":6912},\"r10\":{\"value\":0},\"rdx\":{\"value\":6912},\"rdi\":{\"value\":0},\"r9\":{\"value\":161},\"r13\":{\"value\":29691108924416},\"rflags\":{\"value\":658},\"rax\":{\"value\":260},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":13123825664},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":17934,\"symbol\":\"__psynch_cvwait\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":26475,\"symbol\":\"_pthread_cond_wait\",\"symbolLocation\":1211,\"
imageIndex\":14},{\"imageOffset\":75147211,\"imageIndex\":2},{\"imageOffset\":97344085,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57439,\"name\":\"ThreadPoolSingleThreadForegroundBlocking0\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":239706419757056},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":239706419757056},\"r8\":{\"value\":0},\"r15\":{\"value\":239706419757056},\"r10\":{\"value\":239706419757056},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":239706419757056},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032333,\"imageIndex\":2},{\"imageOffset\":75032026,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\
"id\":57440,\"name\":\"ThreadPoolSingleThreadSharedForeground1\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":223213745340416},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":223213745340416},\"r8\":{\"value\":0},\"r15\":{\"value\":223213745340416},\"r10\":{\"value\":223213745340416},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":223213745340416},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032285,\"imageIndex\":2},{\"imageOffset\":75032036,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57456,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":356254652301312},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":356254652301312},\"r8\":{\"va
lue\":0},\"r15\":{\"value\":356254652301312},\"r10\":{\"value\":356254652301312},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":356254652301312},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57459,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":296881024401408},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":1407057656656
40},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":296881024401408},\"r8\":{\"value\":0},\"r15\":{\"value\":296881024401408},\"r10\":{\"value\":296881024401408},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":296881024401408},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57460,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":297980536029184},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":297980536029184},\"r8\":{\"value\":0},\"r15\":{\"value\":297980536029184},\"r10\":{\"value\":297980536029184},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":297980536029184},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{
\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57461,\"name\":\"ThreadPoolSingleThreadSharedBackgroundBlocking2\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":301279070912512},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":301279070912512},\"r8\":{\"value\":0},\"r15\":{\"value\":301279070912512},\"r10\":{\"value\":301279070912512},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":301279070912512},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"sy
mbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032141,\"imageIndex\":2},{\"imageOffset\":75032056,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57463,\"name\":\"ThreadPoolSingleThreadSharedForegroundBlocking3\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":346359047651328},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":346359047651328},\"r8\":{\"value\":0},\"r15\":{\"value\":346359047651328},\"r10\":{\"value\":346359047651328},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":346359047651328},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":7503
0154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032285,\"imageIndex\":2},{\"imageOffset\":75032036,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57529,\"name\":\"CacheThread_BlockFile\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":13190900912},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140552997185664},\"r8\":{\"value\":140553053191392},\"r15\":{\"value\":0},\"r10\":{\"value\":140552997185664},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":2},\"r11\":{\"value\":13190901760},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553243401024},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"kevent64\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57530,\"name\":\"com.apple.NSEventThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":394771919011840},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":14070334460684
2},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":394771919011840},\"r8\":{\"value\":0},\"r15\":{\"value\":394771919011840},\"r10\":{\"value\":394771919011840},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":394771919011840},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":1686016,\"symbol\":\"_NSEventThread\",\"symbolLocation\":122,\"imageIndex\":12},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57561,\"name\":\"Service\nDiscovery\nThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":423324861595648},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":423324861595648},\"r8\":{\"value\":0},\"r15\":{\"value\":423324861595648},\"r10\":{\"value\":423324861595648},\"rdx\":{\"value\":85
89934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":423324861595648},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\nrunMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57562,\"name\":\"com.apple.CFSocket.private\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":3},\"rosetta\":{\"tmp2\":{\"value\":140703344585024},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":1405530
10214768},\"r10\":{\"value\":0},\"rdx\":{\"value\":140553010211280},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":140704414334832,\"symbolLocation\":0,\"symbol\":\"__kCFNull\"},\"rflags\":{\"value\":642},\"rax\":{\"value\":4},\"rsp\":{\"value\":0},\"r11\":{\"value\":140703345435675,\"symbolLocation\":0,\"symbol\":\"-[__NSCFArray\nobjectAtIndex:]\"},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553050490128},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":43338,\"symbol\":\"__select\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":669359,\"symbol\":\"__CFSocketManager\",\"symbolLocation\":637,\"imageIndex\":10},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57570,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":13200379904},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":13200916480},\"rsp\":{\"value\":409604},\"r11\":{\"value\":0},\"rcx\":{\"value\":172295},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57571,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":13200936960},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":515},\"rax\":{\"
value\":13201473536},\"rsp\":{\"value\":278532},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57600,\"name\":\"org.libusb.device-hotplug\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":545387832147968},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":545387832147968},\"r8\":{\"value\":0},\"r15\":{\"value\":545387832147968},\"r10\":{\"value\":545387832147968},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":545387832147968},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":1001609,\"symbol\":\"CFRunLoopRun\",\"symbolLocation\":40,\"imageIndex\":10},{\"imageOffset\":107249643,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57601,\"name\":\"UsbEventHandler\",\"threadSta
te\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":6837},\"r12\":{\"value\":140553247458160},\"rosetta\":{\"tmp2\":{\"value\":140703344576620},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":2147483},\"r8\":{\"value\":12297829382473034410},\"r15\":{\"value\":140553247458168},\"r10\":{\"value\":2147483},\"rdx\":{\"value\":60000},\"rdi\":{\"value\":140553247457824},\"r9\":{\"value\":6837},\"r13\":{\"value\":140553247458184},\"rflags\":{\"value\":658},\"rax\":{\"value\":4},\"rsp\":{\"value\":25997},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":2},\"rsi\":{\"value\":13210394000}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":34934,\"symbol\":\"poll\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":107236535,\"imageIndex\":2},{\"imageOffset\":107235803,\"imageIndex\":2},{\"imageOffset\":107236928,\"imageIndex\":2},{\"imageOffset\":107177423,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]}],\n \"usedImages\" : [\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 8600920064,\n \"size\" : 655360,\n \"uuid\" : \"d5406f23-6967-39c4-beb5-6ae3293c7753\",\n \"path\" : \"\\/usr\\/lib\\/dyld\",\n \"name\" : \"dyld\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 4624252928,\n \"size\" : 65536,\n \"uuid\" : \"7e101877-a6ff-3331-99a3-4222cb254447\",\n \"path\" : \"\\/usr\\/lib\\/libobjc-trampolines.dylib\",\n \"name\" : \"libobjc-trampolines.dylib\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 4649046016,\n \"CFBundleShortVersionString\" : \"115.0.5790.98\",\n \"CFBundleIdentifier\" : \"io.nwjs.nwjs.framework\",\n \"size\" : 189177856,\n \"uuid\" : \"4c4c447b-5555-3144-a1ec-62791bcf166d\",\n \"path\" : 
\"\\/Library\\/PostgreSQL\\/16\\/pgAdmin\n4.app\\/Contents\\/Frameworks\\/nwjs\nFramework.framework\\/Versions\\/115.0.5790.98\\/nwjs Framework\",\n \"name\" : \"nwjs Framework\",\n \"CFBundleVersion\" : \"5790.98\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 4442103808,\n \"CFBundleShortVersionString\" : \"1.0\",\n \"CFBundleIdentifier\" : \"com.apple.AutomaticAssessmentConfiguration\",\n \"size\" : 32768,\n \"uuid\" : \"b30252ae-24c6-3839-b779-661ef263b52d\",\n \"path\" :\n\"\\/System\\/Library\\/Frameworks\\/AutomaticAssessmentConfiguration.framework\\/Versions\\/A\\/AutomaticAssessmentConfiguration\",\n \"name\" : \"AutomaticAssessmentConfiguration\",\n \"CFBundleVersion\" : \"12.0.0\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 4447277056,\n \"size\" : 1720320,\n \"uuid\" : \"4c4c4416-5555-3144-a164-70bbf0436f17\",\n \"path\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin\n4.app\\/Contents\\/Frameworks\\/nwjs\nFramework.framework\\/Versions\\/115.0.5790.98\\/libffmpeg.dylib\",\n \"name\" : \"libffmpeg.dylib\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"arm64\",\n \"base\" : 140703124766720,\n \"size\" : 196608,\n \"uuid\" : \"2c5acb8c-fbaf-31ab-aeb3-90905c3fa905\",\n \"path\" : \"\\/usr\\/libexec\\/rosetta\\/runtime\",\n \"name\" : \"runtime\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"arm64\",\n \"base\" : 4436361216,\n \"size\" : 344064,\n \"uuid\" : \"a61ec9e9-1174-3dc6-9cdb-0d31811f4850\",\n \"path\" : \"\\/Library\\/Apple\\/*\\/libRosettaRuntime\",\n \"name\" : \"libRosettaRuntime\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 4301590528,\n \"CFBundleShortVersionString\" : \"7.8\",\n \"CFBundleIdentifier\" : \"org.pgadmin.pgadmin4\",\n \"size\" : 176128,\n \"uuid\" : \"4c4c4402-5555-3144-a1c7-07729cda43c0\",\n \"path\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin\n4.app\\/Contents\\/MacOS\\/pgAdmin\n4\",\n \"name\" : \"pgAdmin 4\",\n \"CFBundleVersion\" : \"4280.88\"\n },\n {\n 
\"size\" : 0,\n \"source\" : \"A\",\n \"base\" : 0,\n \"uuid\" : \"00000000-0000-0000-0000-000000000000\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703344979968,\n \"size\" : 40960,\n \"uuid\" : \"c94f952c-2787-30d2-ab77-ee474abd88d6\",\n \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_platform.dylib\",\n \"name\" : \"libsystem_platform.dylib\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703345197056,\n \"CFBundleShortVersionString\" : \"6.9\",\n \"CFBundleIdentifier\" : \"com.apple.CoreFoundation\",\n \"size\" : 4820989,\n \"uuid\" : \"4d842118-bb65-3f01-9087-ff1a2e3ab0d5\",\n \"path\" :\n\"\\/System\\/Library\\/Frameworks\\/CoreFoundation.framework\\/Versions\\/A\\/CoreFoundation\",\n \"name\" : \"CoreFoundation\",\n \"CFBundleVersion\" : \"2106\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703527325696,\n \"CFBundleShortVersionString\" : \"2.1.1\",\n \"CFBundleIdentifier\" : \"com.apple.HIToolbox\",\n \"size\" : 2736117,\n \"uuid\" : \"06bf0872-3b34-3c7b-ad5b-7a447d793405\",\n \"path\" :\n\"\\/System\\/Library\\/Frameworks\\/Carbon.framework\\/Versions\\/A\\/Frameworks\\/HIToolbox.framework\\/Versions\\/A\\/HIToolbox\",\n \"name\" : \"HIToolbox\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703401480192,\n \"CFBundleShortVersionString\" : \"6.9\",\n \"CFBundleIdentifier\" : \"com.apple.AppKit\",\n \"size\" : 20996092,\n \"uuid\" : \"27fed5dd-d148-3238-bc95-1dac5dd57fa1\",\n \"path\" :\n\"\\/System\\/Library\\/Frameworks\\/AppKit.framework\\/Versions\\/C\\/AppKit\",\n \"name\" : \"AppKit\",\n \"CFBundleVersion\" : \"2487.20.107\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703344541696,\n \"size\" : 241656,\n \"uuid\" : \"4df0d732-7fc4-3200-8176-f1804c63f2c8\",\n \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_kernel.dylib\",\n \"name\" : \"libsystem_kernel.dylib\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n 
\"base\" : 140703344783360,\n \"size\" : 49152,\n \"uuid\" : \"c64722b0-e96a-3fa5-96c3-b4beaf0c494a\",\n \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_pthread.dylib\",\n \"name\" : \"libsystem_pthread.dylib\"\n },\n {\n \"source\" : \"P\",\n \"arch\" : \"x86_64\",\n \"base\" : 140703361028096,\n \"CFBundleShortVersionString\" : \"6.9\",\n \"CFBundleIdentifier\" : \"com.apple.Foundation\",\n \"size\" : 12840956,\n \"uuid\" : \"581d66fd-7cef-3a8c-8647-1d962624703b\",\n \"path\" :\n\"\\/System\\/Library\\/Frameworks\\/Foundation.framework\\/Versions\\/C\\/Foundation\",\n \"name\" : \"Foundation\",\n \"CFBundleVersion\" : \"2106\"\n }\n],\n \"sharedCache\" : {\n \"base\" : 140703340380160,\n \"size\" : 21474836480,\n \"uuid\" : \"67c86f0b-dd40-3694-909d-52e210cbd5fa\"\n},\n \"legacyInfo\" : {\n \"threadTriggered\" : {\n \"name\" : \"CrBrowserMain\",\n \"queue\" : \"com.apple.main-thread\"\n }\n},\n \"logWritingSignature\" : \"8b321ae8a79f5edf7aad3381809b3fbd28f3768b\",\n \"trialInfo\" : {\n \"rollouts\" : [\n {\n \"rolloutId\" : \"60da5e84ab0ca017dace9abf\",\n \"factorPackIds\" : {\n\n },\n \"deploymentId\" : 240000008\n },\n {\n \"rolloutId\" : \"63f9578e238e7b23a1f3030a\",\n \"factorPackIds\" : {\n\n },\n \"deploymentId\" : 240000005\n }\n ],\n \"experiments\" : [\n {\n \"treatmentId\" : \"a092db1b-c401-44fa-9c54-518b7d69ca61\",\n \"experimentId\" : \"64a844035c85000c0f42398a\",\n \"deploymentId\" : 400000019\n }\n ]\n},\n \"reportNotes\" : [\n \"PC register does not match crashing frame (0x0 vs 0x100812560)\"\n]\n}\n\nModel: Mac14,9, BootROM 10151.41.12, proc 10:6:4 processors, 16 GB, SMC\nGraphics: Apple M2 Pro, Apple M2 Pro, Built-In\nDisplay: Color LCD, 3024 x 1964 Retina, Main, MirrorOff, Online\nMemory Module: LPDDR5, Micron\nAirPort: spairport_wireless_card_type_wifi (0x14E4, 0x4388), wl0: Sep 1\n2023 19:33:37 version 23.10.765.4.41.51.121 FWID 01-e2f09e46\nAirPort:\nBluetooth: Version (null), 0 services, 0 devices, 0 incoming serial ports\nNetwork Service: 
Wi-Fi, AirPort, en0\nUSB Device: USB31Bus\nUSB Device: USB31Bus\nUSB Device: USB31Bus\nThunderbolt Bus: MacBook Pro, Apple Inc.\nThunderbolt Bus: MacBook Pro, Apple Inc.\nThunderbolt Bus: MacBook Pro, Apple Inc.\n\nThanks & Regards,\nKanmani",
"msg_date": "Tue, 14 Nov 2023 18:42:21 -0500",
"msg_from": "Kanmani Thamizhanban <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Issue with launching PGAdmin 4 on Mac OC"
},
{
"msg_contents": "Hi Kanmani,\n\nWhat is the pgAdmin version you're using?\n\nOn Wed, Nov 15, 2023 at 3:02 PM Kanmani Thamizhanban <[email protected]>\nwrote:\n\n> Hi Team,\n>\n> Good day! I need an urgent help with launching PGAdmin4 in my Mac OS, I\n> have tried with both the versions 15, 16 and almost every other possible\n> way, but nothing is working. It says that it has quit unexpectedly\n> (screenshot attached). I have attached the bug report as well along with my\n> system specifications below. I really appreciate your help on resolving\n> this.\n>\n> -------------------------------------\n> Translated Report (Full Report Below)\n> -------------------------------------\n>\n> Process: pgAdmin 4 [3505]\n> Path: /Library/PostgreSQL/16/pgAdmin\n> 4.app/Contents/MacOS/pgAdmin 4\n> Identifier: org.pgadmin.pgadmin4\n> Version: 7.8 (4280.88)\n> Code Type: X86-64 (Translated)\n> Parent Process: launchd [1]\n> User ID: 501\n>\n> Date/Time: 2023-11-14 11:47:14.7065 -0500\n> OS Version: macOS 14.1.1 (23B81)\n> Report Version: 12\n> Anonymous UUID: A4518538-B2A9-0B93-C540-A9DCCCD929EF\n>\n> Sleep/Wake UUID: E31F7EEF-42B9-4E61-88DC-9C0571A2F4E3\n>\n> Time Awake Since Boot: 2800 seconds\n> Time Since Wake: 920 seconds\n>\n> System Integrity Protection: enabled\n>\n> Notes:\n> PC register does not match crashing frame (0x0 vs 0x100812560)\n>\n> Crashed Thread: 0 CrBrowserMain Dispatch queue:\n> com.apple.main-thread\n>\n> Exception Type: EXC_BAD_ACCESS (SIGSEGV)\n> Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000020\n> Exception Codes: 0x0000000000000001, 0x0000000000000020\n>\n> Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11\n> Terminating Process: exc handler [3505]\n>\n> VM Region Info: 0x20 is not in any region. 
Bytes before following region:\n> 140723014549472\n> REGION TYPE START - END [ VSIZE] PRT/MAX\n> SHRMOD REGION DETAIL\n> UNUSED SPACE AT START\n> --->\n> mapped file 7ffca14b4000-7ffcc6b5c000 [598.7M] r-x/r-x\n> SM=COW ...t_id=cccf3f63\n>\n> Error Formulating Crash Report:\n> PC register does not match crashing frame (0x0 vs 0x100812560)\n>\n> Thread 0 Crashed:: CrBrowserMain Dispatch queue: com.apple.main-thread\n> 0 ??? 0x100812560 ???\n> 1 libsystem_platform.dylib 0x7ff80ce5a393 _sigtramp + 51\n> 2 nwjs Framework 0x11eb31522 0x1151ad000 +\n> 160974114\n> 3 nwjs Framework 0x11ed4e1f0 0x1151ad000 +\n> 163189232\n> 4 nwjs Framework 0x11edbe0db 0x1151ad000 +\n> 163647707\n> 5 nwjs Framework 0x11b6ad51a 0x1151ad000 +\n> 105907482\n> 6 nwjs Framework 0x11b6b58d9 0x1151ad000 +\n> 105941209\n> 7 nwjs Framework 0x11c6444bd 0x1151ad000 +\n> 122254525\n> 8 nwjs Framework 0x11c642c46 0x1151ad000 +\n> 122248262\n> 9 nwjs Framework 0x11ed4fde5 0x1151ad000 +\n> 163196389\n> 10 nwjs Framework 0x11edcac97 0x1151ad000 +\n> 163699863\n> 11 nwjs Framework 0x11eaffcd5 0x1151ad000 +\n> 160771285\n> 12 nwjs Framework 0x11eafef17 0x1151ad000 +\n> 160767767\n> 13 nwjs Framework 0x11c1a7259 0x1151ad000 +\n> 117416537\n> 14 nwjs Framework 0x118619cb0 0x1151ad000 + 54971568\n> 15 nwjs Framework 0x11861c494 0x1151ad000 + 54981780\n> 16 nwjs Framework 0x11861c927 0x1151ad000 + 54982951\n> 17 nwjs Framework 0x118618a63 0x1151ad000 + 54966883\n> 18 nwjs Framework 0x1181bb179 0x1151ad000 + 50389369\n> 19 nwjs Framework 0x1186191c6 0x1151ad000 + 54968774\n> 20 nwjs Framework 0x11a46bf90 0x1151ad000 + 86765456\n> 21 nwjs Framework 0x11a471131 0x1151ad000 + 86786353\n> 22 nwjs Framework 0x11a46d6d0 0x1151ad000 + 86771408\n> 23 nwjs Framework 0x11a55f1da 0x1151ad000 + 87761370\n> 24 nwjs Framework 0x11a46f799 0x1151ad000 + 86779801\n> 25 nwjs Framework 0x11990f4d2 0x1151ad000 + 74851538\n> 26 nwjs Framework 0x119926a6e 0x1151ad000 + 74947182\n> 27 nwjs Framework 0x1199264a9 0x1151ad000 + 
74945705\n> 28 nwjs Framework 0x1199270d5 0x1151ad000 + 74948821\n> 29 nwjs Framework 0x119980ec3 0x1151ad000 + 75316931\n> 30 nwjs Framework 0x11997da22 0x1151ad000 + 75303458\n> 31 nwjs Framework 0x1199806df 0x1151ad000 + 75314911\n> 32 CoreFoundation 0x7ff80cf07a16\n> __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17\n> 33 CoreFoundation 0x7ff80cf079b9 __CFRunLoopDoSource0\n> + 157\n> 34 CoreFoundation 0x7ff80cf07788 __CFRunLoopDoSources0\n> + 215\n> 35 CoreFoundation 0x7ff80cf063f8 __CFRunLoopRun + 919\n> 36 CoreFoundation 0x7ff80cf05a99 CFRunLoopRunSpecific\n> + 557\n> 37 HIToolbox 0x7ff817c6d9d9\n> RunCurrentEventLoopInMode + 292\n> 38 HIToolbox 0x7ff817c6d7e6 ReceiveNextEventCommon\n> + 665\n> 39 HIToolbox 0x7ff817c6d531\n> _BlockUntilNextEventMatchingListInModeWithFilter + 66\n> 40 AppKit 0x7ff810477885 _DPSNextEvent + 880\n> 41 AppKit 0x7ff810d6b348\n> -[NSApplication(NSEventRouting)\n> _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1304\n> 42 nwjs Framework 0x1193b83f0 0x1151ad000 + 69252080\n> 43 nwjs Framework 0x11997da22 0x1151ad000 + 75303458\n> 44 nwjs Framework 0x1193b8369 0x1151ad000 + 69251945\n> 45 AppKit 0x7ff810468dfa -[NSApplication run]\n> + 603\n> 46 nwjs Framework 0x1199814ac 0x1151ad000 + 75318444\n> 47 nwjs Framework 0x11998023c 0x1151ad000 + 75313724\n> 48 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n> 49 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n> 50 nwjs Framework 0x1177fc301 0x1151ad000 + 40170241\n> 51 nwjs Framework 0x1177fdc92 0x1151ad000 + 40176786\n> 52 nwjs Framework 0x1177f9a5a 0x1151ad000 + 40159834\n> 53 nwjs Framework 0x118d351c4 0x1151ad000 + 62423492\n> 54 nwjs Framework 0x118d36389 0x1151ad000 + 62428041\n> 55 nwjs Framework 0x118d3619d 0x1151ad000 + 62427549\n> 56 nwjs Framework 0x118d34867 0x1151ad000 + 62421095\n> 57 nwjs Framework 0x118d34b03 0x1151ad000 + 62421763\n> 58 nwjs Framework 0x1151b0930 ChromeMain + 560\n> 59 pgAdmin 4 0x10065187e main + 286\n> 60 dyld 0x200a803a6 
start + 1942\n>\n> Thread 1:: com.apple.rosetta.exceptionserver\n> 0 runtime 0x7ff7ffc58294 0x7ff7ffc54000 + 17044\n>\n> Thread 2:: StackSamplingProfiler\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x1198c3f98 0x1151ad000 + 74543000\n> 8 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n> 9 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n> 10 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n> 11 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n> 12 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 13 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 14 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 3:\n> 0 runtime 0x7ff7ffc7694c 0x7ff7ffc54000 + 141644\n>\n> Thread 4:\n> 0 runtime 0x7ff7ffc7694c 0x7ff7ffc54000 + 141644\n>\n> Thread 5:\n> 0 runtime 0x7ff7ffc7694c 0x7ff7ffc54000 + 141644\n>\n> Thread 6:\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x11b85d23d 0x1151ad000 +\n> 107676221\n> 6 nwjs Framework 0x11b85e656 0x1151ad000 +\n> 107681366\n> 7 nwjs Framework 0x11b85e321 0x1151ad000 +\n> 107680545\n> 8 nwjs Framework 0x11b860318 0x1151ad000 +\n> 107688728\n> 9 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 10 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 7:: HangWatcher\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993d944 0x1151ad000 + 75041092\n> 8 nwjs Framework 0x11993db03 0x1151ad000 + 75041539\n> 9 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 10 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 11 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 8:: ThreadPoolServiceThread\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdf7506 kevent64 + 10\n> 2 nwjs Framework 0x119973e31 0x1151ad000 + 75263537\n> 3 nwjs Framework 0x119973cee 0x1151ad000 + 75263214\n> 4 nwjs Framework 0x119973c65 0x1151ad000 + 75263077\n> 5 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n> 6 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n> 7 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n> 8 nwjs Framework 0x11993043d 0x1151ad000 + 74986557\n> 9 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n> 10 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 11 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 12 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 9:: ThreadPoolForegroundWorker\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b6ad 0x1151ad000 + 75032237\n> 10 nwjs Framework 0x11993b5ab 0x1151ad000 + 75031979\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 10:: ThreadPoolBackgroundWorker\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b61d 0x1151ad000 + 75032093\n> 10 nwjs Framework 0x11993b5d0 0x1151ad000 + 75032016\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 11:: ThreadPoolBackgroundWorker\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b61d 0x1151ad000 + 75032093\n> 10 nwjs Framework 0x11993b5d0 0x1151ad000 + 75032016\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 12:: ThreadPoolForegroundWorker\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b6ad 0x1151ad000 + 75032237\n> 10 nwjs Framework 0x11993b5ab 0x1151ad000 + 75031979\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 13:: Chrome_IOThread\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdf7506 kevent64 + 10\n> 2 nwjs Framework 0x119973e31 0x1151ad000 + 75263537\n> 3 nwjs Framework 0x119973cee 0x1151ad000 + 75263214\n> 4 nwjs Framework 0x119973c65 0x1151ad000 + 75263077\n> 5 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n> 6 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n> 7 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n> 8 nwjs Framework 0x1177fef30 0x1151ad000 + 40181552\n> 9 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n> 10 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 11 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 12 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 14:: MemoryInfra\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x1198c3f98 0x1151ad000 + 74543000\n> 8 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n> 9 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n> 10 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n> 11 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n> 12 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 13 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 14 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 15:: NetworkConfigWatcher\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 CoreFoundation 0x7ff80cf07b49\n> __CFRunLoopServiceMachPort + 143\n> 6 CoreFoundation 0x7ff80cf065bc __CFRunLoopRun + 1371\n> 7 CoreFoundation 0x7ff80cf05a99 CFRunLoopRunSpecific\n> + 557\n> 8 Foundation 0x7ff80de01551\n> -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216\n> 9 nwjs Framework 0x11998126e 0x1151ad000 + 75317870\n> 10 nwjs Framework 0x11998023c 0x1151ad000 + 75313724\n> 11 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n> 12 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n> 13 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n> 14 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n> 15 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 16 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 17 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 16:: CrShutdownDetector\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdee4d2 read + 10\n> 2 nwjs Framework 0x11979c9de 0x1151ad000 + 73333214\n> 3 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 4 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 5 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 17:: NetworkConfigWatcher\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 CoreFoundation 0x7ff80cf07b49\n> __CFRunLoopServiceMachPort + 143\n> 6 CoreFoundation 0x7ff80cf065bc __CFRunLoopRun + 1371\n> 7 CoreFoundation 0x7ff80cf05a99 CFRunLoopRunSpecific\n> + 557\n> 8 Foundation 0x7ff80de01551\n> -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216\n> 9 nwjs Framework 0x11998126e 0x1151ad000 + 75317870\n> 10 nwjs Framework 0x11998023c 0x1151ad000 + 75313724\n> 11 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n> 12 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n> 13 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n> 14 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n> 15 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 16 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 17 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 18:: ThreadPoolForegroundWorker\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b6ad 0x1151ad000 + 75032237\n> 10 nwjs Framework 0x11993b5ab 0x1151ad000 + 75031979\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 19:: ThreadPoolForegroundWorker\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b6ad 0x1151ad000 + 75032237\n> 10 nwjs Framework 0x11993b5ab 0x1151ad000 + 75031979\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 20:: ThreadPoolForegroundWorker\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b6ad 0x1151ad000 + 75032237\n> 10 nwjs Framework 0x11993b5ab 0x1151ad000 + 75031979\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 21:: ThreadPoolForegroundWorker\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b6ad 0x1151ad000 + 75032237\n> 10 nwjs Framework 0x11993b5ab 0x1151ad000 + 75031979\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 22:: NetworkNotificationThreadMac\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 CoreFoundation 0x7ff80cf07b49\n> __CFRunLoopServiceMachPort + 143\n> 6 CoreFoundation 0x7ff80cf065bc __CFRunLoopRun + 1371\n> 7 CoreFoundation 0x7ff80cf05a99 CFRunLoopRunSpecific\n> + 557\n> 8 Foundation 0x7ff80de01551\n> -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216\n> 9 nwjs Framework 0x11998126e 0x1151ad000 + 75317870\n> 10 nwjs Framework 0x11998023c 0x1151ad000 + 75313724\n> 11 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n> 12 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n> 13 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n> 14 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n> 15 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 16 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 17 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 23:: CompositorTileWorker1\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdf060e __psynch_cvwait + 10\n> 2 libsystem_pthread.dylib 0x7ff80ce2d76b _pthread_cond_wait +\n> 1211\n> 3 nwjs Framework 0x1199577cb 0x1151ad000 + 75147211\n> 4 nwjs Framework 0x11ae82a55 0x1151ad000 + 97344085\n> 5 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 6 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 7 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 24:: ThreadPoolSingleThreadForegroundBlocking0\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b70d 0x1151ad000 + 75032333\n> 10 nwjs Framework 0x11993b5da 0x1151ad000 + 75032026\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 25:: ThreadPoolSingleThreadSharedForeground1\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b6dd 0x1151ad000 + 75032285\n> 10 nwjs Framework 0x11993b5e4 0x1151ad000 + 75032036\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 26:: NetworkConfigWatcher\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 CoreFoundation 0x7ff80cf07b49\n> __CFRunLoopServiceMachPort + 143\n> 6 CoreFoundation 0x7ff80cf065bc __CFRunLoopRun + 1371\n> 7 CoreFoundation 0x7ff80cf05a99 CFRunLoopRunSpecific\n> + 557\n> 8 Foundation 0x7ff80de01551\n> -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216\n> 9 nwjs Framework 0x11998126e 0x1151ad000 + 75317870\n> 10 nwjs Framework 0x11998023c 0x1151ad000 + 75313724\n> 11 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n> 12 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n> 13 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n> 14 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n> 15 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 16 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 17 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 27:: ThreadPoolForegroundWorker\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b6ad 0x1151ad000 + 75032237\n> 10 nwjs Framework 0x11993b5ab 0x1151ad000 + 75031979\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 28:: ThreadPoolForegroundWorker\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b6ad 0x1151ad000 + 75032237\n> 10 nwjs Framework 0x11993b5ab 0x1151ad000 + 75031979\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 29:: ThreadPoolSingleThreadSharedBackgroundBlocking2\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b64d 0x1151ad000 + 75032141\n> 10 nwjs Framework 0x11993b5f8 0x1151ad000 + 75032056\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 30:: ThreadPoolSingleThreadSharedForegroundBlocking3\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 nwjs Framework 0x119984702 0x1151ad000 + 75331330\n> 6 nwjs Framework 0x11990c91c 0x1151ad000 + 74840348\n> 7 nwjs Framework 0x11993ae8a 0x1151ad000 + 75030154\n> 8 nwjs Framework 0x11993bac4 0x1151ad000 + 75033284\n> 9 nwjs Framework 0x11993b6dd 0x1151ad000 + 75032285\n> 10 nwjs Framework 0x11993b5e4 0x1151ad000 + 75032036\n> 11 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 12 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 13 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 31:: CacheThread_BlockFile\n> 0 ??? 
0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdf7506 kevent64 + 10\n> 2 nwjs Framework 0x119973e31 0x1151ad000 + 75263537\n> 3 nwjs Framework 0x119973cee 0x1151ad000 + 75263214\n> 4 nwjs Framework 0x119973c65 0x1151ad000 + 75263077\n> 5 nwjs Framework 0x119927459 0x1151ad000 + 74949721\n> 6 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487\n> 7 nwjs Framework 0x119942c18 0x1151ad000 + 75062296\n> 8 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635\n> 9 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017\n> 10 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 11 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 32:: com.apple.NSEventThread\n> 0 ??? 0x7ff89d2e2a78 ???\n> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10\n> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +\n> 84\n> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +\n> 653\n> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19\n> 5 CoreFoundation 0x7ff80cf07b49\n> __CFRunLoopServiceMachPort + 143\n> 6 CoreFoundation 0x7ff80cf065bc __CFRunLoopRun + 1371\n> 7 CoreFoundation 0x7ff80cf05a99 CFRunLoopRunSpecific\n> + 557\n> 8 AppKit 0x7ff8105d4a00 _NSEventThread + 122\n> 9 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99\n> 10 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15\n>\n> Thread 33:: Service Discovery Thread\n> 0 ??? 
0x7ff89d2e2a78 ???
> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10
> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +
> 84
> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +
> 653
> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19
> 5 CoreFoundation 0x7ff80cf07b49
> __CFRunLoopServiceMachPort + 143
> 6 CoreFoundation 0x7ff80cf065bc __CFRunLoopRun + 1371
> 7 CoreFoundation 0x7ff80cf05a99 CFRunLoopRunSpecific
> + 557
> 8 Foundation 0x7ff80de01551
> -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 216
> 9 nwjs Framework 0x11998126e 0x1151ad000 + 75317870
> 10 nwjs Framework 0x11998023c 0x1151ad000 + 75313724
> 11 nwjs Framework 0x119927459 0x1151ad000 + 74949721
> 12 nwjs Framework 0x1198ee15f 0x1151ad000 + 74715487
> 13 nwjs Framework 0x119942c18 0x1151ad000 + 75062296
> 14 nwjs Framework 0x119942d6b 0x1151ad000 + 75062635
> 15 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017
> 16 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99
> 17 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15
>
> Thread 34:: com.apple.CFSocket.private
> 0 ??? 0x7ff89d2e2a78 ???
> 1 libsystem_kernel.dylib 0x7ff80cdf694a __select + 10
> 2 CoreFoundation 0x7ff80cf2f6af __CFSocketManager +
> 637
> 3 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99
> 4 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15
>
> Thread 35:
> 0 runtime 0x7ff7ffc7694c 0x7ff7ffc54000 + 141644
>
> Thread 36:
> 0 runtime 0x7ff7ffc7694c 0x7ff7ffc54000 + 141644
>
> Thread 37:: org.libusb.device-hotplug
> 0 ??? 0x7ff89d2e2a78 ???
> 1 libsystem_kernel.dylib 0x7ff80cdeda6e mach_msg2_trap + 10
> 2 libsystem_kernel.dylib 0x7ff80cdfbe7a mach_msg2_internal +
> 84
> 3 libsystem_kernel.dylib 0x7ff80cdf4b92 mach_msg_overwrite +
> 653
> 4 libsystem_kernel.dylib 0x7ff80cdedd5f mach_msg + 19
> 5 CoreFoundation 0x7ff80cf07b49
> __CFRunLoopServiceMachPort + 143
> 6 CoreFoundation 0x7ff80cf065bc __CFRunLoopRun + 1371
> 7 CoreFoundation 0x7ff80cf05a99 CFRunLoopRunSpecific
> + 557
> 8 CoreFoundation 0x7ff80cf80889 CFRunLoopRun + 40
> 9 nwjs Framework 0x11b7f4feb 0x1151ad000 +
> 107249643
> 10 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99
> 11 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15
>
> Thread 38:: UsbEventHandler
> 0 ??? 0x7ff89d2e2a78 ???
> 1 libsystem_kernel.dylib 0x7ff80cdf4876 poll + 10
> 2 nwjs Framework 0x11b7f1cb7 0x1151ad000 +
> 107236535
> 3 nwjs Framework 0x11b7f19db 0x1151ad000 +
> 107235803
> 4 nwjs Framework 0x11b7f1e40 0x1151ad000 +
> 107236928
> 5 nwjs Framework 0x11b7e35cf 0x1151ad000 +
> 107177423
> 6 nwjs Framework 0x119957ed9 0x1151ad000 + 75149017
> 7 libsystem_pthread.dylib 0x7ff80ce2d202 _pthread_start + 99
> 8 libsystem_pthread.dylib 0x7ff80ce28bab thread_start + 15
>
>
> Thread 0 crashed with X86 Thread State (64-bit):
> rax: 0x00007fd519066801 rbx: 0x0000000000000008 rcx:
> 0x0000000120cdf478 rdx: 0x0000000000000400
> rdi: 0x0000000000000018 rsi: 0x000000000000031c rbp:
> 0x00000003051ce690 rsp: 0x00000003051ce690
> r8: 0xc5bdffd50ea7b1b7 r9: 0x0000000000000400 r10:
> 0x0000000000000000 r11: 0x00007ff810494646
> r12: 0x00007fd50ea247d8 r13: 0x00007fd50ea24740 r14:
> 0x0000000000000008 r15: 0x00000003051ce720
> rip: <unavailable> rfl: 0x0000000000000206
> tmp0: 0x0000000000000001 tmp1: 0x000000011ed4e1e0 tmp2: 0x000000011eb31522
>
>
> Binary Images:
> 0x200a7a000 - 0x200b19fff dyld (*)
> <d5406f23-6967-39c4-beb5-6ae3293c7753> /usr/lib/dyld
> 0x113a08000 - 0x113a17fff libobjc-trampolines.dylib (*)
> <7e101877-a6ff-3331-99a3-4222cb254447> /usr/lib/libobjc-trampolines.dylib
> 0x1151ad000 - 0x120616fff io.nwjs.nwjs.framework
> (115.0.5790.98) <4c4c447b-5555-3144-a1ec-62791bcf166d>
> /Library/PostgreSQL/16/pgAdmin 4.app/Contents/Frameworks/nwjs
> Framework.framework/Versions/115.0.5790.98/nwjs Framework
> 0x108c52000 - 0x108c59fff
> com.apple.AutomaticAssessmentConfiguration (1.0)
> <b30252ae-24c6-3839-b779-661ef263b52d>
> /System/Library/Frameworks/AutomaticAssessmentConfiguration.framework/Versions/A/AutomaticAssessmentConfiguration
> 0x109141000 - 0x1092e4fff libffmpeg.dylib (*)
> <4c4c4416-5555-3144-a164-70bbf0436f17> /Library/PostgreSQL/16/pgAdmin
> 4.app/Contents/Frameworks/nwjs
> Framework.framework/Versions/115.0.5790.98/libffmpeg.dylib
> 0x7ff7ffc54000 - 0x7ff7ffc83fff runtime (*)
> <2c5acb8c-fbaf-31ab-aeb3-90905c3fa905> /usr/libexec/rosetta/runtime
> 0x1086d8000 - 0x10872bfff libRosettaRuntime (*)
> <a61ec9e9-1174-3dc6-9cdb-0d31811f4850> /Library/Apple/*/libRosettaRuntime
> 0x100651000 - 0x10067bfff org.pgadmin.pgadmin4 (7.8)
> <4c4c4402-5555-3144-a1c7-07729cda43c0> /Library/PostgreSQL/16/pgAdmin
> 4.app/Contents/MacOS/pgAdmin 4
> 0x0 - 0xffffffffffffffff ??? (*)
> <00000000-0000-0000-0000-000000000000> ???
> 0x7ff80ce57000 - 0x7ff80ce60fff libsystem_platform.dylib (*)
> <c94f952c-2787-30d2-ab77-ee474abd88d6>
> /usr/lib/system/libsystem_platform.dylib
> 0x7ff80ce8c000 - 0x7ff80d324ffc com.apple.CoreFoundation (6.9)
> <4d842118-bb65-3f01-9087-ff1a2e3ab0d5>
> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
> 0x7ff817c3d000 - 0x7ff817ed8ff4 com.apple.HIToolbox (2.1.1)
> <06bf0872-3b34-3c7b-ad5b-7a447d793405>
> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox
> 0x7ff810439000 - 0x7ff81183effb com.apple.AppKit (6.9)
> <27fed5dd-d148-3238-bc95-1dac5dd57fa1>
> /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit
> 0x7ff80cdec000 - 0x7ff80ce26ff7 libsystem_kernel.dylib (*)
> <4df0d732-7fc4-3200-8176-f1804c63f2c8>
> /usr/lib/system/libsystem_kernel.dylib
> 0x7ff80ce27000 - 0x7ff80ce32fff libsystem_pthread.dylib (*)
> <c64722b0-e96a-3fa5-96c3-b4beaf0c494a>
> /usr/lib/system/libsystem_pthread.dylib
> 0x7ff80dda5000 - 0x7ff80e9e3ffb com.apple.Foundation (6.9)
> <581d66fd-7cef-3a8c-8647-1d962624703b>
> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
>
> External Modification Summary:
> Calls made by other processes targeting this process:
> task_for_pid: 0
> thread_create: 0
> thread_set_state: 0
> Calls made by this process:
> task_for_pid: 0
> thread_create: 0
> thread_set_state: 0
> Calls made by all processes on this machine:
> task_for_pid: 0
> thread_create: 0
> thread_set_state: 0
>
>
> -----------
> Full Report
> -----------
>
> {\"app_name\":\"pgAdmin 4\",\"timestamp\":\"2023-11-14 11:47:18.00
> -0500\",\"app_version\":\"7.8\",\"slice_uuid\":\"4c4c4402-5555-3144-a1c7-07729cda43c0\",\"build_version\":\"4280.88\",\"platform\":1,\"bundleID\":\"org.pgadmin.pgadmin4\",\"share_with_app_devs\":1,\"is_first_party\":0,\"bug_type\":\"309\",\"os_version\":\"macOS
> 14.1.1 
(23B81)\",\"roots_installed\":0,\"name\":\"pgAdmin
> 4\",\"incident_id\":\"1AF5B51F-D7DC-4AD5-8526-1C5B3A33AFA5\"}
> {
> \"uptime\" : 2800,
> \"procRole\" : \"Foreground\",
> \"version\" : 2,
> \"userID\" : 501,
> \"deployVersion\" : 210,
> \"modelCode\" : \"Mac14,9\",
> \"coalitionID\" : 2672,
> \"osVersion\" : {
> \"train\" : \"macOS 14.1.1\",
> \"build\" : \"23B81\",
> \"releaseType\" : \"User\"
> },
> \"captureTime\" : \"2023-11-14 11:47:14.7065 -0500\",
> \"codeSigningMonitor\" : 1,
> \"incident\" : \"1AF5B51F-D7DC-4AD5-8526-1C5B3A33AFA5\",
> \"pid\" : 3505,
> \"translated\" : true,
> \"cpuType\" : \"X86-64\",
> \"roots_installed\" : 0,
> \"bug_type\" : \"309\",
> \"procLaunch\" : \"2023-11-14 11:47:06.3899 -0500\",
> \"procStartAbsTime\" : 67472503520,
> \"procExitAbsTime\" : 67672052074,
> \"procName\" : \"pgAdmin 4\",
> \"procPath\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin 4.app\\/Contents\\/MacOS\\/pgAdmin
> 4\",
> \"bundleInfo\" :
> {\"CFBundleShortVersionString\":\"7.8\",\"CFBundleVersion\":\"4280.88\",\"CFBundleIdentifier\":\"org.pgadmin.pgadmin4\"},
> \"storeInfo\" :
> {\"deviceIdentifierForVendor\":\"F2A41A90-E8FF-58E0-AF26-5F17BFD205F1\",\"thirdParty\":true},
> \"parentProc\" : \"launchd\",
> \"parentPid\" : 1,
> \"coalitionName\" : \"org.pgadmin.pgadmin4\",
> \"crashReporterKey\" : \"A4518538-B2A9-0B93-C540-A9DCCCD929EF\",
> \"codeSigningID\" : \"\",
> \"codeSigningTeamID\" : \"\",
> \"codeSigningValidationCategory\" : 0,
> \"codeSigningTrustLevel\" : 4294967295,
> \"wakeTime\" : 920,
> \"sleepWakeUUID\" : \"E31F7EEF-42B9-4E61-88DC-9C0571A2F4E3\",
> \"sip\" : \"enabled\",
> \"vmRegionInfo\" : \"0x20 is not in any region. Bytes before following
> region: 140723014549472\\n REGION TYPE START - END
> [ VSIZE] PRT\\/MAX SHRMOD REGION DETAIL\\n UNUSED SPACE AT
> START\\n---> \\n mapped file 7ffca14b4000-7ffcc6b5c000
> [598.7M] r-x\\/r-x SM=COW ...t_id=cccf3f63\",
> \"exception\" : {\"codes\":\"0x0000000000000001,
> 0x0000000000000020\",\"rawCodes\":[1,32],\"type\":\"EXC_BAD_ACCESS\",\"signal\":\"SIGSEGV\",\"subtype\":\"KERN_INVALID_ADDRESS
> at 0x0000000000000020\"},
> \"termination\" :
> {\"flags\":0,\"code\":11,\"namespace\":\"SIGNAL\",\"indicator\":\"Segmentation fault:
> 11\",\"byProc\":\"exc handler\",\"byPid\":3505},
> \"vmregioninfo\" : \"0x20 is not in any region. Bytes before following
> region: 140723014549472\\n REGION TYPE START - END
> [ VSIZE] PRT\\/MAX SHRMOD REGION DETAIL\\n UNUSED SPACE AT
> START\\n---> \\n mapped file 7ffca14b4000-7ffcc6b5c000
> [598.7M] r-x\\/r-x SM=COW ...t_id=cccf3f63\",
> \"extMods\" :
> {\"caller\":{\"thread_create\":0,\"thread_set_state\":0,\"task_for_pid\":0},\"system\":{\"thread_create\":0,\"thread_set_state\":0,\"task_for_pid\":0},\"targeted\":{\"thread_create\":0,\"thread_set_state\":0,\"task_for_pid\":0},\"warnings\":0},
> \"faultingThread\" : 0,
> \"threads\" :
> [{\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":12970682000},\"r12\":{\"value\":140553050277848},\"rosetta\":{\"tmp2\":{\"value\":4810020130},\"tmp1\":{\"value\":4812235232},\"tmp0\":{\"value\":1}},\"rbx\":{\"value\":8},\"r8\":{\"value\":14248826086609105335},\"r15\":{\"value\":12970682144},\"r10\":{\"value\":0},\"rdx\":{\"value\":1024},\"rdi\":{\"value\":24},\"r9\":{\"value\":1024},\"r13\":{\"value\":140553050277696},\"rflags\":{\"value\":518},\"rax\":{\"value\":140553224611841},\"rsp\":{\"value\":12970682000},\"r11\":{\"value\":140703401854534,\"symbolLocation\":0,\"symbol\":\"-[NSView
> alphaValue]\"},\"rcx\":{\"value\":4845335672,\"symbolLocation\":5872920,\"symbol\":\"vtable
> for
> 
v8::internal::SetupIsolateDelegate\"},\"r14\":{\"value\":8},\"rsi\":{\"value\":796}},\"id\":57285,\"triggered\":true,\"name\":\"CrBrowserMain\",\"queue\":\"com.apple.main-thread\",\"frames\":[{\"imageOffset\":4303431008,\"imageIndex\":8},{\"imageOffset\":13203,\"symbol\":\"_sigtramp\",\"symbolLocation\":51,\"imageIndex\":9},{\"imageOffset\":160974114,\"imageIndex\":2},{\"imageOffset\":163189232,\"imageIndex\":2},{\"imageOffset\":163647707,\"imageIndex\":2},{\"imageOffset\":105907482,\"imageIndex\":2},{\"imageOffset\":105941209,\"imageIndex\":2},{\"imageOffset\":122254525,\"imageIndex\":2},{\"imageOffset\":122248262,\"imageIndex\":2},{\"imageOffset\":163196389,\"imageIndex\":2},{\"imageOffset\":163699863,\"imageIndex\":2},{\"imageOffset\":160771285,\"imageIndex\":2},{\"imageOffset\":160767767,\"imageIndex\":2},{\"imageOffset\":117416537,\"imageIndex\":2},{\"imageOffset\":54971568,\"imageIndex\":2},{\"imageOffset\":54981780,\"imageIndex\":2},{\"imageOffset\":54982951,\"imageIndex\":2},{\"imageOffset\":54966883,\"imageIndex\":2},{\"imageOffset\":50389369,\"imageIndex\":2},{\"imageOffset\":54968774,\"imageIndex\":2},{\"imageOffset\":86765456,\"imageIndex\":2},{\"imageOffset\":86786353,\"imageIndex\":2},{\"imageOffset\":86771408,\"imageIndex\":2},{\"imageOffset\":87761370,\"imageIndex\":2},{\"imageOffset\":86779801,\"imageIndex\":2},{\"imageOffset\":74851538,\"imageIndex\":2},{\"imageOffset\":74947182,\"imageIndex\":2},{\"imageOffset\":74945705,\"imageIndex\":2},{\"imageOffset\":74948821,\"imageIndex\":2},{\"imageOffset\":75316931,\"imageIndex\":2},{\"imageOffset\":75303458,\"imageIndex\":2},{\"imageOffset\":75314911,\"imageIndex\":2},{\"imageOffset\":506390,\"symbol\":\"__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__\",\"symbolLocation\":17,\"imageIndex\":10},{\"imageOffset\":506297,\"symbol\":\"__CFRunLoopDoSource0\",\"symbolLocation\":157,\"imageIndex\":10},{\"imageOffset\":505736,\"symbol\":\"__CFRunLoopDoSources0\",\"symbolLocation\":215,\"imageIndex\":10
},{\"imageOffset\":500728,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":919,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":199129,\"symbol\":\"RunCurrentEventLoopInMode\",\"symbolLocation\":292,\"imageIndex\":11},{\"imageOffset\":198630,\"symbol\":\"ReceiveNextEventCommon\",\"symbolLocation\":665,\"imageIndex\":11},{\"imageOffset\":197937,\"symbol\":\"_BlockUntilNextEventMatchingListInModeWithFilter\",\"symbolLocation\":66,\"imageIndex\":11},{\"imageOffset\":256133,\"symbol\":\"_DPSNextEvent\",\"symbolLocation\":880,\"imageIndex\":12},{\"imageOffset\":9642824,\"symbol\":\"-[NSApplication(NSEventRouting)\n> _nextEventMatchingEventMask:untilDate:inMode:dequeue:]\",\"symbolLocation\":1304,\"imageIndex\":12},{\"imageOffset\":69252080,\"imageIndex\":2},{\"imageOffset\":75303458,\"imageIndex\":2},{\"imageOffset\":69251945,\"imageIndex\":2},{\"imageOffset\":196090,\"symbol\":\"-[NSApplication\n> run]\",\"symbolLocation\":603,\"imageIndex\":12},{\"imageOffset\":75318444,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":40170241,\"imageIndex\":2},{\"imageOffset\":40176786,\"imageIndex\":2},{\"imageOffset\":40159834,\"imageIndex\":2},{\"imageOffset\":62423492,\"imageIndex\":2},{\"imageOffset\":62428041,\"imageIndex\":2},{\"imageOffset\":62427549,\"imageIndex\":2},{\"imageOffset\":62421095,\"imageIndex\":2},{\"imageOffset\":62421763,\"imageIndex\":2},{\"imageOffset\":14640,\"symbol\":\"ChromeMain\",\"symbolLocation\":560,\"imageIndex\":2},{\"imageOffset\":2174,\"symbol\":\"main\",\"symbolLocation\":286,\"imageIndex\":7},{\"imageOffset\":25510,\"symbol\":\"start\",\"symbolLocation\":1942,\"imageIndex\":0}]},{\"id\":57293,\"name\":\"com.apple.rosetta.exceptionserver\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":34097745362944},\"r12\":{\"value\":51
17060296},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":4462471109675},\"tmp0\":{\"value\":10337986281472}},\"rbx\":{\"value\":4462471109675},\"r8\":{\"value\":7939},\"r15\":{\"value\":4898951168},\"r10\":{\"value\":15586436317184},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":4436777856},\"rflags\":{\"value\":582},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":10337986281472},\"r11\":{\"value\":32},\"rcx\":{\"value\":17314086914},\"r14\":{\"value\":4303431008},\"rsi\":{\"value\":2616}},\"frames\":[{\"imageOffset\":17044,\"imageIndex\":5}]},{\"id\":57315,\"name\":\"StackSamplingProfiler\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":43993350012928},\"r12\":{\"value\":78},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":43993350012928},\"r8\":{\"value\":0},\"r15\":{\"value\":43993350012928},\"r10\":{\"value\":43993350012928},\"rdx\":{\"value\":0},\"rdi\":{\"value\":78},\"r9\":{\"value\":43993350012928},\"r13\":{\"value\":17179869442},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869442},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":74543000,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\
":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57316,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":12979638272},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":12980174848},\"rsp\":{\"value\":409604},\"r11\":{\"value\":0},\"rcx\":{\"value\":8967},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57317,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":12980195328},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":12980731904},\"rsp\":{\"value\":409602},\"r11\":{\"value\":0},\"rcx\":{\"value\":12035},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57318,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":12980752384},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"val
ue\":531},\"rax\":{\"value\":12981288960},\"rsp\":{\"value\":409604},\"r11\":{\"value\":0},\"rcx\":{\"value\":10503},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57332,\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":107676221,\"imageIndex\":2},{\"imageOffset\":107681366,\"imageIndex\":2},{\"imageOffset\":107680545,\"imageIndex\":2},{\"imageOffset\":107688728,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":53888954662912},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":53888954662912},\"r10\":{\"value\":0},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":53888954662912},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":48},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":48},\"rsi\":{\"value\":48}}},{\"id\":57350,\"name\":\"HangWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":149632365625344},\"r12\":{\"value\":10000},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":149632365625344},\"r8\":{\"value\":0},\"r15\":{\"value\":149632365625344},\"r10\":{\"value
\":149632365625344},\"rdx\":{\"value\":0},\"rdi\":{\"value\":10000},\"r9\":{\"value\":149632365625344},\"r13\":{\"value\":17179869442},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869442},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75041092,\"imageIndex\":2},{\"imageOffset\":75041539,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57351,\"name\":\"ThreadPoolServiceThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":12998057104},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140553009377648},\"r8\":{\"value\":140553008003200},\"r15\":{\"value\":0},\"r10\":{\"value\":140553009377648},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":5},\"r11\":{\"value\":12998057984},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553008497312},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"kevent64\",\"symbolLocatio
n\":10,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":74986557,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57352,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":180332791857152},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":180332791857152},\"r8\":{\"value\":0},\"r15\":{\"value\":180332791857152},\"r10\":{\"value\":180332791857152},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":180332791857152},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75
031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57353,\"name\":\"ThreadPoolBackgroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":152845001162752},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":152845001162752},\"r8\":{\"value\":0},\"r15\":{\"value\":152845001162752},\"r10\":{\"value\":152845001162752},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":152845001162752},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032093,\"imageIndex\":2},{\"imageOffset\":75032016,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57354,\"name\":\"ThreadPoolBackgroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_ST
ATE\",\"rbp\":{\"value\":175934745346048},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":175934745346048},\"r8\":{\"value\":0},\"r15\":{\"value\":175934745346048},\"r10\":{\"value\":175934745346048},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":175934745346048},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032093,\"imageIndex\":2},{\"imageOffset\":75032016,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57355,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":155044024418304},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":155044024418304},\"r8\":{\"value\":0},\"r15\":{\"value\":155044024418304},\"r10\":{\"value\":155044024418304},\"rdx\":{\"value\":0},\"rdi\":
{\"value\":0},\"r9\":{\"value\":155044024418304},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57357,\"name\":\"Chrome_IOThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":13039979632},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140553050165472},\"r8\":{\"value\":140553008471776},\"r15\":{\"value\":0},\"r10\":{\"value\":140553050165472},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":8},\"r11\":{\"value\":13039980544},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553008516224},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"
kevent64\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":40181552,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57358,\"name\":\"MemoryInfra\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":188029373251584},\"r12\":{\"value\":14641},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":188029373251584},\"r8\":{\"value\":0},\"r15\":{\"value\":188029373251584},\"r10\":{\"value\":188029373251584},\"rdx\":{\"value\":0},\"rdi\":{\"value\":14641},\"r9\":{\"value\":188029373251584},\"r13\":{\"value\":17179869442},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869442},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":74543000,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},
{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57364,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":274890791845888},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":274890791845888},\"r8\":{\"value\":0},\"r15\":{\"value\":274890791845888},\"r10\":{\"value\":274890791845888},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":274890791845888},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\n> 
runMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57365,\"name\":\"CrShutdownDetector\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344551112},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":13065134179},\"r8\":{\"value\":140552812261236},\"r15\":{\"value\":4},\"r10\":{\"value\":13065134179},\"rdx\":{\"value\":4},\"rdi\":{\"value\":7162258760691251055},\"r9\":{\"value\":18},\"r13\":{\"value\":0},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":0},\"r11\":{\"value\":4294967280},\"rcx\":{\"value\":0},\"r14\":{\"value\":13065133916},\"rsi\":{\"value\":7238539592028275492}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":9426,\"symbol\":\"read\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":73333214,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57432,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":264995187195904},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":264995187195
904},\"r8\":{\"value\":0},\"r15\":{\"value\":264995187195904},\"r10\":{\"value\":264995187195904},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":264995187195904},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\n> 
runMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57433,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":200124001157120},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":200124001157120},\"r8\":{\"value\":0},\"r15\":{\"value\":200124001157120},\"r10\":{\"value\":200124001157120},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":200124001157120},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":7514901
7,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57434,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":199024489529344},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":199024489529344},\"r8\":{\"value\":0},\"r15\":{\"value\":199024489529344},\"r10\":{\"value\":199024489529344},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":199024489529344},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57435,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":201223512784896},\"r12\"
:{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":201223512784896},\"r8\":{\"value\":0},\"r15\":{\"value\":201223512784896},\"r10\":{\"value\":201223512784896},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":201223512784896},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57436,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":262796163940352},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":262796163940352},\"r8\":{\"value\":0},\"r15\":{\"value\":262796163940352},\"r10\":{\"value\":262796163940352},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":262796163940352},
\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57437,\"name\":\"NetworkNotificationThreadMac\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":205621559296000},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":205621559296000},\"r8\":{\"value\":0},\"r15\":{\"value\":205621559296000},\"r10\":{\"value\":205621559296000},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":205621559296000},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\"
,\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\n> runMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57438,\"name\":\"CompositorTileWorker1\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":161},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344559620},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":140703344816589,\"symbolLocation\":0,\"symbol\":\"_pthread_psynch_cond_cleanup\"},\"r15\":{\"value\":6912},\"r10\":{\"value\":0},\"rdx\":{\"value\":6912},\"rdi\":{\"value\":0},\"r9\":{\"value\":161},\"r13\":{\"value\":29691108924416},\"rflags\":{\"value\":658},\"rax\":{\"value\":260},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":13123825664},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffs
et\":140705765665400,\"imageIndex\":8},{\"imageOffset\":17934,\"symbol\":\"__psynch_cvwait\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":26475,\"symbol\":\"_pthread_cond_wait\",\"symbolLocation\":1211,\"imageIndex\":14},{\"imageOffset\":75147211,\"imageIndex\":2},{\"imageOffset\":97344085,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57439,\"name\":\"ThreadPoolSingleThreadForegroundBlocking0\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":239706419757056},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":239706419757056},\"r8\":{\"value\":0},\"r15\":{\"value\":239706419757056},\"r10\":{\"value\":239706419757056},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":239706419757056},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032333,\"imageIndex\":2},{\"imageOffset\":75032026,\"imageIndex\":2},{\"imageOffset\"
:75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57440,\"name\":\"ThreadPoolSingleThreadSharedForeground1\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":223213745340416},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":223213745340416},\"r8\":{\"value\":0},\"r15\":{\"value\":223213745340416},\"r10\":{\"value\":223213745340416},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":223213745340416},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032285,\"imageIndex\":2},{\"imageOffset\":75032036,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57456,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":356254652
301312},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":356254652301312},\"r8\":{\"value\":0},\"r15\":{\"value\":356254652301312},\"r10\":{\"value\":356254652301312},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":356254652301312},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\n> 
runMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57459,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":296881024401408},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":296881024401408},\"r8\":{\"value\":0},\"r15\":{\"value\":296881024401408},\"r10\":{\"value\":296881024401408},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":296881024401408},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":7514901
7,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57460,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":297980536029184},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":297980536029184},\"r8\":{\"value\":0},\"r15\":{\"value\":297980536029184},\"r10\":{\"value\":297980536029184},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":297980536029184},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57461,\"name\":\"ThreadPoolSingleThreadSharedBackgroundBlocking2\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":301
279070912512},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":301279070912512},\"r8\":{\"value\":0},\"r15\":{\"value\":301279070912512},\"r10\":{\"value\":301279070912512},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":301279070912512},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032141,\"imageIndex\":2},{\"imageOffset\":75032056,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57463,\"name\":\"ThreadPoolSingleThreadSharedForegroundBlocking3\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":346359047651328},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":346359047651328},\"r8\":{\"value\":0},\"r15\":{\"value\":346359047651328},\"r10\":{\"value\":346359047651328},\"rdx\":{\"value\":0},\"rdi\":{\"valu
e\":0},\"r9\":{\"value\":346359047651328},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032285,\"imageIndex\":2},{\"imageOffset\":75032036,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57529,\"name\":\"CacheThread_BlockFile\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":13190900912},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140552997185664},\"r8\":{\"value\":140553053191392},\"r15\":{\"value\":0},\"r10\":{\"value\":140552997185664},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":2},\"r11\":{\"value\":13190901760},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553243401024},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"k
event64\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57530,\"name\":\"com.apple.NSEventThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":394771919011840},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":394771919011840},\"r8\":{\"value\":0},\"r15\":{\"value\":394771919011840},\"r10\":{\"value\":394771919011840},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":394771919011840},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol
\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":1686016,\"symbol\":\"_NSEventThread\",\"symbolLocation\":122,\"imageIndex\":12},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57561,\"name\":\"Service\n> Discovery\n> Thread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":423324861595648},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":423324861595648},\"r8\":{\"value\":0},\"r15\":{\"value\":423324861595648},\"r10\":{\"value\":423324861595648},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":423324861595648},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop)\n> 
runMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57562,\"name\":\"com.apple.CFSocket.private\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":3},\"rosetta\":{\"tmp2\":{\"value\":140703344585024},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":140553010214768},\"r10\":{\"value\":0},\"rdx\":{\"value\":140553010211280},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":140704414334832,\"symbolLocation\":0,\"symbol\":\"__kCFNull\"},\"rflags\":{\"value\":642},\"rax\":{\"value\":4},\"rsp\":{\"value\":0},\"r11\":{\"value\":140703345435675,\"symbolLocation\":0,\"symbol\":\"-[__NSCFArray\n> 
objectAtIndex:]\"},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553050490128},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":43338,\"symbol\":\"__select\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":669359,\"symbol\":\"__CFSocketManager\",\"symbolLocation\":637,\"imageIndex\":10},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57570,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":13200379904},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":13200916480},\"rsp\":{\"value\":409604},\"r11\":{\"value\":0},\"rcx\":{\"value\":172295},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57571,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":13200936960},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":515},\"rax\":{\"value\":13201473536},\"rsp\":{\"value\":278532},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57600,\"name\":\"org.libusb.device-hotplug\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":545387832147968},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":14070334
4606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":545387832147968},\"r8\":{\"value\":0},\"r15\":{\"value\":545387832147968},\"r10\":{\"value\":545387832147968},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":545387832147968},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":1001609,\"symbol\":\"CFRunLoopRun\",\"symbolLocation\":40,\"imageIndex\":10},{\"imageOffset\":107249643,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57601,\"name\":\"UsbEventHandler\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":6837},\"r12\":{\"value\":140553247458160},\"rosetta\":{\"tmp2\":{\"value\":140703344576620},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":2147483},\"r8\":{\"value\":12297829382473034410},\"r15\":{\"value\":140553247458168},\"r10\":{\"valu
e\":2147483},\"rdx\":{\"value\":60000},\"rdi\":{\"value\":140553247457824},\"r9\":{\"value\":6837},\"r13\":{\"value\":140553247458184},\"rflags\":{\"value\":658},\"rax\":{\"value\":4},\"rsp\":{\"value\":25997},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":2},\"rsi\":{\"value\":13210394000}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":34934,\"symbol\":\"poll\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":107236535,\"imageIndex\":2},{\"imageOffset\":107235803,\"imageIndex\":2},{\"imageOffset\":107236928,\"imageIndex\":2},{\"imageOffset\":107177423,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]}],\n> \"usedImages\" : [\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 8600920064,\n> \"size\" : 655360,\n> \"uuid\" : \"d5406f23-6967-39c4-beb5-6ae3293c7753\",\n> \"path\" : \"\\/usr\\/lib\\/dyld\",\n> \"name\" : \"dyld\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 4624252928,\n> \"size\" : 65536,\n> \"uuid\" : \"7e101877-a6ff-3331-99a3-4222cb254447\",\n> \"path\" : \"\\/usr\\/lib\\/libobjc-trampolines.dylib\",\n> \"name\" : \"libobjc-trampolines.dylib\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 4649046016,\n> \"CFBundleShortVersionString\" : \"115.0.5790.98\",\n> \"CFBundleIdentifier\" : \"io.nwjs.nwjs.framework\",\n> \"size\" : 189177856,\n> \"uuid\" : \"4c4c447b-5555-3144-a1ec-62791bcf166d\",\n> \"path\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin 4.app\\/Contents\\/Frameworks\\/nwjs\n> Framework.framework\\/Versions\\/115.0.5790.98\\/nwjs Framework\",\n> \"name\" : \"nwjs Framework\",\n> \"CFBundleVersion\" : \"5790.98\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 4442103808,\n> 
\"CFBundleShortVersionString\" : \"1.0\",\n> \"CFBundleIdentifier\" : \"com.apple.AutomaticAssessmentConfiguration\",\n> \"size\" : 32768,\n> \"uuid\" : \"b30252ae-24c6-3839-b779-661ef263b52d\",\n> \"path\" :\n> \"\\/System\\/Library\\/Frameworks\\/AutomaticAssessmentConfiguration.framework\\/Versions\\/A\\/AutomaticAssessmentConfiguration\",\n> \"name\" : \"AutomaticAssessmentConfiguration\",\n> \"CFBundleVersion\" : \"12.0.0\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 4447277056,\n> \"size\" : 1720320,\n> \"uuid\" : \"4c4c4416-5555-3144-a164-70bbf0436f17\",\n> \"path\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin 4.app\\/Contents\\/Frameworks\\/nwjs\n> Framework.framework\\/Versions\\/115.0.5790.98\\/libffmpeg.dylib\",\n> \"name\" : \"libffmpeg.dylib\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"arm64\",\n> \"base\" : 140703124766720,\n> \"size\" : 196608,\n> \"uuid\" : \"2c5acb8c-fbaf-31ab-aeb3-90905c3fa905\",\n> \"path\" : \"\\/usr\\/libexec\\/rosetta\\/runtime\",\n> \"name\" : \"runtime\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"arm64\",\n> \"base\" : 4436361216,\n> \"size\" : 344064,\n> \"uuid\" : \"a61ec9e9-1174-3dc6-9cdb-0d31811f4850\",\n> \"path\" : \"\\/Library\\/Apple\\/*\\/libRosettaRuntime\",\n> \"name\" : \"libRosettaRuntime\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 4301590528,\n> \"CFBundleShortVersionString\" : \"7.8\",\n> \"CFBundleIdentifier\" : \"org.pgadmin.pgadmin4\",\n> \"size\" : 176128,\n> \"uuid\" : \"4c4c4402-5555-3144-a1c7-07729cda43c0\",\n> \"path\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin 4.app\\/Contents\\/MacOS\\/pgAdmin\n> 4\",\n> \"name\" : \"pgAdmin 4\",\n> \"CFBundleVersion\" : \"4280.88\"\n> },\n> {\n> \"size\" : 0,\n> \"source\" : \"A\",\n> \"base\" : 0,\n> \"uuid\" : \"00000000-0000-0000-0000-000000000000\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 140703344979968,\n> \"size\" : 40960,\n> \"uuid\" : 
\"c94f952c-2787-30d2-ab77-ee474abd88d6\",\n> \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_platform.dylib\",\n> \"name\" : \"libsystem_platform.dylib\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 140703345197056,\n> \"CFBundleShortVersionString\" : \"6.9\",\n> \"CFBundleIdentifier\" : \"com.apple.CoreFoundation\",\n> \"size\" : 4820989,\n> \"uuid\" : \"4d842118-bb65-3f01-9087-ff1a2e3ab0d5\",\n> \"path\" :\n> \"\\/System\\/Library\\/Frameworks\\/CoreFoundation.framework\\/Versions\\/A\\/CoreFoundation\",\n> \"name\" : \"CoreFoundation\",\n> \"CFBundleVersion\" : \"2106\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 140703527325696,\n> \"CFBundleShortVersionString\" : \"2.1.1\",\n> \"CFBundleIdentifier\" : \"com.apple.HIToolbox\",\n> \"size\" : 2736117,\n> \"uuid\" : \"06bf0872-3b34-3c7b-ad5b-7a447d793405\",\n> \"path\" :\n> \"\\/System\\/Library\\/Frameworks\\/Carbon.framework\\/Versions\\/A\\/Frameworks\\/HIToolbox.framework\\/Versions\\/A\\/HIToolbox\",\n> \"name\" : \"HIToolbox\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 140703401480192,\n> \"CFBundleShortVersionString\" : \"6.9\",\n> \"CFBundleIdentifier\" : \"com.apple.AppKit\",\n> \"size\" : 20996092,\n> \"uuid\" : \"27fed5dd-d148-3238-bc95-1dac5dd57fa1\",\n> \"path\" :\n> \"\\/System\\/Library\\/Frameworks\\/AppKit.framework\\/Versions\\/C\\/AppKit\",\n> \"name\" : \"AppKit\",\n> \"CFBundleVersion\" : \"2487.20.107\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 140703344541696,\n> \"size\" : 241656,\n> \"uuid\" : \"4df0d732-7fc4-3200-8176-f1804c63f2c8\",\n> \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_kernel.dylib\",\n> \"name\" : \"libsystem_kernel.dylib\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 140703344783360,\n> \"size\" : 49152,\n> \"uuid\" : \"c64722b0-e96a-3fa5-96c3-b4beaf0c494a\",\n> \"path\" : 
\"\\/usr\\/lib\\/system\\/libsystem_pthread.dylib\",\n> \"name\" : \"libsystem_pthread.dylib\"\n> },\n> {\n> \"source\" : \"P\",\n> \"arch\" : \"x86_64\",\n> \"base\" : 140703361028096,\n> \"CFBundleShortVersionString\" : \"6.9\",\n> \"CFBundleIdentifier\" : \"com.apple.Foundation\",\n> \"size\" : 12840956,\n> \"uuid\" : \"581d66fd-7cef-3a8c-8647-1d962624703b\",\n> \"path\" :\n> \"\\/System\\/Library\\/Frameworks\\/Foundation.framework\\/Versions\\/C\\/Foundation\",\n> \"name\" : \"Foundation\",\n> \"CFBundleVersion\" : \"2106\"\n> }\n> ],\n> \"sharedCache\" : {\n> \"base\" : 140703340380160,\n> \"size\" : 21474836480,\n> \"uuid\" : \"67c86f0b-dd40-3694-909d-52e210cbd5fa\"\n> },\n> \"legacyInfo\" : {\n> \"threadTriggered\" : {\n> \"name\" : \"CrBrowserMain\",\n> \"queue\" : \"com.apple.main-thread\"\n> }\n> },\n> \"logWritingSignature\" : \"8b321ae8a79f5edf7aad3381809b3fbd28f3768b\",\n> \"trialInfo\" : {\n> \"rollouts\" : [\n> {\n> \"rolloutId\" : \"60da5e84ab0ca017dace9abf\",\n> \"factorPackIds\" : {\n>\n> },\n> \"deploymentId\" : 240000008\n> },\n> {\n> \"rolloutId\" : \"63f9578e238e7b23a1f3030a\",\n> \"factorPackIds\" : {\n>\n> },\n> \"deploymentId\" : 240000005\n> }\n> ],\n> \"experiments\" : [\n> {\n> \"treatmentId\" : \"a092db1b-c401-44fa-9c54-518b7d69ca61\",\n> \"experimentId\" : \"64a844035c85000c0f42398a\",\n> \"deploymentId\" : 400000019\n> }\n> ]\n> },\n> \"reportNotes\" : [\n> \"PC register does not match crashing frame (0x0 vs 0x100812560)\"\n> ]\n> }\n>\n> Model: Mac14,9, BootROM 10151.41.12, proc 10:6:4 processors, 16 GB, SMC\n> Graphics: Apple M2 Pro, Apple M2 Pro, Built-In\n> Display: Color LCD, 3024 x 1964 Retina, Main, MirrorOff, Online\n> Memory Module: LPDDR5, Micron\n> AirPort: spairport_wireless_card_type_wifi (0x14E4, 0x4388), wl0: Sep 1\n> 2023 19:33:37 version 23.10.765.4.41.51.121 FWID 01-e2f09e46\n> AirPort:\n> Bluetooth: Version (null), 0 services, 0 devices, 0 incoming serial ports\n> Network Service: Wi-Fi, AirPort, en0\n> USB Device: 
USB31Bus\n> USB Device: USB31Bus\n> USB Device: USB31Bus\n> Thunderbolt Bus: MacBook Pro, Apple Inc.\n> Thunderbolt Bus: MacBook Pro, Apple Inc.\n> Thunderbolt Bus: MacBook Pro, Apple Inc.\n>\n> Thanks & Regards,\n> Kanmani\n>\n\n\n-- \nThanks,\nAditya Toshniwal\npgAdmin Hacker | Sr. Software Architect | *enterprisedb.com*\n<https://www.enterprisedb.com/>\n\"Don't Complain about Heat, Plant a TREE\"
\t 0x7ff89d2e2a78 ???1 libsystem_kernel.dylib \t 0x7ff80cdeda6e mach_msg2_trap + 102 libsystem_kernel.dylib \t 0x7ff80cdfbe7a mach_msg2_internal + 843 libsystem_kernel.dylib \t 0x7ff80cdf4b92 mach_msg_overwrite + 6534 libsystem_kernel.dylib \t 0x7ff80cdedd5f mach_msg + 195 CoreFoundation \t 0x7ff80cf07b49 __CFRunLoopServiceMachPort + 1436 CoreFoundation \t 0x7ff80cf065bc __CFRunLoopRun + 13717 CoreFoundation \t 0x7ff80cf05a99 CFRunLoopRunSpecific + 5578 Foundation \t 0x7ff80de01551 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 2169 nwjs Framework \t 0x11998126e 0x1151ad000 + 7531787010 nwjs Framework \t 0x11998023c 0x1151ad000 + 7531372411 nwjs Framework \t 0x119927459 0x1151ad000 + 7494972112 nwjs Framework \t 0x1198ee15f 0x1151ad000 + 7471548713 nwjs Framework \t 0x119942c18 0x1151ad000 + 7506229614 nwjs Framework \t 0x119942d6b 0x1151ad000 + 7506263515 nwjs Framework \t 0x119957ed9 0x1151ad000 + 7514901716 libsystem_pthread.dylib \t 0x7ff80ce2d202 _pthread_start + 9917 libsystem_pthread.dylib \t 0x7ff80ce28bab thread_start + 15Thread 34:: com.apple.CFSocket.private0 ??? \t 0x7ff89d2e2a78 ???1 libsystem_kernel.dylib \t 0x7ff80cdf694a __select + 102 CoreFoundation \t 0x7ff80cf2f6af __CFSocketManager + 6373 libsystem_pthread.dylib \t 0x7ff80ce2d202 _pthread_start + 994 libsystem_pthread.dylib \t 0x7ff80ce28bab thread_start + 15Thread 35:0 runtime \t 0x7ff7ffc7694c 0x7ff7ffc54000 + 141644Thread 36:0 runtime \t 0x7ff7ffc7694c 0x7ff7ffc54000 + 141644Thread 37:: org.libusb.device-hotplug0 ??? 
\t 0x7ff89d2e2a78 ???1 libsystem_kernel.dylib \t 0x7ff80cdeda6e mach_msg2_trap + 102 libsystem_kernel.dylib \t 0x7ff80cdfbe7a mach_msg2_internal + 843 libsystem_kernel.dylib \t 0x7ff80cdf4b92 mach_msg_overwrite + 6534 libsystem_kernel.dylib \t 0x7ff80cdedd5f mach_msg + 195 CoreFoundation \t 0x7ff80cf07b49 __CFRunLoopServiceMachPort + 1436 CoreFoundation \t 0x7ff80cf065bc __CFRunLoopRun + 13717 CoreFoundation \t 0x7ff80cf05a99 CFRunLoopRunSpecific + 5578 CoreFoundation \t 0x7ff80cf80889 CFRunLoopRun + 409 nwjs Framework \t 0x11b7f4feb 0x1151ad000 + 10724964310 libsystem_pthread.dylib \t 0x7ff80ce2d202 _pthread_start + 9911 libsystem_pthread.dylib \t 0x7ff80ce28bab thread_start + 15Thread 38:: UsbEventHandler0 ??? \t 0x7ff89d2e2a78 ???1 libsystem_kernel.dylib \t 0x7ff80cdf4876 poll + 102 nwjs Framework \t 0x11b7f1cb7 0x1151ad000 + 1072365353 nwjs Framework \t 0x11b7f19db 0x1151ad000 + 1072358034 nwjs Framework \t 0x11b7f1e40 0x1151ad000 + 1072369285 nwjs Framework \t 0x11b7e35cf 0x1151ad000 + 1071774236 nwjs Framework \t 0x119957ed9 0x1151ad000 + 751490177 libsystem_pthread.dylib \t 0x7ff80ce2d202 _pthread_start + 998 libsystem_pthread.dylib \t 0x7ff80ce28bab thread_start + 15Thread 0 crashed with X86 Thread State (64-bit): rax: 0x00007fd519066801 rbx: 0x0000000000000008 rcx: 0x0000000120cdf478 rdx: 0x0000000000000400 rdi: 0x0000000000000018 rsi: 0x000000000000031c rbp: 0x00000003051ce690 rsp: 0x00000003051ce690 r8: 0xc5bdffd50ea7b1b7 r9: 0x0000000000000400 r10: 0x0000000000000000 r11: 0x00007ff810494646 r12: 0x00007fd50ea247d8 r13: 0x00007fd50ea24740 r14: 0x0000000000000008 r15: 0x00000003051ce720 rip: <unavailable> rfl: 0x0000000000000206 tmp0: 0x0000000000000001 tmp1: 0x000000011ed4e1e0 tmp2: 0x000000011eb31522Binary Images: 0x200a7a000 - 0x200b19fff dyld (*) <d5406f23-6967-39c4-beb5-6ae3293c7753> /usr/lib/dyld 0x113a08000 - 0x113a17fff libobjc-trampolines.dylib (*) <7e101877-a6ff-3331-99a3-4222cb254447> /usr/lib/libobjc-trampolines.dylib 0x1151ad000 - 0x120616fff 
io.nwjs.nwjs.framework (115.0.5790.98) <4c4c447b-5555-3144-a1ec-62791bcf166d> /Library/PostgreSQL/16/pgAdmin 4.app/Contents/Frameworks/nwjs Framework.framework/Versions/115.0.5790.98/nwjs Framework
       0x108c52000 -        0x108c59fff com.apple.AutomaticAssessmentConfiguration (1.0) <b30252ae-24c6-3839-b779-661ef263b52d> /System/Library/Frameworks/AutomaticAssessmentConfiguration.framework/Versions/A/AutomaticAssessmentConfiguration
       0x109141000 -        0x1092e4fff libffmpeg.dylib (*) <4c4c4416-5555-3144-a164-70bbf0436f17> /Library/PostgreSQL/16/pgAdmin 4.app/Contents/Frameworks/nwjs Framework.framework/Versions/115.0.5790.98/libffmpeg.dylib
    0x7ff7ffc54000 -     0x7ff7ffc83fff runtime (*) <2c5acb8c-fbaf-31ab-aeb3-90905c3fa905> /usr/libexec/rosetta/runtime
       0x1086d8000 -        0x10872bfff libRosettaRuntime (*) <a61ec9e9-1174-3dc6-9cdb-0d31811f4850> /Library/Apple/*/libRosettaRuntime
       0x100651000 -        0x10067bfff org.pgadmin.pgadmin4 (7.8) <4c4c4402-5555-3144-a1c7-07729cda43c0> /Library/PostgreSQL/16/pgAdmin 4.app/Contents/MacOS/pgAdmin 4
               0x0 - 0xffffffffffffffff ??? (*) <00000000-0000-0000-0000-000000000000> ???
    0x7ff80ce57000 -     0x7ff80ce60fff libsystem_platform.dylib (*) <c94f952c-2787-30d2-ab77-ee474abd88d6> /usr/lib/system/libsystem_platform.dylib
    0x7ff80ce8c000 -     0x7ff80d324ffc com.apple.CoreFoundation (6.9) <4d842118-bb65-3f01-9087-ff1a2e3ab0d5> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
    0x7ff817c3d000 -     0x7ff817ed8ff4 com.apple.HIToolbox (2.1.1) <06bf0872-3b34-3c7b-ad5b-7a447d793405> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox
    0x7ff810439000 -     0x7ff81183effb com.apple.AppKit (6.9) <27fed5dd-d148-3238-bc95-1dac5dd57fa1> /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit
    0x7ff80cdec000 -     0x7ff80ce26ff7 libsystem_kernel.dylib (*) <4df0d732-7fc4-3200-8176-f1804c63f2c8> /usr/lib/system/libsystem_kernel.dylib
    0x7ff80ce27000 -     0x7ff80ce32fff libsystem_pthread.dylib (*) <c64722b0-e96a-3fa5-96c3-b4beaf0c494a> /usr/lib/system/libsystem_pthread.dylib
    0x7ff80dda5000 -     0x7ff80e9e3ffb com.apple.Foundation (6.9) <581d66fd-7cef-3a8c-8647-1d962624703b> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation

External Modification Summary:
  Calls made by other processes targeting this process:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0
  Calls made by this process:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0
  Calls made by all processes on this machine:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0

-----------
Full Report
-----------

{\"app_name\":\"pgAdmin 4\",\"timestamp\":\"2023-11-14 11:47:18.00 -0500\",\"app_version\":\"7.8\",\"slice_uuid\":\"4c4c4402-5555-3144-a1c7-07729cda43c0\",\"build_version\":\"4280.88\",\"platform\":1,\"bundleID\":\"org.pgadmin.pgadmin4\",\"share_with_app_devs\":1,\"is_first_party\":0,\"bug_type\":\"309\",\"os_version\":\"macOS 14.1.1 (23B81)\",\"roots_installed\":0,\"name\":\"pgAdmin 4\",\"incident_id\":\"1AF5B51F-D7DC-4AD5-8526-1C5B3A33AFA5\"}{ \"uptime\" : 2800, \"procRole\" : \"Foreground\", \"version\" : 2,
\"userID\" : 501, \"deployVersion\" : 210, \"modelCode\" : \"Mac14,9\", \"coalitionID\" : 2672, \"osVersion\" : { \"train\" : \"macOS 14.1.1\", \"build\" : \"23B81\", \"releaseType\" : \"User\" }, \"captureTime\" : \"2023-11-14 11:47:14.7065 -0500\", \"codeSigningMonitor\" : 1, \"incident\" : \"1AF5B51F-D7DC-4AD5-8526-1C5B3A33AFA5\", \"pid\" : 3505, \"translated\" : true, \"cpuType\" : \"X86-64\", \"roots_installed\" : 0, \"bug_type\" : \"309\", \"procLaunch\" : \"2023-11-14 11:47:06.3899 -0500\", \"procStartAbsTime\" : 67472503520, \"procExitAbsTime\" : 67672052074, \"procName\" : \"pgAdmin 4\", \"procPath\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin 4.app\\/Contents\\/MacOS\\/pgAdmin 4\", \"bundleInfo\" : {\"CFBundleShortVersionString\":\"7.8\",\"CFBundleVersion\":\"4280.88\",\"CFBundleIdentifier\":\"org.pgadmin.pgadmin4\"}, \"storeInfo\" : {\"deviceIdentifierForVendor\":\"F2A41A90-E8FF-58E0-AF26-5F17BFD205F1\",\"thirdParty\":true}, \"parentProc\" : \"launchd\", \"parentPid\" : 1, \"coalitionName\" : \"org.pgadmin.pgadmin4\", \"crashReporterKey\" : \"A4518538-B2A9-0B93-C540-A9DCCCD929EF\", \"codeSigningID\" : \"\", \"codeSigningTeamID\" : \"\", \"codeSigningValidationCategory\" : 0, \"codeSigningTrustLevel\" : 4294967295, \"wakeTime\" : 920, \"sleepWakeUUID\" : \"E31F7EEF-42B9-4E61-88DC-9C0571A2F4E3\", \"sip\" : \"enabled\", \"vmRegionInfo\" : \"0x20 is not in any region. 
Bytes before following region: 140723014549472\\n REGION TYPE START - END [ VSIZE] PRT\\/MAX SHRMOD REGION DETAIL\\n UNUSED SPACE AT START\\n---> \\n mapped file 7ffca14b4000-7ffcc6b5c000 [598.7M] r-x\\/r-x SM=COW ...t_id=cccf3f63\", \"exception\" : {\"codes\":\"0x0000000000000001, 0x0000000000000020\",\"rawCodes\":[1,32],\"type\":\"EXC_BAD_ACCESS\",\"signal\":\"SIGSEGV\",\"subtype\":\"KERN_INVALID_ADDRESS at 0x0000000000000020\"}, \"termination\" : {\"flags\":0,\"code\":11,\"namespace\":\"SIGNAL\",\"indicator\":\"Segmentation fault: 11\",\"byProc\":\"exc handler\",\"byPid\":3505}, \"vmregioninfo\" : \"0x20 is not in any region. Bytes before following region: 140723014549472\\n REGION TYPE START - END [ VSIZE] PRT\\/MAX SHRMOD REGION DETAIL\\n UNUSED SPACE AT START\\n---> \\n mapped file 7ffca14b4000-7ffcc6b5c000 [598.7M] r-x\\/r-x SM=COW ...t_id=cccf3f63\", \"extMods\" : {\"caller\":{\"thread_create\":0,\"thread_set_state\":0,\"task_for_pid\":0},\"system\":{\"thread_create\":0,\"thread_set_state\":0,\"task_for_pid\":0},\"targeted\":{\"thread_create\":0,\"thread_set_state\":0,\"task_for_pid\":0},\"warnings\":0}, \"faultingThread\" : 0, \"threads\" : [{\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":12970682000},\"r12\":{\"value\":140553050277848},\"rosetta\":{\"tmp2\":{\"value\":4810020130},\"tmp1\":{\"value\":4812235232},\"tmp0\":{\"value\":1}},\"rbx\":{\"value\":8},\"r8\":{\"value\":14248826086609105335},\"r15\":{\"value\":12970682144},\"r10\":{\"value\":0},\"rdx\":{\"value\":1024},\"rdi\":{\"value\":24},\"r9\":{\"value\":1024},\"r13\":{\"value\":140553050277696},\"rflags\":{\"value\":518},\"rax\":{\"value\":140553224611841},\"rsp\":{\"value\":12970682000},\"r11\":{\"value\":140703401854534,\"symbolLocation\":0,\"symbol\":\"-[NSView alphaValue]\"},\"rcx\":{\"value\":4845335672,\"symbolLocation\":5872920,\"symbol\":\"vtable for 
v8::internal::SetupIsolateDelegate\"},\"r14\":{\"value\":8},\"rsi\":{\"value\":796}},\"id\":57285,\"triggered\":true,\"name\":\"CrBrowserMain\",\"queue\":\"com.apple.main-thread\",\"frames\":[{\"imageOffset\":4303431008,\"imageIndex\":8},{\"imageOffset\":13203,\"symbol\":\"_sigtramp\",\"symbolLocation\":51,\"imageIndex\":9},{\"imageOffset\":160974114,\"imageIndex\":2},{\"imageOffset\":163189232,\"imageIndex\":2},{\"imageOffset\":163647707,\"imageIndex\":2},{\"imageOffset\":105907482,\"imageIndex\":2},{\"imageOffset\":105941209,\"imageIndex\":2},{\"imageOffset\":122254525,\"imageIndex\":2},{\"imageOffset\":122248262,\"imageIndex\":2},{\"imageOffset\":163196389,\"imageIndex\":2},{\"imageOffset\":163699863,\"imageIndex\":2},{\"imageOffset\":160771285,\"imageIndex\":2},{\"imageOffset\":160767767,\"imageIndex\":2},{\"imageOffset\":117416537,\"imageIndex\":2},{\"imageOffset\":54971568,\"imageIndex\":2},{\"imageOffset\":54981780,\"imageIndex\":2},{\"imageOffset\":54982951,\"imageIndex\":2},{\"imageOffset\":54966883,\"imageIndex\":2},{\"imageOffset\":50389369,\"imageIndex\":2},{\"imageOffset\":54968774,\"imageIndex\":2},{\"imageOffset\":86765456,\"imageIndex\":2},{\"imageOffset\":86786353,\"imageIndex\":2},{\"imageOffset\":86771408,\"imageIndex\":2},{\"imageOffset\":87761370,\"imageIndex\":2},{\"imageOffset\":86779801,\"imageIndex\":2},{\"imageOffset\":74851538,\"imageIndex\":2},{\"imageOffset\":74947182,\"imageIndex\":2},{\"imageOffset\":74945705,\"imageIndex\":2},{\"imageOffset\":74948821,\"imageIndex\":2},{\"imageOffset\":75316931,\"imageIndex\":2},{\"imageOffset\":75303458,\"imageIndex\":2},{\"imageOffset\":75314911,\"imageIndex\":2},{\"imageOffset\":506390,\"symbol\":\"__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__\",\"symbolLocation\":17,\"imageIndex\":10},{\"imageOffset\":506297,\"symbol\":\"__CFRunLoopDoSource0\",\"symbolLocation\":157,\"imageIndex\":10},{\"imageOffset\":505736,\"symbol\":\"__CFRunLoopDoSources0\",\"symbolLocation\":215,\"imageIndex\":10
},{\"imageOffset\":500728,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":919,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":199129,\"symbol\":\"RunCurrentEventLoopInMode\",\"symbolLocation\":292,\"imageIndex\":11},{\"imageOffset\":198630,\"symbol\":\"ReceiveNextEventCommon\",\"symbolLocation\":665,\"imageIndex\":11},{\"imageOffset\":197937,\"symbol\":\"_BlockUntilNextEventMatchingListInModeWithFilter\",\"symbolLocation\":66,\"imageIndex\":11},{\"imageOffset\":256133,\"symbol\":\"_DPSNextEvent\",\"symbolLocation\":880,\"imageIndex\":12},{\"imageOffset\":9642824,\"symbol\":\"-[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:]\",\"symbolLocation\":1304,\"imageIndex\":12},{\"imageOffset\":69252080,\"imageIndex\":2},{\"imageOffset\":75303458,\"imageIndex\":2},{\"imageOffset\":69251945,\"imageIndex\":2},{\"imageOffset\":196090,\"symbol\":\"-[NSApplication run]\",\"symbolLocation\":603,\"imageIndex\":12},{\"imageOffset\":75318444,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":40170241,\"imageIndex\":2},{\"imageOffset\":40176786,\"imageIndex\":2},{\"imageOffset\":40159834,\"imageIndex\":2},{\"imageOffset\":62423492,\"imageIndex\":2},{\"imageOffset\":62428041,\"imageIndex\":2},{\"imageOffset\":62427549,\"imageIndex\":2},{\"imageOffset\":62421095,\"imageIndex\":2},{\"imageOffset\":62421763,\"imageIndex\":2},{\"imageOffset\":14640,\"symbol\":\"ChromeMain\",\"symbolLocation\":560,\"imageIndex\":2},{\"imageOffset\":2174,\"symbol\":\"main\",\"symbolLocation\":286,\"imageIndex\":7},{\"imageOffset\":25510,\"symbol\":\"start\",\"symbolLocation\":1942,\"imageIndex\":0}]},{\"id\":57293,\"name\":\"com.apple.rosetta.exceptionserver\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":34097745362944},\"r12\":{\"value\":51170602
96},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":4462471109675},\"tmp0\":{\"value\":10337986281472}},\"rbx\":{\"value\":4462471109675},\"r8\":{\"value\":7939},\"r15\":{\"value\":4898951168},\"r10\":{\"value\":15586436317184},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":4436777856},\"rflags\":{\"value\":582},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":10337986281472},\"r11\":{\"value\":32},\"rcx\":{\"value\":17314086914},\"r14\":{\"value\":4303431008},\"rsi\":{\"value\":2616}},\"frames\":[{\"imageOffset\":17044,\"imageIndex\":5}]},{\"id\":57315,\"name\":\"StackSamplingProfiler\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":43993350012928},\"r12\":{\"value\":78},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":43993350012928},\"r8\":{\"value\":0},\"r15\":{\"value\":43993350012928},\"r10\":{\"value\":43993350012928},\"rdx\":{\"value\":0},\"rdi\":{\"value\":78},\"r9\":{\"value\":43993350012928},\"r13\":{\"value\":17179869442},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869442},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":74543000,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{
\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57316,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":12979638272},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":12980174848},\"rsp\":{\"value\":409604},\"r11\":{\"value\":0},\"rcx\":{\"value\":8967},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57317,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":12980195328},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":12980731904},\"rsp\":{\"value\":409602},\"r11\":{\"value\":0},\"rcx\":{\"value\":12035},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57318,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":12980752384},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":5
31},\"rax\":{\"value\":12981288960},\"rsp\":{\"value\":409604},\"r11\":{\"value\":0},\"rcx\":{\"value\":10503},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57332,\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":107676221,\"imageIndex\":2},{\"imageOffset\":107681366,\"imageIndex\":2},{\"imageOffset\":107680545,\"imageIndex\":2},{\"imageOffset\":107688728,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":53888954662912},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":53888954662912},\"r10\":{\"value\":0},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":53888954662912},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":48},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":48},\"rsi\":{\"value\":48}}},{\"id\":57350,\"name\":\"HangWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":149632365625344},\"r12\":{\"value\":10000},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":149632365625344},\"r8\":{\"value\":0},\"r15\":{\"value\":149632365625344},\"r10\":{\"value\":149
632365625344},\"rdx\":{\"value\":0},\"rdi\":{\"value\":10000},\"r9\":{\"value\":149632365625344},\"r13\":{\"value\":17179869442},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869442},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75041092,\"imageIndex\":2},{\"imageOffset\":75041539,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57351,\"name\":\"ThreadPoolServiceThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":12998057104},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140553009377648},\"r8\":{\"value\":140553008003200},\"r15\":{\"value\":0},\"r10\":{\"value\":140553009377648},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":5},\"r11\":{\"value\":12998057984},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553008497312},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"kevent64\",\"symbolLocation\":10
,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":74986557,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57352,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":180332791857152},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":180332791857152},\"r8\":{\"value\":0},\"r15\":{\"value\":180332791857152},\"r10\":{\"value\":180332791857152},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":180332791857152},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979
,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57353,\"name\":\"ThreadPoolBackgroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":152845001162752},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":152845001162752},\"r8\":{\"value\":0},\"r15\":{\"value\":152845001162752},\"r10\":{\"value\":152845001162752},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":152845001162752},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032093,\"imageIndex\":2},{\"imageOffset\":75032016,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57354,\"name\":\"ThreadPoolBackgroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",
\"rbp\":{\"value\":175934745346048},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":175934745346048},\"r8\":{\"value\":0},\"r15\":{\"value\":175934745346048},\"r10\":{\"value\":175934745346048},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":175934745346048},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032093,\"imageIndex\":2},{\"imageOffset\":75032016,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57355,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":155044024418304},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":155044024418304},\"r8\":{\"value\":0},\"r15\":{\"value\":155044024418304},\"r10\":{\"value\":155044024418304},\"rdx\":{\"value\":0},\"rdi\":{\"val
ue\":0},\"r9\":{\"value\":155044024418304},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57357,\"name\":\"Chrome_IOThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":13039979632},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140553050165472},\"r8\":{\"value\":140553008471776},\"r15\":{\"value\":0},\"r10\":{\"value\":140553050165472},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":8},\"r11\":{\"value\":13039980544},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553008516224},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"kevent
64\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":40181552,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57358,\"name\":\"MemoryInfra\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":188029373251584},\"r12\":{\"value\":14641},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":188029373251584},\"r8\":{\"value\":0},\"r15\":{\"value\":188029373251584},\"r10\":{\"value\":188029373251584},\"rdx\":{\"value\":0},\"rdi\":{\"value\":14641},\"r9\":{\"value\":188029373251584},\"r13\":{\"value\":17179869442},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869442},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":74543000,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"ima
geOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57364,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":274890791845888},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":274890791845888},\"r8\":{\"value\":0},\"r15\":{\"value\":274890791845888},\"r10\":{\"value\":274890791845888},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":274890791845888},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop) 
runMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57365,\"name\":\"CrShutdownDetector\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344551112},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":13065134179},\"r8\":{\"value\":140552812261236},\"r15\":{\"value\":4},\"r10\":{\"value\":13065134179},\"rdx\":{\"value\":4},\"rdi\":{\"value\":7162258760691251055},\"r9\":{\"value\":18},\"r13\":{\"value\":0},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":0},\"r11\":{\"value\":4294967280},\"rcx\":{\"value\":0},\"r14\":{\"value\":13065133916},\"rsi\":{\"value\":7238539592028275492}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":9426,\"symbol\":\"read\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":73333214,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57432,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":264995187195904},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":264995187195
904},\"r8\":{\"value\":0},\"r15\":{\"value\":264995187195904},\"r10\":{\"value\":264995187195904},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":264995187195904},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop) 
runMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57433,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":200124001157120},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":200124001157120},\"r8\":{\"value\":0},\"r15\":{\"value\":200124001157120},\"r10\":{\"value\":200124001157120},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":200124001157120},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":7514901
7,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57434,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":199024489529344},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":199024489529344},\"r8\":{\"value\":0},\"r15\":{\"value\":199024489529344},\"r10\":{\"value\":199024489529344},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":199024489529344},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57435,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":201223512784896},\"r12\"
:{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":201223512784896},\"r8\":{\"value\":0},\"r15\":{\"value\":201223512784896},\"r10\":{\"value\":201223512784896},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":201223512784896},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57436,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":262796163940352},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":262796163940352},\"r8\":{\"value\":0},\"r15\":{\"value\":262796163940352},\"r10\":{\"value\":262796163940352},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":262796163940352},
\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57437,\"name\":\"NetworkNotificationThreadMac\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":205621559296000},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":205621559296000},\"r8\":{\"value\":0},\"r15\":{\"value\":205621559296000},\"r10\":{\"value\":205621559296000},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":205621559296000},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\"
,\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop) runMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57438,\"name\":\"CompositorTileWorker1\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":161},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344559620},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":140703344816589,\"symbolLocation\":0,\"symbol\":\"_pthread_psynch_cond_cleanup\"},\"r15\":{\"value\":6912},\"r10\":{\"value\":0},\"rdx\":{\"value\":6912},\"rdi\":{\"value\":0},\"r9\":{\"value\":161},\"r13\":{\"value\":29691108924416},\"rflags\":{\"value\":658},\"rax\":{\"value\":260},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":13123825664},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\
":140705765665400,\"imageIndex\":8},{\"imageOffset\":17934,\"symbol\":\"__psynch_cvwait\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":26475,\"symbol\":\"_pthread_cond_wait\",\"symbolLocation\":1211,\"imageIndex\":14},{\"imageOffset\":75147211,\"imageIndex\":2},{\"imageOffset\":97344085,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57439,\"name\":\"ThreadPoolSingleThreadForegroundBlocking0\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":239706419757056},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":239706419757056},\"r8\":{\"value\":0},\"r15\":{\"value\":239706419757056},\"r10\":{\"value\":239706419757056},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":239706419757056},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032333,\"imageIndex\":2},{\"imageOffset\":75032026,\"imageIndex\":2},{\"imageOffset\":75
149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57440,\"name\":\"ThreadPoolSingleThreadSharedForeground1\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":223213745340416},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":223213745340416},\"r8\":{\"value\":0},\"r15\":{\"value\":223213745340416},\"r10\":{\"value\":223213745340416},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":223213745340416},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032285,\"imageIndex\":2},{\"imageOffset\":75032036,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57456,\"name\":\"NetworkConfigWatcher\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":356254652301
312},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":356254652301312},\"r8\":{\"value\":0},\"r15\":{\"value\":356254652301312},\"r10\":{\"value\":356254652301312},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":356254652301312},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop) 
runMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57459,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":296881024401408},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":296881024401408},\"r8\":{\"value\":0},\"r15\":{\"value\":296881024401408},\"r10\":{\"value\":296881024401408},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":296881024401408},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":7514901
7,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57460,\"name\":\"ThreadPoolForegroundWorker\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":297980536029184},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":297980536029184},\"r8\":{\"value\":0},\"r15\":{\"value\":297980536029184},\"r10\":{\"value\":297980536029184},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":297980536029184},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032237,\"imageIndex\":2},{\"imageOffset\":75031979,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57461,\"name\":\"ThreadPoolSingleThreadSharedBackgroundBlocking2\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":301
279070912512},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":301279070912512},\"r8\":{\"value\":0},\"r15\":{\"value\":301279070912512},\"r10\":{\"value\":301279070912512},\"rdx\":{\"value\":0},\"rdi\":{\"value\":0},\"r9\":{\"value\":301279070912512},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032141,\"imageIndex\":2},{\"imageOffset\":75032056,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57463,\"name\":\"ThreadPoolSingleThreadSharedForegroundBlocking3\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":346359047651328},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":346359047651328},\"r8\":{\"value\":0},\"r15\":{\"value\":346359047651328},\"r10\":{\"value\":346359047651328},\"rdx\":{\"value\":0},\"rdi\":{\"valu
e\":0},\"r9\":{\"value\":346359047651328},\"r13\":{\"value\":17179869186},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":32},\"rcx\":{\"value\":17179869186},\"r14\":{\"value\":32},\"rsi\":{\"value\":32}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":75331330,\"imageIndex\":2},{\"imageOffset\":74840348,\"imageIndex\":2},{\"imageOffset\":75030154,\"imageIndex\":2},{\"imageOffset\":75033284,\"imageIndex\":2},{\"imageOffset\":75032285,\"imageIndex\":2},{\"imageOffset\":75032036,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57529,\"name\":\"CacheThread_BlockFile\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":13190900912},\"rosetta\":{\"tmp2\":{\"value\":140703344588028},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":140552997185664},\"r8\":{\"value\":140553053191392},\"r15\":{\"value\":0},\"r10\":{\"value\":140552997185664},\"rdx\":{\"value\":0},\"rdi\":{\"value\":4847415936},\"r9\":{\"value\":0},\"r13\":{\"value\":12297829382473034411},\"rflags\":{\"value\":662},\"rax\":{\"value\":4},\"rsp\":{\"value\":2},\"r11\":{\"value\":13190901760},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553243401024},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":46342,\"symbol\":\"k
event64\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":75263537,\"imageIndex\":2},{\"imageOffset\":75263214,\"imageIndex\":2},{\"imageOffset\":75263077,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57530,\"name\":\"com.apple.NSEventThread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":394771919011840},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":394771919011840},\"r8\":{\"value\":0},\"r15\":{\"value\":394771919011840},\"r10\":{\"value\":394771919011840},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":394771919011840},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol
\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":1686016,\"symbol\":\"_NSEventThread\",\"symbolLocation\":122,\"imageIndex\":12},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57561,\"name\":\"Service Discovery Thread\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":423324861595648},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":140703344606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":423324861595648},\"r8\":{\"value\":0},\"r15\":{\"value\":423324861595648},\"r10\":{\"value\":423324861595648},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":423324861595648},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":378193,\"symbol\":\"-[NSRunLoop(NSRunLoop) 
runMode:beforeDate:]\",\"symbolLocation\":216,\"imageIndex\":15},{\"imageOffset\":75317870,\"imageIndex\":2},{\"imageOffset\":75313724,\"imageIndex\":2},{\"imageOffset\":74949721,\"imageIndex\":2},{\"imageOffset\":74715487,\"imageIndex\":2},{\"imageOffset\":75062296,\"imageIndex\":2},{\"imageOffset\":75062635,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57562,\"name\":\"com.apple.CFSocket.private\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":0},\"r12\":{\"value\":3},\"rosetta\":{\"tmp2\":{\"value\":140703344585024},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":140553010214768},\"r10\":{\"value\":0},\"rdx\":{\"value\":140553010211280},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":140704414334832,\"symbolLocation\":0,\"symbol\":\"__kCFNull\"},\"rflags\":{\"value\":642},\"rax\":{\"value\":4},\"rsp\":{\"value\":0},\"r11\":{\"value\":140703345435675,\"symbolLocation\":0,\"symbol\":\"-[__NSCFArray 
objectAtIndex:]\"},\"rcx\":{\"value\":0},\"r14\":{\"value\":140553050490128},\"rsi\":{\"value\":0}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":43338,\"symbol\":\"__select\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":669359,\"symbol\":\"__CFSocketManager\",\"symbolLocation\":637,\"imageIndex\":10},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57570,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":13200379904},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":531},\"rax\":{\"value\":13200916480},\"rsp\":{\"value\":409604},\"r11\":{\"value\":0},\"rcx\":{\"value\":172295},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57571,\"frames\":[{\"imageOffset\":141644,\"imageIndex\":5}],\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":18446744073709551615},\"r12\":{\"value\":0},\"rosetta\":{\"tmp2\":{\"value\":0},\"tmp1\":{\"value\":0},\"tmp0\":{\"value\":0}},\"rbx\":{\"value\":0},\"r8\":{\"value\":0},\"r15\":{\"value\":0},\"r10\":{\"value\":0},\"rdx\":{\"value\":13200936960},\"rdi\":{\"value\":0},\"r9\":{\"value\":0},\"r13\":{\"value\":0},\"rflags\":{\"value\":515},\"rax\":{\"value\":13201473536},\"rsp\":{\"value\":278532},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":0},\"rsi\":{\"value\":0}}},{\"id\":57600,\"name\":\"org.libusb.device-hotplug\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":545387832147968},\"r12\":{\"value\":4294967295},\"rosetta\":{\"tmp2\":{\"value\":14070334
4606842},\"tmp1\":{\"value\":140705765665640},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":545387832147968},\"r8\":{\"value\":0},\"r15\":{\"value\":545387832147968},\"r10\":{\"value\":545387832147968},\"rdx\":{\"value\":8589934592},\"rdi\":{\"value\":4294967295},\"r9\":{\"value\":545387832147968},\"r13\":{\"value\":21592279046},\"rflags\":{\"value\":643},\"rax\":{\"value\":268451845},\"rsp\":{\"value\":0},\"r11\":{\"value\":0},\"rcx\":{\"value\":21592279046},\"r14\":{\"value\":2},\"rsi\":{\"value\":2}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":6766,\"symbol\":\"mach_msg2_trap\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":65146,\"symbol\":\"mach_msg2_internal\",\"symbolLocation\":84,\"imageIndex\":13},{\"imageOffset\":35730,\"symbol\":\"mach_msg_overwrite\",\"symbolLocation\":653,\"imageIndex\":13},{\"imageOffset\":7519,\"symbol\":\"mach_msg\",\"symbolLocation\":19,\"imageIndex\":13},{\"imageOffset\":506697,\"symbol\":\"__CFRunLoopServiceMachPort\",\"symbolLocation\":143,\"imageIndex\":10},{\"imageOffset\":501180,\"symbol\":\"__CFRunLoopRun\",\"symbolLocation\":1371,\"imageIndex\":10},{\"imageOffset\":498329,\"symbol\":\"CFRunLoopRunSpecific\",\"symbolLocation\":557,\"imageIndex\":10},{\"imageOffset\":1001609,\"symbol\":\"CFRunLoopRun\",\"symbolLocation\":40,\"imageIndex\":10},{\"imageOffset\":107249643,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]},{\"id\":57601,\"name\":\"UsbEventHandler\",\"threadState\":{\"flavor\":\"x86_THREAD_STATE\",\"rbp\":{\"value\":6837},\"r12\":{\"value\":140553247458160},\"rosetta\":{\"tmp2\":{\"value\":140703344576620},\"tmp1\":{\"value\":140705765665356},\"tmp0\":{\"value\":18446744073709551615}},\"rbx\":{\"value\":2147483},\"r8\":{\"value\":12297829382473034410},\"r15\":{\"value\":140553247458168},\"r10\":{\"valu
e\":2147483},\"rdx\":{\"value\":60000},\"rdi\":{\"value\":140553247457824},\"r9\":{\"value\":6837},\"r13\":{\"value\":140553247458184},\"rflags\":{\"value\":658},\"rax\":{\"value\":4},\"rsp\":{\"value\":25997},\"r11\":{\"value\":0},\"rcx\":{\"value\":0},\"r14\":{\"value\":2},\"rsi\":{\"value\":13210394000}},\"frames\":[{\"imageOffset\":140705765665400,\"imageIndex\":8},{\"imageOffset\":34934,\"symbol\":\"poll\",\"symbolLocation\":10,\"imageIndex\":13},{\"imageOffset\":107236535,\"imageIndex\":2},{\"imageOffset\":107235803,\"imageIndex\":2},{\"imageOffset\":107236928,\"imageIndex\":2},{\"imageOffset\":107177423,\"imageIndex\":2},{\"imageOffset\":75149017,\"imageIndex\":2},{\"imageOffset\":25090,\"symbol\":\"_pthread_start\",\"symbolLocation\":99,\"imageIndex\":14},{\"imageOffset\":7083,\"symbol\":\"thread_start\",\"symbolLocation\":15,\"imageIndex\":14}]}], \"usedImages\" : [ { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 8600920064, \"size\" : 655360, \"uuid\" : \"d5406f23-6967-39c4-beb5-6ae3293c7753\", \"path\" : \"\\/usr\\/lib\\/dyld\", \"name\" : \"dyld\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 4624252928, \"size\" : 65536, \"uuid\" : \"7e101877-a6ff-3331-99a3-4222cb254447\", \"path\" : \"\\/usr\\/lib\\/libobjc-trampolines.dylib\", \"name\" : \"libobjc-trampolines.dylib\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 4649046016, \"CFBundleShortVersionString\" : \"115.0.5790.98\", \"CFBundleIdentifier\" : \"io.nwjs.nwjs.framework\", \"size\" : 189177856, \"uuid\" : \"4c4c447b-5555-3144-a1ec-62791bcf166d\", \"path\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin 4.app\\/Contents\\/Frameworks\\/nwjs Framework.framework\\/Versions\\/115.0.5790.98\\/nwjs Framework\", \"name\" : \"nwjs Framework\", \"CFBundleVersion\" : \"5790.98\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 4442103808, \"CFBundleShortVersionString\" : \"1.0\", \"CFBundleIdentifier\" : \"com.apple.AutomaticAssessmentConfiguration\", \"size\" : 32768, 
\"uuid\" : \"b30252ae-24c6-3839-b779-661ef263b52d\", \"path\" : \"\\/System\\/Library\\/Frameworks\\/AutomaticAssessmentConfiguration.framework\\/Versions\\/A\\/AutomaticAssessmentConfiguration\", \"name\" : \"AutomaticAssessmentConfiguration\", \"CFBundleVersion\" : \"12.0.0\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 4447277056, \"size\" : 1720320, \"uuid\" : \"4c4c4416-5555-3144-a164-70bbf0436f17\", \"path\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin 4.app\\/Contents\\/Frameworks\\/nwjs Framework.framework\\/Versions\\/115.0.5790.98\\/libffmpeg.dylib\", \"name\" : \"libffmpeg.dylib\" }, { \"source\" : \"P\", \"arch\" : \"arm64\", \"base\" : 140703124766720, \"size\" : 196608, \"uuid\" : \"2c5acb8c-fbaf-31ab-aeb3-90905c3fa905\", \"path\" : \"\\/usr\\/libexec\\/rosetta\\/runtime\", \"name\" : \"runtime\" }, { \"source\" : \"P\", \"arch\" : \"arm64\", \"base\" : 4436361216, \"size\" : 344064, \"uuid\" : \"a61ec9e9-1174-3dc6-9cdb-0d31811f4850\", \"path\" : \"\\/Library\\/Apple\\/*\\/libRosettaRuntime\", \"name\" : \"libRosettaRuntime\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 4301590528, \"CFBundleShortVersionString\" : \"7.8\", \"CFBundleIdentifier\" : \"org.pgadmin.pgadmin4\", \"size\" : 176128, \"uuid\" : \"4c4c4402-5555-3144-a1c7-07729cda43c0\", \"path\" : \"\\/Library\\/PostgreSQL\\/16\\/pgAdmin 4.app\\/Contents\\/MacOS\\/pgAdmin 4\", \"name\" : \"pgAdmin 4\", \"CFBundleVersion\" : \"4280.88\" }, { \"size\" : 0, \"source\" : \"A\", \"base\" : 0, \"uuid\" : \"00000000-0000-0000-0000-000000000000\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 140703344979968, \"size\" : 40960, \"uuid\" : \"c94f952c-2787-30d2-ab77-ee474abd88d6\", \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_platform.dylib\", \"name\" : \"libsystem_platform.dylib\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 140703345197056, \"CFBundleShortVersionString\" : \"6.9\", \"CFBundleIdentifier\" : \"com.apple.CoreFoundation\", \"size\" : 
4820989, \"uuid\" : \"4d842118-bb65-3f01-9087-ff1a2e3ab0d5\", \"path\" : \"\\/System\\/Library\\/Frameworks\\/CoreFoundation.framework\\/Versions\\/A\\/CoreFoundation\", \"name\" : \"CoreFoundation\", \"CFBundleVersion\" : \"2106\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 140703527325696, \"CFBundleShortVersionString\" : \"2.1.1\", \"CFBundleIdentifier\" : \"com.apple.HIToolbox\", \"size\" : 2736117, \"uuid\" : \"06bf0872-3b34-3c7b-ad5b-7a447d793405\", \"path\" : \"\\/System\\/Library\\/Frameworks\\/Carbon.framework\\/Versions\\/A\\/Frameworks\\/HIToolbox.framework\\/Versions\\/A\\/HIToolbox\", \"name\" : \"HIToolbox\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 140703401480192, \"CFBundleShortVersionString\" : \"6.9\", \"CFBundleIdentifier\" : \"com.apple.AppKit\", \"size\" : 20996092, \"uuid\" : \"27fed5dd-d148-3238-bc95-1dac5dd57fa1\", \"path\" : \"\\/System\\/Library\\/Frameworks\\/AppKit.framework\\/Versions\\/C\\/AppKit\", \"name\" : \"AppKit\", \"CFBundleVersion\" : \"2487.20.107\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 140703344541696, \"size\" : 241656, \"uuid\" : \"4df0d732-7fc4-3200-8176-f1804c63f2c8\", \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_kernel.dylib\", \"name\" : \"libsystem_kernel.dylib\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 140703344783360, \"size\" : 49152, \"uuid\" : \"c64722b0-e96a-3fa5-96c3-b4beaf0c494a\", \"path\" : \"\\/usr\\/lib\\/system\\/libsystem_pthread.dylib\", \"name\" : \"libsystem_pthread.dylib\" }, { \"source\" : \"P\", \"arch\" : \"x86_64\", \"base\" : 140703361028096, \"CFBundleShortVersionString\" : \"6.9\", \"CFBundleIdentifier\" : \"com.apple.Foundation\", \"size\" : 12840956, \"uuid\" : \"581d66fd-7cef-3a8c-8647-1d962624703b\", \"path\" : \"\\/System\\/Library\\/Frameworks\\/Foundation.framework\\/Versions\\/C\\/Foundation\", \"name\" : \"Foundation\", \"CFBundleVersion\" : \"2106\" }], \"sharedCache\" : { \"base\" : 140703340380160, \"size\" : 
21474836480, \"uuid\" : \"67c86f0b-dd40-3694-909d-52e210cbd5fa\"}, \"legacyInfo\" : { \"threadTriggered\" : { \"name\" : \"CrBrowserMain\", \"queue\" : \"com.apple.main-thread\" }}, \"logWritingSignature\" : \"8b321ae8a79f5edf7aad3381809b3fbd28f3768b\", \"trialInfo\" : { \"rollouts\" : [ { \"rolloutId\" : \"60da5e84ab0ca017dace9abf\", \"factorPackIds\" : { }, \"deploymentId\" : 240000008 }, { \"rolloutId\" : \"63f9578e238e7b23a1f3030a\", \"factorPackIds\" : { }, \"deploymentId\" : 240000005 } ], \"experiments\" : [ { \"treatmentId\" : \"a092db1b-c401-44fa-9c54-518b7d69ca61\", \"experimentId\" : \"64a844035c85000c0f42398a\", \"deploymentId\" : 400000019 } ]}, \"reportNotes\" : [ \"PC register does not match crashing frame (0x0 vs 0x100812560)\"]}Model: Mac14,9, BootROM 10151.41.12, proc 10:6:4 processors, 16 GB, SMC Graphics: Apple M2 Pro, Apple M2 Pro, Built-InDisplay: Color LCD, 3024 x 1964 Retina, Main, MirrorOff, OnlineMemory Module: LPDDR5, MicronAirPort: spairport_wireless_card_type_wifi (0x14E4, 0x4388), wl0: Sep 1 2023 19:33:37 version 23.10.765.4.41.51.121 FWID 01-e2f09e46AirPort: Bluetooth: Version (null), 0 services, 0 devices, 0 incoming serial portsNetwork Service: Wi-Fi, AirPort, en0USB Device: USB31BusUSB Device: USB31BusUSB Device: USB31BusThunderbolt Bus: MacBook Pro, Apple Inc.Thunderbolt Bus: MacBook Pro, Apple Inc.Thunderbolt Bus: MacBook Pro, Apple Inc.Thanks & Regards,Kanmani\n\n-- Thanks,Aditya ToshniwalpgAdmin Hacker | Sr. Software Architect | enterprisedb.com\"Don't Complain about Heat, Plant a TREE\"",
"msg_date": "Wed, 15 Nov 2023 15:16:37 +0530",
"msg_from": "Aditya Toshniwal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue with launching PGAdmin 4 on Mac OC"
}
] |
[
{
"msg_contents": "I have been working on using thread-safe locale APIs within \nPostgres where appropriate[0]. The patch that I originally submitted \ncrashed during initdb (whoops!), so I worked on fixing the crash, which \nled me to having to touch some code in chklocale.c, which became \na frustrating experience because chklocale.c is compiled in 3 different \nconfigurations.\n\n> pgport_variants = {\n> '_srv': internal_lib_args + {\n> 'dependencies': [backend_port_code],\n> },\n> '': default_lib_args + {\n> 'dependencies': [frontend_port_code],\n> },\n> '_shlib': default_lib_args + {\n> 'pic': true,\n> 'dependencies': [frontend_port_code],\n> },\n> }\n\nThis means that some APIs I added or changed in pg_locale.c can't be \nused without conditional compilation depending on what variant is being \ncompiled. Additionally, I also have conditional compilation based on \nHAVE_USELOCALE and WIN32.\n\nI would like to propose removing HAVE_USELOCALE, and just have WIN32, \nwhich means that Postgres would require uselocale(3) on anything that \nisn't WIN32.\n\n[0]: https://www.postgresql.org/message-id/[email protected]\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 15 Nov 2023 04:27:49 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> I would like to propose removing HAVE_USELOCALE, and just have WIN32, \n> which means that Postgres would require uselocale(3) on anything that \n> isn't WIN32.\n\nYou would need to do some research and try to prove that that won't\nbe a problem on any modern platform. Presumably it once was a problem,\nor we'd not have bothered with a configure check.\n\n(Some git archaeology might yield useful info about when and why\nwe added the check.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Nov 2023 12:45:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 6:45 AM Tom Lane <[email protected]> wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > I would like to propose removing HAVE_USELOCALE, and just have WIN32,\n> > which means that Postgres would require uselocale(3) on anything that\n> > isn't WIN32.\n>\n> You would need to do some research and try to prove that that won't\n> be a problem on any modern platform. Presumably it once was a problem,\n> or we'd not have bothered with a configure check.\n>\n> (Some git archaeology might yield useful info about when and why\n> we added the check.)\n\nAccording to data I scraped from the build farm, the last two systems\nwe had that didn't have uselocale() were curculio (OpenBSD 5.9) and\nwrasse (Solaris 11.3), but those were both shut down (though wrasse\nstill runs old branches) as they were well out of support. OpenBSD\ngained uselocale() in 6.2, and Solaris in 11.4, as part of the same\nsuite of POSIX changes that we already required in commit 8d9a9f03.\n\n+1 for the change.\n\nhttps://man.openbsd.org/uselocale.3\nhttps://docs.oracle.com/cd/E88353_01/html/E37843/uselocale-3c.html\n\n\n",
"msg_date": "Thu, 16 Nov 2023 07:38:55 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> \"Tristan Partin\" <[email protected]> writes:\n>> I would like to propose removing HAVE_USELOCALE, and just have WIN32, \n>> which means that Postgres would require uselocale(3) on anything that \n>> isn't WIN32.\n>\n> You would need to do some research and try to prove that that won't\n> be a problem on any modern platform. Presumably it once was a problem,\n> or we'd not have bothered with a configure check.\n>\n> (Some git archaeology might yield useful info about when and why\n> we added the check.)\n\nFor reference, the Perl effort to use the POSIX.1-2008 thread-safe\nlocale APIs has revealed several platform-specific bugs that cause it\nto disable them on FreeBSD and macOS:\n\nhttps://github.com/perl/perl5/commit/9cbc12c368981c56d4d8e40cc9417ac26bec2c35\nhttps://github.com/perl/perl5/commit/dd4eb78c55aab441aec1639b1dd49f88bd960831\n\nand work around bugs on others (e.g. OpenBSD):\n\nhttps://github.com/perl/perl5/commit/0f3830f3997cf7ef1531bad26d2e0f13220dd862\n\nBut Perl actually makes use of per-thread locales, because it has a\nseparate interpreter per thread, each of which can have a different\nlocale active. Since Postgres isn't actually multi-threaded (yet),\nthese concerns might not apply to the same degree.\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n",
"msg_date": "Wed, 15 Nov 2023 18:42:31 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> \"Tristan Partin\" <[email protected]> writes:\n>>> I would like to propose removing HAVE_USELOCALE, and just have WIN32, \n>>> which means that Postgres would require uselocale(3) on anything that \n>>> isn't WIN32.\n\n>> You would need to do some research and try to prove that that won't\n>> be a problem on any modern platform. Presumably it once was a problem,\n>> or we'd not have bothered with a configure check.\n\n> For reference, the Perl effort to use the POSIX.1-2008 thread-safe\n> locale APIs have revealed several platform-specific bugs that cause it\n> to disable them on FreeBSD and macOS:\n> https://github.com/perl/perl5/commit/9cbc12c368981c56d4d8e40cc9417ac26bec2c35\n> https://github.com/perl/perl5/commit/dd4eb78c55aab441aec1639b1dd49f88bd960831\n> and work around bugs on others (e.g. OpenBSD):\n> https://github.com/perl/perl5/commit/0f3830f3997cf7ef1531bad26d2e0f13220dd862\n> But Perl actually makes use of per-thread locales, because it has a\n> separate interpereer per thread, each of which can have a different\n> locale active. Since Postgres isn't actually multi-threaded (yet),\n> these concerns might not apply to the same degree.\n\nInteresting. That need not stop us from dropping the configure\ncheck for uselocale(), but it might be a problem for Tristan's\nlarger ambitions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Nov 2023 14:04:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 7:42 AM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n> Tom Lane <[email protected]> writes:\n>\n> > \"Tristan Partin\" <[email protected]> writes:\n> >> I would like to propose removing HAVE_USELOCALE, and just have WIN32,\n> >> which means that Postgres would require uselocale(3) on anything that\n> >> isn't WIN32.\n> >\n> > You would need to do some research and try to prove that that won't\n> > be a problem on any modern platform. Presumably it once was a problem,\n> > or we'd not have bothered with a configure check.\n> >\n> > (Some git archaeology might yield useful info about when and why\n> > we added the check.)\n>\n> For reference, the Perl effort to use the POSIX.1-2008 thread-safe\n> locale APIs have revealed several platform-specific bugs that cause it\n> to disable them on FreeBSD and macOS:\n>\n> https://github.com/perl/perl5/commit/9cbc12c368981c56d4d8e40cc9417ac26bec2c35\n\nInteresting that C vs C.UTF-8 has come up there, something that has\nalso confused us and others (in fact I still owe Daniel Vérité a\nresponse to his complaint about how we treat the latter; I got stuck\non a logical problem with the proposal and then dumped core...). The\nidea of C.UTF-8 is relatively new, and seems to have shaken a few bugs\nout in a few places. Anyway, that in particular is a brand new\nFreeBSD bug report and I am sure it will be addressed soon.\n\n> https://github.com/perl/perl5/commit/dd4eb78c55aab441aec1639b1dd49f88bd960831\n\nAs for macOS, one thing I noticed is that the FreeBSD -> macOS\npipeline appears to have re-awoken after many years of slumber. I\ndon't know anything about that other than that when I recently\nupgraded my Mac to 14.1, suddenly a few userspace tools are now\nrunning the recentish FreeBSD versions of certain userland tools (tar,\ngrep, ...?), instead of something from the Jurassic. Whether that\nmight apply to libc, who can say... 
they seemed to have quite ancient\nBSD locale code last time I checked.\n\n> https://github.com/perl/perl5/commit/0f3830f3997cf7ef1531bad26d2e0f13220dd862\n\nThat linked issue appears to be fixed already.\n\n> But Perl actually makes use of per-thread locales, because it has a\n> separate interpereer per thread, each of which can have a different\n> locale active. Since Postgres isn't actually multi-threaded (yet),\n> these concerns might not apply to the same degree.\n\nECPG might use them in multi-threaded code. I'm not sure if it's a\nproblem and whose problem it is.\n\n\n",
"msg_date": "Thu, 16 Nov 2023 08:16:31 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> On Thu, Nov 16, 2023 at 6:45 AM Tom Lane <[email protected]> wrote:\n>> You would need to do some research and try to prove that that won't\n>> be a problem on any modern platform. Presumably it once was a problem,\n>> or we'd not have bothered with a configure check.\n\n> According to data I scraped from the build farm, the last two systems\n> we had that didn't have uselocale() were curculio (OpenBSD 5.9) and\n> wrasse (Solaris 11.3), but those were both shut down (though wrasse\n> still runs old branches) as they were well out of support.\n\nAFAICS, NetBSD still doesn't have it. They have no on-line man page\nfor it, and my animal mamba shows it as not found.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Nov 2023 15:51:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 9:51 AM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > On Thu, Nov 16, 2023 at 6:45 AM Tom Lane <[email protected]> wrote:\n> >> You would need to do some research and try to prove that that won't\n> >> be a problem on any modern platform. Presumably it once was a problem,\n> >> or we'd not have bothered with a configure check.\n>\n> > According to data I scraped from the build farm, the last two systems\n> > we had that didn't have uselocale() were curculio (OpenBSD 5.9) and\n> > wrasse (Solaris 11.3), but those were both shut down (though wrasse\n> > still runs old branches) as they were well out of support.\n>\n> AFAICS, NetBSD still doesn't have it. They have no on-line man page\n> for it, and my animal mamba shows it as not found.\n\nOh :-( I see that but had missed that sidewinder was NetBSD and my\nscraped data predates mamba. Sorry for the wrong info.\n\nCurrently pg_locale.c requires systems to have *either* uselocale() or\nmbstowcs_l()/wcstombs_l(), but NetBSD satisfies the second\nrequirement. The other uses of uselocale() are in ECPG code that must\nbe falling back to the setlocale() path. In other words, isn't it the\ncase that we don't require uselocale() to compile ECPG stuff, but it'll\nprobably crash or corrupt itself or give wrong answers if you push it\non NetBSD, so... uhh, really we do require it?\n\n\n",
"msg_date": "Thu, 16 Nov 2023 10:08:12 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> Currently pg_locale.c requires systems to have *either* uselocale() or\n> mbstowcs_l()/wcstombs_l(), but NetBSD satisfies the second\n> requirement.\n\nCheck.\n\n> The other uses of uselocale() are in ECPG code that must\n> be falling back to the setlocale() path. In other words, isn't it the\n> case that we don't require uselocale() to compile ECPG stuff, but it'll\n> probably crash or corrupt itself or give wrong answers if you push it\n> on NetBSD, so... uhh, really we do require it?\n\nDunno. mamba is getting through the ecpg regression tests okay,\nbut we all know that doesn't prove a lot. (AFAICS, ecpg only\ncares about this to the extent of not wanting an LC_NUMERIC\nlocale where the decimal point isn't '.'. I'm not sure that\nNetBSD supports any such locale anyway --- I think they're like\nOpenBSD in having only pro-forma locale support.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Nov 2023 16:17:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 10:17 AM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > The other uses of uselocale() are in ECPG code that must\n> > be falling back to the setlocale() path. In other words, isn't it the\n> > case that we don't require uselocale() to compile ECPG stuff, but it'll\n> > probably crash or corrupt itself or give wrong answers if you push it\n> > on NetBSD, so... uhh, really we do require it?\n>\n> Dunno. mamba is getting through the ecpg regression tests okay,\n> but we all know that doesn't prove a lot. (AFAICS, ecpg only\n> cares about this to the extent of not wanting an LC_NUMERIC\n> locale where the decimal point isn't '.'. I'm not sure that\n> NetBSD supports any such locale anyway --- I think they're like\n> OpenBSD in having only pro-forma locale support.)\n\nIdea #1\n\nFor output, which happens with sprintf(ptr, \"%.15g%s\", ...) in\nexecute.c, perhaps we could use our in-tree Ryu routine instead?\n\nFor input, which happens with strtod() in data.c, rats, we don't have\na parser and I understand that it is not for the faint of heart (naive\nimplementation gets subtle things wrong, cf \"How to read floating\npoint numbers accurately\" by W D Clinger + whatever improvements have\nhappened in this space since 1990).\n\nIdea #2\n\nPerhaps we could use snprintf_l() and strtod_l() where available.\nThey're not standard, but they are obvious extensions that NetBSD and\nWindows have, and those are the two systems for which we are doing\ngrotty things in that code. That would amount to extending\npg_locale.c's philosophy: either you must have uselocale(), or the\nfull set of _l() functions (that POSIX failed to standardise, dunno\nwhat the history is behind that, seems weird).\n\n\n",
"msg_date": "Thu, 16 Nov 2023 11:40:07 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> Idea #1\n\n> For output, which happens with sprintf(ptr, \"%.15g%s\", ...) in\n> execute.c, perhaps we could use our in-tree Ryu routine instead?\n\n> For input, which happens with strtod() in data.c, rats, we don't have\n> a parser and I understand that it is not for the faint of heart\n\nYeah. Getting rid of ecpg's use of uselocale() would certainly be\nnice, but I'm not ready to add our own implementation of strtod()\nto get there.\n\n> Idea #2\n\n> Perhaps we could use snprintf_l() and strtod_l() where available.\n> They're not standard, but they are obvious extensions that NetBSD and\n> Windows have, and those are the two systems for which we are doing\n> grotty things in that code.\n\nOooh, shiny. I do not see any man page for strtod_l, but I do see\nthat it's declared on mamba's host. I wonder how long they've had it?\nThe man page for snprintf_l appears to be quite ancient, so we could\nhope that strtod_l is available on all versions anyone cares about.\n\n> That would amount to extending\n> pg_locale.c's philosophy: either you must have uselocale(), or the\n> full set of _l() functions (that POSIX failed to standardise, dunno\n> what the history is behind that, seems weird).\n\nYeah. I'd say the _l functions should be preferred over uselocale()\nif available, but sadly they're not there on common systems. (It\nlooks like glibc has strtod_l but not snprintf_l, which is odd.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Nov 2023 18:06:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 12:06 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > Perhaps we could use snprintf_l() and strtod_l() where available.\n> > They're not standard, but they are obvious extensions that NetBSD and\n> > Windows have, and those are the two systems for which we are doing\n> > grotty things in that code.\n>\n> Oooh, shiny. I do not see any man page for strtod_l, but I do see\n> that it's declared on mamba's host. I wonder how long they've had it?\n> The man page for snprintf_l appears to be quite ancient, so we could\n> hope that strtod_l is available on all versions anyone cares about.\n\nA decade[1]. And while I'm doing archeology, I noticed that POSIX has\nagreed[2] in principle that *all* functions affected by the thread's\ncurrent locale should have a _l() variant, it's just that no one has\nsent in the patch.\n\n> > That would amount to extending\n> > pg_locale.c's philosophy: either you must have uselocale(), or the\n> > full set of _l() functions (that POSIX failed to standardise, dunno\n> > what the history is behind that, seems weird).\n>\n> Yeah. I'd say the _l functions should be preferred over uselocale()\n> if available, but sadly they're not there on common systems. (It\n> looks like glibc has strtod_l but not snprintf_l, which is odd.)\n\nHere is a first attempt. In this version, new functions are exported\nby pgtypeslib. I realised that I had to do it in there because ECPG's\nuselocale() jiggery-pokery is clearly intended to affect the\nconversions happening in there too, and we probably don't want\ncircular dependencies between pgtypeslib and ecpglib. 
I think this\nmeans that pgtypeslib is actually subtly b0rked if you use it\nindependently without an ECPG connection (is that a thing people do?),\nbecause all that code copied-and-pasted from the backend when run in\nfrontend code with eg a French locale will produce eg \"0,42\"; this\npatch doesn't change that.\n\nI also had a go[3] at doing it with static inlined functions, to avoid\ncreating a load of new exported functions and associated function call\noverheads. It worked fine, except on Windows: I needed a global\nvariable PGTYPESclocale that all the inlined functions can see when\ncalled from ecpglib or pgtypeslib code, but if I put that in the\nexports list then on that platform it seems to contain garbage; there\nis probably some other magic needed to export non-function symbols\nfrom the DLL or something like that, I didn't look into it. See CI\nfailure + crash dumps.\n\n[1] https://github.com/NetBSD/src/commit/c99aac45e540bc210cc660619a6b5323cbb5c17f\n[2] https://www.austingroupbugs.net/view.php?id=1004\n[3] https://github.com/macdice/postgres/tree/strtod_l_inline",
"msg_date": "Fri, 17 Nov 2023 08:57:47 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> On Thu, Nov 16, 2023 at 12:06 PM Tom Lane <[email protected]> wrote:\n>> Thomas Munro <[email protected]> writes:\n>>> Perhaps we could use snprintf_l() and strtod_l() where available.\n>>> They're not standard, but they are obvious extensions that NetBSD and\n>>> Windows have, and those are the two systems for which we are doing\n>>> grotty things in that code.\n\n>> Yeah. I'd say the _l functions should be preferred over uselocale()\n>> if available, but sadly they're not there on common systems. (It\n>> looks like glibc has strtod_l but not snprintf_l, which is odd.)\n\n> Here is a first attempt.\n\nI've not reviewed this closely, but I did try it on mamba's host.\nIt compiles and passes regression testing, but I see two warnings:\n\ncommon.c: In function 'PGTYPESsprintf':\ncommon.c:120:2: warning: function 'PGTYPESsprintf' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]\n 120 | return vsprintf_l(str, PGTYPESclocale, format, args);\n | ^~~~~~\ncommon.c: In function 'PGTYPESsnprintf':\ncommon.c:136:2: warning: function 'PGTYPESsnprintf' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]\n 136 | return vsnprintf_l(str, size, PGTYPESclocale, format, args);\n | ^~~~~~\n\nThat happens because on NetBSD, we define PG_PRINTF_ATTRIBUTE as\n\"__syslog__\" so that the compiler will not warn about use of %m\n(apparently, they support %m in syslog() but not printf(), sigh).\n\nI think this is telling us about an actual problem: these new\nfunctions are based on libc's printf not what we have in snprintf.c,\nand therefore we really shouldn't be assuming that they will support\nany format specs beyond what POSIX requires for printf. If somebody\ntried to use %m in one of these calls, we'd like to get warnings about\nthat.\n\nI experimented with the attached delta patch and it does silence\nthese warnings. 
I suspect that ecpg_log() should be marked as\npg_attribute_std_printf() too, because it has the same issue,\nbut I didn't try that. (Probably, we see no warning for that\nbecause the compiler isn't quite bright enough to connect the\nformat argument with the string that gets passed to vfprintf().)\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 17 Nov 2023 13:18:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "I wrote:\n> I've not reviewed this closely, but I did try it on mamba's host.\n> It compiles and passes regression testing, but I see two warnings:\n\n> common.c: In function 'PGTYPESsprintf':\n> common.c:120:2: warning: function 'PGTYPESsprintf' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]\n> 120 | return vsprintf_l(str, PGTYPESclocale, format, args);\n> | ^~~~~~\n> common.c: In function 'PGTYPESsnprintf':\n> common.c:136:2: warning: function 'PGTYPESsnprintf' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]\n> 136 | return vsnprintf_l(str, size, PGTYPESclocale, format, args);\n> | ^~~~~~\n\n> I think this is telling us about an actual problem: these new\n> functions are based on libc's printf not what we have in snprintf.c,\n> and therefore we really shouldn't be assuming that they will support\n> any format specs beyond what POSIX requires for printf.\n\nWait, I just realized that there's more to this. ecpglib *does*\nrely on our snprintf.c functions:\n\n$ nm --ext --undef src/interfaces/ecpg/ecpglib/*.o | grep printf \n U pg_snprintf\n U pg_fprintf\n U pg_snprintf\n U pg_printf\n U pg_snprintf\n U pg_sprintf\n U pg_fprintf\n U pg_snprintf\n U pg_vfprintf\n U pg_snprintf\n U pg_sprintf\n U pg_sprintf\n\nWe are getting these warnings because vsprintf_l and\nvsnprintf_l don't have snprintf.c implementations, so the\ncompiler sees the attributes attached to them by stdio.h.\n\nThis raises the question of whether changing snprintf.c\ncould be part of the solution. I'm not sure that we want\nto try to emulate vs[n]printf_l directly, but perhaps there's\nanother way?\n\nIn any case, my concern about ecpg_log() is misplaced.\nThat is really using pg_vfprintf, so it's correctly marked.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Nov 2023 17:58:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 08:57:47 +1300, Thomas Munro wrote:\n> I also had a go[3] at doing it with static inlined functions, to avoid\n> creating a load of new exported functions and associated function call\n> overheads. It worked fine, except on Windows: I needed a global\n> variable PGTYPESclocale that all the inlined functions can see when\n> called from ecpglib or pgtypeslib code, but if I put that in the\n> exports list then on that platform it seems to contain garbage; there\n> is probably some other magic needed to export non-function symbols\n> from the DLL or something like that, I didn't look into it. See CI\n> failure + crash dumps.\n\nI suspect you'd need __declspec(dllimport) on the variable to make that work.\nI.e. use PGDLLIMPORT and define BUILDING_DLL while building the libraries, so\nthey see __declspec (dllexport). I luckily forgot the details, but functions\njust call into some thunk that does necessary magic, but that option doesn't\nexist for variables, so the compiler/linker have to do stuff, hence needing\n__declspec(dllimport).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Nov 2023 16:03:23 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Sat, Nov 18, 2023 at 11:58 AM Tom Lane <[email protected]> wrote:\n> I wrote:\n> > I've not reviewed this closely, but I did try it on mamba's host.\n> > It compiles and passes regression testing, but I see two warnings:\n>\n> > common.c: In function 'PGTYPESsprintf':\n> > common.c:120:2: warning: function 'PGTYPESsprintf' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]\n> > 120 | return vsprintf_l(str, PGTYPESclocale, format, args);\n> > | ^~~~~~\n> > common.c: In function 'PGTYPESsnprintf':\n> > common.c:136:2: warning: function 'PGTYPESsnprintf' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]\n> > 136 | return vsnprintf_l(str, size, PGTYPESclocale, format, args);\n> > | ^~~~~~\n>\n> > I think this is telling us about an actual problem: these new\n> > functions are based on libc's printf not what we have in snprintf.c,\n> > and therefore we really shouldn't be assuming that they will support\n> > any format specs beyond what POSIX requires for printf.\n\nRight, thanks.\n\n> We are getting these warnings because vsprintf_l and\n> vsnprintf_l don't have snprintf.c implementations, so the\n> compiler sees the attributes attached to them by stdio.h.\n>\n> This raises the question of whether changing snprintf.c\n> could be part of the solution. I'm not sure that we want\n> to try to emulate vs[n]printf_l directly, but perhaps there's\n> another way?\n\nYeah, I have been wondering about that too.\n\nThe stuff I posted so far was just about how to remove some gross and\nincorrect code from ecpg, a somewhat niche frontend part of\nPostgreSQL. I guess Tristan is thinking bigger: removing obstacles to\ngoing multi-threaded in the backend. 
Clearly locales are one of the\nplaces where global state will bite us, so we either need to replace\nsetlocale() with uselocale() for the database default locale, or use\nexplicit locale arguments with _l() functions everywhere and pass in\nthe right locale. Due to incompleteness of (a) libc implementations\nand (b) the standard, we can't directly do either, so we'll need to\ncope with that.\n\nThought experiment: If we supplied our own fallback _l() replacement\nfunctions where missing, and those did uselocale() save/restore, many\nsystems wouldn't need them, for example glibc has strtod_l() as you\nnoted, and several other systems have systematically added them for\nall sorts of stuff. The case of the *printf* family is quite\ninteresting, because there we already have our own implementation for other\nreasons, so it might make sense to add the _l() variants to our\nsnprintf.c implementations. On glibc, snprintf.c would have to do a\nuselocale() save/restore where it punts %g to the system snprintf, but\nif that offends some instruction cycle bean counter, perhaps we could\nreplace that bit with Ryu anyway (or is it not general enough to\nhandle all the stuff %g et al can do? I haven't looked).\n\nI am not sure how you would ever figure out what other stuff is\naffected by the global locale in general, for example code hiding in\nextensions etc, but, I mean, that's what's wrong with global state in\na nutshell and it has often been speculated that multi-threaded\nPostgreSQL might have a way to say 'I still want one process per\nsession because my extensions don't all identify themselves as\nthread-safe yet'.\n\nBTW is this comment in snprintf.c true?\n\n * 1. No locale support: the radix character is always '.' and the '\n * (single quote) format flag is ignored.\n\nIt is in the backend but only because we nail down LC_NUMERIC early\non, not because of any property of snprintf.c, no?\n\n\n",
"msg_date": "Mon, 20 Nov 2023 11:00:14 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> BTW is this comment in snprintf.c true?\n\n> * 1. No locale support: the radix character is always '.' and the '\n> * (single quote) format flag is ignored.\n\n> It is in the backend but only because we nail down LC_NUMERIC early\n> on, not because of any property of snprintf.c, no?\n\nHmm, the second part of it is true. But given that we punt float\nformatting to libc, I think you are right that the first part\ndepends on LC_NUMERIC being frozen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 Nov 2023 17:36:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 11:36 AM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > BTW is this comment in snprintf.c true?\n>\n> > * 1. No locale support: the radix character is always '.' and the '\n> > * (single quote) format flag is ignored.\n>\n> > It is in the backend but only because we nail down LC_NUMERIC early\n> > on, not because of any property of snprintf.c, no?\n>\n> Hmm, the second part of it is true. But given that we punt float\n> formatting to libc, I think you are right that the first part\n> depends on LC_NUMERIC being frozen.\n\nIf we are sure that we'll *never* want locale-aware printf-family\nfunctions (ie we *always* want \"C\" locale), then in the thought\nexperiment above where I suggested we supply replacement _l()\nfunctions, we could just skip that for the printf family, but make\nthat above comment actually true. Perhaps with Ryu, but otherwise by\npunting to libc _l() or uselocale() save/restore.\n\n\n",
"msg_date": "Mon, 20 Nov 2023 13:00:11 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> If we are sure that we'll *never* want locale-aware printf-family\n> functions (ie we *always* want \"C\" locale), then in the thought\n> experiment above where I suggested we supply replacement _l()\n> functions, we could just skip that for the printf family, but make\n> that above comment actually true. Perhaps with Ryu, but otherwise by\n> punting to libc _l() or uselocale() save/restore.\n\nIt is pretty annoying that we've got that shiny Ryu code and can't\nuse it here. From memory, we did look into that and concluded that\nRyu wasn't amenable to providing \"exactly this many digits\" as is\nrequired by most variants of printf's conversion specs. But maybe\nsomebody should go try harder. (Worst case, you could do rounding\noff by hand on the produced digit string, but that's ugly...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Nov 2023 11:40:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 5:40 AM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > If we are sure that we'll *never* want locale-aware printf-family\n> > functions (ie we *always* want \"C\" locale), then in the thought\n> > experiment above where I suggested we supply replacement _l()\n> > functions, we could just skip that for the printf family, but make\n> > that above comment actually true. Perhaps with Ryu, but otherwise by\n> > punting to libc _l() or uselocale() save/restore.\n\nHere is a new attempt at this can of portability worms. This time:\n\n* pg_get_c_locale() is available to anyone who needs a \"C\" locale_t\n* ECPG uses strtod_l(..., pg_get_c_locale()) for parsing\n* snprintf.c always uses \"C\" for floats, so it conforms to its own\ndocumented behaviour, and ECPG doesn't have to do anything special\n\nI'm not trying to offer a working *printf_l() family to the whole tree\nbecause it seems like really we only ever care about \"C\" for this\npurpose. So snprintf.c internally uses pg_get_c_locale() with\nsnprintf_l(), _snprintf_l() or uselocale()/snprintf()/uselocale()\ndepending on platform.\n\n> It is pretty annoying that we've got that shiny Ryu code and can't\n> use it here. From memory, we did look into that and concluded that\n> Ryu wasn't amenable to providing \"exactly this many digits\" as is\n> required by most variants of printf's conversion specs. But maybe\n> somebody should go try harder. (Worst case, you could do rounding\n> off by hand on the produced digit string, but that's ugly...)\n\nYeah it does seem like a promising idea, but I haven't looked into it myself.",
"msg_date": "Sat, 10 Aug 2024 13:29:51 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Sat, Aug 10, 2024 at 1:29 PM Thomas Munro <[email protected]> wrote:\n> Here is a new attempt at this can of portability worms.\n\nSlightly better version:\n\n* it's OK to keep relying on the global locale in the backend; for\nnow, we know that LC_NUMERIC is set in main(), and in the\nmulti-threaded future calling setlocale() even transiently will be\nbanned, so it seems it'll be OK to just keep doing that, right?\n\n* we could use LC_C_LOCALE to get a \"C\" locale slightly more\nefficiently on those; we could define it ourselves for other systems,\nusing pg_get_c_locale()",
"msg_date": "Sat, 10 Aug 2024 15:48:45 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Sat, Aug 10, 2024 at 3:48 PM Thomas Munro <[email protected]> wrote:\n> * we could use LC_C_LOCALE to get a \"C\" locale slightly more\n> efficiently on those\n\nOops, lost some words, I meant \"on those systems that have them (macOS\nand NetBSD AFAIK)\"\n\n\n",
"msg_date": "Sat, 10 Aug 2024 15:52:23 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "v4 adds error handling, in case newlocale(\"C\") fails. I created CF\nentry #5166 for this.",
"msg_date": "Sun, 11 Aug 2024 10:11:00 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "Hey Thomas,\n\nThanks for picking this up. I think your patch looks really good. Are \nyou familiar with gcc's function poisoning?\n\n\t#include <stdio.h>\n\t#pragma GCC poison puts\n\t\n\tint main(){\n\t#pragma GCC bless begin puts\n\t puts(\"a\");\n\t#pragma GCC bless end puts\n\t}\n\nI wonder if we could use function poisoning to our advantage. For \ninstance in ecpg, it looks like you got all of the strtod() invocations \nand replaced them with strtod_l(). Here is a patch with an example of \nwhat I'm talking about.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 13 Aug 2024 18:17:31 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On Wed, Aug 14, 2024 at 11:17 AM Tristan Partin <[email protected]> wrote:\n> Thanks for picking this up. I think your patch looks really good.\n\nThanks for looking!\n\n> Are\n> you familiar with gcc's function poisoning?\n>\n> #include <stdio.h>\n> #pragma GCC poison puts\n>\n> int main(){\n> #pragma GCC bless begin puts\n> puts(\"a\");\n> #pragma GCC bless end puts\n> }\n>\n> I wonder if we could use function poisoning to our advantage. For\n> instance in ecpg, it looks like you got all of the strtod() invocations\n> and replaced them with strtod_l(). Here is a patch with an example of\n> what I'm talking about.\n\nThanks, this looks very useful.\n\n\n",
"msg_date": "Thu, 15 Aug 2024 20:49:11 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
},
{
"msg_contents": "On 11.08.24 00:11, Thomas Munro wrote:\n> v4 adds error handling, in case newlocale(\"C\") fails. I created CF\n> entry #5166 for this.\n\nI took a look at this. It was quite a complicated discussion that led \nto this, but I agree with the solution that was arrived at.\n\nI suggest that the simplification of the xlocale.h configure tests could \nbe committed separately. This would also be useful independent of this, \nand it's a sizeable chunk of this patch.\n\nAlso, you're removing the configure test for _configthreadlocale(). \nPresumably because you're removing all the uses. But wouldn't we need \nthat back later in the backend maybe? Or is that test even relevant \nanymore, that is, are there Windows versions that don't have it?\n\nAdding global includes to port.h doesn't seem great. That's not a place \none would normally look. We already include <locale.h> in c.h anyway, \nso it would probably be even better overall if you just added a \nconditional #include <xlocale.h> to c.h as well.\n\nFor Windows, we already have things like\n\n#define strcoll_l _strcoll_l\n\nin src/include/port/win32_port.h, so it would seem more sensible to add \nstrtod_l to that list, instead of in port.h.\n\nThe error handling with pg_ensure_c_locale(), that's the sort of thing \nI'm afraid will be hard to test or even know how it will behave. And it \ncreates this weird coupling between pgtypeslib and ecpglib that you \nmentioned earlier. And if there are other users of PG_C_LOCALE in the \nfuture, there will be similar questions about the proper initialization \nand error handling sequence.\n\nI would consider instead making a local static variable in each function \nthat needs this. 
For example, numericvar_to_double() might do\n\n{\n static locale_t c_locale;\n\n if (!c_locale)\n {\n c_locale = pg_get_c_locale();\n if (!c_locale)\n return -1; /* local error reporting convention */\n }\n\n ...\n}\n\nThis is a bit more code in total, but then you only initialize what you \nneed and you can handle errors locally.\n\n\n\n",
"msg_date": "Wed, 28 Aug 2024 20:50:31 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On non-Windows, hard depend on uselocale(3)"
}
] |
[
{
"msg_contents": "Hi,\n\nA while back I had proposed annotations for palloc() et al that let the\ncompiler know about which allocators pair with what freeing functions. One\nthing that allows the compiler to do is to detect use after free.\n\nOne such complaint is:\n\n../../../../../home/andres/src/postgresql/src/backend/commands/tablecmds.c: In function ‘ATExecAttachPartition’:\n../../../../../home/andres/src/postgresql/src/backend/commands/tablecmds.c:18758:25: warning: pointer ‘partBoundConstraint’ may be used after ‘list_concat’ [-Wuse-after-free]\n18758 | get_proposed_default_constraint(partBoundConstraint);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../../../home/andres/src/postgresql/src/backend/commands/tablecmds.c:18711:26: note: call to ‘list_concat’ here\n18711 | partConstraint = list_concat(partBoundConstraint,\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n18712 | RelationGetPartitionQual(rel));\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\nAnd it seems quite right:\n\n\tpartConstraint = list_concat(partBoundConstraint,\n\t\t\t\t\t\t\t\t RelationGetPartitionQual(rel));\n\nAt this point partBoundConstraint may not be used anymore, because\nlist_concat() might have reallocated.\n\nBut then a few lines later:\n\n\t\t/* we already hold a lock on the default partition */\n\t\tdefaultrel = table_open(defaultPartOid, NoLock);\n\t\tdefPartConstraint =\n\t\t\tget_proposed_default_constraint(partBoundConstraint);\n\nWe use partBoundConstraint again.\n\nI unfortunately can't quickly enough identify what partConstraint,\ndefPartConstraint, partBoundConstraint are, so I don't really know what\nthe fix here is.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Nov 2023 08:57:37 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Potential use-after-free in partition related code"
},
{
"msg_contents": "On 2023-Nov-15, Andres Freund wrote:\n\n> \tpartConstraint = list_concat(partBoundConstraint,\n> \t\t\t\t\t\t\t\t RelationGetPartitionQual(rel));\n> \n> At this point partBoundConstraint may not be used anymore, because\n> list_concat() might have reallocated.\n> \n> But then a few lines later:\n> \n> \t\t/* we already hold a lock on the default partition */\n> \t\tdefaultrel = table_open(defaultPartOid, NoLock);\n> \t\tdefPartConstraint =\n> \t\t\tget_proposed_default_constraint(partBoundConstraint);\n> \n> We use partBoundConstraint again.\n\nYeah, this is wrong if partBoundConstraint is reallocated by\nlist_concat. One possible fix is to change list_concat to\nlist_concat_copy(), which leaves the original list unmodified.\n\nAFAICT the bug came in with 6f6b99d1335b, which added default\npartitions.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Now I have my system running, not a byte was off the shelf;\nIt rarely breaks and when it does I fix the code myself.\nIt's stable, clean and elegant, and lightning fast as well,\nAnd it doesn't cost a nickel, so Bill Gates can go to hell.\"\n\n\n",
"msg_date": "Wed, 15 Nov 2023 19:02:50 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential use-after-free in partition related code"
}
] |
[
{
"msg_contents": "Hi,\n\nAfter a recent commit 6a72c42f (a related discussion [1]) which\nremoved MemoryContextResetAndDeleteChildren(), I think there are a\ncouple of other backward compatibility macros out there that can be\nremoved. These macros are tuplestore_donestoring() which was\nintroduced by commit dd04e95 21 years ago and SPI_push() and friends\nwhich were made no-ops macros by commit 1833f1a 7 years ago. Debian\ncode search shows very minimal usages of these macros. Here's a patch\nattached to remove them.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/20231114175953.GD2062604%40nathanxps13\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 16 Nov 2023 19:11:41 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Do away with a few backwards compatibility macros"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 07:11:41PM +0530, Bharath Rupireddy wrote:\n> After a recent commit 6a72c42f (a related discussion [1]) which\n> removed MemoryContextResetAndDeleteChildren(), I think there are a\n> couple of other backward compatibility macros out there that can be\n> removed. These macros are tuplestore_donestoring() which was\n> introduced by commit dd04e95 21 years ago and SPI_push() and friends\n> which were made no-ops macros by commit 1833f1a 7 years ago. Debian\n> code search shows very minimal usages of these macros. Here's a patch\n> attached to remove them.\n\nI'm fine with this because all of these macros are no-ops for all supported\nversions of Postgres. Even if an extension is using them today, you'll get\nthe same behavior as before if you remove the uses and rebuild against\nv12-v16.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 09:46:22 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do away with a few backwards compatibility macros"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 09:46:22AM -0600, Nathan Bossart wrote:\n> On Thu, Nov 16, 2023 at 07:11:41PM +0530, Bharath Rupireddy wrote:\n>> After a recent commit 6a72c42f (a related discussion [1]) which\n>> removed MemoryContextResetAndDeleteChildren(), I think there are a\n>> couple of other backward compatibility macros out there that can be\n>> removed. These macros are tuplestore_donestoring() which was\n>> introduced by commit dd04e95 21 years ago and SPI_push() and friends\n>> which were made no-ops macros by commit 1833f1a 7 years ago. Debian\n>> code search shows very minimal usages of these macros. Here's a patch\n>> attached to remove them.\n> \n> I'm fine with this because all of these macros are no-ops for all supported\n> versions of Postgres. Even if an extension is using them today, you'll get\n> the same behavior as before if you remove the uses and rebuild against\n> v12-v16.\n\nBarring objections, I'll plan on committing this in the next week or so.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 20 Nov 2023 22:58:40 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do away with a few backwards compatibility macros"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Thu, Nov 16, 2023 at 09:46:22AM -0600, Nathan Bossart wrote:\n>> I'm fine with this because all of these macros are no-ops for all supported\n>> versions of Postgres. Even if an extension is using them today, you'll get\n>> the same behavior as before if you remove the uses and rebuild against\n>> v12-v16.\n\n> Barring objections, I'll plan on committing this in the next week or so.\n\nNo objection here, but should we try to establish some sort of project\npolicy around this sort of change (ie, removal of backwards-compatibility\nsupport)? \"Once it no longer matters for any supported version\" sounds\nabout right to me, but maybe somebody has an argument for thinking about\nit differently.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Nov 2023 00:05:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do away with a few backwards compatibility macros"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 12:05:36AM -0500, Tom Lane wrote:\n> No objection here, but should we try to establish some sort of project\n> policy around this sort of change (ie, removal of backwards-compatibility\n> support)? \"Once it no longer matters for any supported version\" sounds\n> about right to me, but maybe somebody has an argument for thinking about\n> it differently.\n\nThat seems reasonable to me. I don't think we need to mandate that\nbackwards-compatibility support be removed as soon as it is eligible, but\nit can be considered fair game at that point.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 21 Nov 2023 09:52:03 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do away with a few backwards compatibility macros"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 9:22 PM Nathan Bossart <[email protected]> wrote:\n>\n> On Tue, Nov 21, 2023 at 12:05:36AM -0500, Tom Lane wrote:\n> > No objection here, but should we try to establish some sort of project\n> > policy around this sort of change (ie, removal of backwards-compatibility\n> > support)? \"Once it no longer matters for any supported version\" sounds\n> > about right to me, but maybe somebody has an argument for thinking about\n> > it differently.\n>\n> That seems reasonable to me. I don't think we need to mandate that\n> backwards-compatibility support be removed as soon as it is eligible, but\n> it can be considered fair game at that point.\n\nI think it's easy to miss/enforce a documented policy. IMV, moving\ntowards pg_attribute_deprecated as Alvaro Herrera said in the other\nthread https://www.postgresql.org/message-id/202311141920.edtj56saukiv%40alvherre.pgsql\ncan help. Authors then can declare the variables and functions as\ndeprecated so that the code compilation with\n-Wno-deprecated-declarations can help track all such deprecated code.\n\nHaving said that, I'm all +1 if the v1 patch proposed in this thread gets in.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Nov 2023 16:29:18 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do away with a few backwards compatibility macros"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 04:29:18PM +0530, Bharath Rupireddy wrote:\n> I think it's easy to miss/enforce a documented policy. IMV, moving\n> towards pg_attribute_deprecated as Alvaro Herrera said in the other\n> thread https://www.postgresql.org/message-id/202311141920.edtj56saukiv%40alvherre.pgsql\n> can help. Authors then can declare the variables and functions as\n> deprecated so that the code compilation with\n> -Wno-deprecated-declarations can help track all such deprecated code.\n\nI'm +1 for adding pg_attribute_deprecated once we have something to use it\nfor.\n\n> Having said that, I'm all +1 if the v1 patch proposed in this thread gets in.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Nov 2023 13:14:39 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do away with a few backwards compatibility macros"
}
] |
[
{
"msg_contents": "Hello,\n\nI work on pgrx, a Rust crate (really, a set of them) that allows\npeople to use Rust to write extensions against Postgres, exporting\nwhat Postgres sees as ordinary \"C\" dynamic libraries. Unfortunately,\nthe build system for this is a touch complicated, as it cannot simply\nrun pgxs.mk, and as-of Postgres 16 it has been periodically failing on\nplatforms it used to do fine on, due to troubles involved with the\nSIMD extension headers.\n\nI have root-caused the exact problem, but the bug is... social, rather\nthan technical in nature: people with inadequate options at their\ndisposal have implemented increasingly clever educated guesses that\nare increasingly prone to going wrong, rather than just asking anyone\nto help them increase their options. Rather than continuing this\ntrend, I figured I would simply start doing things to hopefully draw\nthe line here. I will be looking to follow up with the bindgen tools\nthat fail to handle this correctly, but it would be nice if this\nstopped shipping in Postgres 16.\"${PG_NEXT_MINOR}\", as pgrx does need\nthe definitions in pg_wchar.h to have enough data to correctly\ndetermine database encoding and preserve certain Rust library\ninvariants (\"all Rust strings are correctly-formed UTF-8, anything\nelse is just a sequence of bytes\") without also obliterating\nperformance.\n\nOn the off-chance that everyone agrees with me without reserve, the\nattached patch moves the relevant code around (and includes the gory\ndetails). This seems to be unlikely to be the only mildly-exotic build\nsystem failure caused by such an overexposed implementation detail, so\nwhile I'm not married to this particular code motion, it seems best to\nimprove this some way.",
"msg_date": "Thu, 16 Nov 2023 12:10:59 -0800",
"msg_from": "Jubilee Young <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "Jubilee Young <[email protected]> writes:\n> I have root-caused the exact problem, but the bug is... social, rather\n> than technical in nature: people with inadequate options at their\n> disposal have implemented increasingly clever educated guesses that\n> are increasingly prone to going wrong, rather than just asking anyone\n> to help them increase their options. Rather than continuing this\n> trend, I figured I would simply start doing things to hopefully draw\n> the line here. I will be looking to follow up with the bindgen tools\n> that fail to handle this correctly, but it would be nice if this\n> stopped shipping in Postgres 16.\"${PG_NEXT_MINOR}\", as pgrx does need\n> the definitions in pg_wchar.h to have enough data to correctly\n> determine database encoding and preserve certain Rust library\n> invariants (\"all Rust strings are correctly-formed UTF-8, anything\n> else is just a sequence of bytes\") without also obliterating\n> performance.\n\nIt would be nice if you would state your problem straightforwardly,\nrather than burying us in irrelevant-to-us details; but apparently\nwhat you are unhappy about is that pg_wchar.h now #include's simd.h.\nThat evidently stems from commit 121d2d3d7 trying to make\nis_valid_ascii() faster.\n\nCurrently the only caller of is_valid_ascii() is in wchar.c,\nand so we could easily fix this by moving is_valid_ascii()\ninto wchar.c as your patch proposes. However ... I suppose the\npoint of having it as a \"static inline\" in a header file is to\nbe able to optimize other call sites too. So I wonder if there\nused to be some, or this was just somebody's over-eagerness to\nexpose stuff they thought possibly might be useful. And maybe\nmore to the point, are we worried about there being other\ncallers in future? I'm really not sure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Nov 2023 17:49:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 12:10:59PM -0800, Jubilee Young wrote:\n> On the off-chance that everyone agrees with me without reserve, the\n> attached patch moves the relevant code around (and includes the gory\n> details). This seems to be unlikely to be the only mildly-exotic build\n> system failure caused by such an overexposed implementation detail, so\n> while I'm not married to this particular code motion, it seems best to\n> improve this some way.\n\nIt looks like is_valid_ascii() was originally added to pg_wchar.h so that\nit could easily be used elsewhere [0] [1], but that doesn't seem to have\nhappened yet.\n\nWould moving this definition to a separate header file be a viable option?\nThat'd still break any existing projects that are using it, but at least\nthere'd be an easy fix. I'm not sure there _are_ any other projects using\nit, anyway. However, both of these proposals feel like they might be\nslightly beyond what we'd ordinarily consider back-patching.\n\nThat being said, it's not unheard of for Postgres to make adjustments for\nthird-party code (search for \"pljava\" in commits 62aba76 and f4aa3a1). I\nread the description of the pgrx problem [2], and I'm still trying to\nunderstand the scope of the issue. I don't think it's reasonable to expect\nsomeone building an extension to always use the exact same compiler that\nwas used by the packager, but I also don't fully understand why different\ncompilations of an inline function would cause problems.\n\n[0] https://postgr.es/m/CAFBsxsHG%3Dg6W8Mie%2B_NO8dV6O0pO2stxrnS%3Dme5ZmGqk--fd5g%40mail.gmail.com\n[1] https://postgr.es/m/CAFBsxsH1Yutrmu%2B6LLHKK8iXY%2BvG--Do6zN%2B2900spHXQNNQKQ%40mail.gmail.com\n[2] https://github.com/pgcentralfoundation/pgrx/issues/1298\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 16:54:02 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> It looks like is_valid_ascii() was originally added to pg_wchar.h so that\n> it could easily be used elsewhere [0] [1], but that doesn't seem to have\n> happened yet.\n\nIt seems to be new as of v15, so there wouldn't have been a lot of time\nfor external code to adopt it. As far as I can tell from Debian Code\nSearch, nobody has yet.\n\n> Would moving this definition to a separate header file be a viable option?\n> That'd still break any existing projects that are using it, but at least\n> there'd be an easy fix.\n\nThat would provide a little bit of cover, at least, compared to just\nhiding it in the .c file.\n\nI'm generally sympathetic to the idea that simd.h was a rather large\ndependency to add to something as widely used as pg_wchar.h. So I'd\nfavor getting it out of there just on compilation-time grounds,\nindependently of whether it's causing active problems. That argument\nwouldn't justify a back-patch, but \"it's causing problems\" might.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Nov 2023 18:06:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 2:54 PM Nathan Bossart <[email protected]> wrote:\n> That being said, it's not unheard of for Postgres to make adjustments for\n> third-party code (search for \"pljava\" in commits 62aba76 and f4aa3a1). I\n> read the description of the pgrx problem [2], and I'm still trying to\n> understand the scope of the issue. I don't think it's reasonable to expect\n> someone building an extension to always use the exact same compiler that\n> was used by the packager, but I also don't fully understand why different\n> compilations of an inline function would cause problems.\n>\n> [0] https://postgr.es/m/CAFBsxsHG%3Dg6W8Mie%2B_NO8dV6O0pO2stxrnS%3Dme5ZmGqk--fd5g%40mail.gmail.com\n> [1] https://postgr.es/m/CAFBsxsH1Yutrmu%2B6LLHKK8iXY%2BvG--Do6zN%2B2900spHXQNNQKQ%40mail.gmail.com\n> [2] https://github.com/pgcentralfoundation/pgrx/issues/1298\n>\n\nWe don't directly `#include` C into Rust, but use libclang to preprocess and\ncompile a wrapping C header into a list of symbols Rust will look for at link\ntime. 
Our failure is in libclang and how we steer it:\n- The Clang-C API (libclang.so) cannot determine where system headers are.\n- A clang executable can determine where system headers are, but our bindgen\nmay be asked to use a libclang.so without a matching clang executable!\n- This is partly because system packagers do not agree on what clang parts\nmust be installed together, nor even on the clang directory's layout.\n- Thus, it is currently impossible to, given a libclang.so, determine with\n100% accuracy where version-appropriate system headers are and include them,\nnor does it do so implicitly.\n- Users cannot be expected to always have reasonable directory layouts, nor\nalways have one clang + libclang.so + clang/\"$MAJOR\"/include on the system.\n- We have tried various solutions and have had several users report, by various\nchannels, that their builds are breaking, even after they charitably try out\nthe patches I offer in their CI. Especially after system updates.\n\nThe clang-sys and rust-bindgen crates committed a series of unfortunate hacks\nthat surprisingly work. But the only real solution is actually exposing the\nC++ API for header searches to Clang-C, and then move that up the deps chain.\nPerhaps libclang-18.so will not have this problem?\n\n- Jubilee\n\n\n",
"msg_date": "Thu, 16 Nov 2023 17:11:03 -0800",
"msg_from": "Jubilee Young <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 06:06:30PM -0500, Tom Lane wrote:\n> I'm generally sympathetic to the idea that simd.h was a rather large\n> dependency to add to something as widely used as pg_wchar.h. So I'd\n> favor getting it out of there just on compilation-time grounds,\n> independently of whether it's causing active problems. That argument\n> wouldn't justify a back-patch, but \"it's causing problems\" might.\n\nGiven the lack of evidence of anyone else using is_valid_ascii(), I'm\nleaning towards back-patching being the better option in this case. I\ndon't know if it'll be feasible to keep simd.h out of all headers that\nthird-party code might want to use forever, but that's not an argument\nagainst doing this right now for pgrx.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 22:38:22 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 5:54 AM Nathan Bossart <[email protected]> wrote:\n>\n> It looks like is_valid_ascii() was originally added to pg_wchar.h so that\n> it could easily be used elsewhere [0] [1], but that doesn't seem to have\n> happened yet.\n>\n> Would moving this definition to a separate header file be a viable option?\n\nSeems fine to me. (I believe the original motivation for making it an\ninline function was for in pg_mbstrlen_with_len(), but trying that\nhasn't been a priority.)\n\n\n",
"msg_date": "Fri, 17 Nov 2023 17:26:20 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 2:26 AM John Naylor <[email protected]> wrote:\n>\n> On Fri, Nov 17, 2023 at 5:54 AM Nathan Bossart <[email protected]> wrote:\n> >\n> > It looks like is_valid_ascii() was originally added to pg_wchar.h so that\n> > it could easily be used elsewhere [0] [1], but that doesn't seem to have\n> > happened yet.\n> >\n> > Would moving this definition to a separate header file be a viable option?\n>\n> Seems fine to me. (I believe the original motivation for making it an\n> inline function was for in pg_mbstrlen_with_len(), but trying that\n> hasn't been a priority.)\n\nIn that case, I took a look across the codebase and saw a\nutils/ascii.h that doesn't\nseem to have gotten much love, but I suppose one could argue that it's intended\nto be a backend-only header file?\n\nAs the codebase is growing some enhanced UTF-8 support, you'll want somewhere\nthat contains the optimized US-ASCII routines, because, as US-ASCII is\na subset of\nUTF-8, and often faster to handle, it's typical for such codepaths to look like\n\n```c\nwhile (i < len && no_multibyte_chars) {\n i = i + ascii_op_version(i, buffer, &no_multibyte_chars);\n}\n\nwhile (i < len) {\n i = i + utf8_op_version(i, buffer);\n}\n```\n\nSo it should probably end up living somewhere near the UTF-8 support, and\nthe easiest way to make it not go into something pgrx currently\nincludes would be\nto make it a new header file, though there's a fair amount of API we\ndon't touch.\n\n From the pgrx / Rust perspective, Postgres function calls are passed\nvia callback\nto a \"guard function\" that guarantees that longjmp and setjmp don't\ncause trouble\n(and makes sure we participate in that). So we only want to call\nPostgres functions\nif we \"can't replace\" them, as the overhead is quite a lot. That means\nUTF-8-per-se\nfunctions aren't very interesting to us as the Rust language already\nsupports it, but\nwe do benefit from access to transcoding to/from UTF-8.\n\n—Jubilee\n\n\n",
"msg_date": "Mon, 20 Nov 2023 10:50:36 -0800",
"msg_from": "Jubilee Young <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 10:50:36AM -0800, Jubilee Young wrote:\n> In that case, I took a look across the codebase and saw a\n> utils/ascii.h that doesn't\n> seem to have gotten much love, but I suppose one could argue that it's intended\n> to be a backend-only header file?\n\nThat might work. It's not #included in very many files, so adding\nport/simd.h shouldn't be too bad. And ascii.h is also pretty inexpensive,\nso including it in wchar.c seems permissible, too. I'm not certain this\ndoesn't cause problems with libpgcommon, but I don't see why it would,\neither.\n\n> So it should probably end up living somewhere near the UTF-8 support, and\n> the easiest way to make it not go into something pgrx currently\n> includes would be\n> to make it a new header file, though there's a fair amount of API we\n> don't touch.\n\nDoes pgrx use ascii.h at all?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 20 Nov 2023 16:52:22 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-16 17:11:03 -0800, Jubilee Young wrote:\n> We don't directly `#include` C into Rust, but use libclang to preprocess and\n> compile a wrapping C header into a list of symbols Rust will look for at link\n> time. Our failure is in libclang and how we steer it:\n> - The Clang-C API (libclang.so) cannot determine where system headers are.\n> - A clang executable can determine where system headers are, but our bindgen\n> may be asked to use a libclang.so without a matching clang executable!\n> - This is partly because system packagers do not agree on what clang parts\n> must be installed together, nor even on the clang directory's layout.\n> - Thus, it is currently impossible to, given a libclang.so, determine with\n> 100% accuracy where version-appropriate system headers are and include them,\n> nor does it do so implicitly.\n\nI remember battling this in the past, independent of rust :(\n\n\nWhat I don't quite get is why SIMD headers are particularly more problematic\nthan others - there's other headers that are compiler specific?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Nov 2023 16:10:23 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 05:14:17PM -0800, Jubilee Young wrote:\n> On Mon, Nov 20, 2023 at 2:52 PM Nathan Bossart <[email protected]> wrote:\n>> Does pgrx use ascii.h at all?\n> \n> We don't use utils/ascii.h, no.\n\nAlright. The next minor release isn't until February, so I'll let this one\nsit a little while longer in case anyone objects to back-patching something\nlike this [0].\n\n[0] https://postgr.es/m/attachment/152305/move_is_valid_ascii_v2.patch\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 20 Nov 2023 22:39:43 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "> On Nov 20, 2023, at 7:10 PM, Andres Freund <[email protected]> wrote:\n> \n> \n> What I don't quite get is why SIMD headers are particularly more problematic\n> than others - there's other headers that are compiler specific?\n\nThe short answer is the rust-based bindings generation tool pgrx uses (bindgen) is a little brain dead and gets confused when there’s multiple compiler builtins headers on the host.\n\nThis simd header is the first of its kind we’ve run across that’s exposed via Postgres’ “public api”. And bindgen wants to follow all the includes, it gets confused, picks the wrong one, and then errors happen.\n\nAnd I don’t know that it makes much sense for Postgres to include such a header into 3rd-party code anyways. \n\nI think Jubilee is also working with them to fix this, but we’re hoping Jubilee’s patch here (or similar) can get merged so we can clear our build drama.\n\neric\n\n",
"msg_date": "Tue, 21 Nov 2023 10:14:32 -0500",
"msg_from": "Eric Ridge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "(I hope you don't mind I'm reposting your reply -- I accidentally replied directly to you b/c phone)\n\n> On Nov 21, 2023, at 11:56 AM, Andres Freund <[email protected]> wrote:\n> \n> Hi,\n> \n> On 2023-11-21 10:11:18 -0500, Eric Ridge wrote:\n>> On Mon, Nov 20, 2023 at 7:10 PM Andres Freund <[email protected]> wrote:\n\n<snip>\n\n>> And I don’t know that it makes much sense for Postgres to include such a\n>> header into 3rd-party code anyways.\n> \n> Well, we want to expose such functionality to extensions. For some cases using\n> full functions would to be too expensive, hence using static inlines. In case\n> of exposing simd stuff, that means we need to include headers.\n\nSure. Probably not my place to make that kind of broad statement anyways. The \"static inlines\" are problematic for us in pgrx-land too, but that's a different problem for another day.\n\n\n> I'm not against splitting this out of pg_wchar.h, to be clear - that's a too\n> widely used header for, so there's a good independent reason for such a\n> change. I just don't really believe that moving simd.h out of there will end\n> the issue, we'll add more inlines using simd over time...\n\nYeah and that's why Jubilee is working with the bindgen folks to tighten this up for good.\n\n(We tracked all of the pg16 betas and didn't run into this until after pg16 went gold. I personally haven't groveled through the git logs to see when this header/static inline was added, but we'd have reported this sooner had we found it sooner.)\n\neric\n\n",
"msg_date": "Tue, 21 Nov 2023 12:19:38 -0500",
"msg_from": "Eric Ridge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 10:39:43PM -0600, Nathan Bossart wrote:\n> Alright. The next minor release isn't until February, so I'll let this one\n> sit a little while longer in case anyone objects to back-patching something\n> like this [0].\n> \n> [0] https://postgr.es/m/attachment/152305/move_is_valid_ascii_v2.patch\n\nBarring objections, I plan to commit this and back-patch it to v16 in the\nnext few days.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 Jan 2024 16:43:29 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "On Thu, Jan 04, 2024 at 04:43:29PM -0600, Nathan Bossart wrote:\n> On Mon, Nov 20, 2023 at 10:39:43PM -0600, Nathan Bossart wrote:\n>> Alright. The next minor release isn't until February, so I'll let this one\n>> sit a little while longer in case anyone objects to back-patching something\n>> like this [0].\n>> \n>> [0] https://postgr.es/m/attachment/152305/move_is_valid_ascii_v2.patch\n> \n> Barring objections, I plan to commit this and back-patch it to v16 in the\n> next few days.\n\nApologies for the delay. We're getting close to the February release, so I\nshould probably take care of this one soon...\n\nI see that I was planning on back-patching this to v16, but since\nis_valid_ascii() was introduced in v15, I'm wondering if it'd be better to\nback-patch it there so that is_valid_ascii() lives in the same file for all\nversions where it exists. Thoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 26 Jan 2024 12:14:48 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> I see that I was planning on back-patching this to v16, but since\n> is_valid_ascii() was introduced in v15, I'm wondering if it'd be better to\n> back-patch it there so that is_valid_ascii() lives in the same file for all\n> versions where it exists. Thoughts?\n\nYeah, if we're going to back-patch at all, that probably makes sense.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jan 2024 13:24:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 01:24:19PM -0500, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> I see that I was planning on back-patching this to v16, but since\n>> is_valid_ascii() was introduced in v15, I'm wondering if it'd be better to\n>> back-patch it there so that is_valid_ascii() lives in the same file for all\n>> versions where it exists. Thoughts?\n> \n> Yeah, if we're going to back-patch at all, that probably makes sense.\n\nCommitted/back-patched.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 29 Jan 2024 12:12:49 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hide exposed impl detail of wchar.c"
}
] |
[
{
"msg_contents": "In the \"Allow tests to pass in OpenSSL FIPS mode\" thread [0] it was discovered\nthat 3DES is joining the ranks of NIST disallowed algorithms. The attached\npatch adds a small note to the pgcrypto documentation about deprecated uses of\nalgorithms. I've kept it to \"official\" notices such as RFC's and NIST SP's.\nThere might be more that deserve a notice, but this seemed like a good start.\n\nAny thoughts on whether this would be helpful?\n\n--\nDaniel Gustafsson\n\n[0] https://postgr.es/m/[email protected]",
"msg_date": "Thu, 16 Nov 2023 21:49:54 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Adding deprecation notices to pgcrypto documentation"
},
{
"msg_contents": "> On 16 Nov 2023, at 21:49, Daniel Gustafsson <[email protected]> wrote:\n> \n> In the \"Allow tests to pass in OpenSSL FIPS mode\" thread [0] it was discovered\n> that 3DES is joining the ranks of NIST disallowed algorithms. The attached\n> patch adds a small note to the pgcrypto documentation about deprecated uses of\n> algorithms. I've kept it to \"official\" notices such as RFC's and NIST SP's.\n> There might be more that deserve a notice, but this seemed like a good start.\n> \n> Any thoughts on whether this would be helpful?\n\nCleaning out my inbox I came across this which I still think is worth doing,\nany objections to going ahead with it?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 4 Mar 2024 22:03:13 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding deprecation notices to pgcrypto documentation"
},
{
"msg_contents": "On Mon, Mar 04, 2024 at 10:03:13PM +0100, Daniel Gustafsson wrote:\n> Cleaning out my inbox I came across this which I still think is worth doing,\n> any objections to going ahead with it?\n\nI think the general idea is reasonable, but I have two small comments:\n\n* Should this be a \"Warning\" and/or moved to the top of the page? This\n seems like a relatively important notice that folks should see when\n beginning to use pgcrypto.\n\n* Should we actually document the exact list of algorithms along with\n detailed reasons? This list seems prone to becoming outdated.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 16:49:26 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding deprecation notices to pgcrypto documentation"
},
{
"msg_contents": "> On 4 Mar 2024, at 23:49, Nathan Bossart <[email protected]> wrote:\n> \n> On Mon, Mar 04, 2024 at 10:03:13PM +0100, Daniel Gustafsson wrote:\n>> Cleaning out my inbox I came across this which I still think is worth doing,\n>> any objections to going ahead with it?\n> \n> I think the general idea is reasonable, but I have two small comments:\n> \n> * Should this be a \"Warning\" and/or moved to the top of the page? This\n> seems like a relatively important notice that folks should see when\n> beginning to use pgcrypto.\n\nGood question. If we do we'd probably need to move other equally important\nbits of information from \"Security Limitations\" as well so perhaps it's best to\nkeep it as is for now, or putting it under Notes.\n\n> * Should we actually document the exact list of algorithms along with\n> detailed reasons? This list seems prone to becoming outdated.\n\nIf we don't detail the list then I think that it's not worth doing, doing the\nresearch isn't entirely trivial as one might not even know where to look or\nwhat to look for.\n\nI don't think this list will move faster than we can keep up with it,\nespecially since it's more or less listing everything that pgcrypto supports at\nthis point.\n\nLooking at this some more I propose that we also remove the table of hash\nbenchmarks, as it's widely misleading. Modern hardware can generate far more\nthan what we list here, and it gives the impression that these algorithms can\nonly be broken with brute force which is untrue. The table was first published\nin 2008 and hasn't been updated since.\n\nAttached is an updated patchset.\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 5 Mar 2024 11:50:36 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding deprecation notices to pgcrypto documentation"
},
{
"msg_contents": "On Tue, Mar 05, 2024 at 11:50:36AM +0100, Daniel Gustafsson wrote:\n>> On 4 Mar 2024, at 23:49, Nathan Bossart <[email protected]> wrote:\n>> * Should this be a \"Warning\" and/or moved to the top of the page? This\n>> seems like a relatively important notice that folks should see when\n>> beginning to use pgcrypto.\n> \n> Good question. If we do we'd probably need to move other equally important\n> bits of information from \"Security Limitations\" as well so perhaps it's best to\n> keep it as is for now, or putting it under Notes.\n\nFair point.\n\n>> * Should we actually document the exact list of algorithms along with\n>> detailed reasons? This list seems prone to becoming outdated.\n> \n> If we don't detail the list then I think that it's not worth doing, doing the\n> research isn't entirely trivial as one might not even know where to look or\n> what to look for.\n> \n> I don't think this list will move faster than we can keep up with it,\n> especially since it's more or less listing everything that pgcrypto supports at\n> this point.\n\nAlso fair. Would updates to this list be back-patched?\n\n> Looking at this some more I propose that we also remove the table of hash\n> benchmarks, as it's widely misleading. Modern hardware can generate far more\n> than what we list here, and it gives the impression that these algorithms can\n> only be broken with brute force which is untrue. The table was first published\n> in 2008 and hasn't been updated since.\n\nIt looks like it was updated in 2013 [0] (commit d6464fd). If there are\nstill objections to removing it, I think it should at least be given its\ndecennial update.\n\n[0] https://postgr.es/m/CAPVvHdPj5rmf294FbWi2TuEy%3DhSxZMNjTURESaM5zY8P_wCJMg%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 5 Mar 2024 10:32:43 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding deprecation notices to pgcrypto documentation"
},
{
"msg_contents": "On 05.03.24 11:50, Daniel Gustafsson wrote:\n>> * Should we actually document the exact list of algorithms along with\n>> detailed reasons? This list seems prone to becoming outdated.\n> \n> If we don't detail the list then I think that it's not worth doing, doing the\n> research isn't entirely trivial as one might not even know where to look or\n> what to look for.\n> \n> I don't think this list will move faster than we can keep up with it,\n> especially since it's more or less listing everything that pgcrypto supports at\n> this point.\n\nThe more detail we provide, the more detailed questions can be asked \nabout it. Like:\n\nThe introduction says certain algorithms are vulnerable to attacks. Is \n3DES vulnerable to attacks? Or just deprecated?\n\nWhat about something like CAST5? This is in the OpenSSL legacy \nprovider, but I don't think it's know to be vulnerable. Is its status \ndifferent from 3DES?\n\nIt says MD5 should not be used for digital signatures. But is password \nhashing a digital signature? How are these related? Similarly about \nSHA-1, which has a different level of detail.\n\nBlowfish is advised against, but by whom? By us?\n\n\n\n\n",
"msg_date": "Wed, 6 Mar 2024 10:57:15 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding deprecation notices to pgcrypto documentation"
},
{
"msg_contents": "> On 6 Mar 2024, at 10:57, Peter Eisentraut <[email protected]> wrote:\n> \n> On 05.03.24 11:50, Daniel Gustafsson wrote:\n>>> * Should we actually document the exact list of algorithms along with\n>>> detailed reasons? This list seems prone to becoming outdated.\n>> If we don't detail the list then I think that it's not worth doing, doing the\n>> research isn't entirely trivial as one might not even know where to look or\n>> what to look for.\n>> I don't think this list will move faster than we can keep up with it,\n>> especially since it's more or less listing everything that pgcrypto supports at\n>> this point.\n> \n> The more detail we provide, the more detailed questions can be asked about it.\n\nTo make it more palatable then, let's remove everything apart from the NIST\nrecommendations?\n\n> The introduction says certain algorithms are vulnerable to attacks. Is 3DES vulnerable to attacks? Or just deprecated?\n\nBoth, 3DES in CBC mode is vulnerable to birthday attacks (CVE-2016-2183) and is\ndisallowed for encryption (NIST-SP800-131A) after 2023.\n\n> What about something like CAST5? This is in the OpenSSL legacy provider, but I don't think it's know to be vulnerable. Is its status different from 3DES?\n\nCAST is vulnerable but CAST5, which is another name for CAST-128, is not known\nto be vulnerable as long as a 128 bit key is used (which is what pgcrypto use).\nIt is AFAIK considered a legacy cipher due to the small block size.\n\n> It says MD5 should not be used for digital signatures. But is password hashing a digital signature? How are these related? Similarly about SHA-1, which has a different level of detail.\n\nA digital signature is a mathematical construction to verify the authenticity\nof a message, so I guess password hashing falls under that. The fact that MD5\nis vulnerable to collision attacks makes MD5 a particularly poor choice for\nthat particular application IMO.\n\n> Blowfish is advised against, but by whom? 
By us?\n\nBlowfish in CBC mode is vulnerable to birthday attacks (CVE-2016-6329). The\nauthor of Blowfish among others, he had this to say in 2007 [0]:\n\n\t\"There weren't enough alternatives to DES out there. I wrote Blowfish\n\tas such an alternative, but I didn't even know if it would survive a\n\tyear of cryptanalysis. Writing encryption algorithms is hard, and it's\n\talways amazing if one you write actually turns out to be secure. At\n\tthis point, though, I'm amazed it's still being used. If people ask, I\n\trecommend Twofish instead.\"\n\n--\nDaniel Gustafsson\n\n[0] https://web.archive.org/web/20161202063854/https://www.computerworld.com.au/article/46254/bruce_almighty_schneier_preaches_security_linux_faithful/?pp=3\n\n",
"msg_date": "Wed, 6 Mar 2024 11:50:51 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding deprecation notices to pgcrypto documentation"
}
] |
[
{
"msg_contents": "Hi,\n\nI've often had to analyze what caused corruption in PG instances, where the\nsymptoms match not having had backup_label in place when bringing on the\nnode. However that's surprisingly hard - the only log messages that indicate\nuse of backup_label are at DEBUG1.\n\nGiven how crucial use of backup_label is and how frequently people do get it\nwrong, I think we should add a LOG message - it's not like use of backup_label\nis a frequent thing in the life of a postgres instance and is going to swamp\nthe log. And I think we should backpatch that addition.\n\nMedium term I think we should go further, and leave evidence in pg_control\nabout the last use of ControlFile->backupStartPoint, instead of resetting it.\n\nI realize that there's a discussion about removing backup_label - but I think\nthat's fairly orthogonal. Should we go with the pg_control approach, we should\nstill emit a useful message when starting in a state that's \"equivalent\" to\nhaving used the backup_label.\n\nThoughts?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Nov 2023 20:18:11 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use of backup_label not noted in log"
},
{
"msg_contents": "On Thu, 2023-11-16 at 20:18 -0800, Andres Freund wrote:\n> I've often had to analyze what caused corruption in PG instances, where the\n> symptoms match not having had backup_label in place when bringing on the\n> node. However that's surprisingly hard - the only log messages that indicate\n> use of backup_label are at DEBUG1.\n> \n> Given how crucial use of backup_label is and how frequently people do get it\n> wrong, I think we should add a LOG message - it's not like use of backup_label\n> is a frequent thing in the life of a postgres instance and is going to swamp\n> the log. And I think we should backpatch that addition.\n\n+1\n\nI am not sure about the backpatch: it is not a bug, and we should not wantonly\nintroduce new log messages in a minor release. Some monitoring system may\nget confused.\n\nWhat about adding it to the \"redo starts at\" message, something like\n\n redo starts at 12/12345678, taken from control file\n\nor\n\n redo starts at 12/12345678, taken from backup label\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 17 Nov 2023 06:41:46 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 11/17/23 00:18, Andres Freund wrote:\n> \n> I've often had to analyze what caused corruption in PG instances, where the\n> symptoms match not having had backup_label in place when bringing on the\n> node. However that's surprisingly hard - the only log messages that indicate\n> use of backup_label are at DEBUG1.\n> \n> Given how crucial use of backup_label is and how frequently people do get it\n> wrong, I think we should add a LOG message - it's not like use of backup_label\n> is a frequent thing in the life of a postgres instance and is going to swamp\n> the log. And I think we should backpatch that addition.\n\n+1 for the message and I think a backpatch is fine as long as it is a \nnew message. If monitoring systems can't handle an unrecognized message \nthen that feels like a problem on their part.\n\n> Medium term I think we should go further, and leave evidence in pg_control\n> about the last use of ControlFile->backupStartPoint, instead of resetting it.\n\nMichael also thinks this is a good idea.\n\n> I realize that there's a discussion about removing backup_label - but I think\n> that's fairly orthogonal. Should we go with the pg_control approach, we should\n> still emit a useful message when starting in a state that's \"equivalent\" to\n> having used the backup_label.\n\nAgreed, this new message could easily be adapted to the recovery in \npg_control patch.\n\nRegards,\n-David\n\n\n",
"msg_date": "Sat, 18 Nov 2023 09:26:40 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 11/17/23 01:41, Laurenz Albe wrote:\n> On Thu, 2023-11-16 at 20:18 -0800, Andres Freund wrote:\n>> I've often had to analyze what caused corruption in PG instances, where the\n>> symptoms match not having had backup_label in place when bringing on the\n>> node. However that's surprisingly hard - the only log messages that indicate\n>> use of backup_label are at DEBUG1.\n>>\n>> Given how crucial use of backup_label is and how frequently people do get it\n>> wrong, I think we should add a LOG message - it's not like use of backup_label\n>> is a frequent thing in the life of a postgres instance and is going to swamp\n>> the log. And I think we should backpatch that addition.\n> \n> +1\n> \n> I am not sure about the backpatch: it is not a bug, and we should not wantonly\n> introduce new log messages in a minor release. Some monitoring system may\n> get confused.\n> \n> What about adding it to the \"redo starts at\" message, something like\n> \n> redo starts at 12/12345678, taken from control file\n> \n> or\n> \n> redo starts at 12/12345678, taken from backup label\n\nI think a backpatch is OK as long as it is a separate message, but I \nlike your idea of adding to the \"redo starts\" message going forward.\n\nI know this isn't really a bug, but not being able to tell where \nrecovery information came from seems like a major omission in the logging.\n\nRegards,\n-David\n\n\n",
"msg_date": "Sat, 18 Nov 2023 09:30:01 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 06:41:46 +0100, Laurenz Albe wrote:\n> On Thu, 2023-11-16 at 20:18 -0800, Andres Freund wrote:\n> > I've often had to analyze what caused corruption in PG instances, where the\n> > symptoms match not having had backup_label in place when bringing on the\n> > node. However that's surprisingly hard - the only log messages that indicate\n> > use of backup_label are at DEBUG1.\n> >\n> > Given how crucial use of backup_label is and how frequently people do get it\n> > wrong, I think we should add a LOG message - it's not like use of backup_label\n> > is a frequent thing in the life of a postgres instance and is going to swamp\n> > the log.� And I think we should backpatch that addition.\n>\n> +1\n>\n> I am not sure about the backpatch: it is not a bug, and we should not wantonly\n> introduce new log messages in a minor release. Some monitoring system may\n> get confused.\n\nI think log monitoring need (and do) handle unknown log messages\ngracefully. You're constantly encountering them. If were to change an\nexisting log message in the back branches it'd be a different story.\n\nThe reason for backpatching is that this is by far the most common reason for\ncorrupted systems in the wild that I have seen. And there's no way to\ndetermine from the logs whether something has gone right or wrong - not really\na bug, but a pretty substantial weakness. 
And we're going to have to deal with\n< 17 releases for 5 years, so making this at least somewhat diagnosable seems\nlike a good idea.\n\n\n> What about adding it to the \"redo starts at\" message, something like\n>\n> redo starts at 12/12345678, taken from control file\n>\n> or\n>\n> redo starts at 12/12345678, taken from backup label\n\nI think it'd make sense to log use of backup_label earlier than that - the\nlocations from backup_label might end up not being available in the archive,\nthe primary or locally, and we'll error out with \"could not locate a valid\ncheckpoint record\".\n\nI'd probably just do it within the if (read_backup_label()) block in\nInitWalRecovery(), *before* the ReadCheckpointRecord().\n\nI do like the idea of expanding the \"redo starts at\" message\nthough. E.g. including minRecoveryLSN, ControlFile->backupStartPoint,\nControlFile->backupEndPoint would provide information about when the node\nmight become consistent.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Nov 2023 10:01:42 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-18 09:30:01 -0400, David Steele wrote:\n> I know this isn't really a bug, but not being able to tell where recovery\n> information came from seems like a major omission in the logging.\n\nYea. I was preparing to forecefully suggest that some monitoring tooling\nshould verify that new standbys and PITRs needs to check that backup_label was\nactually used, just to remember that there's nothing they could realistically\nuse (using DEBUG1 in production imo isn't realistic).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Nov 2023 10:05:46 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-18 10:01:42 -0800, Andres Freund wrote:\n> > What about adding it to the \"redo starts at\" message, something like\n> >\n> > redo starts at 12/12345678, taken from control file\n> >\n> > or\n> >\n> > redo starts at 12/12345678, taken from backup label\n> \n> I think it'd make sense to log use of backup_label earlier than that - the\n> locations from backup_label might end up not being available in the archive,\n> the primary or locally, and we'll error out with \"could not locate a valid\n> checkpoint record\".\n> \n> I'd probably just do it within the if (read_backup_label()) block in\n> InitWalRecovery(), *before* the ReadCheckpointRecord().\n\nNot enamored with the phrasing of the log messages, but here's a prototype:\n\nWhen starting up with backup_label present:\nLOG: starting from base backup with redo LSN A/34100028, checkpoint LSN A/34100080 on timeline ID 1\n\nWhen restarting before reaching the end of the backup, but after backup_label\nhas been removed:\nLOG: continuing to start from base backup with redo LSN A/34100028\nLOG: entering standby mode\nLOG: redo starts at A/3954B958\n\nNote that the LSN in the \"continuing\" case is the one the backup started at,\nnot where recovery will start.\n\n\nI've wondered whether it's worth also adding an explicit message just after\nReachedEndOfBackup(), but it seems far less urgent due to the existing\n\"consistent recovery state reached at %X/%X\" message.\n\n\nWe are quite inconsistent about how we spell LSNs. Sometimes with LSN\npreceding, sometimes not. Sometimes with (LSN). Etc.\n\n\n> I do like the idea of expanding the \"redo starts at\" message\n> though. E.g. including minRecoveryLSN, ControlFile->backupStartPoint,\n> ControlFile->backupEndPoint would provide information about when the node\n> might become consistent.\n\nPlaying around with this a bit, I'm wondering if we instead should remove that\nmessage, and emit something more informative earlier on. 
If there's a problem,\nyou kinda want the information before we finally get to the loop in\nPerformWalRecovery(). If e.g. there's no WAL you'll only get\nLOG: invalid checkpoint record\nPANIC: could not locate a valid checkpoint record\n\nwhich is delightfully lacking in details.\n\n\nThere also are some other oddities:\n\nIf the primary is down when starting up, and we'd need WAL from the primary\nfor the first record, the \"redo starts at\" message is delayed until that\nhappens, because we emit the message not before we read the first record, but\nafter. That's just plain odd.\n\nAnd sometimes we'll start referencing the LSN at which we are starting\nrecovery before the \"redo starts at\" message. If e.g. we shut down\nat a restart point, we'll emit\n\n LOG: consistent recovery state reached at ...\nbefore\n LOG: redo starts at ...\n\n\nBut that's all clearly just material for HEAD.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 18 Nov 2023 13:49:15 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On Sat, Nov 18, 2023 at 01:49:15PM -0800, Andres Freund wrote:\n> Note that the LSN in the \"continuing\" case is the one the backup started at,\n> not where recovery will start.\n> \n> I've wondered whether it's worth also adding an explicit message just after\n> ReachedEndOfBackup(), but it seems far less urgent due to the existing\n> \"consistent recovery state reached at %X/%X\" message.\n\nUpgrading the surrounding DEBUG1 to a LOG is another option, but I\nagree that I've seen less that as being an actual problem in the field\ncompared to the famous I-removed-a-backup-label-and-I-m-still-up,\nuntil this user sees signs of corruption after recovery was finished,\nsometimes days after putting back an instance online.\n\n> Playing around with this a bit, I'm wondering if we instead should remove that\n> message, and emit something more informative earlier on. If there's a problem,\n> you kinda want the information before we finally get to the loop in\n> PerformWalLRecovery(). If e.g. there's no WAL you'll only get\n> LOG: invalid checkpoint record\n> PANIC: could not locate a valid checkpoint record\n\nI was looking at this code a few weeks ago and have on my stack of\nlist to do an item about sending a patch to make this exact message\nPANIC more talkative as there are a lot of instances with\nlog_min_messages > log.\n\n> which is delightfully lacking in details.\n\nWith a user panicking as much as the server itself, that's even more\ntasty.\n\n+ if (ControlFile->backupStartPoint != InvalidXLogRecPtr)\n+ ereport(LOG,\n+ (errmsg(\"continuing to start from base backup with redo LSN %X/%X\",\n+ LSN_FORMAT_ARGS(ControlFile->backupStartPoint))));\n\n\"Continuing to start\" sounds a bit weird to me, though, considering\nthat there are a few LOGs that say \"starting\" when there is a signal\nfile, but I don't have a better idea on top of my mind. So that\nsounds OK here.\n--\nMichael",
"msg_date": "Mon, 20 Nov 2023 17:30:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "I can accept that adding log messages to back branches is ok.\nPerhaps I am too nervous about things like that, because as an extension\ndeveloper I have been bitten too often by ABI breaks in minor releases\nin the past.\n\nOn Mon, 2023-11-20 at 17:30 +0900, Michael Paquier wrote:\n> + if (ControlFile->backupStartPoint != InvalidXLogRecPtr)\n> + ereport(LOG,\n> + (errmsg(\"continuing to start from base backup with redo LSN %X/%X\",\n> + LSN_FORMAT_ARGS(ControlFile->backupStartPoint))));\n> \n> \"Continuing to start\" sounds a bit weird to me, though, considering\n> that there are a few LOGs that say \"starting\" when there is a signal\n> file, but I don't have a better idea on top of my mind. So that\n> sounds OK here.\n\nWe can only reach that message in recovery or standby mode, right?\nSo why not write \"continuing to recover from base backup\"?\n\n\nIf we add a message for starting with \"backup_label\", shouldn't\nwe also add a corresponding message for starting from a checkpoint\nfound in the control file? If you see that in a problem report,\nyou immediately know what is going on.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 20 Nov 2023 11:35:15 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "[Resending since I accidentally replied off-list]\n\nOn 11/18/23 17:49, Andres Freund wrote:\n> On 2023-11-18 10:01:42 -0800, Andres Freund wrote:\n>>> What about adding it to the \"redo starts at\" message, something like\n>>>\n>>> redo starts at 12/12345678, taken from control file\n>>>\n>>> or\n>>>\n>>> redo starts at 12/12345678, taken from backup label\n>>\n>> I think it'd make sense to log use of backup_label earlier than that - the\n>> locations from backup_label might end up not being available in the archive,\n>> the primary or locally, and we'll error out with \"could not locate a valid\n>> checkpoint record\".\n>>\n>> I'd probably just do it within the if (read_backup_label()) block in\n>> InitWalRecovery(), *before* the ReadCheckpointRecord().\n> \n> Not enamored with the phrasing of the log messages, but here's a prototype:\n> \n> When starting up with backup_label present:\n> LOG: starting from base backup with redo LSN A/34100028, checkpoint LSN A/34100080 on timeline ID 1\n\nI'd prefer something like:\n\nLOG: starting backup recovery with redo...\n\n> When restarting before reaching the end of the backup, but after backup_label\n> has been removed:\n> LOG: continuing to start from base backup with redo LSN A/34100028\n> LOG: entering standby mode\n> LOG: redo starts at A/3954B958\n\nAnd here:\n\nLOG: restarting backup recovery with redo...\n\n> Note that the LSN in the \"continuing\" case is the one the backup started at,\n> not where recovery will start.\n> \n> I've wondered whether it's worth also adding an explicit message just after\n> ReachedEndOfBackup(), but it seems far less urgent due to the existing\n> \"consistent recovery state reached at %X/%X\" message.\n\nI think the current message is sufficient, but what do you have in mind?\n\n> We are quite inconsistent about how we spell LSNs. Sometimes with LSN\n> preceding, sometimes not. Sometimes with (LSN). 
Etc.\n\nWell, this could be improved in HEAD for sure.\n\n>> I do like the idea of expanding the \"redo starts at\" message\n>> though. E.g. including minRecoveryLSN, ControlFile->backupStartPoint,\n>> ControlFile->backupEndPoint would provide information about when the node\n>> might become consistent.\n\n+1\n\n> Playing around with this a bit, I'm wondering if we instead should remove that\n> message, and emit something more informative earlier on. If there's a problem,\n> you kinda want the information before we finally get to the loop in\n> PerformWalRecovery(). If e.g. there's no WAL you'll only get\n> LOG: invalid checkpoint record\n> PANIC: could not locate a valid checkpoint record\n> \n> which is delightfully lacking in details.\n\nI've been thinking about improving this myself. It would probably also \nhelp a lot to hint that restore_command may be missing or not returning \nresults (but also not erroring). But there are a bunch of ways to get to \nthis message so we'd need to be careful.\n\n> There also are some other oddities:\n> \n> If the primary is down when starting up, and we'd need WAL from the primary\n> for the first record, the \"redo starts at\" message is delayed until that\n> happens, because we emit the message not before we read the first record, but\n> after. That's just plain odd.\n\nAgreed. Moving it up would be better.\n\n> And sometimes we'll start referencing the LSN at which we are starting\n> recovery before the \"redo starts at\" message. If e.g. we shut down\n> at a restart point, we'll emit\n> \n> LOG: consistent recovery state reached at ...\n> before\n> LOG: redo starts at ...\n\nHuh, I haven't seen that one. Definitely confusing.\n\n> But that's all clearly just material for HEAD.\n\nAbsolutely. I've been thinking about some of this as well, but want to \nsee if we can remove the backup label first so we don't have to rework a \nbunch of stuff.\n\nOf course, that shouldn't stop you from proceeding. I'm sure anything \nyou are thinking of here could be adapted.\n\nRegards,\n-David\n\n\n",
"msg_date": "Mon, 20 Nov 2023 09:56:47 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 11/20/23 06:35, Laurenz Albe wrote:\n> \n> If we add a message for starting with \"backup_label\", shouldn't\n> we also add a corresponding message for starting from a checkpoint\n> found in the control file? If you see that in a problem report,\n> you immediately know what is going on.\n\n+1. It is easier to detect the presence of a message than the absence of \none.\n\nRegards,\n-David\n\n\n",
"msg_date": "Mon, 20 Nov 2023 09:59:30 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 5:35 AM Laurenz Albe <[email protected]> wrote:\n> I can accept that adding log messages to back branches is ok.\n> Perhaps I am too nervous about things like that, because as an extension\n> developer I have been bitten too often by ABI breaks in minor releases\n> in the past.\n\nI think that adding a log message to the back branches would probably\nmake my life better not worse, because when people do strange things\nand then send me the log messages to figure out what the heck\nhappened, it would be there, and I'd have a clue. However, the world\ndoesn't revolve around me. I can imagine users getting spooked if a\nnew message that they've never seen before, and I think that risk\nshould be considered. There are good reasons for keeping the\nback-branches stable, and as you said before, this isn't a bug fix.\n\nI do also think it is worth considering how this proposal interacts\nwith the proposal to remove backup_label. If that proposal goes\nthrough, then this proposal is obsolete, I believe. But if this is a\ngood idea, does that mean that's not a good idea? Or would we try to\nmake the pg_control which that patch would drop in place have some\ninternal difference which we could use to drive a similar log message?\nMaybe we should, because knowing whether or not the user followed the\nbackup procedure correctly would indeed be a big help and it would be\nregrettable to gain that capability only to lose it again...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 Nov 2023 11:24:25 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 11/20/23 12:24, Robert Haas wrote:\n> On Mon, Nov 20, 2023 at 5:35 AM Laurenz Albe <[email protected]> wrote:\n>> I can accept that adding log messages to back branches is ok.\n>> Perhaps I am too nervous about things like that, because as an extension\n>> developer I have been bitten too often by ABI breaks in minor releases\n>> in the past.\n> \n> I think that adding a log message to the back branches would probably\n> make my life better not worse, because when people do strange things\n> and then send me the log messages to figure out what the heck\n> happened, it would be there, and I'd have a clue. However, the world\n> doesn't revolve around me. I can imagine users getting spooked if a\n> new message that they've never seen before, and I think that risk\n> should be considered. There are good reasons for keeping the\n> back-branches stable, and as you said before, this isn't a bug fix.\n\nPersonally I think that the value of the information outweighs the \nweirdness of a new message appearing.\n\n> I do also think it is worth considering how this proposal interacts\n> with the proposal to remove backup_label. If that proposal goes\n> through, then this proposal is obsolete, I believe. \n\nNot at all. I don't even think the messages will need to be reworded, or \nnot much since they don't mention backup_label.\n\n> But if this is a\n> good idea, does that mean that's not a good idea? 
Or would we try to\n> make the pg_control which that patch would drop in place have some\n> internal difference which we could use to drive a similar log message?\n\nThe recovery in pg_control patch has all the same recovery info stored, \nso similar (or the same) log message would still be appropriate.\n\n> Maybe we should, because knowing whether or not the user followed the\n> backup procedure correctly would indeed be a big help and it would be\n> regrettable to gain that capability only to lose it again...\n\nThe info is certainly valuable and we wouldn't lose it, unless there is \nsomething I'm not getting.\n\nRegards,\n-David\n\n\n",
"msg_date": "Mon, 20 Nov 2023 13:30:34 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-20 11:24:25 -0500, Robert Haas wrote:\n> I do also think it is worth considering how this proposal interacts\n> with the proposal to remove backup_label. If that proposal goes\n> through, then this proposal is obsolete, I believe.\n\nI think it's the opposite, if anything. Today you can at least tell there was\nuse of a backup_label by looking for backup_label.old and you can verify\nfairly easily in a restore script that backup_label is present. If we \"just\"\nuse pg_control, neither of those is as easy. I.e. log messages would be more\nimportant, not less. Depending on how things work out, we might need to\nreformulate and/or move them a bit, but that doesn't seem like a big deal.\n\n\n> But if this is a good idea, does that mean that's not a good idea? Or would\n> we try to make the pg_control which that patch would drop in place have some\n> internal difference which we could use to drive a similar log message?\n\nI think we absolutely have to. If there's no way to tell whether an \"external\"\npg_backup_start/stop() procedure actually used the proper pg_control, it'd\nmake the situation substantially worse compared to today's, already bad,\nsituation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Nov 2023 10:24:42 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-20 17:30:31 +0900, Michael Paquier wrote:\n> On Sat, Nov 18, 2023 at 01:49:15PM -0800, Andres Freund wrote:\n> > Note that the LSN in the \"continuing\" case is the one the backup started at,\n> > not where recovery will start.\n> > \n> > I've wondered whether it's worth also adding an explicit message just after\n> > ReachedEndOfBackup(), but it seems far less urgent due to the existing\n> > \"consistent recovery state reached at %X/%X\" message.\n> \n> Upgrading the surrounding DEBUG1 to a LOG is another option, but I\n> agree that I've seen less that as being an actual problem in the field\n> compared to the famous I-removed-a-backup-label-and-I-m-still-up,\n> until this user sees signs of corruption after recovery was finished,\n> sometimes days after putting back an instance online.\n\n\"end of backup reached\" could scare users, it doesn't obviously indicate\nsomething \"good\". \"completed backup recovery, started at %X/%X\" or such would\nbe better imo.\n\n\n> + if (ControlFile->backupStartPoint != InvalidXLogRecPtr)\n> + ereport(LOG,\n> + (errmsg(\"continuing to start from base backup with redo LSN %X/%X\",\n> + LSN_FORMAT_ARGS(ControlFile->backupStartPoint))));\n> \n> \"Continuing to start\" sounds a bit weird to me, though, considering\n> that there are a few LOGs that say \"starting\" when there is a signal\n> file, but I don't have a better idea on top of my mind. So that\n> sounds OK here.\n\nI didn't like it much either - but I like David's proposal in his sibling\nreply:\n\nLOG: starting backup recovery with redo LSN A/34100028, checkpoint LSN A/34100080 on timeline ID 1\nLOG: restarting backup recovery with redo LSN A/34100028\nand adding the message from above:\nLOG: completing backup recovery with redo LSN A/34100028\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Nov 2023 10:36:33 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-20 11:35:15 +0100, Laurenz Albe wrote:\n> On Mon, 2023-11-20 at 17:30 +0900, Michael Paquier wrote:\n> > + if (ControlFile->backupStartPoint != InvalidXLogRecPtr)\n> > + ereport(LOG,\n> > + (errmsg(\"continuing to start from base backup with redo LSN %X/%X\",\n> > + LSN_FORMAT_ARGS(ControlFile->backupStartPoint))));\n> >\n> > \"Continuing to start\" sounds a bit weird to me, though, considering\n> > that there are a few LOGs that say \"starting\" when there is a signal\n> > file, but I don't have a better idea on top of my mind. So that\n> > sounds OK here.\n>\n> We can only reach that message in recovery or standby mode, right?\n> So why not write \"continuing to recover from base backup\"?\n\nIt can be reached without either too, albeit much less commonly.\n\n\n> If we add a message for starting with \"backup_label\", shouldn't\n> we also add a corresponding message for starting from a checkpoint\n> found in the control file? If you see that in a problem report,\n> you immediately know what is going on.\n\nMaybe - the reason I hesitate on that is that emitting an additional log\nmessage when starting from a base backup just adds something \"once once the\nlifetime of a node\". Whereas emitting something every start obviously doesn't\nimpose any limit.\n\nYou also can extrapolate from the messages absence that we started up without\nbackup_label, it's not like there would be a lot of messages inbetween\n \"database system was interrupted; last ...\"\nand\n \"starting backup recovery ...\"\n(commonly there would be no messages)\n\nWe can do more on HEAD of course, but we should be wary of just spamming the\nlog unnecessarily as well.\n\n\nI guess we could add this message at the same time, including in the back\nbranches. 
Initially I thought that might be unwise, because replacing\n\t\telog(DEBUG1, \"end of backup reached\");\nwith a different message could theoretically cause issues, even if unlikely,\ngiven that it's a DEBUG1 message.\n\nBut I think we actually want to emit the message a bit later, just *after* we\nupdated the control file, as that's the actually relevant piece after which we\nwon't go back to the \"backup recovery\" state. I am somewhat agnostic about\nwhether we should add that in the back branches or not.\n\n\nHere's the state with my updated patch, when starting up from a base backup:\n\nLOG: starting PostgreSQL 17devel on x86_64-linux, compiled by gcc-14.0.0, 64-bit\nLOG: listening on IPv6 address \"::1\", port 5441\nLOG: listening on IPv4 address \"127.0.0.1\", port 5441\nLOG: listening on Unix socket \"/tmp/.s.PGSQL.5441\"\nLOG: database system was interrupted; last known up at 2023-11-20 10:55:49 PST\nLOG: starting recovery from base backup with redo LSN E/AFF07F20, checkpoint LSN E/B01B17F0, on timeline ID 1\nLOG: entering standby mode\nLOG: redo starts at E/AFF07F20\nLOG: completed recovery from base backup with redo LSN E/AFF07F20\nLOG: consistent recovery state reached at E/B420FC80\n\n\nBesides the phrasing and the additional log message (I have no opinion about\nwhether it should be backpatched or not), I used %u for TimelineID as\nappropriate, and added a comma before \"on timeline\".\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 20 Nov 2023 11:03:28 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 11/20/23 14:27, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-19 14:28:12 -0400, David Steele wrote:\n>> On 11/18/23 17:49, Andres Freund wrote:\n>>> On 2023-11-18 10:01:42 -0800, Andres Freund wrote:\n>>> Not enamored with the phrasing of the log messages, but here's a prototype:\n>>>\n>>> When starting up with backup_label present:\n>>> LOG: starting from base backup with redo LSN A/34100028, checkpoint LSN A/34100080 on timeline ID 1\n>>\n>> I'd prefer something like:\n>>\n>> LOG: starting backup recovery with redo...\n> \n>>> When restarting before reaching the end of the backup, but after backup_label\n>>> has been removed:\n>>> LOG: continuing to start from base backup with redo LSN A/34100028\n>>> LOG: entering standby mode\n>>> LOG: redo starts at A/3954B958\n>>\n>> And here:\n>>\n>> LOG: restarting backup recovery with redo...\n> \n> I like it.\n\nCool.\n\n>>> I've wondered whether it's worth also adding an explicit message just after\n>>> ReachedEndOfBackup(), but it seems far less urgent due to the existing\n>>> \"consistent recovery state reached at %X/%X\" message.\n>>\n>> I think the current message is sufficient, but what do you have in mind?\n> \n> Well, the consistency message is emitted after every restart. Whereas a single\n> instance only should go through backup recovery once. So it seems worthwhile\n> to differentiate the two in log messages.\n\nAh, right. That works for me, then.\n\nRegards,\n-David\n\n\n",
"msg_date": "Mon, 20 Nov 2023 15:08:15 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 11/20/23 15:03, Andres Freund wrote:\n> On 2023-11-20 11:35:15 +0100, Laurenz Albe wrote:\n> \n>> If we add a message for starting with \"backup_label\", shouldn't\n>> we also add a corresponding message for starting from a checkpoint\n>> found in the control file? If you see that in a problem report,\n>> you immediately know what is going on.\n> \n> Maybe - the reason I hesitate on that is that emitting an additional log\n> message when starting from a base backup just adds something \"once in the\n> lifetime of a node\". Whereas emitting something every start obviously doesn't\n> impose any limit.\n\nHmm, yeah, that would be a bit much.\n\n> Here's the state with my updated patch, when starting up from a base backup:\n> \n> LOG: starting PostgreSQL 17devel on x86_64-linux, compiled by gcc-14.0.0, 64-bit\n> LOG: listening on IPv6 address \"::1\", port 5441\n> LOG: listening on IPv4 address \"127.0.0.1\", port 5441\n> LOG: listening on Unix socket \"/tmp/.s.PGSQL.5441\"\n> LOG: database system was interrupted; last known up at 2023-11-20 10:55:49 PST\n> LOG: starting recovery from base backup with redo LSN E/AFF07F20, checkpoint LSN E/B01B17F0, on timeline ID 1\n> LOG: entering standby mode\n> LOG: redo starts at E/AFF07F20\n> LOG: completed recovery from base backup with redo LSN E/AFF07F20\n> LOG: consistent recovery state reached at E/B420FC80\n> \n> Besides the phrasing and the additional log message (I have no opinion about\n> whether it should be backpatched or not), I used %u for TimelineID as\n> appropriate, and added a comma before \"on timeline\".\n\nI still wonder if we need \"base backup\" in the messages? That sort of \nimplies (at least to me) you used pg_basebackup but that may not be the \ncase.\n\nFWIW, I also prefer \"backup recovery\" over \"recovery from backup\". \n\"recovery from backup\" reads fine here, but it gets more awkward when \nyou want to say something like \"recovery from backup settings\". 
In that \ncase, I think \"backup recovery settings\" reads better. Not important for \nthis patch, maybe, but the recovery in pg_control patch went the other \nway and I definitely think it makes sense to keep them consistent, \nwhichever way we go.\n\nOther than that, looks good for HEAD. Whether we back patch or not is \nanother question, of course.\n\nRegards,\n-David\n\n\n\n",
"msg_date": "Mon, 20 Nov 2023 15:31:20 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 03:31:20PM -0400, David Steele wrote:\n> On 11/20/23 15:03, Andres Freund wrote:\n>> Besides the phrasing and the additional log message (I have no opinion about\n>> whether it should be backpatched or not), I used %u for TimelineID as\n>> appropriate, and added a comma before \"on timeline\".\n\nThe \"starting/restarting/completed recovery\" line sounds better here,\nso I'm OK with your suggestions.\n\n> I still wonder if we need \"base backup\" in the messages? That sort of\n> implies (at least to me) you used pg_basebackup but that may not be the\n> case.\n\nOr just s/base backup/backup/?\n\n> Other than that, looks good for HEAD. Whether we back patch or not is\n> another question, of course.\n\nI'd rather see more information in the back-branches more quickly, so\ncount me in the bucket of folks in favor of a backpatch.\n--\nMichael",
"msg_date": "Tue, 21 Nov 2023 12:54:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On Mon, 2023-11-20 at 11:03 -0800, Andres Freund wrote:\n> > If we add a message for starting with \"backup_label\", shouldn't\n> > we also add a corresponding message for starting from a checkpoint\n> > found in the control file? If you see that in a problem report,\n> > you immediately know what is going on.\n> \n> Maybe - the reason I hesitate on that is that emitting an additional log\n> message when starting from a base backup just adds something \"once in the\n> lifetime of a node\". Whereas emitting something every start obviously doesn't\n> impose any limit.\n\nThe message should only be shown if PostgreSQL replays WAL, that is,\nafter a crash. That would (hopefully) make it a rare message too.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 21 Nov 2023 08:42:42 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 11/20/23 23:54, Michael Paquier wrote:\n> On Mon, Nov 20, 2023 at 03:31:20PM -0400, David Steele wrote:\n> \n>> I still wonder if we need \"base backup\" in the messages? That sort of\n>> implies (at least to me) you used pg_basebackup but that may not be the\n>> case.\n> \n> Or just s/base backup/backup/?\n\nThat's what I meant but did not explain very well.\n\nRegards,\n-David\n\n\n",
"msg_date": "Tue, 21 Nov 2023 07:25:06 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 11/20/23 15:08, David Steele wrote:\n> On 11/20/23 14:27, Andres Freund wrote:\n> \n>>>> I've wondered whether it's worth also adding an explicit message \n>>>> just after\n>>>> ReachedEndOfBackup(), but it seems far less urgent due to the existing\n>>>> \"consistent recovery state reached at %X/%X\" message.\n>>>\n>>> I think the current message is sufficient, but what do you have in mind?\n>>\n>> Well, the consistency message is emitted after every restart. Whereas \n>> a single\n>> instance only should go through backup recovery once. So it seems \n>> worthwhile\n>> to differentiate the two in log messages.\n> \n> Ah, right. That works for me, then.\n\nAny status on this patch? If we do back patch it would be nice to see \nthis in the upcoming minor releases. I'm in favor of a back patch, as I \nthink this is minimally invasive and would be very useful for debugging \nrecovery issues.\n\nI like the phrasing you demonstrated in [1] but it doesn't seem like \nthere's a new patch for that, so I have attached one.\n\nHappy to do whatever else I can to get this across the line.\n\nRegards,\n-David\n\n---\n\n[1] \nhttps://www.postgresql.org/message-id/20231120183633.c4lhoq4hld4u56dd%40awork3.anarazel.de",
"msg_date": "Fri, 19 Jan 2024 09:32:26 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On Fri, Jan 19, 2024 at 09:32:26AM -0400, David Steele wrote:\n> Any status on this patch? If we do back patch it would be nice to see this\n> in the upcoming minor releases. I'm in favor of a back patch, as I think\n> this is minimally invasive and would be very useful for debugging recovery\n> issues.\n\nI am not sure about the backpatch part, but on a second look I'm OK\nwith applying it on HEAD for now with the LOG added for the startup of\nrecovery when the backup_label file is read, for the recovery\ncompleted from a backup, and for the restart from a backup.\n\n> I like the phrasing you demonstrated in [1] but it doesn't seem like there's a\n> new patch for that, so I have attached one.\n\n+ if (ControlFile->backupStartPoint != InvalidXLogRecPtr)\n\nNit 1: I would use XLogRecPtrIsInvalid here.\n\n+ ereport(LOG,\n+ (errmsg(\"completed backup recovery with redo LSN %X/%X\",\n+ LSN_FORMAT_ARGS(oldBackupStartPoint))));\n\nNit 2: How about adding backupEndPoint in this LOG? That would give:\n\"completed backup recovery with redo LSN %X/%X and end LSN %X/%X\".\n--\nMichael",
"msg_date": "Mon, 22 Jan 2024 16:36:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On Mon, Jan 22, 2024 at 04:36:27PM +0900, Michael Paquier wrote:\n> + if (ControlFile->backupStartPoint != InvalidXLogRecPtr)\n> \n> Nit 1: I would use XLogRecPtrIsInvalid here.\n> \n> + ereport(LOG,\n> + (errmsg(\"completed backup recovery with redo LSN %X/%X\",\n> + LSN_FORMAT_ARGS(oldBackupStartPoint))));\n> \n> Nit 2: How about adding backupEndPoint in this LOG? That would give:\n> \"completed backup recovery with redo LSN %X/%X and end LSN %X/%X\".\n\nHearing nothing, I've just applied a version of the patch with these\ntwo modifications on HEAD. If this needs tweaks, just let me know.\n--\nMichael",
"msg_date": "Thu, 25 Jan 2024 17:12:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 1/25/24 04:12, Michael Paquier wrote:\n> On Mon, Jan 22, 2024 at 04:36:27PM +0900, Michael Paquier wrote:\n>> + if (ControlFile->backupStartPoint != InvalidXLogRecPtr)\n>>\n>> Nit 1: I would use XLogRecPtrIsInvalid here.\n>>\n>> + ereport(LOG,\n>> + (errmsg(\"completed backup recovery with redo LSN %X/%X\",\n>> + LSN_FORMAT_ARGS(oldBackupStartPoint))));\n>>\n>> Nit 2: How about adding backupEndPoint in this LOG? That would give:\n>> \"completed backup recovery with redo LSN %X/%X and end LSN %X/%X\".\n> \n> Hearing nothing, I've just applied a version of the patch with these\n> two modifications on HEAD. If this needs tweaks, just let me know.\n\nI had planned to update the patch this morning -- so thanks for doing \nthat. I think having the end point in the message makes perfect sense.\n\nI would still advocate for a back patch here. It is frustrating to get \nlogs from users that just say:\n\nLOG: invalid checkpoint record\nPANIC: could not locate a valid checkpoint record\n\nIt would be very helpful to know what the checkpoint record LSN was in \nthis case.\n\nRegards,\n-David\n\n\n",
"msg_date": "Thu, 25 Jan 2024 08:56:52 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jan 25, 2024 at 08:56:52AM -0400, David Steele wrote:\n> I would still advocate for a back patch here. It is frustrating to get logs\n> from users that just say:\n> \n> LOG: invalid checkpoint record\n> PANIC: could not locate a valid checkpoint record\n> \n> It would be very helpful to know what the checkpoint record LSN was in this\n> case.\n\nI agree.\n\n\nMichael\n\n\n",
"msg_date": "Thu, 25 Jan 2024 14:29:59 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 1/25/24 09:29, Michael Banck wrote:\n> Hi,\n> \n> On Thu, Jan 25, 2024 at 08:56:52AM -0400, David Steele wrote:\n>> I would still advocate for a back patch here. It is frustrating to get logs\n>> from users that just say:\n>>\n>> LOG: invalid checkpoint record\n>> PANIC: could not locate a valid checkpoint record\n>>\n>> It would be very helpful to know what the checkpoint record LSN was in this\n>> case.\n> \n> I agree.\n\nAnother thing to note here -- knowing the LSN is important but also \nknowing that backup recovery was attempted (i.e. backup_label exists) is \nreally crucial. Knowing both just saves so much time in back and forth \ndebugging.\n\nIt appears the tally for back patching is:\n\nFor: Andres, David, Michael B\nNot Sure: Robert, Laurenz, Michael P\n\nIt seems at least nobody is dead set against it.\n\nRegards,\n-David\n\n\n",
"msg_date": "Thu, 25 Jan 2024 17:37:18 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "David Steele <[email protected]> writes:\n> Another thing to note here -- knowing the LSN is important but also \n> knowing that backup recovery was attempted (i.e. backup_label exists) is \n> really crucial. Knowing both just saves so much time in back and forth \n> debugging.\n\n> It appears the tally for back patching is:\n\n> For: Andres, David, Michael B\n> Not Sure: Robert, Laurenz, Michael P\n\n> It seems at least nobody is dead set against it.\n\nWe're talking about 1d35f705e, right? That certainly looks harmless\nand potentially useful. I'm +1 for back-patching.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jan 2024 16:42:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 1/25/24 17:42, Tom Lane wrote:\n> David Steele <[email protected]> writes:\n>> Another thing to note here -- knowing the LSN is important but also\n>> knowing that backup recovery was attempted (i.e. backup_label exists) is\n>> really crucial. Knowing both just saves so much time in back and forth\n>> debugging.\n> \n>> It appears the tally for back patching is:\n> \n>> For: Andres, David, Michael B\n>> Not Sure: Robert, Laurenz, Michael P\n> \n>> It seems at least nobody is dead set against it.\n> \n> We're talking about 1d35f705e, right? That certainly looks harmless\n> and potentially useful. I'm +1 for back-patching.\n\nThat's the one. If we were modifying existing messages I would be \nagainst it, but new, infrequent (but oh so helpful) messages seem fine.\n\nRegards,\n-David\n\n\n",
"msg_date": "Thu, 25 Jan 2024 17:44:52 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 08:56:52AM -0400, David Steele wrote:\n> I would still advocate for a back patch here. It is frustrating to get logs\n> from users that just say:\n> \n> LOG: invalid checkpoint record\n> PANIC: could not locate a valid checkpoint record\n> \n> It would be very helpful to know what the checkpoint record LSN was in this\n> case.\n\nYes, I've grumbled over this one in the past when debugging corruption\nissues. To me, this would just mean appending to the PANIC an \"at\n%X/%X\", but perhaps you have more in mind for these code paths?\n--\nMichael",
"msg_date": "Fri, 26 Jan 2024 09:52:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 05:44:52PM -0400, David Steele wrote:\n> On 1/25/24 17:42, Tom Lane wrote:\n>> We're talking about 1d35f705e, right? That certainly looks harmless\n>> and potentially useful. I'm +1 for back-patching.\n> \n> That's the one. If we were modifying existing messages I would be against\n> it, but new, infrequent (but oh so helpful) messages seem fine.\n\nWell, I'm OK with this consensus on 1d35f705e if folks think this is\nuseful enough for all the stable branches.\n--\nMichael",
"msg_date": "Fri, 26 Jan 2024 12:08:46 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "\n\nOn 1/25/24 20:52, Michael Paquier wrote:\n> On Thu, Jan 25, 2024 at 08:56:52AM -0400, David Steele wrote:\n>> I would still advocate for a back patch here. It is frustrating to get logs\n>> from users that just say:\n>>\n>> LOG: invalid checkpoint record\n>> PANIC: could not locate a valid checkpoint record\n>>\n>> It would be very helpful to know what the checkpoint record LSN was in this\n>> case.\n> \n> Yes, I've grumbled over this one in the past when debugging corruption\n> issues. To me, this would just mean appending to the PANIC an \"at\n> %X/%X\", but perhaps you have more in mind for these code paths?\n\nI think adding the LSN to the panic message would be a good change for HEAD.\n\nHowever, that still would not take the place of the additional messages \nin 1d35f705e showing that the LSN came from a backup_label.\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 26 Jan 2024 08:20:05 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 12:08:46PM +0900, Michael Paquier wrote:\n> Well, I'm OK with this consensus on 1d35f705e if folks think this is\n> useful enough for all the stable branches.\n\nI have done that down to REL_15_STABLE for now as this is able to\napply cleanly there. Older branches have a lack of information here,\nactually, because read_backup_label() does not return the TLI\nretrieved from the start WAL segment, so we don't have the whole\npackage of information.\n--\nMichael",
"msg_date": "Mon, 29 Jan 2024 09:09:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On 1/28/24 20:09, Michael Paquier wrote:\n> On Fri, Jan 26, 2024 at 12:08:46PM +0900, Michael Paquier wrote:\n>> Well, I'm OK with this consensus on 1d35f705e if folks think this is\n>> useful enough for all the stable branches.\n> \n> I have done that down to REL_15_STABLE for now as this is able to\n> apply cleanly there. Older branches have a lack of information here,\n> actually, because read_backup_label() does not return the TLI\n> retrieved from the start WAL segment, so we don't have the whole\n> package of information.\n\nI took a pass at this on PG14 and things definitely look a lot different \nback there. Not only is the timeline missing, but there are two sections \nof code for ending a backup, one for standby backup and one for primary.\n\nI'm satisfied with the back patches as they stand, unless anyone else \nwants to have a look.\n\nRegards,\n-David\n\n\n",
"msg_date": "Mon, 29 Jan 2024 10:03:19 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 10:03:19AM -0400, David Steele wrote:\n> I took a pass at this on PG14 and things definitely look a lot different\n> back there. Not only is the timeline missing, but there are two sections of\n> code for ending a backup, one for standby backup and one for primary.\n\nUnfortunately. The lack of the TLI from the start WAL segment in these\nAPIs is really annoying, especially if the backup_label is gone for\nsome reason...\n\n> I'm satisfied with the back patches as they stand, unless anyone else wants\n> to have a look.\n\nOkay, thanks for double-checking!\n--\nMichael",
"msg_date": "Tue, 30 Jan 2024 09:51:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of backup_label not noted in log"
}
]
[
{
"msg_contents": "Hi,\n\nIt seems there's a long-standing data loss issue related to the initial\nsync of tables in the built-in logical replication (publications etc.).\nI can reproduce it fairly reliably, but I haven't figured out all the\ndetails yet and I'm a bit out of ideas, so I'm sharing what I know with\nthe hope someone takes a look and either spots the issue or has some\nother insight ...\n\nOn the pgsql-bugs, Depesz reported [1] cases where tables are\nadded to a publication but end up missing rows on the subscriber. I\ndidn't know what might be the issue, but given his experience I decided\nto do some blind attempts to reproduce the issue.\n\nI'm not going to repeat all the details from the pgsql-bugs thread, but\nI ended up writing a script that does randomized stress testing of tablesync\nunder concurrent load. Attached are two scripts, where crash-test.sh\ndoes the main work, while run.sh drives the test - executes\ncrash-test.sh in a loop and generates random parameters for it.\n\nThe run.sh generates the number of tables, refresh interval (after how many\ntables we refresh the subscription) and how long to sleep between steps (to\nallow pgbench to do more work).\n\nThe crash-test.sh then does this:\n\n 1) initializes two clusters (expects $PATH to have pg_ctl etc.)\n\n 2) configures them for logical replication (wal_level, ...)\n\n 3) creates publication and subscription on the nodes\n\n 4) creates a bunch of tables\n\n 5) starts a pgbench that inserts data into the tables\n\n 6) adds the tables to the publication one by one, occasionally\n refreshing the subscription\n\n 7) waits for tablesync of all the tables to complete (so that the\n tables get into the 'r' state, thus replicating normally)\n\n 8) stops the pgbench\n\n 9) waits for the subscriber to fully catch up\n\n 10) compares the tables on the publisher/subscriber nodes\n\nTo run this, just make sure PATH includes pg, and do e.g.\n\n ./run.sh 10\n\nwhich does 10 runs of 
crash-test.sh with random parameters. Each run can\ntake a couple minutes, depending on the parameters, hardware etc.\n\n\nObviously, we expect the tables to match on the two nodes, but the\nscript regularly detects cases where the subscriber is missing some of\nthe rows. The script dumps those tables, and the rows contain timestamps\nand LSNs to allow \"rough correlation\" (imperfect thanks to concurrency).\n\nDepesz reported \"gaps\" in the data, i.e. missing a chunk of data, but\nthen following rows seemingly replicated. I did see such cases too, but\nmost of the time I see a missing chunk of rows at the end (but maybe if\nthe test continued a bit longer, it'd replicate some rows).\n\nThe report talks about replication between pg12->pg14, but I don't think\nthe cross-version part is necessary - I'm able to reproduce the issue on\nindividual versions (e.g. 12->12) since 12 (I haven't tried 11, but I'd\nbe surprised if it wasn't affected too).\n\nThe rows include `pg_current_wal_lsn()` to roughly track the LSN where\nthe row is inserted, and the \"gap\" of missing rows for each table seems\nto match pg_subscription_rel.srsublsn, i.e. the LSN up to which\ntablesync copied data, and the table should be replicated as usual.\n\nAnother interesting observation is that the issue only happens for \"bulk\ninsert\" transactions, i.e.\n\n BEGIN;\n ... INSERT into all tables ...\n COMMIT;\n\nbut not when each insert is a separate transaction. A bit strange.\n\n\nAfter quite a bit of debugging, I came to the conclusion this happens\nbecause we fail to invalidate caches on the publisher, so it does not\nrealize it should start sending rows for that table.\n\nIn particular, we initially build RelationSyncEntry when the table is\nnot yet included in the publication, so we end up with pubinsert=false,\nthus not replicating the inserts. 
Which makes sense, but we then seem\nto fail to invalidate the entry after it's added to the publication.\n\nThe other problem is that even if we happen to invalidate the entry, we\ncall GetRelationPublications(). But even if it happens long after the\ntable gets added to the publication (both in time and LSN terms), it\nstill returns NIL as if the table had no publications. And we end up\nwith pubinsert=false, skipping the inserts again.\n\nAttached are three patches against master. 0001 adds some debug logging\nthat I found useful when investigating the issue. 0002 illustrates the\nissue by forcefully invalidating the entry for each change, and\nimplementing a non-syscache variant of the GetRelationPublication().\nThis makes the code unbearably slow, but with both changes in place I\ncan no longer reproduce the issue. Undoing either of the two changes\nmakes it reproducible again. (I'll talk about 0003 later.)\n\nI suppose timing matters, so it's possible it gets \"fixed\" simply\nbecause of that, but I find that unlikely given the number of runs I did\nwithout observing any failure.\n\nOverall, this looks, walks and quacks like a cache invalidation issue,\nlikely a missing invalidation somewhere in the ALTER PUBLICATION code.\nIf we fail to invalidate the pg_publication_rel syscache somewhere, that\nobviously explains why GetRelationPublications() returns stale data, but\nit would also explain why the RelationSyncEntry is not invalidated, as\nthat happens in a syscache callback.\n\nBut I tried to do various crazy things in the ALTER PUBLICATION code,\nand none of that worked, so I'm a bit confused/lost.\n\n\nHowever, while randomly poking at different things, I realized that if I\nchange the lock obtained on the relation in OpenTableList() from\nShareUpdateExclusiveLock to ShareRowExclusiveLock, the issue goes away.\nI don't know why it works, and I don't even recall what exactly led me\nto the idea of changing it.\n\nThis is what 0003 does - it reverts 0002 and 
changes the lock level.\n\nAFAIK the logical decoding code doesn't actually acquire locks on the\ndecoded tables, so why would this change matter? The only place that\ndoes lock the relation is the tablesync, which gets RowExclusiveLock on\nit. And it's interesting that RowExclusiveLock does not conflict with\nShareUpdateExclusiveLock, but does with ShareRowExclusiveLock. But why\nwould this even matter, when the tablesync can only touch the table\nafter it gets added to the publication?\n\n\nregards\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 17 Nov 2023 15:36:25 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "long-standing data loss bug in initial sync of logical replication"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 15:36:25 +0100, Tomas Vondra wrote:\n> It seems there's a long-standing data loss issue related to the initial\n> sync of tables in the built-in logical replication (publications etc.).\n\n:(\n\n\n> Overall, this looks, walks and quacks like a cache invalidation issue,\n> likely a missing invalidation somewhere in the ALTER PUBLICATION code.\n\nIt could also be that pgoutput doesn't have sufficient invalidation\nhandling.\n\n\nOne thing that looks bogus on the DDL side is how the invalidation handling\ninteracts with locking.\n\n\nFor tables etc the invalidation handling works because we hold a lock on the\nrelation before modifying the catalog and don't release that lock until\ntransaction end. That part is crucial: We queue shared invalidations at\ntransaction commit, *after* the transaction is marked as visible, but *before*\nlocks are released. That guarantees that any backend processing invalidations\nwill see the new contents. However, if the lock on the modified object is\nreleased before transaction commit, other backends can build and use a cache\nentry that hasn't processed invalidations (invalidations are processed when\nacquiring locks).\n\nWhile there is such an object for publications, it seems to be acquired too\nlate to actually do much good in a number of paths. And not at all in others.\n\nE.g.:\n\n\tpubform = (Form_pg_publication) GETSTRUCT(tup);\n\n\t/*\n\t * If the publication doesn't publish changes via the root partitioned\n\t * table, the partition's row filter and column list will be used. So\n\t * disallow using WHERE clause and column lists on partitioned table in\n\t * this case.\n\t */\n\tif (!pubform->puballtables && publish_via_partition_root_given &&\n\t\t!publish_via_partition_root)\n {\n\t\t/*\n\t\t * Lock the publication so nobody else can do anything with it. 
This\n\t\t * prevents concurrent alter to add partitioned table(s) with WHERE\n\t\t * clause(s) and/or column lists which we don't allow when not\n\t\t * publishing via root.\n\t\t */\n\t\tLockDatabaseObject(PublicationRelationId, pubform->oid, 0,\n\t\t\t\t\t\t AccessShareLock);\n\na) Another session could have modified the publication and made puballtables out-of-date\nb) The LockDatabaseObject() uses AccessShareLock, so others can get past this\n point as well\n\nb) seems like a copy-paste bug or such?\n\n\nI don't see any locking of the publication around RemovePublicationRelById(),\nfor example.\n\nI might just be misunderstanding things the way publication locking is\nintended to work.\n\n\n\n\n\n> However, while randomly poking at different things, I realized that if I\n> change the lock obtained on the relation in OpenTableList() from\n> ShareUpdateExclusiveLock to ShareRowExclusiveLock, the issue goes away.\n\nThat's odd. There's cases where changing the lock level can cause invalidation\nprocessing to happen because there is no pre-existing lock for the \"new\" lock\nlevel, but there was for the old. But OpenTableList() is used when altering\nthe publications, so I don't see how that connects.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Nov 2023 17:54:43 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 17:54:43 -0800, Andres Freund wrote:\n> On 2023-11-17 15:36:25 +0100, Tomas Vondra wrote:\n> > Overall, this looks, walks and quacks like a cache invalidation issue,\n> > likely a missing invalidation somewhere in the ALTER PUBLICATION code.\n\nI can confirm that something is broken with invalidation handling.\n\nTo test this I just used pg_recvlogical to stdout. It's just interesting\nwhether something arrives, that's easy to discern even with binary output.\n\nCREATE PUBLICATION pb;\nsrc/bin/pg_basebackup/pg_recvlogical --plugin=pgoutput --start --slot test -d postgres -o proto_version=4 -o publication_names=pb -o messages=true -f -\n\nS1: CREATE TABLE d(data text not null);\nS1: INSERT INTO d VALUES('d1');\nS2: BEGIN; INSERT INTO d VALUES('d2');\nS1: ALTER PUBLICATION pb ADD TABLE d;\nS2: COMMIT\nS2: INSERT INTO d VALUES('d3');\nS1: INSERT INTO d VALUES('d4');\nRL: <nothing>\n\nWithout the 'd2' insert in an in-progress transaction, pgoutput *does* react\nto the ALTER PUBLICATION.\n\nI think the problem here is insufficient locking. The ALTER PUBLICATION pb ADD\nTABLE d basically modifies the catalog state of 'd', without a lock preventing\nother sessions from having a valid cache entry that they could continue to\nuse. Due to this, decoding S2's transactions that started before S2's commit,\nwill populate the cache entry with the state as of the time of S1's last\naction, i.e. no need to output the change.\n\nThe reason this can happen is because OpenTableList() uses\nShareUpdateExclusiveLock. That allows the ALTER PUBLICATION to happen while\nthere's an ongoing INSERT.\n\nI think this isn't just a logical decoding issue. S2's cache state just after\nthe ALTER PUBLICATION is going to be wrong - the table is already locked,\ntherefore further operations on the table don't trigger cache invalidation\nprocessing - but the catalog state *has* changed. 
It's a bigger problem for\nlogical decoding though, as it's a bit more lazy about invalidation processing\nthan normal transactions, allowing the problem to persist for longer.\n\n\nI guess it's not really feasible to just increase the lock level here though\n:(. The use of ShareUpdateExclusiveLock isn't new, and suddenly using AEL\nwould perhaps lead to new deadlocks and such? But it also seems quite wrong.\n\n\nWe could brute force this in the logical decoding infrastructure, by\ndistributing invalidations from catalog modifying transactions to all\nconcurrent in-progress transactions (like already done for historic catalog\nsnapshot, c.f. SnapBuildDistributeNewCatalogSnapshot()). But I think that'd\nbe a fairly significant increase in overhead.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Nov 2023 18:54:45 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On 11/18/23 02:54, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-17 15:36:25 +0100, Tomas Vondra wrote:\n>> It seems there's a long-standing data loss issue related to the initial\n>> sync of tables in the built-in logical replication (publications etc.).\n> \n> :(\n> \n\nYeah :-(\n\n> \n>> Overall, this looks, walks and quacks like a cache invalidation issue,\n>> likely a missing invalidation somewhere in the ALTER PUBLICATION code.\n> \n> It could also be be that pgoutput doesn't have sufficient invalidation\n> handling.\n> \n\nI'm not sure about the details, but it can't be just about pgoutput\nfailing to react to some syscache invalidation. As described, just\nresetting the RelationSyncEntry doesn't fix the issue - it's the\nsyscache that's not invalidated, IMO. But maybe that's what you mean.\n\n> \n> One thing that looks bogus on the DDL side is how the invalidation handling\n> interacts with locking.\n> \n> \n> For tables etc the invalidation handling works because we hold a lock on the\n> relation before modifying the catalog and don't release that lock until\n> transaction end. That part is crucial: We queue shared invalidations at\n> transaction commit, *after* the transaction is marked as visible, but *before*\n> locks are released. That guarantees that any backend processing invalidations\n> will see the new contents. However, if the lock on the modified object is\n> released before transaction commit, other backends can build and use a cache\n> entry that hasn't processed invalidations (invaliations are processed when\n> acquiring locks).\n> \n\nRight.\n\n> While there is such an object for publications, it seems to be acquired too\n> late to actually do much good in a number of paths. 
And not at all in others.\n> \n> E.g.:\n> \n> \tpubform = (Form_pg_publication) GETSTRUCT(tup);\n> \n> \t/*\n> \t * If the publication doesn't publish changes via the root partitioned\n> \t * table, the partition's row filter and column list will be used. So\n> \t * disallow using WHERE clause and column lists on partitioned table in\n> \t * this case.\n> \t */\n> \tif (!pubform->puballtables && publish_via_partition_root_given &&\n> \t\t!publish_via_partition_root)\n> {\n> \t\t/*\n> \t\t * Lock the publication so nobody else can do anything with it. This\n> \t\t * prevents concurrent alter to add partitioned table(s) with WHERE\n> \t\t * clause(s) and/or column lists which we don't allow when not\n> \t\t * publishing via root.\n> \t\t */\n> \t\tLockDatabaseObject(PublicationRelationId, pubform->oid, 0,\n> \t\t\t\t\t\t AccessShareLock);\n> \n> a) Another session could have modified the publication and made puballtables out-of-date\n> b) The LockDatabaseObject() uses AccessShareLock, so others can get past this\n> point as well\n> \n> b) seems like a copy-paste bug or such?\n> \n> \n> I don't see any locking of the publication around RemovePublicationRelById(),\n> for example.\n> \n> I might just be misunderstanding things the way publication locking is\n> intended to work.\n> \n\nI've been asking similar questions while investigating this, but the\ninteractions with logical decoding (which kinda happens concurrently in\nterms of WAL, but not concurrently in terms of time), historical\nsnapshots etc. make my head spin.\n\n> \n>> However, while randomly poking at different things, I realized that if I\n>> change the lock obtained on the relation in OpenTableList() from\n>> ShareUpdateExclusiveLock to ShareRowExclusiveLock, the issue goes away.\n> \n> That's odd. There's cases where changing the lock level can cause invalidation\n> processing to happen because there is no pre-existing lock for the \"new\" lock\n> level, but there was for the old. 
But OpenTableList() is used when altering\n> the publications, so I don't see how that connects.\n> \n\nYeah, I had the idea that maybe the transaction already holds the lock\non the table, and changing this to ShareRowExclusiveLock makes it\ndifferent, possibly triggering a new invalidation or something. But I\ndid check with gdb, and if I set a breakpoint at OpenTableList, there\nare no locks on the table.\n\nBut the effect is hard to deny - if I run the test 100 times, with the\nShareUpdateExclusiveLock I get maybe 80 failures. After changing it to\nShareRowExclusiveLock I get 0. Sure, there's some randomness for cases\nlike this, but this is pretty unlikely.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 18 Nov 2023 11:30:53 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "\n\nOn 11/18/23 03:54, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-17 17:54:43 -0800, Andres Freund wrote:\n>> On 2023-11-17 15:36:25 +0100, Tomas Vondra wrote:\n>>> Overall, this looks, walks and quacks like a cache invalidation issue,\n>>> likely a missing invalidation somewhere in the ALTER PUBLICATION code.\n> \n> I can confirm that something is broken with invalidation handling.\n> \n> To test this I just used pg_recvlogical to stdout. It's just interesting\n> whether something arrives, that's easy to discern even with binary output.\n> \n> CREATE PUBLICATION pb;\n> src/bin/pg_basebackup/pg_recvlogical --plugin=pgoutput --start --slot test -d postgres -o proto_version=4 -o publication_names=pb -o messages=true -f -\n> \n> S1: CREATE TABLE d(data text not null);\n> S1: INSERT INTO d VALUES('d1');\n> S2: BEGIN; INSERT INTO d VALUES('d2');\n> S1: ALTER PUBLICATION pb ADD TABLE d;\n> S2: COMMIT\n> S2: INSERT INTO d VALUES('d3');\n> S1: INSERT INTO d VALUES('d4');\n> RL: <nothing>\n> \n> Without the 'd2' insert in an in-progress transaction, pgoutput *does* react\n> to the ALTER PUBLICATION.\n> \n> I think the problem here is insufficient locking. The ALTER PUBLICATION pb ADD\n> TABLE d basically modifies the catalog state of 'd', without a lock preventing\n> other sessions from having a valid cache entry that they could continue to\n> use. Due to this, decoding S2's transactions that started before S2's commit,\n> will populate the cache entry with the state as of the time of S1's last\n> action, i.e. no need to output the change.\n> \n> The reason this can happen is because OpenTableList() uses\n> ShareUpdateExclusiveLock. 
That allows the ALTER PUBLICATION to happen while\n> there's an ongoing INSERT.\n> \n\nI guess this would also explain why changing the lock mode from\nShareUpdateExclusiveLock to ShareRowExclusiveLock changes the behavior.\nINSERT acquires RowExclusiveLock, which conflicts only with the\nlatter.\n\n> I think this isn't just a logical decoding issue. S2's cache state just after\n> the ALTER PUBLICATION is going to be wrong - the table is already locked,\n> therefore further operations on the table don't trigger cache invalidation\n> processing - but the catalog state *has* changed. It's a bigger problem for\n> logical decoding though, as it's a bit more lazy about invalidation processing\n> than normal transactions, allowing the problem to persist for longer.\n> \n\nYeah. I'm wondering if there's some other operation acquiring a lock\nweaker than RowExclusiveLock that might be affected by this. Because\nthen we'd need to get an even stronger lock ...\n\n> \n> I guess it's not really feasible to just increase the lock level here though\n> :(. The use of ShareUpdateExclusiveLock isn't new, and suddenly using AEL\n> would perhaps lead to new deadlocks and such? But it also seems quite wrong.\n> \n\nIf this really is about the lock being too weak, then I don't see why\nit would be wrong? If it's required for correctness, it's not really\nwrong, IMO. Sure, stronger locks are not great ...\n\nI'm not sure about the risk of deadlocks. If you do\n\n ALTER PUBLICATION ... ADD TABLE\n\nit's not holding many other locks. It essentially just gets a lock\non the pg_publication catalog, and then on the publication row. That's it.\n\nIf we increase the locks from ShareUpdateExclusive to ShareRowExclusive,\nwe're making it conflict with RowExclusive. Which is just DML, and I\nthink we need to do that.\n\nSo maybe that's fine? For me, a detected deadlock is better than\nsilently missing some of the data.\n\n> \n> We could brute force this in the logical decoding infrastructure, by\n> distributing invalidations from catalog modifying transactions to all\n> concurrent in-progress transactions (like already done for historic catalog\n> snapshot, c.f. SnapBuildDistributeNewCatalogSnapshot()). But I think that'd\n> be a fairly significant increase in overhead.\n> \n\nI have no idea what the overhead would be - perhaps not too bad,\nconsidering catalog changes are not too common (I'm sure there are\nextreme cases). And maybe we could even restrict this only to\n\"interesting\" catalogs, or something like that? (However I hate those\nweird differences in behavior, it can easily lead to bugs.)\n\nBut it feels more like a band-aid than actually fixing the issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 18 Nov 2023 11:56:47 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-18 11:56:47 +0100, Tomas Vondra wrote:\n> > I guess it's not really feasible to just increase the lock level here though\n> > :(. The use of ShareUpdateExclusiveLock isn't new, and suddenly using AEL\n> > would perhaps lead to new deadlocks and such? But it also seems quite wrong.\n> > \n> \n> If this really is about the lock being too weak, then I don't see why\n> would it be wrong?\n\nSorry, that was badly formulated. The wrong bit is the use of\nShareUpdateExclusiveLock.\n\n\n> If it's required for correctness, it's not really wrong, IMO. Sure, stronger\n> locks are not great ...\n> \n> I'm not sure about the risk of deadlocks. If you do\n> \n> ALTER PUBLICATION ... ADD TABLE\n> \n> it's not holding many other locks. It essentially gets a lock just a\n> lock on pg_publication catalog, and then the publication row. That's it.\n> \n> If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,\n> we're making it conflict with RowExclusive. Which is just DML, and I\n> think we need to do that.\n\n From what I can tell it needs to be an AccessExclusiveLock. Completely\nindependent of logical decoding. The way the cache stays coherent is catalog\nmodifications conflicting with anything that builds cache entries. We have a\nfew cases where we do use lower level locks, but for those we have explicit\nanalysis for why that's ok (see e.g. reloptions.c) or we block until nobody\ncould have an old view of the catalog (various CONCURRENTLY operations).\n\n\n> So maybe that's fine? For me, a detected deadlock is better than\n> silently missing some of the data.\n\nThat certainly is true.\n\n\n> > We could brute force this in the logical decoding infrastructure, by\n> > distributing invalidations from catalog modifying transactions to all\n> > concurrent in-progress transactions (like already done for historic catalog\n> > snapshot, c.f. SnapBuildDistributeNewCatalogSnapshot()). But I think that'd\n> > be a fairly significant increase in overhead.\n> > \n> \n> I have no idea what the overhead would be - perhaps not too bad,\n> considering catalog changes are not too common (I'm sure there are\n> extreme cases). And maybe we could even restrict this only to\n> \"interesting\" catalogs, or something like that? (However I hate those\n> weird differences in behavior, it can easily lead to bugs.)\n>\n> But it feels more like a band-aid than actually fixing the issue.\n\nAgreed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Nov 2023 10:12:57 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On 11/18/23 19:12, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-18 11:56:47 +0100, Tomas Vondra wrote:\n>>> I guess it's not really feasible to just increase the lock level here though\n>>> :(. The use of ShareUpdateExclusiveLock isn't new, and suddenly using AEL\n>>> would perhaps lead to new deadlocks and such? But it also seems quite wrong.\n>>>\n>>\n>> If this really is about the lock being too weak, then I don't see why\n>> would it be wrong?\n> \n> Sorry, that was badly formulated. The wrong bit is the use of\n> ShareUpdateExclusiveLock.\n> \n\nAh, you meant the current lock mode seems wrong, not that changing the\nlocks seems wrong. Yeah, true.\n\n> \n>> If it's required for correctness, it's not really wrong, IMO. Sure, stronger\n>> locks are not great ...\n>>\n>> I'm not sure about the risk of deadlocks. If you do\n>>\n>> ALTER PUBLICATION ... ADD TABLE\n>>\n>> it's not holding many other locks. It essentially gets a lock just a\n>> lock on pg_publication catalog, and then the publication row. That's it.\n>>\n>> If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,\n>> we're making it conflict with RowExclusive. Which is just DML, and I\n>> think we need to do that.\n> \n> From what I can tell it needs to to be an AccessExlusiveLock. Completely\n> independent of logical decoding. The way the cache stays coherent is catalog\n> modifications conflicting with anything that builds cache entries. We have a\n> few cases where we do use lower level locks, but for those we have explicit\n> analysis for why that's ok (see e.g. reloptions.c) or we block until nobody\n> could have an old view of the catalog (various CONCURRENTLY) operations.\n> \n\nYeah, I got too focused on the issue I triggered, which seems to be\nfixed by using SRE (still don't understand why ...). But you're probably\nright there may be other cases where SRE would not be sufficient, I\ncertainly can't prove it'd be safe.\n\n> \n>> So maybe that's fine? 
For me, a detected deadlock is better than\n>> silently missing some of the data.\n> \n> That certainly is true.\n> \n> \n>>> We could brute force this in the logical decoding infrastructure, by\n>>> distributing invalidations from catalog modifying transactions to all\n>>> concurrent in-progress transactions (like already done for historic catalog\n>>> snapshot, c.f. SnapBuildDistributeNewCatalogSnapshot()). But I think that'd\n>>> be a fairly significant increase in overhead.\n>>>\n>>\n>> I have no idea what the overhead would be - perhaps not too bad,\n>> considering catalog changes are not too common (I'm sure there are\n>> extreme cases). And maybe we could even restrict this only to\n>> \"interesting\" catalogs, or something like that? (However I hate those\n>> weird differences in behavior, it can easily lead to bugs.)\n>>\n>> But it feels more like a band-aid than actually fixing the issue.\n> \n> Agreed.\n> \n\n... and it would not fix the other places outside logical decoding.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 18 Nov 2023 21:45:35 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-18 21:45:35 +0100, Tomas Vondra wrote:\n> On 11/18/23 19:12, Andres Freund wrote:\n> >> If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,\n> >> we're making it conflict with RowExclusive. Which is just DML, and I\n> >> think we need to do that.\n> > \n> > From what I can tell it needs to to be an AccessExlusiveLock. Completely\n> > independent of logical decoding. The way the cache stays coherent is catalog\n> > modifications conflicting with anything that builds cache entries. We have a\n> > few cases where we do use lower level locks, but for those we have explicit\n> > analysis for why that's ok (see e.g. reloptions.c) or we block until nobody\n> > could have an old view of the catalog (various CONCURRENTLY) operations.\n> > \n> \n> Yeah, I got too focused on the issue I triggered, which seems to be\n> fixed by using SRE (still don't understand why ...). But you're probably\n> right there may be other cases where SRE would not be sufficient, I\n> certainly can't prove it'd be safe.\n\nI think it makes sense here: SRE prevents the problematic \"scheduling\" in your\ntest - with SRE no DML started before ALTER PUB ... ADD can commit after.\n\nI'm not sure there are any cases where using SRE instead of AE would cause\nproblems for logical decoding, but it seems very hard to prove. I'd be very\nsurprised if just using SRE would not lead to corrupted cache contents in some\nsituations. The cases where a lower lock level is ok are ones where we just\ndon't care that the cache is coherent in that moment.\n\nIn a way, the logical decoding cache-invalidation situation is a lot more\natomic than the \"normal\" situation. During normal operation locking is\nstrictly required to prevent incoherent states when building a cache entry\nafter a transaction committed, but before the sinval entries have been\nqueued. But in the logical decoding case that window doesn't exist.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Nov 2023 13:05:19 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "\n\nOn 11/18/23 22:05, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-18 21:45:35 +0100, Tomas Vondra wrote:\n>> On 11/18/23 19:12, Andres Freund wrote:\n>>>> If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,\n>>>> we're making it conflict with RowExclusive. Which is just DML, and I\n>>>> think we need to do that.\n>>>\n>>> From what I can tell it needs to to be an AccessExlusiveLock. Completely\n>>> independent of logical decoding. The way the cache stays coherent is catalog\n>>> modifications conflicting with anything that builds cache entries. We have a\n>>> few cases where we do use lower level locks, but for those we have explicit\n>>> analysis for why that's ok (see e.g. reloptions.c) or we block until nobody\n>>> could have an old view of the catalog (various CONCURRENTLY) operations.\n>>>\n>>\n>> Yeah, I got too focused on the issue I triggered, which seems to be\n>> fixed by using SRE (still don't understand why ...). But you're probably\n>> right there may be other cases where SRE would not be sufficient, I\n>> certainly can't prove it'd be safe.\n> \n> I think it makes sense here: SRE prevents the problematic \"scheduling\" in your\n> test - with SRE no DML started before ALTER PUB ... ADD can commit after.\n> \n\nIf I understand correctly, with the current code (which only gets\nShareUpdateExclusiveLock), we may end up in a situation like this\n(sessions A and B):\n\n A: starts \"ALTER PUBLICATION p ADD TABLE t\" and gets the SUE lock\n A: writes the invalidation message(s) into WAL\n B: inserts into table \"t\"\n B: commit\n A: commit\n\nWith the stronger SRE lock, the commits would have to happen in the\nopposite order, because as you say it prevents the bad ordering.\n\nBut why would this matter for logical decoding? We accumulate the\ninvalidations and execute them at transaction commit, or did I miss\nsomething?\n\nSo what I think should happen is we get to apply B first, which won't\nsee the table as part of the publication. It might even build the cache\nentries (syscache+relsync), reflecting that. But then we get to execute\nA, along with all the invalidations, and that should invalidate them.\n\nI'm clearly missing something, because the SRE does change the behavior,\nso there has to be a difference (and by my reasoning it shouldn't be).\n\nOr maybe it's the other way around? Won't B get the invalidation, but\nuse a historical snapshot that doesn't yet see the table in publication?\n\n> I'm not sure there are any cases where using SRE instead of AE would cause\n> problems for logical decoding, but it seems very hard to prove. I'd be very\n> surprised if just using SRE would not lead to corrupted cache contents in some\n> situations. The cases where a lower lock level is ok are ones where we just\n> don't care that the cache is coherent in that moment.\n> \n\nAre you saying it might break cases that are not corrupted now? How\ncould obtaining a stronger lock have such effect?\n\n> In a way, the logical decoding cache-invalidation situation is a lot more\n> atomic than the \"normal\" situation. During normal operation locking is\n> strictly required to prevent incoherent states when building a cache entry\n> after a transaction committed, but before the sinval entries have been\n> queued. But in the logical decoding case that window doesn't exist.\n> \n\nBecause we apply the invalidations at commit time, so it happens as a\nsingle operation that can't interleave with other sessions?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 19 Nov 2023 02:15:33 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On 2023-11-19 02:15:33 +0100, Tomas Vondra wrote:\n> \n> \n> On 11/18/23 22:05, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-11-18 21:45:35 +0100, Tomas Vondra wrote:\n> >> On 11/18/23 19:12, Andres Freund wrote:\n> >>>> If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,\n> >>>> we're making it conflict with RowExclusive. Which is just DML, and I\n> >>>> think we need to do that.\n> >>>\n> >>> From what I can tell it needs to to be an AccessExlusiveLock. Completely\n> >>> independent of logical decoding. The way the cache stays coherent is catalog\n> >>> modifications conflicting with anything that builds cache entries. We have a\n> >>> few cases where we do use lower level locks, but for those we have explicit\n> >>> analysis for why that's ok (see e.g. reloptions.c) or we block until nobody\n> >>> could have an old view of the catalog (various CONCURRENTLY) operations.\n> >>>\n> >>\n> >> Yeah, I got too focused on the issue I triggered, which seems to be\n> >> fixed by using SRE (still don't understand why ...). But you're probably\n> >> right there may be other cases where SRE would not be sufficient, I\n> >> certainly can't prove it'd be safe.\n> > \n> > I think it makes sense here: SRE prevents the problematic \"scheduling\" in your\n> > test - with SRE no DML started before ALTER PUB ... 
ADD can commit after.\n> > \n> \n> If understand correctly, with the current code (which only gets\n> ShareUpdateExclusiveLock), we may end up in a situation like this\n> (sessions A and B):\n> \n> A: starts \"ALTER PUBLICATION p ADD TABLE t\" and gets the SUE lock\n> A: writes the invalidation message(s) into WAL\n> B: inserts into table \"t\"\n> B: commit\n> A: commit\n\nI don't think this the problematic sequence - at least it's not what I had\nreproed in\nhttps://postgr.es/m/20231118025445.crhaeeuvoe2g5dv6%40awork3.anarazel.de\n\nAdding line numbers:\n\n1) S1: CREATE TABLE d(data text not null);\n2) S1: INSERT INTO d VALUES('d1');\n3) S2: BEGIN; INSERT INTO d VALUES('d2');\n4) S1: ALTER PUBLICATION pb ADD TABLE d;\n5) S2: COMMIT\n6) S2: INSERT INTO d VALUES('d3');\n7) S1: INSERT INTO d VALUES('d4');\n8) RL: <nothing>\n\nThe problem with the sequence is that the insert from 3) is decoded *after* 4)\nand that to decode the insert (which happened before the ALTER) the catalog\nsnapshot and cache state is from *before* the ALTER TABLE. Because the\ntransaction started in 3) doesn't actually modify any catalogs, no\ninvalidations are executed after decoding it. The result is that the cache\nlooks like it did at 3), not like after 4). Undesirable timetravel...\n\nIt's worth noting that here the cache state is briefly correct, after 4), it's\njust that after 5) it stays the old state.\n\nIf 4) instead uses a SRE lock, then S1 will be blocked until S2 commits, and\neverything is fine.\n\n\n\n> > I'm not sure there are any cases where using SRE instead of AE would cause\n> > problems for logical decoding, but it seems very hard to prove. I'd be very\n> > surprised if just using SRE would not lead to corrupted cache contents in some\n> > situations. The cases where a lower lock level is ok are ones where we just\n> > don't care that the cache is coherent in that moment.\n\n> Are you saying it might break cases that are not corrupted now? 
How\n> could obtaining a stronger lock have such effect?\n\nNo, I mean that I don't know if using SRE instead of AE would have negative\nconsequences for logical decoding. I.e. whether, from a logical decoding POV,\nit'd suffice to increase the lock level to just SRE instead of AE.\n\nSince I don't see how it'd be correct otherwise, it's kind of a moot question.\n\n\n> > In a way, the logical decoding cache-invalidation situation is a lot more\n> > atomic than the \"normal\" situation. During normal operation locking is\n> > strictly required to prevent incoherent states when building a cache entry\n> > after a transaction committed, but before the sinval entries have been\n> > queued. But in the logical decoding case that window doesn't exist.\n> > \n> Because we apply the invalidations at commit time, so it happens as a\n> single operation that can't interleave with other sessions?\n\nYea, the situation is much simpler during logical decoding than \"originally\" -\nthere's no concurrency.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Nov 2023 18:18:30 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "Hi,\n\nOn 19.11.2023 09:18, Andres Freund wrote:\n> Yea, the situation is much simpler during logical decoding than \"originally\" -\n> there's no concurrency.\n>\n> Greetings,\n>\n> Andres Freund\n>\nWe've encountered a similar error on our production server.\n\nThe case: After adding a table to logical replication, table \ninitialization proceeds normally, but new data from the publisher's \ntable does not appear on the subscriber server. After we added the \ntable, we checked and saw that the data was present on the subscriber \nand everything was normal; we only discovered the error some time later. I \nhave attached scripts to the email.\n\nThe patch from the first message also solves this problem.\n\n-- \nBest regards,\nVadim Lakt",
"msg_date": "Tue, 16 Jan 2024 17:24:02 +0700",
"msg_from": "Vadim Lakt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Sun, Nov 19, 2023 at 7:48 AM Andres Freund <[email protected]> wrote:\n>\n> On 2023-11-19 02:15:33 +0100, Tomas Vondra wrote:\n> >\n> > If understand correctly, with the current code (which only gets\n> > ShareUpdateExclusiveLock), we may end up in a situation like this\n> > (sessions A and B):\n> >\n> > A: starts \"ALTER PUBLICATION p ADD TABLE t\" and gets the SUE lock\n> > A: writes the invalidation message(s) into WAL\n> > B: inserts into table \"t\"\n> > B: commit\n> > A: commit\n>\n> I don't think this the problematic sequence - at least it's not what I had\n> reproed in\n> https://postgr.es/m/20231118025445.crhaeeuvoe2g5dv6%40awork3.anarazel.de\n>\n> Adding line numbers:\n>\n> 1) S1: CREATE TABLE d(data text not null);\n> 2) S1: INSERT INTO d VALUES('d1');\n> 3) S2: BEGIN; INSERT INTO d VALUES('d2');\n> 4) S1: ALTER PUBLICATION pb ADD TABLE d;\n> 5) S2: COMMIT\n> 6) S2: INSERT INTO d VALUES('d3');\n> 7) S1: INSERT INTO d VALUES('d4');\n> 8) RL: <nothing>\n>\n> The problem with the sequence is that the insert from 3) is decoded *after* 4)\n> and that to decode the insert (which happened before the ALTER) the catalog\n> snapshot and cache state is from *before* the ALTER TABLE. Because the\n> transaction started in 3) doesn't actually modify any catalogs, no\n> invalidations are executed after decoding it. The result is that the cache\n> looks like it did at 3), not like after 4). Undesirable timetravel...\n>\n> It's worth noting that here the cache state is briefly correct, after 4), it's\n> just that after 5) it stays the old state.\n>\n> If 4) instead uses a SRE lock, then S1 will be blocked until S2 commits, and\n> everything is fine.\n>\n\nI agree, your analysis looks right to me.\n\n>\n>\n> > > I'm not sure there are any cases where using SRE instead of AE would cause\n> > > problems for logical decoding, but it seems very hard to prove. 
I'd be very\n> > > surprised if just using SRE would not lead to corrupted cache contents in some\n> > > situations. The cases where a lower lock level is ok are ones where we just\n> > > don't care that the cache is coherent in that moment.\n>\n> > Are you saying it might break cases that are not corrupted now? How\n> > could obtaining a stronger lock have such effect?\n>\n> No, I mean that I don't know if using SRE instead of AE would have negative\n> consequences for logical decoding. I.e. whether, from a logical decoding POV,\n> it'd suffice to increase the lock level to just SRE instead of AE.\n>\n> Since I don't see how it'd be correct otherwise, it's kind of a moot question.\n>\n\nWe lost track of this thread and the bug is still open. IIUC, the\nconclusion is to use SRE in OpenTableList() to fix the reported issue.\nAndres, Tomas, please let me know if my understanding is wrong,\notherwise, let's proceed and fix this issue.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Jun 2024 16:24:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On 6/24/24 12:54, Amit Kapila wrote:\n> ...\n>>\n>>>> I'm not sure there are any cases where using SRE instead of AE would cause\n>>>> problems for logical decoding, but it seems very hard to prove. I'd be very\n>>>> surprised if just using SRE would not lead to corrupted cache contents in some\n>>>> situations. The cases where a lower lock level is ok are ones where we just\n>>>> don't care that the cache is coherent in that moment.\n>>\n>>> Are you saying it might break cases that are not corrupted now? How\n>>> could obtaining a stronger lock have such effect?\n>>\n>> No, I mean that I don't know if using SRE instead of AE would have negative\n>> consequences for logical decoding. I.e. whether, from a logical decoding POV,\n>> it'd suffice to increase the lock level to just SRE instead of AE.\n>>\n>> Since I don't see how it'd be correct otherwise, it's kind of a moot question.\n>>\n> \n> We lost track of this thread and the bug is still open. IIUC, the\n> conclusion is to use SRE in OpenTableList() to fix the reported issue.\n> Andres, Tomas, please let me know if my understanding is wrong,\n> otherwise, let's proceed and fix this issue.\n> \n\nIt's in the commitfest [https://commitfest.postgresql.org/48/4766/] so I\ndon't think we 'lost track' of it, but it's true we haven't made much\nprogress recently.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jun 2024 16:36:04 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Mon, Jun 24, 2024 at 8:06 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 6/24/24 12:54, Amit Kapila wrote:\n> > ...\n> >>\n> >>>> I'm not sure there are any cases where using SRE instead of AE would cause\n> >>>> problems for logical decoding, but it seems very hard to prove. I'd be very\n> >>>> surprised if just using SRE would not lead to corrupted cache contents in some\n> >>>> situations. The cases where a lower lock level is ok are ones where we just\n> >>>> don't care that the cache is coherent in that moment.\n> >>\n> >>> Are you saying it might break cases that are not corrupted now? How\n> >>> could obtaining a stronger lock have such effect?\n> >>\n> >> No, I mean that I don't know if using SRE instead of AE would have negative\n> >> consequences for logical decoding. I.e. whether, from a logical decoding POV,\n> >> it'd suffice to increase the lock level to just SRE instead of AE.\n> >>\n> >> Since I don't see how it'd be correct otherwise, it's kind of a moot question.\n> >>\n> >\n> > We lost track of this thread and the bug is still open. IIUC, the\n> > conclusion is to use SRE in OpenTableList() to fix the reported issue.\n> > Andres, Tomas, please let me know if my understanding is wrong,\n> > otherwise, let's proceed and fix this issue.\n> >\n>\n> It's in the commitfest [https://commitfest.postgresql.org/48/4766/] so I\n> don't think we 'lost track' of it, but it's true we haven't done much\n> progress recently.\n>\n\nOkay, thanks for pointing to the CF entry. Would you like to take care\nof this? Are you seeing anything more than the simple fix to use SRE\nin OpenTableList()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 25 Jun 2024 10:34:58 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On 6/25/24 07:04, Amit Kapila wrote:\n> On Mon, Jun 24, 2024 at 8:06 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 6/24/24 12:54, Amit Kapila wrote:\n>>> ...\n>>>>\n>>>>>> I'm not sure there are any cases where using SRE instead of AE would cause\n>>>>>> problems for logical decoding, but it seems very hard to prove. I'd be very\n>>>>>> surprised if just using SRE would not lead to corrupted cache contents in some\n>>>>>> situations. The cases where a lower lock level is ok are ones where we just\n>>>>>> don't care that the cache is coherent in that moment.\n>>>>\n>>>>> Are you saying it might break cases that are not corrupted now? How\n>>>>> could obtaining a stronger lock have such effect?\n>>>>\n>>>> No, I mean that I don't know if using SRE instead of AE would have negative\n>>>> consequences for logical decoding. I.e. whether, from a logical decoding POV,\n>>>> it'd suffice to increase the lock level to just SRE instead of AE.\n>>>>\n>>>> Since I don't see how it'd be correct otherwise, it's kind of a moot question.\n>>>>\n>>>\n>>> We lost track of this thread and the bug is still open. IIUC, the\n>>> conclusion is to use SRE in OpenTableList() to fix the reported issue.\n>>> Andres, Tomas, please let me know if my understanding is wrong,\n>>> otherwise, let's proceed and fix this issue.\n>>>\n>>\n>> It's in the commitfest [https://commitfest.postgresql.org/48/4766/] so I\n>> don't think we 'lost track' of it, but it's true we haven't done much\n>> progress recently.\n>>\n> \n> Okay, thanks for pointing to the CF entry. Would you like to take care\n> of this? Are you seeing anything more than the simple fix to use SRE\n> in OpenTableList()?\n> \n\nI did not find a simpler fix than adding the SRE, and I think pretty\nmuch any other fix is guaranteed to be more complex. 
I don't remember\nall the details without relearning all the details, but IIRC the main\nchallenge for me was to convince myself it's a sufficient and reliable\nfix (and not working simply by chance).\n\nI won't have time to look into this anytime soon, so feel free to take\ncare of this and push the fix.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 26 Jun 2024 13:27:17 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, Jun 26, 2024 at 4:57 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 6/25/24 07:04, Amit Kapila wrote:\n> > On Mon, Jun 24, 2024 at 8:06 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> On 6/24/24 12:54, Amit Kapila wrote:\n> >>> ...\n> >>>>\n> >>>>>> I'm not sure there are any cases where using SRE instead of AE would cause\n> >>>>>> problems for logical decoding, but it seems very hard to prove. I'd be very\n> >>>>>> surprised if just using SRE would not lead to corrupted cache contents in some\n> >>>>>> situations. The cases where a lower lock level is ok are ones where we just\n> >>>>>> don't care that the cache is coherent in that moment.\n> >>>>\n> >>>>> Are you saying it might break cases that are not corrupted now? How\n> >>>>> could obtaining a stronger lock have such effect?\n> >>>>\n> >>>> No, I mean that I don't know if using SRE instead of AE would have negative\n> >>>> consequences for logical decoding. I.e. whether, from a logical decoding POV,\n> >>>> it'd suffice to increase the lock level to just SRE instead of AE.\n> >>>>\n> >>>> Since I don't see how it'd be correct otherwise, it's kind of a moot question.\n> >>>>\n> >>>\n> >>> We lost track of this thread and the bug is still open. IIUC, the\n> >>> conclusion is to use SRE in OpenTableList() to fix the reported issue.\n> >>> Andres, Tomas, please let me know if my understanding is wrong,\n> >>> otherwise, let's proceed and fix this issue.\n> >>>\n> >>\n> >> It's in the commitfest [https://commitfest.postgresql.org/48/4766/] so I\n> >> don't think we 'lost track' of it, but it's true we haven't done much\n> >> progress recently.\n> >>\n> >\n> > Okay, thanks for pointing to the CF entry. Would you like to take care\n> > of this? Are you seeing anything more than the simple fix to use SRE\n> > in OpenTableList()?\n> >\n>\n> I did not find a simpler fix than adding the SRE, and I think pretty\n> much any other fix is guaranteed to be more complex. 
I don't remember\n> all the details without relearning all the details, but IIRC the main\n> challenge for me was to convince myself it's a sufficient and reliable\n> fix (and not working simply by chance).\n>\n> I won't have time to look into this anytime soon, so feel free to take\n> care of this and push the fix.\n>\n\nOkay, I'll take care of this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 27 Jun 2024 08:38:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Thu, 27 Jun 2024 at 08:38, Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jun 26, 2024 at 4:57 PM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > On 6/25/24 07:04, Amit Kapila wrote:\n> > > On Mon, Jun 24, 2024 at 8:06 PM Tomas Vondra\n> > > <[email protected]> wrote:\n> > >>\n> > >> On 6/24/24 12:54, Amit Kapila wrote:\n> > >>> ...\n> > >>>>\n> > >>>>>> I'm not sure there are any cases where using SRE instead of AE would cause\n> > >>>>>> problems for logical decoding, but it seems very hard to prove. I'd be very\n> > >>>>>> surprised if just using SRE would not lead to corrupted cache contents in some\n> > >>>>>> situations. The cases where a lower lock level is ok are ones where we just\n> > >>>>>> don't care that the cache is coherent in that moment.\n> > >>>>\n> > >>>>> Are you saying it might break cases that are not corrupted now? How\n> > >>>>> could obtaining a stronger lock have such effect?\n> > >>>>\n> > >>>> No, I mean that I don't know if using SRE instead of AE would have negative\n> > >>>> consequences for logical decoding. I.e. whether, from a logical decoding POV,\n> > >>>> it'd suffice to increase the lock level to just SRE instead of AE.\n> > >>>>\n> > >>>> Since I don't see how it'd be correct otherwise, it's kind of a moot question.\n> > >>>>\n> > >>>\n> > >>> We lost track of this thread and the bug is still open. IIUC, the\n> > >>> conclusion is to use SRE in OpenTableList() to fix the reported issue.\n> > >>> Andres, Tomas, please let me know if my understanding is wrong,\n> > >>> otherwise, let's proceed and fix this issue.\n> > >>>\n> > >>\n> > >> It's in the commitfest [https://commitfest.postgresql.org/48/4766/] so I\n> > >> don't think we 'lost track' of it, but it's true we haven't done much\n> > >> progress recently.\n> > >>\n> > >\n> > > Okay, thanks for pointing to the CF entry. Would you like to take care\n> > > of this? 
Are you seeing anything more than the simple fix to use SRE\n> > > in OpenTableList()?\n> > >\n> >\n> > I did not find a simpler fix than adding the SRE, and I think pretty\n> > much any other fix is guaranteed to be more complex. I don't remember\n> > all the details without relearning all the details, but IIRC the main\n> > challenge for me was to convince myself it's a sufficient and reliable\n> > fix (and not working simply by chance).\n> >\n> > I won't have time to look into this anytime soon, so feel free to take\n> > care of this and push the fix.\n> >\n>\n> Okay, I'll take care of this.\n\nThis issue is present in all supported versions. I was able to\nreproduce it using the steps recommended by Andres and Tomas's\nscripts. I also conducted a small test through TAP tests to verify the\nproblem. Attached is the alternate_lock_HEAD.patch, which includes the\nlock modification(Tomas's change) and the TAP test.\nTo reproduce the issue in the HEAD version, we cannot use the same\ntest as in the alternate_lock_HEAD patch because the behavior changes\nslightly after the fix to wait for the lock until the open transaction\ncompletes. The attached issue_reproduce_testcase_head.patch can be\nused to reproduce the issue through TAP test in HEAD.\nThe changes made in the HEAD version do not directly apply to older\nbranches. For PG14, PG13, and PG12 branches, you can use the\nalternate_lock_PG14.patch.\n\nRegards,\nVignesh",
"msg_date": "Mon, 1 Jul 2024 10:50:52 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Mon, Jul 1, 2024 at 10:51 AM vignesh C <[email protected]> wrote:\n>\n>\n> This issue is present in all supported versions. I was able to\n> reproduce it using the steps recommended by Andres and Tomas's\n> scripts. I also conducted a small test through TAP tests to verify the\n> problem. Attached is the alternate_lock_HEAD.patch, which includes the\n> lock modification(Tomas's change) and the TAP test.\n>\n\n@@ -1568,7 +1568,7 @@ OpenTableList(List *tables)\n /* Allow query cancel in case this takes a long time */\n CHECK_FOR_INTERRUPTS();\n\n- rel = table_openrv(t->relation, ShareUpdateExclusiveLock);\n+ rel = table_openrv(t->relation, ShareRowExclusiveLock);\n\nThe comment just above this code (\"Open, share-lock, and check all the\nexplicitly-specified relations\") needs modification. It would be\nbetter to explain the reason of why we would need SRE lock here.\n\n> To reproduce the issue in the HEAD version, we cannot use the same\n> test as in the alternate_lock_HEAD patch because the behavior changes\n> slightly after the fix to wait for the lock until the open transaction\n> completes.\n>\n\nBut won't the test that reproduces the problem in HEAD be successful\nafter the code change? If so, can't we use the same test instead of\nslight modification to verify the lock mode?\n\n> The attached issue_reproduce_testcase_head.patch can be\n> used to reproduce the issue through TAP test in HEAD.\n> The changes made in the HEAD version do not directly apply to older\n> branches. For PG14, PG13, and PG12 branches, you can use the\n> alternate_lock_PG14.patch.\n>\n\nWhy didn't you include the test in the back branches? If it is due to\nbackground psql stuff, then won't commit\n(https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=187b8991f70fc3d2a13dc709edd408a8df0be055)\ncan address it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Jul 2024 17:05:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Tue, 9 Jul 2024 at 17:05, Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jul 1, 2024 at 10:51 AM vignesh C <[email protected]> wrote:\n> >\n> >\n> > This issue is present in all supported versions. I was able to\n> > reproduce it using the steps recommended by Andres and Tomas's\n> > scripts. I also conducted a small test through TAP tests to verify the\n> > problem. Attached is the alternate_lock_HEAD.patch, which includes the\n> > lock modification(Tomas's change) and the TAP test.\n> >\n>\n> @@ -1568,7 +1568,7 @@ OpenTableList(List *tables)\n> /* Allow query cancel in case this takes a long time */\n> CHECK_FOR_INTERRUPTS();\n>\n> - rel = table_openrv(t->relation, ShareUpdateExclusiveLock);\n> + rel = table_openrv(t->relation, ShareRowExclusiveLock);\n>\n> The comment just above this code (\"Open, share-lock, and check all the\n> explicitly-specified relations\") needs modification. It would be\n> better to explain the reason of why we would need SRE lock here.\n\nUpdated comments for the same.\n\n> > To reproduce the issue in the HEAD version, we cannot use the same\n> > test as in the alternate_lock_HEAD patch because the behavior changes\n> > slightly after the fix to wait for the lock until the open transaction\n> > completes.\n> >\n>\n> But won't the test that reproduces the problem in HEAD be successful\n> after the code change? If so, can't we use the same test instead of\n> slight modification to verify the lock mode?\n\nBefore the patch fix, the ALTER PUBLICATION command would succeed\nimmediately. Now, the ALTER PUBLICATION command waits until it\nacquires the ShareRowExclusiveLock. This change means that in test\ncases, previously we waited until the table was added to the\npublication, whereas now, after applying the patch, we wait until the\nALTER PUBLICATION command is actively waiting for the\nShareRowExclusiveLock. 
This waiting step ensures consistent execution\nand sequencing of tests each time.\n\n> > The attached issue_reproduce_testcase_head.patch can be\n> > used to reproduce the issue through TAP test in HEAD.\n> > The changes made in the HEAD version do not directly apply to older\n> > branches. For PG14, PG13, and PG12 branches, you can use the\n> > alternate_lock_PG14.patch.\n> >\n>\n> Why didn't you include the test in the back branches? If it is due to\n> background psql stuff, then won't commit\n> (https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=187b8991f70fc3d2a13dc709edd408a8df0be055)\n> can address it?\n\nIndeed, I initially believed it wasn't available. Currently, I haven't\nincorporated the back branch patch, but I plan to include it in a\nsubsequent version once there are no review comments on the HEAD\npatch.\n\nThe updated v2 version patch has the fix for the comments.\n\nRegards,\nVignesh",
"msg_date": "Tue, 9 Jul 2024 20:13:42 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Tue, Jul 9, 2024 at 8:14 PM vignesh C <[email protected]> wrote:\n>\n> On Tue, 9 Jul 2024 at 17:05, Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Jul 1, 2024 at 10:51 AM vignesh C <[email protected]> wrote:\n> > >\n> > >\n> > > This issue is present in all supported versions. I was able to\n> > > reproduce it using the steps recommended by Andres and Tomas's\n> > > scripts. I also conducted a small test through TAP tests to verify the\n> > > problem. Attached is the alternate_lock_HEAD.patch, which includes the\n> > > lock modification(Tomas's change) and the TAP test.\n> > >\n> >\n> > @@ -1568,7 +1568,7 @@ OpenTableList(List *tables)\n> > /* Allow query cancel in case this takes a long time */\n> > CHECK_FOR_INTERRUPTS();\n> >\n> > - rel = table_openrv(t->relation, ShareUpdateExclusiveLock);\n> > + rel = table_openrv(t->relation, ShareRowExclusiveLock);\n> >\n> > The comment just above this code (\"Open, share-lock, and check all the\n> > explicitly-specified relations\") needs modification. It would be\n> > better to explain the reason of why we would need SRE lock here.\n>\n> Updated comments for the same.\n>\n\nThe patch missed to use the ShareRowExclusiveLock for partitions, see\nattached. I haven't tested it but they should also face the same\nproblem. Apart from that, I have changed the comments in a few places\nin the patch.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 10 Jul 2024 12:28:41 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, 10 Jul 2024 at 12:28, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jul 9, 2024 at 8:14 PM vignesh C <[email protected]> wrote:\n> >\n> > On Tue, 9 Jul 2024 at 17:05, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Jul 1, 2024 at 10:51 AM vignesh C <[email protected]> wrote:\n> > > >\n> > > >\n> > > > This issue is present in all supported versions. I was able to\n> > > > reproduce it using the steps recommended by Andres and Tomas's\n> > > > scripts. I also conducted a small test through TAP tests to verify the\n> > > > problem. Attached is the alternate_lock_HEAD.patch, which includes the\n> > > > lock modification(Tomas's change) and the TAP test.\n> > > >\n> > >\n> > > @@ -1568,7 +1568,7 @@ OpenTableList(List *tables)\n> > > /* Allow query cancel in case this takes a long time */\n> > > CHECK_FOR_INTERRUPTS();\n> > >\n> > > - rel = table_openrv(t->relation, ShareUpdateExclusiveLock);\n> > > + rel = table_openrv(t->relation, ShareRowExclusiveLock);\n> > >\n> > > The comment just above this code (\"Open, share-lock, and check all the\n> > > explicitly-specified relations\") needs modification. It would be\n> > > better to explain the reason of why we would need SRE lock here.\n> >\n> > Updated comments for the same.\n> >\n>\n> The patch missed to use the ShareRowExclusiveLock for partitions, see\n> attached. I haven't tested it but they should also face the same\n> problem. Apart from that, I have changed the comments in a few places\n> in the patch.\n\nI could not hit the updated ShareRowExclusiveLock changes through the\npartition table, instead I could verify it using the inheritance\ntable. Added a test for the same and also attaching the backbranch\npatch.\n\nRegards,\nVignesh",
"msg_date": "Wed, 10 Jul 2024 22:07:29 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, Jul 10, 2024 at 10:39 PM vignesh C <[email protected]> wrote:\n>\n> On Wed, 10 Jul 2024 at 12:28, Amit Kapila <[email protected]> wrote:\n> > The patch missed to use the ShareRowExclusiveLock for partitions, see\n> > attached. I haven't tested it but they should also face the same\n> > problem. Apart from that, I have changed the comments in a few places\n> > in the patch.\n>\n> I could not hit the updated ShareRowExclusiveLock changes through the\n> partition table, instead I could verify it using the inheritance\n> table. Added a test for the same and also attaching the backbranch\n> patch.\n>\n\nHi,\n\nI tested alternative-experimental-fix-lock.patch provided by Tomas\n(replaces SUE with SRE in OpenTableList). I believe there are a couple\nof scenarios the patch does not cover.\n\n1. It doesn't handle the case of \"ALTER PUBLICATION <pub> ADD TABLES\nIN SCHEMA <schema>\".\n\nI took crash-test.sh provided by Tomas and modified it to add all\ntables in the schema to publication using the following command :\n\n ALTER PUBLICATION p ADD TABLES IN SCHEMA public\n\nThe modified script is attached (crash-test-with-schema.sh). With this\nscript, I can reproduce the issue even with the patch applied. This is\nbecause the code path to add a schema to the publication doesn't go\nthrough OpenTableList.\n\nI have also attached a script run-test-with-schema.sh to run\ncrash-test-with-schema.sh in a loop with randomly generated parameters\n(modified from run.sh provided by Tomas).\n\n2. The second issue is a deadlock which happens when the alter\npublication command is run for a comma separated list of tables.\n\nI created another script create-test-tables-order-reverse.sh. This\nscript runs a command like the following :\n\n ALTER PUBLICATION p ADD TABLE test_2,test_1\n\nRunning the above script, I was able to get a deadlock error (the\noutput is attached in deadlock.txt). 
In the alter publication command,\nI added the tables in the reverse order to increase the probability of\nthe deadlock. But it should happen with any order of tables.\n\nI am not sure if the deadlock is a major issue because detecting the\ndeadlock is better than data loss. The schema issue is probably more\nimportant. I didn't test it out with the latest patches sent by\nVignesh but since the code changes in that patch are also in\nOpenTableList, I think the schema scenario won't be covered by those.\n\nThanks & Regards,\nNitin Motiani\nGoogle",
"msg_date": "Wed, 10 Jul 2024 23:22:36 +0530",
"msg_from": "Nitin Motiani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, Jul 10, 2024 at 11:22 PM Nitin Motiani <[email protected]> wrote:\n>\n> On Wed, Jul 10, 2024 at 10:39 PM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 10 Jul 2024 at 12:28, Amit Kapila <[email protected]> wrote:\n> > > The patch missed to use the ShareRowExclusiveLock for partitions, see\n> > > attached. I haven't tested it but they should also face the same\n> > > problem. Apart from that, I have changed the comments in a few places\n> > > in the patch.\n> >\n> > I could not hit the updated ShareRowExclusiveLock changes through the\n> > partition table, instead I could verify it using the inheritance\n> > table. Added a test for the same and also attaching the backbranch\n> > patch.\n> >\n>\n> Hi,\n>\n> I tested alternative-experimental-fix-lock.patch provided by Tomas\n> (replaces SUE with SRE in OpenTableList). I believe there are a couple\n> of scenarios the patch does not cover.\n>\n> 1. It doesn't handle the case of \"ALTER PUBLICATION <pub> ADD TABLES\n> IN SCHEMA <schema>\".\n>\n> I took crash-test.sh provided by Tomas and modified it to add all\n> tables in the schema to publication using the following command :\n>\n> ALTER PUBLICATION p ADD TABLES IN SCHEMA public\n>\n> The modified script is attached (crash-test-with-schema.sh). With this\n> script, I can reproduce the issue even with the patch applied. This is\n> because the code path to add a schema to the publication doesn't go\n> through OpenTableList.\n>\n> I have also attached a script run-test-with-schema.sh to run\n> crash-test-with-schema.sh in a loop with randomly generated parameters\n> (modified from run.sh provided by Tomas).\n>\n> 2. The second issue is a deadlock which happens when the alter\n> publication command is run for a comma separated list of tables.\n>\n> I created another script create-test-tables-order-reverse.sh. 
This\n> script runs a command like the following :\n>\n> ALTER PUBLICATION p ADD TABLE test_2,test_1\n>\n> Running the above script, I was able to get a deadlock error (the\n> output is attached in deadlock.txt). In the alter publication command,\n> I added the tables in the reverse order to increase the probability of\n> the deadlock. But it should happen with any order of tables.\n>\n> I am not sure if the deadlock is a major issue because detecting the\n> deadlock is better than data loss. The schema issue is probably more\n> important. I didn't test it out with the latest patches sent by\n> Vignesh but since the code changes in that patch are also in\n> OpenTableList, I think the schema scenario won't be covered by those.\n>\n\n\nHi,\n\nI looked further into the scenario of adding the tables in schema to\nthe publication. Since in that case, the entry is added to\npg_publication_namespace instead of pg_publication_rel, the codepaths\nfor 'add table' and 'add tables in schema' are different. And in the\n'add tables in schema' scenario, the OpenTableList function is not\ncalled to get the relation ids. Therefore even with the proposed\npatch, the data loss issue still persists in that case.\n\nTo validate this idea, I tried locking all the affected tables in the\nschema just before the invalidation for those relations (in\nShareRowExclusiveLock mode). I am attaching the small patch for that\n(alter_pub_for_schema.patch) where the change is made in the function\npublication_add_schema in pg_publication.c. I am not sure if this is\nthe best place to make this change or if it is the right fix. It is\nconceptually similar to the proposed change in OpenTableList but here\nwe are not just changing the lockmode but taking locks which were not\ntaken before. But with this change, the data loss errors went away in\nmy test script.\n\nAnother issue which persists with this change is the deadlock. 
Since\nmultiple table locks are acquired, the test script detects deadlock a\nfew times. Therefore I'm also attaching another modified script which\ndoes a few retries in case of deadlock. The script is\ncrash-test-with-retries-for-schema.sh. It runs the following command\nin a retry loop :\n\n ALTER PUBLICATION p ADD TABLES IN SCHEMA public\n\nIf the command fails, it sleeps for a random amount of time (upper\nbound by a MAXWAIT parameter) and then retries the command. If it\nfails to run the command in the max number of retries, the final\nreturn value from the script is DEADLOCK as we can't do a consistency\ncheck in this scenario. Also attached is another script\nrun-with-deadlock-detection.sh which can run the above script for\nmultiple iterations.\n\nI tried the test scripts with and without alter_pub_for_schema.patch.\nWithout the patch, I get the final output ERROR majority of the time\nwhich means that the publication was altered successfully but the data\nwas lost on the subscriber. When I run it with the patch, I get a mix\nof OK (no data loss) and DEADLOCK (the publication was not altered)\nbut no ERROR. I think by changing the parameters of sleep time and\nnumber of retries we can get different fractions of OK and DEADLOCK.\n\nI am not sure if this is the right or a clean way to fix the issue but\nI think conceptually this might be the right direction. Please let me\nknow if my understanding is wrong or if I'm missing something.\n\nThanks & Regards,\nNitin Motiani\nGoogle",
"msg_date": "Thu, 11 Jul 2024 18:19:36 +0530",
"msg_from": "Nitin Motiani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Thu, Jul 11, 2024 at 6:19 PM Nitin Motiani <[email protected]> wrote:\n>\n> On Wed, Jul 10, 2024 at 11:22 PM Nitin Motiani <[email protected]> wrote:\n> >\n> > On Wed, Jul 10, 2024 at 10:39 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Wed, 10 Jul 2024 at 12:28, Amit Kapila <[email protected]> wrote:\n> > > > The patch missed to use the ShareRowExclusiveLock for partitions, see\n> > > > attached. I haven't tested it but they should also face the same\n> > > > problem. Apart from that, I have changed the comments in a few places\n> > > > in the patch.\n> > >\n> > > I could not hit the updated ShareRowExclusiveLock changes through the\n> > > partition table, instead I could verify it using the inheritance\n> > > table. Added a test for the same and also attaching the backbranch\n> > > patch.\n> > >\n> >\n> > Hi,\n> >\n> > I tested alternative-experimental-fix-lock.patch provided by Tomas\n> > (replaces SUE with SRE in OpenTableList). I believe there are a couple\n> > of scenarios the patch does not cover.\n> >\n> > 1. It doesn't handle the case of \"ALTER PUBLICATION <pub> ADD TABLES\n> > IN SCHEMA <schema>\".\n> >\n> > I took crash-test.sh provided by Tomas and modified it to add all\n> > tables in the schema to publication using the following command :\n> >\n> > ALTER PUBLICATION p ADD TABLES IN SCHEMA public\n> >\n> > The modified script is attached (crash-test-with-schema.sh). With this\n> > script, I can reproduce the issue even with the patch applied. This is\n> > because the code path to add a schema to the publication doesn't go\n> > through OpenTableList.\n> >\n> > I have also attached a script run-test-with-schema.sh to run\n> > crash-test-with-schema.sh in a loop with randomly generated parameters\n> > (modified from run.sh provided by Tomas).\n> >\n> > 2. 
The second issue is a deadlock which happens when the alter\n> > publication command is run for a comma separated list of tables.\n> >\n> > I created another script create-test-tables-order-reverse.sh. This\n> > script runs a command like the following :\n> >\n> > ALTER PUBLICATION p ADD TABLE test_2,test_1\n> >\n> > Running the above script, I was able to get a deadlock error (the\n> > output is attached in deadlock.txt). In the alter publication command,\n> > I added the tables in the reverse order to increase the probability of\n> > the deadlock. But it should happen with any order of tables.\n> >\n> > I am not sure if the deadlock is a major issue because detecting the\n> > deadlock is better than data loss.\n> >\n\nThe deadlock reported in this case is an expected behavior. This is no\ndifferent that locking tables or rows in reverse order.\n\n>\n> I looked further into the scenario of adding the tables in schema to\n> the publication. Since in that case, the entry is added to\n> pg_publication_namespace instead of pg_publication_rel, the codepaths\n> for 'add table' and 'add tables in schema' are different. And in the\n> 'add tables in schema' scenario, the OpenTableList function is not\n> called to get the relation ids. Therefore even with the proposed\n> patch, the data loss issue still persists in that case.\n>\n> To validate this idea, I tried locking all the affected tables in the\n> schema just before the invalidation for those relations (in\n> ShareRowExclusiveLock mode).\n>\n\nThis sounds like a reasonable approach to fix the issue. However, we\nshould check SET publication_object as well, especially the drop part\nin it. It should not happen that we miss sending the data for ADD but\nfor DROP, we send data when we shouldn't have sent it.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 15 Jul 2024 15:30:51 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Mon, 15 Jul 2024 at 15:31, Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jul 11, 2024 at 6:19 PM Nitin Motiani <[email protected]> wrote:\n> >\n> > On Wed, Jul 10, 2024 at 11:22 PM Nitin Motiani <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 10, 2024 at 10:39 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > On Wed, 10 Jul 2024 at 12:28, Amit Kapila <[email protected]> wrote:\n> > > > > The patch missed to use the ShareRowExclusiveLock for partitions, see\n> > > > > attached. I haven't tested it but they should also face the same\n> > > > > problem. Apart from that, I have changed the comments in a few places\n> > > > > in the patch.\n> > > >\n> > > > I could not hit the updated ShareRowExclusiveLock changes through the\n> > > > partition table, instead I could verify it using the inheritance\n> > > > table. Added a test for the same and also attaching the backbranch\n> > > > patch.\n> > > >\n> > >\n> > > Hi,\n> > >\n> > > I tested alternative-experimental-fix-lock.patch provided by Tomas\n> > > (replaces SUE with SRE in OpenTableList). I believe there are a couple\n> > > of scenarios the patch does not cover.\n> > >\n> > > 1. It doesn't handle the case of \"ALTER PUBLICATION <pub> ADD TABLES\n> > > IN SCHEMA <schema>\".\n> > >\n> > > I took crash-test.sh provided by Tomas and modified it to add all\n> > > tables in the schema to publication using the following command :\n> > >\n> > > ALTER PUBLICATION p ADD TABLES IN SCHEMA public\n> > >\n> > > The modified script is attached (crash-test-with-schema.sh). With this\n> > > script, I can reproduce the issue even with the patch applied. This is\n> > > because the code path to add a schema to the publication doesn't go\n> > > through OpenTableList.\n> > >\n> > > I have also attached a script run-test-with-schema.sh to run\n> > > crash-test-with-schema.sh in a loop with randomly generated parameters\n> > > (modified from run.sh provided by Tomas).\n> > >\n> > > 2. 
The second issue is a deadlock which happens when the alter\n> > > publication command is run for a comma separated list of tables.\n> > >\n> > > I created another script create-test-tables-order-reverse.sh. This\n> > > script runs a command like the following :\n> > >\n> > > ALTER PUBLICATION p ADD TABLE test_2,test_1\n> > >\n> > > Running the above script, I was able to get a deadlock error (the\n> > > output is attached in deadlock.txt). In the alter publication command,\n> > > I added the tables in the reverse order to increase the probability of\n> > > the deadlock. But it should happen with any order of tables.\n> > >\n> > > I am not sure if the deadlock is a major issue because detecting the\n> > > deadlock is better than data loss.\n> > >\n>\n> The deadlock reported in this case is an expected behavior. This is no\n> different than locking tables or rows in reverse order.\n>\n> >\n> > I looked further into the scenario of adding the tables in schema to\n> > the publication. Since in that case, the entry is added to\n> > pg_publication_namespace instead of pg_publication_rel, the codepaths\n> > for 'add table' and 'add tables in schema' are different. And in the\n> > 'add tables in schema' scenario, the OpenTableList function is not\n> > called to get the relation ids. Therefore even with the proposed\n> > patch, the data loss issue still persists in that case.\n> >\n> > To validate this idea, I tried locking all the affected tables in the\n> > schema just before the invalidation for those relations (in\n> > ShareRowExclusiveLock mode).\n> >\n>\n> This sounds like a reasonable approach to fix the issue. However, we\n> should check SET publication_object as well, especially the drop part\n> in it. It should not happen that we miss sending the data for ADD but\n> for DROP, we send data when we shouldn't have sent it.\n\nThere were a few other scenarios, similar to the one you mentioned,\nwhere the issue occurred. 
For example: a) When specifying a subset of\nexisting tables in the ALTER PUBLICATION ... SET TABLE command, the\ntables that were supposed to be removed from the publication were not\nlocked in ShareRowExclusiveLock mode. b) The ALTER PUBLICATION ...\nDROP TABLES IN SCHEMA command did not lock the relations that will be\nremoved from the publication in ShareRowExclusiveLock mode. Both of\nthese scenarios resulted in data inconsistency due to inadequate\nlocking. The attached patch addresses these issues.\n\nRegards,\nVignesh",
"msg_date": "Mon, 15 Jul 2024 23:42:45 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Mon, Jul 15, 2024 at 11:42 PM vignesh C <[email protected]> wrote:\n>\n> On Mon, 15 Jul 2024 at 15:31, Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Jul 11, 2024 at 6:19 PM Nitin Motiani <[email protected]> wrote:\n> > > I looked further into the scenario of adding the tables in schema to\n> > > the publication. Since in that case, the entry is added to\n> > > pg_publication_namespace instead of pg_publication_rel, the codepaths\n> > > for 'add table' and 'add tables in schema' are different. And in the\n> > > 'add tables in schema' scenario, the OpenTableList function is not\n> > > called to get the relation ids. Therefore even with the proposed\n> > > patch, the data loss issue still persists in that case.\n> > >\n> > > To validate this idea, I tried locking all the affected tables in the\n> > > schema just before the invalidation for those relations (in\n> > > ShareRowExclusiveLock mode).\n> > >\n> >\n> > This sounds like a reasonable approach to fix the issue. However, we\n> > should check SET publication_object as well, especially the drop part\n> > in it. It should not happen that we miss sending the data for ADD but\n> > for DROP, we send data when we shouldn't have sent it.\n>\n> There were few other scenarios, similar to the one you mentioned,\n> where the issue occurred. For example: a) When specifying a subset of\n> existing tables in the ALTER PUBLICATION ... SET TABLE command, the\n> tables that were supposed to be removed from the publication were not\n> locked in ShareRowExclusiveLock mode. b) The ALTER PUBLICATION ...\n> DROP TABLES IN SCHEMA command did not lock the relations that will be\n> removed from the publication in ShareRowExclusiveLock mode. Both of\n> these scenarios resulted in data inconsistency due to inadequate\n> locking. The attached patch addresses these issues.\n>\n\nHi,\n\nA couple of questions on the latest patch :\n\n1. 
I see there is this logic in PublicationDropSchemas to first check\nif there is a valid entry for the schema in pg_publication_namespace\n\n psid = GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,\n\nAnum_pg_publication_namespace_oid,\n\nObjectIdGetDatum(schemaid),\n\nObjectIdGetDatum(pubid));\n if (!OidIsValid(psid))\n {\n if (missing_ok)\n continue;\n\n ereport(ERROR,\n (errcode(ERRCODE_UNDEFINED_OBJECT),\n errmsg(\"tables from schema\n\\\"%s\\\" are not part of the publication\",\n\nget_namespace_name(schemaid))));\n }\n\nYour proposed change locks the schemaRels before this code block.\nWould it be better to lock the schemaRels after the error check? So\nthat just in case, the publication on the schema is not valid anymore,\nthe lock is not held unnecessarily on all its tables.\n\n2. The function publication_add_schema explicitly invalidates cache by\ncalling InvalidatePublicationRels(schemaRels). That is not present in\nthe current PublicationDropSchemas code. Is that something which\nshould be added in the drop scenario also? Please let me know if there\nis some context that I'm missing regarding why this was not added\noriginally for the drop scenario.\n\nThanks & Regards,\nNitin Motiani\nGoogle\n\n\n",
"msg_date": "Tue, 16 Jul 2024 00:48:19 +0530",
"msg_from": "Nitin Motiani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 12:48 AM Nitin Motiani <[email protected]> wrote:\n>\n> A couple of questions on the latest patch :\n>\n> 1. I see there is this logic in PublicationDropSchemas to first check\n> if there is a valid entry for the schema in pg_publication_namespace\n>\n> psid = GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,\n>\n> Anum_pg_publication_namespace_oid,\n>\n> ObjectIdGetDatum(schemaid),\n>\n> ObjectIdGetDatum(pubid));\n> if (!OidIsValid(psid))\n> {\n> if (missing_ok)\n> continue;\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_UNDEFINED_OBJECT),\n> errmsg(\"tables from schema\n> \\\"%s\\\" are not part of the publication\",\n>\n> get_namespace_name(schemaid))));\n> }\n>\n> Your proposed change locks the schemaRels before this code block.\n> Would it be better to lock the schemaRels after the error check? So\n> that just in case, the publication on the schema is not valid anymore,\n> the lock is not held unnecessarily on all its tables.\n>\n\nGood point. It is better to lock the relations in\nRemovePublicationSchemaById() where we are invalidating relcache as\nwell. See the response to your next point as well.\n\n> 2. The function publication_add_schema explicitly invalidates cache by\n> calling InvalidatePublicationRels(schemaRels). That is not present in\n> the current PublicationDropSchemas code. Is that something which\n> should be added in the drop scenario also? Please let me know if there\n> is some context that I'm missing regarding why this was not added\n> originally for the drop scenario.\n>\n\nThe required invalidation happens in the function\nRemovePublicationSchemaById(). 
So, we should lock in\nRemovePublicationSchemaById() as that would avoid calling\nGetSchemaPublicationRelations() multiple times.\n\nOne related comment:\n@@ -1219,8 +1219,14 @@ AlterPublicationTables(AlterPublicationStmt\n*stmt, HeapTuple tup,\n oldrel = palloc(sizeof(PublicationRelInfo));\n oldrel->whereClause = NULL;\n oldrel->columns = NIL;\n+\n+ /*\n+ * Data loss due to concurrency issues are avoided by locking\n+ * the relation in ShareRowExclusiveLock as described atop\n+ * OpenTableList.\n+ */\n oldrel->relation = table_open(oldrelid,\n- ShareUpdateExclusiveLock);\n+ ShareRowExclusiveLock);\n\nIsn't it better to lock the required relations in RemovePublicationRelById()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 16 Jul 2024 09:29:46 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 9:29 AM Amit Kapila <[email protected]> wrote:\n>\n> One related comment:\n> @@ -1219,8 +1219,14 @@ AlterPublicationTables(AlterPublicationStmt\n> *stmt, HeapTuple tup,\n> oldrel = palloc(sizeof(PublicationRelInfo));\n> oldrel->whereClause = NULL;\n> oldrel->columns = NIL;\n> +\n> + /*\n> + * Data loss due to concurrency issues are avoided by locking\n> + * the relation in ShareRowExclusiveLock as described atop\n> + * OpenTableList.\n> + */\n> oldrel->relation = table_open(oldrelid,\n> - ShareUpdateExclusiveLock);\n> + ShareRowExclusiveLock);\n>\n> Isn't it better to lock the required relations in RemovePublicationRelById()?\n>\n\nOn my CentOS VM, the test file '100_bugs.pl' takes ~11s without a\npatch and ~13.3s with a patch. So, 2 to 2.3s additional time for newly\nadded tests. It isn't worth adding this much extra time for one bug\nfix. Can we combine table and schema tests into one single test and\navoid inheritance table tests as the code for those will mostly follow\nthe same path as a regular table?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 16 Jul 2024 11:59:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Tue, 16 Jul 2024 at 11:59, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jul 16, 2024 at 9:29 AM Amit Kapila <[email protected]> wrote:\n> >\n> > One related comment:\n> > @@ -1219,8 +1219,14 @@ AlterPublicationTables(AlterPublicationStmt\n> > *stmt, HeapTuple tup,\n> > oldrel = palloc(sizeof(PublicationRelInfo));\n> > oldrel->whereClause = NULL;\n> > oldrel->columns = NIL;\n> > +\n> > + /*\n> > + * Data loss due to concurrency issues are avoided by locking\n> > + * the relation in ShareRowExclusiveLock as described atop\n> > + * OpenTableList.\n> > + */\n> > oldrel->relation = table_open(oldrelid,\n> > - ShareUpdateExclusiveLock);\n> > + ShareRowExclusiveLock);\n> >\n> > Isn't it better to lock the required relations in RemovePublicationRelById()?\n> >\n>\n> On my CentOS VM, the test file '100_bugs.pl' takes ~11s without a\n> patch and ~13.3s with a patch. So, 2 to 2.3s additional time for newly\n> added tests. It isn't worth adding this much extra time for one bug\n> fix. Can we combine table and schema tests into one single test and\n> avoid inheritance table tests as the code for those will mostly follow\n> the same path as a regular table?\n\nYes, that is better. The attached v6 version patch has the changes for the same.\nThe patch also addresses the comments from [1].\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LZDW2AVDYFZdZcvmsKVGajH2-gZmjXr9BsYiy8ct_fEw%40mail.gmail.com\n\nRegards,\nVignesh",
"msg_date": "Tue, 16 Jul 2024 18:54:07 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 6:54 PM vignesh C <[email protected]> wrote:\n>\n> On Tue, 16 Jul 2024 at 11:59, Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jul 16, 2024 at 9:29 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > One related comment:\n> > > @@ -1219,8 +1219,14 @@ AlterPublicationTables(AlterPublicationStmt\n> > > *stmt, HeapTuple tup,\n> > > oldrel = palloc(sizeof(PublicationRelInfo));\n> > > oldrel->whereClause = NULL;\n> > > oldrel->columns = NIL;\n> > > +\n> > > + /*\n> > > + * Data loss due to concurrency issues are avoided by locking\n> > > + * the relation in ShareRowExclusiveLock as described atop\n> > > + * OpenTableList.\n> > > + */\n> > > oldrel->relation = table_open(oldrelid,\n> > > - ShareUpdateExclusiveLock);\n> > > + ShareRowExclusiveLock);\n> > >\n> > > Isn't it better to lock the required relations in RemovePublicationRelById()?\n> > >\n> >\n> > On my CentOS VM, the test file '100_bugs.pl' takes ~11s without a\n> > patch and ~13.3s with a patch. So, 2 to 2.3s additional time for newly\n> > added tests. It isn't worth adding this much extra time for one bug\n> > fix. Can we combine table and schema tests into one single test and\n> > avoid inheritance table tests as the code for those will mostly follow\n> > the same path as a regular table?\n>\n> Yes, that is better. The attached v6 version patch has the changes for the same.\n> The patch also addresses the comments from [1].\n>\n\nThanks, I don't see any noticeable difference in test timing with new\ntests. I have slightly modified the comments in the attached diff\npatch (please rename it to .patch).\n\nBTW, I noticed that we don't take any table-level locks for Create\nPublication .. For ALL TABLES (and Drop Publication). Can that create\na similar problem? I haven't tested so not sure but even if there is a\nproblem for the Create case, it should lead to some ERROR like missing\npublication.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 17 Jul 2024 11:54:45 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, 17 Jul 2024 at 11:54, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jul 16, 2024 at 6:54 PM vignesh C <[email protected]> wrote:\n> >\n> > On Tue, 16 Jul 2024 at 11:59, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Tue, Jul 16, 2024 at 9:29 AM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > One related comment:\n> > > > @@ -1219,8 +1219,14 @@ AlterPublicationTables(AlterPublicationStmt\n> > > > *stmt, HeapTuple tup,\n> > > > oldrel = palloc(sizeof(PublicationRelInfo));\n> > > > oldrel->whereClause = NULL;\n> > > > oldrel->columns = NIL;\n> > > > +\n> > > > + /*\n> > > > + * Data loss due to concurrency issues are avoided by locking\n> > > > + * the relation in ShareRowExclusiveLock as described atop\n> > > > + * OpenTableList.\n> > > > + */\n> > > > oldrel->relation = table_open(oldrelid,\n> > > > - ShareUpdateExclusiveLock);\n> > > > + ShareRowExclusiveLock);\n> > > >\n> > > > Isn't it better to lock the required relations in RemovePublicationRelById()?\n> > > >\n> > >\n> > > On my CentOS VM, the test file '100_bugs.pl' takes ~11s without a\n> > > patch and ~13.3s with a patch. So, 2 to 2.3s additional time for newly\n> > > added tests. It isn't worth adding this much extra time for one bug\n> > > fix. Can we combine table and schema tests into one single test and\n> > > avoid inheritance table tests as the code for those will mostly follow\n> > > the same path as a regular table?\n> >\n> > Yes, that is better. The attached v6 version patch has the changes for the same.\n> > The patch also addresses the comments from [1].\n> >\n>\n> Thanks, I don't see any noticeable difference in test timing with new\n> tests. I have slightly modified the comments in the attached diff\n> patch (please rename it to .patch).\n>\n> BTW, I noticed that we don't take any table-level locks for Create\n> Publication .. For ALL TABLES (and Drop Publication). Can that create\n> a similar problem? 
I haven't tested so not sure but even if there is a\n> problem for the Create case, it should lead to some ERROR like missing\n> publication.\n\nI tested these scenarios, and as you expected, it throws an error for\nthe create publication case:\n2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\ndata from WAL stream: ERROR: publication \"pub1\" does not exist\n CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\ncallback, associated LSN 0/1510CD8\n2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n\"logical replication apply worker\" (PID 481526) exited with exit code\n1\n\nThe steps for this process are as follows:\n1) Create tables in both the publisher and subscriber.\n2) On the publisher: Create a replication slot.\n3) On the subscriber: Create a subscription using the slot created by\nthe publisher.\n4) On the publisher:\n4.a) Session 1: BEGIN; INSERT INTO T1;\n4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n4.c) Session 1: COMMIT;\n\nSince we are throwing out a \"publication does not exist\" error, there\nis no inconsistency issue here.\n\nHowever, an issue persists with DROP ALL TABLES publication, where\ndata continues to replicate even after the publication is dropped.\nThis happens because the open transaction consumes the invalidation,\ncausing the publications to be revalidated using old snapshot. As a\nresult, both the open transactions and the subsequent transactions are\ngetting replicated.\n\nWe can reproduce this issue by following these steps in a logical\nreplication setup with an \"ALL TABLES\" publication:\nOn the publisher:\nSession 1: BEGIN; INSERT INTO T1 VALUES (val1);\nIn another session on the publisher:\nSession 2: DROP PUBLICATION\nBack in Session 1 on the publisher:\nCOMMIT;\nFinally, in Session 1 on the publisher:\nINSERT INTO T1 VALUES (val2);\n\nEven after dropping the publication, both val1 and val2 are still\nbeing replicated to the subscriber. 
This means that both the\nin-progress concurrent transaction and the subsequent transactions are\nbeing replicated.\n\nI don't think locking all tables is a viable solution in this case, as\nit would require asking the user to refrain from performing any\noperations on any of the tables in the database while creating a\npublication.\n\nThoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 17 Jul 2024 17:25:04 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n>\n> On Wed, 17 Jul 2024 at 11:54, Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jul 16, 2024 at 6:54 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Tue, 16 Jul 2024 at 11:59, Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Tue, Jul 16, 2024 at 9:29 AM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > One related comment:\n> > > > > @@ -1219,8 +1219,14 @@ AlterPublicationTables(AlterPublicationStmt\n> > > > > *stmt, HeapTuple tup,\n> > > > > oldrel = palloc(sizeof(PublicationRelInfo));\n> > > > > oldrel->whereClause = NULL;\n> > > > > oldrel->columns = NIL;\n> > > > > +\n> > > > > + /*\n> > > > > + * Data loss due to concurrency issues are avoided by locking\n> > > > > + * the relation in ShareRowExclusiveLock as described atop\n> > > > > + * OpenTableList.\n> > > > > + */\n> > > > > oldrel->relation = table_open(oldrelid,\n> > > > > - ShareUpdateExclusiveLock);\n> > > > > + ShareRowExclusiveLock);\n> > > > >\n> > > > > Isn't it better to lock the required relations in RemovePublicationRelById()?\n> > > > >\n> > > >\n> > > > On my CentOS VM, the test file '100_bugs.pl' takes ~11s without a\n> > > > patch and ~13.3s with a patch. So, 2 to 2.3s additional time for newly\n> > > > added tests. It isn't worth adding this much extra time for one bug\n> > > > fix. Can we combine table and schema tests into one single test and\n> > > > avoid inheritance table tests as the code for those will mostly follow\n> > > > the same path as a regular table?\n> > >\n> > > Yes, that is better. The attached v6 version patch has the changes for the same.\n> > > The patch also addresses the comments from [1].\n> > >\n> >\n> > Thanks, I don't see any noticeable difference in test timing with new\n> > tests. 
I have slightly modified the comments in the attached diff\n> > patch (please rename it to .patch).\n> >\n> > BTW, I noticed that we don't take any table-level locks for Create\n> > Publication .. For ALL TABLES (and Drop Publication). Can that create\n> > a similar problem? I haven't tested so not sure but even if there is a\n> > problem for the Create case, it should lead to some ERROR like missing\n> > publication.\n>\n> I tested these scenarios, and as you expected, it throws an error for\n> the create publication case:\n> 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> data from WAL stream: ERROR: publication \"pub1\" does not exist\n> CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> callback, associated LSN 0/1510CD8\n> 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> \"logical replication apply worker\" (PID 481526) exited with exit code\n> 1\n>\n> The steps for this process are as follows:\n> 1) Create tables in both the publisher and subscriber.\n> 2) On the publisher: Create a replication slot.\n> 3) On the subscriber: Create a subscription using the slot created by\n> the publisher.\n> 4) On the publisher:\n> 4.a) Session 1: BEGIN; INSERT INTO T1;\n> 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n> 4.c) Session 1: COMMIT;\n>\n> Since we are throwing out a \"publication does not exist\" error, there\n> is no inconsistency issue here.\n>\n> However, an issue persists with DROP ALL TABLES publication, where\n> data continues to replicate even after the publication is dropped.\n> This happens because the open transaction consumes the invalidation,\n> causing the publications to be revalidated using old snapshot. 
As a\n> result, both the open transactions and the subsequent transactions are\n> getting replicated.\n>\n> We can reproduce this issue by following these steps in a logical\n> replication setup with an \"ALL TABLES\" publication:\n> On the publisher:\n> Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> In another session on the publisher:\n> Session 2: DROP PUBLICATION\n> Back in Session 1 on the publisher:\n> COMMIT;\n> Finally, in Session 1 on the publisher:\n> INSERT INTO T1 VALUES (val2);\n>\n> Even after dropping the publication, both val1 and val2 are still\n> being replicated to the subscriber. This means that both the\n> in-progress concurrent transaction and the subsequent transactions are\n> being replicated.\n>\n\nHi,\n\nI tried the 'DROP PUBLICATION' command even for a publication with a\nsingle table. And there also the data continues to get replicated.\n\nTo test this, I did a similar experiment as the above but instead of\ncreating publication on all tables, I did it for one specific table.\n\nHere are the steps :\n1. Create table test_1 and test_2 on both the publisher and subscriber\ninstances.\n2. Create publication p for table test_1 on the publisher.\n3. Create a subscription s which subscribes to p.\n4. On the publisher\n4a) Session 1 : BEGIN; INSERT INTO test_1 VALUES(val1);\n4b) Session 2 : DROP PUBLICATION p;\n4c) Session 1 : Commit;\n5. On the publisher : INSERT INTO test_1 VALUES(val2);\n\nAfter these, when I check the subscriber, both val1 and val2 have been\nreplicated. I tried a few more inserts on publisher after this and\nthey all got replicated to the subscriber. Only after explicitly\ncreating a new publication p2 for test_1 on the publisher, the\nreplication stopped. 
Most likely because the create publication\ncommand invalidated the cache.\n\nMy guess is that this issue probably comes from the fact that\nRemoveObjects in dropcmds.c doesn't do any special handling or\ninvalidation for the object drop command.\n\nPlease let me know if I'm missing something in my setup or if my\nunderstanding of the drop commands is wrong.\n\nThanks\n\n\n",
"msg_date": "Thu, 18 Jul 2024 15:05:26 +0530",
"msg_from": "Nitin Motiani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 3:05 PM Nitin Motiani <[email protected]> wrote:\n>\n> On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n> >\n> > I tested these scenarios, and as you expected, it throws an error for\n> > the create publication case:\n> > 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> > data from WAL stream: ERROR: publication \"pub1\" does not exist\n> > CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> > callback, associated LSN 0/1510CD8\n> > 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> > \"logical replication apply worker\" (PID 481526) exited with exit code\n> > 1\n> >\n> > The steps for this process are as follows:\n> > 1) Create tables in both the publisher and subscriber.\n> > 2) On the publisher: Create a replication slot.\n> > 3) On the subscriber: Create a subscription using the slot created by\n> > the publisher.\n> > 4) On the publisher:\n> > 4.a) Session 1: BEGIN; INSERT INTO T1;\n> > 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n> > 4.c) Session 1: COMMIT;\n> >\n> > Since we are throwing out a \"publication does not exist\" error, there\n> > is no inconsistency issue here.\n> >\n> > However, an issue persists with DROP ALL TABLES publication, where\n> > data continues to replicate even after the publication is dropped.\n> > This happens because the open transaction consumes the invalidation,\n> > causing the publications to be revalidated using old snapshot. 
As a\n> > result, both the open transactions and the subsequent transactions are\n> > getting replicated.\n> >\n> > We can reproduce this issue by following these steps in a logical\n> > replication setup with an \"ALL TABLES\" publication:\n> > On the publisher:\n> > Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> > In another session on the publisher:\n> > Session 2: DROP PUBLICATION\n> > Back in Session 1 on the publisher:\n> > COMMIT;\n> > Finally, in Session 1 on the publisher:\n> > INSERT INTO T1 VALUES (val2);\n> >\n> > Even after dropping the publication, both val1 and val2 are still\n> > being replicated to the subscriber. This means that both the\n> > in-progress concurrent transaction and the subsequent transactions are\n> > being replicated.\n> >\n>\n> Hi,\n>\n> I tried the 'DROP PUBLICATION' command even for a publication with a\n> single table. And there also the data continues to get replicated.\n>\n> To test this, I did a similar experiment as the above but instead of\n> creating publication on all tables, I did it for one specific table.\n>\n> Here are the steps :\n> 1. Create table test_1 and test_2 on both the publisher and subscriber\n> instances.\n> 2. Create publication p for table test_1 on the publisher.\n> 3. Create a subscription s which subscribes to p.\n> 4. On the publisher\n> 4a) Session 1 : BEGIN; INSERT INTO test_1 VALUES(val1);\n> 4b) Session 2 : DROP PUBLICATION p;\n> 4c) Session 1 : Commit;\n> 5. On the publisher : INSERT INTO test_1 VALUES(val2);\n>\n> After these, when I check the subscriber, both val1 and val2 have been\n> replicated. I tried a few more inserts on publisher after this and\n> they all got replicated to the subscriber. Only after explicitly\n> creating a new publication p2 for test_1 on the publisher, the\n> replication stopped. 
Most likely because the create publication\n> command invalidated the cache.\n>\n> My guess is that this issue probably comes from the fact that\n> RemoveObjects in dropcmds.c doesn't do any special handling or\n> invalidation for the object drop command.\n>\n\nI checked further and I see that RemovePublicationById does do cache\ninvalidation but it is only done in the scenario when the publication\nis on all tables. This is done without taking any locks. But for the\nother cases (eg. publication on one table), I don't see any cache\ninvalidation in RemovePublicationById. That would explain why the\nreplication kept happening for multiple transactions after the drop\npublication command in my example..\n\nThanks & Regards\nNitin Motiani\nGoogle\n\n\n",
"msg_date": "Thu, 18 Jul 2024 15:25:05 +0530",
"msg_from": "Nitin Motiani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 3:25 PM Nitin Motiani <[email protected]> wrote:\n>\n> On Thu, Jul 18, 2024 at 3:05 PM Nitin Motiani <[email protected]> wrote:\n> >\n> > On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n> > >\n> > > I tested these scenarios, and as you expected, it throws an error for\n> > > the create publication case:\n> > > 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> > > data from WAL stream: ERROR: publication \"pub1\" does not exist\n> > > CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> > > callback, associated LSN 0/1510CD8\n> > > 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> > > \"logical replication apply worker\" (PID 481526) exited with exit code\n> > > 1\n> > >\n> > > The steps for this process are as follows:\n> > > 1) Create tables in both the publisher and subscriber.\n> > > 2) On the publisher: Create a replication slot.\n> > > 3) On the subscriber: Create a subscription using the slot created by\n> > > the publisher.\n> > > 4) On the publisher:\n> > > 4.a) Session 1: BEGIN; INSERT INTO T1;\n> > > 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n> > > 4.c) Session 1: COMMIT;\n> > >\n> > > Since we are throwing out a \"publication does not exist\" error, there\n> > > is no inconsistency issue here.\n> > >\n> > > However, an issue persists with DROP ALL TABLES publication, where\n> > > data continues to replicate even after the publication is dropped.\n> > > This happens because the open transaction consumes the invalidation,\n> > > causing the publications to be revalidated using old snapshot. 
As a\n> > > result, both the open transactions and the subsequent transactions are\n> > > getting replicated.\n> > >\n> > > We can reproduce this issue by following these steps in a logical\n> > > replication setup with an \"ALL TABLES\" publication:\n> > > On the publisher:\n> > > Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> > > In another session on the publisher:\n> > > Session 2: DROP PUBLICATION\n> > > Back in Session 1 on the publisher:\n> > > COMMIT;\n> > > Finally, in Session 1 on the publisher:\n> > > INSERT INTO T1 VALUES (val2);\n> > >\n> > > Even after dropping the publication, both val1 and val2 are still\n> > > being replicated to the subscriber. This means that both the\n> > > in-progress concurrent transaction and the subsequent transactions are\n> > > being replicated.\n> > >\n> >\n> > Hi,\n> >\n> > I tried the 'DROP PUBLICATION' command even for a publication with a\n> > single table. And there also the data continues to get replicated.\n> >\n> > To test this, I did a similar experiment as the above but instead of\n> > creating publication on all tables, I did it for one specific table.\n> >\n> > Here are the steps :\n> > 1. Create table test_1 and test_2 on both the publisher and subscriber\n> > instances.\n> > 2. Create publication p for table test_1 on the publisher.\n> > 3. Create a subscription s which subscribes to p.\n> > 4. On the publisher\n> > 4a) Session 1 : BEGIN; INSERT INTO test_1 VALUES(val1);\n> > 4b) Session 2 : DROP PUBLICATION p;\n> > 4c) Session 1 : Commit;\n> > 5. On the publisher : INSERT INTO test_1 VALUES(val2);\n> >\n> > After these, when I check the subscriber, both val1 and val2 have been\n> > replicated. I tried a few more inserts on publisher after this and\n> > they all got replicated to the subscriber. Only after explicitly\n> > creating a new publication p2 for test_1 on the publisher, the\n> > replication stopped. 
Most likely because the create publication\n> > command invalidated the cache.\n> >\n> > My guess is that this issue probably comes from the fact that\n> > RemoveObjects in dropcmds.c doesn't do any special handling or\n> > invalidation for the object drop command.\n> >\n>\n> I checked further and I see that RemovePublicationById does do cache\n> invalidation but it is only done in the scenario when the publication\n> is on all tables. This is done without taking any locks. But for the\n> other cases (eg. publication on one table), I don't see any cache\n> invalidation in RemovePublicationById. That would explain why the\n> replication kept happening for multiple transactions after the drop\n> publication command in my example..\n>\n\nSorry, I missed that for the individual table scenario, the\ninvalidation would happen in RemovePublicationRelById. That is\ninvalidating the cache for all relids. But this is also not taking any\nlocks. So that would explain why dropping the publication on a single\ntable doesn't invalidate the cache in an ongoing transaction. I'm not\nsure why the replication kept happening even in subsequent\ntransactions.\n\nEither way, I think the SRE lock should also be taken for all relids\nin that function before the invalidations.\n\nThanks & Regards\nNitin Motiani\nGoogle\n\n\n",
"msg_date": "Thu, 18 Jul 2024 15:30:41 +0530",
"msg_from": "Nitin Motiani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 3:30 PM Nitin Motiani <[email protected]> wrote:\n>\n> On Thu, Jul 18, 2024 at 3:25 PM Nitin Motiani <[email protected]> wrote:\n> >\n> > On Thu, Jul 18, 2024 at 3:05 PM Nitin Motiani <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > I tested these scenarios, and as you expected, it throws an error for\n> > > > the create publication case:\n> > > > 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> > > > data from WAL stream: ERROR: publication \"pub1\" does not exist\n> > > > CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> > > > callback, associated LSN 0/1510CD8\n> > > > 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> > > > \"logical replication apply worker\" (PID 481526) exited with exit code\n> > > > 1\n> > > >\n> > > > The steps for this process are as follows:\n> > > > 1) Create tables in both the publisher and subscriber.\n> > > > 2) On the publisher: Create a replication slot.\n> > > > 3) On the subscriber: Create a subscription using the slot created by\n> > > > the publisher.\n> > > > 4) On the publisher:\n> > > > 4.a) Session 1: BEGIN; INSERT INTO T1;\n> > > > 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n> > > > 4.c) Session 1: COMMIT;\n> > > >\n> > > > Since we are throwing out a \"publication does not exist\" error, there\n> > > > is no inconsistency issue here.\n> > > >\n> > > > However, an issue persists with DROP ALL TABLES publication, where\n> > > > data continues to replicate even after the publication is dropped.\n> > > > This happens because the open transaction consumes the invalidation,\n> > > > causing the publications to be revalidated using old snapshot. 
As a\n> > > > result, both the open transactions and the subsequent transactions are\n> > > > getting replicated.\n> > > >\n> > > > We can reproduce this issue by following these steps in a logical\n> > > > replication setup with an \"ALL TABLES\" publication:\n> > > > On the publisher:\n> > > > Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> > > > In another session on the publisher:\n> > > > Session 2: DROP PUBLICATION\n> > > > Back in Session 1 on the publisher:\n> > > > COMMIT;\n> > > > Finally, in Session 1 on the publisher:\n> > > > INSERT INTO T1 VALUES (val2);\n> > > >\n> > > > Even after dropping the publication, both val1 and val2 are still\n> > > > being replicated to the subscriber. This means that both the\n> > > > in-progress concurrent transaction and the subsequent transactions are\n> > > > being replicated.\n> > > >\n> > >\n> > > Hi,\n> > >\n> > > I tried the 'DROP PUBLICATION' command even for a publication with a\n> > > single table. And there also the data continues to get replicated.\n> > >\n> > > To test this, I did a similar experiment as the above but instead of\n> > > creating publication on all tables, I did it for one specific table.\n> > >\n> > > Here are the steps :\n> > > 1. Create table test_1 and test_2 on both the publisher and subscriber\n> > > instances.\n> > > 2. Create publication p for table test_1 on the publisher.\n> > > 3. Create a subscription s which subscribes to p.\n> > > 4. On the publisher\n> > > 4a) Session 1 : BEGIN; INSERT INTO test_1 VALUES(val1);\n> > > 4b) Session 2 : DROP PUBLICATION p;\n> > > 4c) Session 1 : Commit;\n> > > 5. On the publisher : INSERT INTO test_1 VALUES(val2);\n> > >\n> > > After these, when I check the subscriber, both val1 and val2 have been\n> > > replicated. I tried a few more inserts on publisher after this and\n> > > they all got replicated to the subscriber. Only after explicitly\n> > > creating a new publication p2 for test_1 on the publisher, the\n> > > replication stopped. 
Most likely because the create publication\n> > > command invalidated the cache.\n> > >\n> > > My guess is that this issue probably comes from the fact that\n> > > RemoveObjects in dropcmds.c doesn't do any special handling or\n> > > invalidation for the object drop command.\n> > >\n> >\n> > I checked further and I see that RemovePublicationById does do cache\n> > invalidation but it is only done in the scenario when the publication\n> > is on all tables. This is done without taking any locks. But for the\n> > other cases (eg. publication on one table), I don't see any cache\n> > invalidation in RemovePublicationById. That would explain why the\n> > replication kept happening for multiple transactions after the drop\n> > publication command in my example..\n> >\n>\n> Sorry, I missed that for the individual table scenario, the\n> invalidation would happen in RemovePublicationRelById. That is\n> invalidating the cache for all relids. But this is also not taking any\n> locks. So that would explain why dropping the publication on a single\n> table doesn't invalidate the cache in an ongoing transaction. I'm not\n> sure why the replication kept happening even in subsequent\n> transactions.\n>\n> Either way I think the SRE lock should be taken for all relids in that\n> function also before the invalidations.\n>\n\nMy apologies. I wasn't testing with the latest patch. I see this has\nalready been done in the v6 patch file.\n\nThanks & Regards\nNitin Motiani\nGoogle\n\n\n",
"msg_date": "Thu, 18 Jul 2024 15:47:26 +0530",
"msg_from": "Nitin Motiani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n>\n> On Wed, 17 Jul 2024 at 11:54, Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jul 16, 2024 at 6:54 PM vignesh C <[email protected]> wrote:\n> >\n> > BTW, I noticed that we don't take any table-level locks for Create\n> > Publication .. For ALL TABLES (and Drop Publication). Can that create\n> > a similar problem? I haven't tested so not sure but even if there is a\n> > problem for the Create case, it should lead to some ERROR like missing\n> > publication.\n>\n> I tested these scenarios, and as you expected, it throws an error for\n> the create publication case:\n> 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> data from WAL stream: ERROR: publication \"pub1\" does not exist\n> CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> callback, associated LSN 0/1510CD8\n> 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> \"logical replication apply worker\" (PID 481526) exited with exit code\n> 1\n>\n> The steps for this process are as follows:\n> 1) Create tables in both the publisher and subscriber.\n> 2) On the publisher: Create a replication slot.\n> 3) On the subscriber: Create a subscription using the slot created by\n> the publisher.\n> 4) On the publisher:\n> 4.a) Session 1: BEGIN; INSERT INTO T1;\n> 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n> 4.c) Session 1: COMMIT;\n>\n> Since we are throwing out a \"publication does not exist\" error, there\n> is no inconsistency issue here.\n>\n> However, an issue persists with DROP ALL TABLES publication, where\n> data continues to replicate even after the publication is dropped.\n> This happens because the open transaction consumes the invalidation,\n> causing the publications to be revalidated using old snapshot. 
As a\n> result, both the open transactions and the subsequent transactions are\n> getting replicated.\n>\n> We can reproduce this issue by following these steps in a logical\n> replication setup with an \"ALL TABLES\" publication:\n> On the publisher:\n> Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> In another session on the publisher:\n> Session 2: DROP PUBLICATION\n> Back in Session 1 on the publisher:\n> COMMIT;\n> Finally, in Session 1 on the publisher:\n> INSERT INTO T1 VALUES (val2);\n>\n> Even after dropping the publication, both val1 and val2 are still\n> being replicated to the subscriber. This means that both the\n> in-progress concurrent transaction and the subsequent transactions are\n> being replicated.\n>\n> I don't think locking all tables is a viable solution in this case, as\n> it would require asking the user to refrain from performing any\n> operations on any of the tables in the database while creating a\n> publication.\n>\n\nIndeed, locking all tables in the database to prevent concurrent DMLs\nfor this scenario also looks odd to me. The other alternative\npreviously suggested by Andres is to distribute catalog modifying\ntransactions to all concurrent in-progress transactions [1] but as\nmentioned this could add an overhead. One possibility to reduce\noverhead is that we selectively distribute invalidations for\ncatalogs-related publications but I haven't analyzed the feasibility.\n\nWe need more opinions to decide here, so let me summarize the problem\nand solutions discussed. As explained with an example in an email [1],\nthe problem related to logical decoding is that it doesn't process\ninvalidations corresponding to DDLs for the already in-progress\ntransactions. We discussed preventing DMLs in the first place when\nconcurrent DDLs like ALTER PUBLICATION ... ADD TABLE ... are in\nprogress. The solution discussed was to acquire\nShareUpdateExclusiveLock for all the tables being added via such\ncommands. 
Further analysis revealed that the same handling is required\nfor ALTER PUBLICATION ... ADD TABLES IN SCHEMA which means locking all\nthe tables in the specified schemas. Then DROP PUBLICATION also seems\nto have similar symptoms which means in the worst case (where\npublication is for ALL TABLES) we have to lock all the tables in the\ndatabase. We are not sure if that is good so the other alternative we\ncan pursue is to distribute invalidations in logical decoding\ninfrastructure [1] which has its downsides.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/20231118025445.crhaeeuvoe2g5dv6%40awork3.anarazel.de\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 25 Jul 2024 10:23:31 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 9:53 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 17 Jul 2024 at 11:54, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Tue, Jul 16, 2024 at 6:54 PM vignesh C <[email protected]> wrote:\n> > >\n> > > BTW, I noticed that we don't take any table-level locks for Create\n> > > Publication .. For ALL TABLES (and Drop Publication). Can that create\n> > > a similar problem? I haven't tested so not sure but even if there is a\n> > > problem for the Create case, it should lead to some ERROR like missing\n> > > publication.\n> >\n> > I tested these scenarios, and as you expected, it throws an error for\n> > the create publication case:\n> > 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> > data from WAL stream: ERROR: publication \"pub1\" does not exist\n> > CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> > callback, associated LSN 0/1510CD8\n> > 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> > \"logical replication apply worker\" (PID 481526) exited with exit code\n> > 1\n> >\n> > The steps for this process are as follows:\n> > 1) Create tables in both the publisher and subscriber.\n> > 2) On the publisher: Create a replication slot.\n> > 3) On the subscriber: Create a subscription using the slot created by\n> > the publisher.\n> > 4) On the publisher:\n> > 4.a) Session 1: BEGIN; INSERT INTO T1;\n> > 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n> > 4.c) Session 1: COMMIT;\n> >\n> > Since we are throwing out a \"publication does not exist\" error, there\n> > is no inconsistency issue here.\n> >\n> > However, an issue persists with DROP ALL TABLES publication, where\n> > data continues to replicate even after the publication is dropped.\n> > This happens because the open transaction consumes the invalidation,\n> > causing the publications to be revalidated using old 
snapshot. As a\n> > result, both the open transactions and the subsequent transactions are\n> > getting replicated.\n> >\n> > We can reproduce this issue by following these steps in a logical\n> > replication setup with an \"ALL TABLES\" publication:\n> > On the publisher:\n> > Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> > In another session on the publisher:\n> > Session 2: DROP PUBLICATION\n> > Back in Session 1 on the publisher:\n> > COMMIT;\n> > Finally, in Session 1 on the publisher:\n> > INSERT INTO T1 VALUES (val2);\n> >\n> > Even after dropping the publication, both val1 and val2 are still\n> > being replicated to the subscriber. This means that both the\n> > in-progress concurrent transaction and the subsequent transactions are\n> > being replicated.\n> >\n> > I don't think locking all tables is a viable solution in this case, as\n> > it would require asking the user to refrain from performing any\n> > operations on any of the tables in the database while creating a\n> > publication.\n> >\n>\n> Indeed, locking all tables in the database to prevent concurrent DMLs\n> for this scenario also looks odd to me. The other alternative\n> previously suggested by Andres is to distribute catalog modifying\n> transactions to all concurrent in-progress transactions [1] but as\n> mentioned this could add an overhead. One possibility to reduce\n> overhead is that we selectively distribute invalidations for\n> catalogs-related publications but I haven't analyzed the feasibility.\n>\n> We need more opinions to decide here, so let me summarize the problem\n> and solutions discussed. As explained with an example in an email [1],\n> the problem related to logical decoding is that it doesn't process\n> invalidations corresponding to DDLs for the already in-progress\n> transactions. We discussed preventing DMLs in the first place when\n> concurrent DDLs like ALTER PUBLICATION ... ADD TABLE ... are in\n> progress. 
The solution discussed was to acquire\n> ShareUpdateExclusiveLock for all the tables being added via such\n> commands. Further analysis revealed that the same handling is required\n> for ALTER PUBLICATION ... ADD TABLES IN SCHEMA which means locking all\n> the tables in the specified schemas. Then DROP PUBLICATION also seems\n> to have similar symptoms which means in the worst case (where\n> publication is for ALL TABLES) we have to lock all the tables in the\n> database. We are not sure if that is good so the other alternative we\n> can pursue is to distribute invalidations in logical decoding\n> infrastructure [1] which has its downsides.\n>\n> Thoughts?\n\nThank you for summarizing the problem and solutions!\n\nI think it's worth trying the idea of distributing invalidation\nmessages, and we will see if there could be overheads or any further\nobstacles. IIUC this approach would resolve another issue we discussed\nbefore too[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoAenVqiMjpN-PvGHL1N9DWnHSq673bfgr6phmBUzx=kLQ@mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 30 Jul 2024 14:56:25 -0700",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 3:27 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jul 24, 2024 at 9:53 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Wed, 17 Jul 2024 at 11:54, Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Tue, Jul 16, 2024 at 6:54 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > BTW, I noticed that we don't take any table-level locks for Create\n> > > > Publication .. For ALL TABLES (and Drop Publication). Can that create\n> > > > a similar problem? I haven't tested so not sure but even if there is a\n> > > > problem for the Create case, it should lead to some ERROR like missing\n> > > > publication.\n> > >\n> > > I tested these scenarios, and as you expected, it throws an error for\n> > > the create publication case:\n> > > 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> > > data from WAL stream: ERROR: publication \"pub1\" does not exist\n> > > CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> > > callback, associated LSN 0/1510CD8\n> > > 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> > > \"logical replication apply worker\" (PID 481526) exited with exit code\n> > > 1\n> > >\n> > > The steps for this process are as follows:\n> > > 1) Create tables in both the publisher and subscriber.\n> > > 2) On the publisher: Create a replication slot.\n> > > 3) On the subscriber: Create a subscription using the slot created by\n> > > the publisher.\n> > > 4) On the publisher:\n> > > 4.a) Session 1: BEGIN; INSERT INTO T1;\n> > > 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n> > > 4.c) Session 1: COMMIT;\n> > >\n> > > Since we are throwing out a \"publication does not exist\" error, there\n> > > is no inconsistency issue here.\n> > >\n> > > However, an issue persists with DROP ALL TABLES publication, where\n> > > data continues to replicate even after the 
publication is dropped.\n> > > This happens because the open transaction consumes the invalidation,\n> > > causing the publications to be revalidated using old snapshot. As a\n> > > result, both the open transactions and the subsequent transactions are\n> > > getting replicated.\n> > >\n> > > We can reproduce this issue by following these steps in a logical\n> > > replication setup with an \"ALL TABLES\" publication:\n> > > On the publisher:\n> > > Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> > > In another session on the publisher:\n> > > Session 2: DROP PUBLICATION\n> > > Back in Session 1 on the publisher:\n> > > COMMIT;\n> > > Finally, in Session 1 on the publisher:\n> > > INSERT INTO T1 VALUES (val2);\n> > >\n> > > Even after dropping the publication, both val1 and val2 are still\n> > > being replicated to the subscriber. This means that both the\n> > > in-progress concurrent transaction and the subsequent transactions are\n> > > being replicated.\n> > >\n> > > I don't think locking all tables is a viable solution in this case, as\n> > > it would require asking the user to refrain from performing any\n> > > operations on any of the tables in the database while creating a\n> > > publication.\n> > >\n> >\n> > Indeed, locking all tables in the database to prevent concurrent DMLs\n> > for this scenario also looks odd to me. The other alternative\n> > previously suggested by Andres is to distribute catalog modifying\n> > transactions to all concurrent in-progress transactions [1] but as\n> > mentioned this could add an overhead. One possibility to reduce\n> > overhead is that we selectively distribute invalidations for\n> > catalogs-related publications but I haven't analyzed the feasibility.\n> >\n> > We need more opinions to decide here, so let me summarize the problem\n> > and solutions discussed. 
As explained with an example in an email [1],\n> > the problem related to logical decoding is that it doesn't process\n> > invalidations corresponding to DDLs for the already in-progress\n> > transactions. We discussed preventing DMLs in the first place when\n> > concurrent DDLs like ALTER PUBLICATION ... ADD TABLE ... are in\n> > progress. The solution discussed was to acquire\n> > ShareUpdateExclusiveLock for all the tables being added via such\n> > commands. Further analysis revealed that the same handling is required\n> > for ALTER PUBLICATION ... ADD TABLES IN SCHEMA which means locking all\n> > the tables in the specified schemas. Then DROP PUBLICATION also seems\n> > to have similar symptoms which means in the worst case (where\n> > publication is for ALL TABLES) we have to lock all the tables in the\n> > database. We are not sure if that is good so the other alternative we\n> > can pursue is to distribute invalidations in logical decoding\n> > infrastructure [1] which has its downsides.\n> >\n> > Thoughts?\n>\n> Thank you for summarizing the problem and solutions!\n>\n> I think it's worth trying the idea of distributing invalidation\n> messages, and we will see if there could be overheads or any further\n> obstacles. IIUC this approach would resolve another issue we discussed\n> before too[1].\n>\n\nYes, and we also discussed having a similar solution at the time when\nthat problem was reported. So, it is clear that even though locking\ntables can work for commands like ALTER PUBLICATION ... ADD TABLE\n..., we need a solution for distributing invalidations to the\nin-progress transactions during logical decoding for other cases as\nreported by you previously.\n\nThanks for looking into this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 31 Jul 2024 09:36:06 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, 31 Jul 2024 at 09:36, Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 31, 2024 at 3:27 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Jul 24, 2024 at 9:53 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > On Wed, 17 Jul 2024 at 11:54, Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > On Tue, Jul 16, 2024 at 6:54 PM vignesh C <[email protected]> wrote:\n> > > > >\n> > > > > BTW, I noticed that we don't take any table-level locks for Create\n> > > > > Publication .. For ALL TABLES (and Drop Publication). Can that create\n> > > > > a similar problem? I haven't tested so not sure but even if there is a\n> > > > > problem for the Create case, it should lead to some ERROR like missing\n> > > > > publication.\n> > > >\n> > > > I tested these scenarios, and as you expected, it throws an error for\n> > > > the create publication case:\n> > > > 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> > > > data from WAL stream: ERROR: publication \"pub1\" does not exist\n> > > > CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> > > > callback, associated LSN 0/1510CD8\n> > > > 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> > > > \"logical replication apply worker\" (PID 481526) exited with exit code\n> > > > 1\n> > > >\n> > > > The steps for this process are as follows:\n> > > > 1) Create tables in both the publisher and subscriber.\n> > > > 2) On the publisher: Create a replication slot.\n> > > > 3) On the subscriber: Create a subscription using the slot created by\n> > > > the publisher.\n> > > > 4) On the publisher:\n> > > > 4.a) Session 1: BEGIN; INSERT INTO T1;\n> > > > 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n> > > > 4.c) Session 1: COMMIT;\n> > > >\n> > > > Since we are throwing out a \"publication does not exist\" error, there\n> > > > is no 
inconsistency issue here.\n> > > >\n> > > > However, an issue persists with DROP ALL TABLES publication, where\n> > > > data continues to replicate even after the publication is dropped.\n> > > > This happens because the open transaction consumes the invalidation,\n> > > > causing the publications to be revalidated using old snapshot. As a\n> > > > result, both the open transactions and the subsequent transactions are\n> > > > getting replicated.\n> > > >\n> > > > We can reproduce this issue by following these steps in a logical\n> > > > replication setup with an \"ALL TABLES\" publication:\n> > > > On the publisher:\n> > > > Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> > > > In another session on the publisher:\n> > > > Session 2: DROP PUBLICATION\n> > > > Back in Session 1 on the publisher:\n> > > > COMMIT;\n> > > > Finally, in Session 1 on the publisher:\n> > > > INSERT INTO T1 VALUES (val2);\n> > > >\n> > > > Even after dropping the publication, both val1 and val2 are still\n> > > > being replicated to the subscriber. This means that both the\n> > > > in-progress concurrent transaction and the subsequent transactions are\n> > > > being replicated.\n> > > >\n> > > > I don't think locking all tables is a viable solution in this case, as\n> > > > it would require asking the user to refrain from performing any\n> > > > operations on any of the tables in the database while creating a\n> > > > publication.\n> > > >\n> > >\n> > > Indeed, locking all tables in the database to prevent concurrent DMLs\n> > > for this scenario also looks odd to me. The other alternative\n> > > previously suggested by Andres is to distribute catalog modifying\n> > > transactions to all concurrent in-progress transactions [1] but as\n> > > mentioned this could add an overhead. 
One possibility to reduce\n> > > overhead is that we selectively distribute invalidations for\n> > > catalogs-related publications but I haven't analyzed the feasibility.\n> > >\n> > > We need more opinions to decide here, so let me summarize the problem\n> > > and solutions discussed. As explained with an example in an email [1],\n> > > the problem related to logical decoding is that it doesn't process\n> > > invalidations corresponding to DDLs for the already in-progress\n> > > transactions. We discussed preventing DMLs in the first place when\n> > > concurrent DDLs like ALTER PUBLICATION ... ADD TABLE ... are in\n> > > progress. The solution discussed was to acquire\n> > > ShareUpdateExclusiveLock for all the tables being added via such\n> > > commands. Further analysis revealed that the same handling is required\n> > > for ALTER PUBLICATION ... ADD TABLES IN SCHEMA which means locking all\n> > > the tables in the specified schemas. Then DROP PUBLICATION also seems\n> > > to have similar symptoms which means in the worst case (where\n> > > publication is for ALL TABLES) we have to lock all the tables in the\n> > > database. We are not sure if that is good so the other alternative we\n> > > can pursue is to distribute invalidations in logical decoding\n> > > infrastructure [1] which has its downsides.\n> > >\n> > > Thoughts?\n> >\n> > Thank you for summarizing the problem and solutions!\n> >\n> > I think it's worth trying the idea of distributing invalidation\n> > messages, and we will see if there could be overheads or any further\n> > obstacles. IIUC this approach would resolve another issue we discussed\n> > before too[1].\n> >\n>\n> Yes, and we also discussed having a similar solution at the time when\n> that problem was reported. So, it is clear that even though locking\n> tables can work for commands alter ALTER PUBLICATION ... 
ADD TABLE\n> ..., we need a solution for distributing invalidations to the\n> in-progress transactions during logical decoding for other cases as\n> reported by you previously.\n>\n> Thanks for looking into this.\n>\n\nThanks, I am working on implementing a solution for distributing\ninvalidations. Will share a patch for the same.\n\nThanks and Regards,\nShlok Kyal\n\n\n",
"msg_date": "Wed, 31 Jul 2024 11:17:00 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Wed, 31 Jul 2024 at 11:17, Shlok Kyal <[email protected]> wrote:\n>\n> On Wed, 31 Jul 2024 at 09:36, Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jul 31, 2024 at 3:27 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 24, 2024 at 9:53 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n> > > > >\n> > > > > On Wed, 17 Jul 2024 at 11:54, Amit Kapila <[email protected]> wrote:\n> > > > > >\n> > > > > > On Tue, Jul 16, 2024 at 6:54 PM vignesh C <[email protected]> wrote:\n> > > > > >\n> > > > > > BTW, I noticed that we don't take any table-level locks for Create\n> > > > > > Publication .. For ALL TABLES (and Drop Publication). Can that create\n> > > > > > a similar problem? I haven't tested so not sure but even if there is a\n> > > > > > problem for the Create case, it should lead to some ERROR like missing\n> > > > > > publication.\n> > > > >\n> > > > > I tested these scenarios, and as you expected, it throws an error for\n> > > > > the create publication case:\n> > > > > 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> > > > > data from WAL stream: ERROR: publication \"pub1\" does not exist\n> > > > > CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> > > > > callback, associated LSN 0/1510CD8\n> > > > > 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> > > > > \"logical replication apply worker\" (PID 481526) exited with exit code\n> > > > > 1\n> > > > >\n> > > > > The steps for this process are as follows:\n> > > > > 1) Create tables in both the publisher and subscriber.\n> > > > > 2) On the publisher: Create a replication slot.\n> > > > > 3) On the subscriber: Create a subscription using the slot created by\n> > > > > the publisher.\n> > > > > 4) On the publisher:\n> > > > > 4.a) Session 1: BEGIN; INSERT INTO T1;\n> > > > > 4.b) Session 2: CREATE PUBLICATION FOR ALL 
TABLES\n> > > > > 4.c) Session 1: COMMIT;\n> > > > >\n> > > > > Since we are throwing out a \"publication does not exist\" error, there\n> > > > > is no inconsistency issue here.\n> > > > >\n> > > > > However, an issue persists with DROP ALL TABLES publication, where\n> > > > > data continues to replicate even after the publication is dropped.\n> > > > > This happens because the open transaction consumes the invalidation,\n> > > > > causing the publications to be revalidated using old snapshot. As a\n> > > > > result, both the open transactions and the subsequent transactions are\n> > > > > getting replicated.\n> > > > >\n> > > > > We can reproduce this issue by following these steps in a logical\n> > > > > replication setup with an \"ALL TABLES\" publication:\n> > > > > On the publisher:\n> > > > > Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> > > > > In another session on the publisher:\n> > > > > Session 2: DROP PUBLICATION\n> > > > > Back in Session 1 on the publisher:\n> > > > > COMMIT;\n> > > > > Finally, in Session 1 on the publisher:\n> > > > > INSERT INTO T1 VALUES (val2);\n> > > > >\n> > > > > Even after dropping the publication, both val1 and val2 are still\n> > > > > being replicated to the subscriber. This means that both the\n> > > > > in-progress concurrent transaction and the subsequent transactions are\n> > > > > being replicated.\n> > > > >\n> > > > > I don't think locking all tables is a viable solution in this case, as\n> > > > > it would require asking the user to refrain from performing any\n> > > > > operations on any of the tables in the database while creating a\n> > > > > publication.\n> > > > >\n> > > >\n> > > > Indeed, locking all tables in the database to prevent concurrent DMLs\n> > > > for this scenario also looks odd to me. 
The other alternative\n> > > > previously suggested by Andres is to distribute catalog modifying\n> > > > transactions to all concurrent in-progress transactions [1] but as\n> > > > mentioned this could add an overhead. One possibility to reduce\n> > > > overhead is that we selectively distribute invalidations for\n> > > > catalogs-related publications but I haven't analyzed the feasibility.\n> > > >\n> > > > We need more opinions to decide here, so let me summarize the problem\n> > > > and solutions discussed. As explained with an example in an email [1],\n> > > > the problem related to logical decoding is that it doesn't process\n> > > > invalidations corresponding to DDLs for the already in-progress\n> > > > transactions. We discussed preventing DMLs in the first place when\n> > > > concurrent DDLs like ALTER PUBLICATION ... ADD TABLE ... are in\n> > > > progress. The solution discussed was to acquire\n> > > > ShareUpdateExclusiveLock for all the tables being added via such\n> > > > commands. Further analysis revealed that the same handling is required\n> > > > for ALTER PUBLICATION ... ADD TABLES IN SCHEMA which means locking all\n> > > > the tables in the specified schemas. Then DROP PUBLICATION also seems\n> > > > to have similar symptoms which means in the worst case (where\n> > > > publication is for ALL TABLES) we have to lock all the tables in the\n> > > > database. We are not sure if that is good so the other alternative we\n> > > > can pursue is to distribute invalidations in logical decoding\n> > > > infrastructure [1] which has its downsides.\n> > > >\n> > > > Thoughts?\n> > >\n> > > Thank you for summarizing the problem and solutions!\n> > >\n> > > I think it's worth trying the idea of distributing invalidation\n> > > messages, and we will see if there could be overheads or any further\n> > > obstacles. 
IIUC this approach would resolve another issue we discussed\n> > > before too[1].\n> > >\n> >\n> > Yes, and we also discussed having a similar solution at the time when\n> > that problem was reported. So, it is clear that even though locking\n> > tables can work for commands alter ALTER PUBLICATION ... ADD TABLE\n> > ..., we need a solution for distributing invalidations to the\n> > in-progress transactions during logical decoding for other cases as\n> > reported by you previously.\n> >\n> > Thanks for looking into this.\n> >\n>\n> Thanks, I am working on to implement a solution for distributing\n> invalidations. Will share a patch for the same.\n\nCreated a patch for distributing invalidations.\nHere we collect the invalidation messages for the current transaction\nand distribute them to all the in-progress transactions whenever we are\ndistributing the snapshots. Thoughts?\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Thu, 8 Aug 2024 16:24:22 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
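The idea in the message above — collect the committed catalog-modifying transaction's invalidation messages and hand them to every other in-progress transaction at the point where snapshots are distributed — can be sketched with a toy model. This is purely illustrative: the class and message names are hypothetical, and this is not PostgreSQL's ReorderBuffer/snapbuild code.

```python
# Toy model of distributing catalog invalidations to concurrent
# in-progress transactions during logical decoding.
# Hypothetical names; not PostgreSQL source code.

class Txn:
    def __init__(self, xid):
        self.xid = xid
        self.invalidations = []  # invalidation messages queued for this txn

class Decoder:
    def __init__(self):
        self.in_progress = {}    # xid -> Txn

    def start(self, xid):
        self.in_progress[xid] = Txn(xid)

    def commit_ddl(self, xid, inval_msgs):
        # A catalog-modifying transaction commits: besides executing its
        # own invalidations, distribute them to every other in-progress
        # transaction (mirroring what the patch does when distributing
        # snapshots), so their caches get rebuilt mid-decode as well.
        self.in_progress.pop(xid)
        for other in self.in_progress.values():
            other.invalidations.extend(inval_msgs)

decoder = Decoder()
decoder.start(100)   # long-running DML transaction (Session 1)
decoder.start(200)   # ALTER PUBLICATION ... (Session 2)
decoder.commit_ddl(200, ["relcache:tab_conc"])

# The still-open transaction now carries the invalidation, so decoding
# its remaining changes will see the new publication catalog state.
print(decoder.in_progress[100].invalidations)  # ['relcache:tab_conc']
```

Without the distribution step, transaction 100 would keep decoding against its stale publication cache, which is exactly the symptom discussed in this thread.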
{
"msg_contents": "On Thu, 8 Aug 2024 at 16:24, Shlok Kyal <[email protected]> wrote:\n>\n> On Wed, 31 Jul 2024 at 11:17, Shlok Kyal <[email protected]> wrote:\n> >\n> > On Wed, 31 Jul 2024 at 09:36, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 31, 2024 at 3:27 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Wed, Jul 24, 2024 at 9:53 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n> > > > > >\n> > > > > > On Wed, 17 Jul 2024 at 11:54, Amit Kapila <[email protected]> wrote:\n> > > > > > >\n> > > > > > > On Tue, Jul 16, 2024 at 6:54 PM vignesh C <[email protected]> wrote:\n> > > > > > >\n> > > > > > > BTW, I noticed that we don't take any table-level locks for Create\n> > > > > > > Publication .. For ALL TABLES (and Drop Publication). Can that create\n> > > > > > > a similar problem? I haven't tested so not sure but even if there is a\n> > > > > > > problem for the Create case, it should lead to some ERROR like missing\n> > > > > > > publication.\n> > > > > >\n> > > > > > I tested these scenarios, and as you expected, it throws an error for\n> > > > > > the create publication case:\n> > > > > > 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> > > > > > data from WAL stream: ERROR: publication \"pub1\" does not exist\n> > > > > > CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> > > > > > callback, associated LSN 0/1510CD8\n> > > > > > 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> > > > > > \"logical replication apply worker\" (PID 481526) exited with exit code\n> > > > > > 1\n> > > > > >\n> > > > > > The steps for this process are as follows:\n> > > > > > 1) Create tables in both the publisher and subscriber.\n> > > > > > 2) On the publisher: Create a replication slot.\n> > > > > > 3) On the subscriber: Create a subscription using the slot created by\n> > > > > > the publisher.\n> 
> > > > > 4) On the publisher:\n> > > > > > 4.a) Session 1: BEGIN; INSERT INTO T1;\n> > > > > > 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n> > > > > > 4.c) Session 1: COMMIT;\n> > > > > >\n> > > > > > Since we are throwing out a \"publication does not exist\" error, there\n> > > > > > is no inconsistency issue here.\n> > > > > >\n> > > > > > However, an issue persists with DROP ALL TABLES publication, where\n> > > > > > data continues to replicate even after the publication is dropped.\n> > > > > > This happens because the open transaction consumes the invalidation,\n> > > > > > causing the publications to be revalidated using old snapshot. As a\n> > > > > > result, both the open transactions and the subsequent transactions are\n> > > > > > getting replicated.\n> > > > > >\n> > > > > > We can reproduce this issue by following these steps in a logical\n> > > > > > replication setup with an \"ALL TABLES\" publication:\n> > > > > > On the publisher:\n> > > > > > Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> > > > > > In another session on the publisher:\n> > > > > > Session 2: DROP PUBLICATION\n> > > > > > Back in Session 1 on the publisher:\n> > > > > > COMMIT;\n> > > > > > Finally, in Session 1 on the publisher:\n> > > > > > INSERT INTO T1 VALUES (val2);\n> > > > > >\n> > > > > > Even after dropping the publication, both val1 and val2 are still\n> > > > > > being replicated to the subscriber. 
This means that both the\n> > > > > > in-progress concurrent transaction and the subsequent transactions are\n> > > > > > being replicated.\n> > > > > >\n> > > > > > I don't think locking all tables is a viable solution in this case, as\n> > > > > > it would require asking the user to refrain from performing any\n> > > > > > operations on any of the tables in the database while creating a\n> > > > > > publication.\n> > > > > >\n> > > > >\n> > > > > Indeed, locking all tables in the database to prevent concurrent DMLs\n> > > > > for this scenario also looks odd to me. The other alternative\n> > > > > previously suggested by Andres is to distribute catalog modifying\n> > > > > transactions to all concurrent in-progress transactions [1] but as\n> > > > > mentioned this could add an overhead. One possibility to reduce\n> > > > > overhead is that we selectively distribute invalidations for\n> > > > > catalogs-related publications but I haven't analyzed the feasibility.\n> > > > >\n> > > > > We need more opinions to decide here, so let me summarize the problem\n> > > > > and solutions discussed. As explained with an example in an email [1],\n> > > > > the problem related to logical decoding is that it doesn't process\n> > > > > invalidations corresponding to DDLs for the already in-progress\n> > > > > transactions. We discussed preventing DMLs in the first place when\n> > > > > concurrent DDLs like ALTER PUBLICATION ... ADD TABLE ... are in\n> > > > > progress. The solution discussed was to acquire\n> > > > > ShareUpdateExclusiveLock for all the tables being added via such\n> > > > > commands. Further analysis revealed that the same handling is required\n> > > > > for ALTER PUBLICATION ... ADD TABLES IN SCHEMA which means locking all\n> > > > > the tables in the specified schemas. 
Then DROP PUBLICATION also seems\n> > > > > to have similar symptoms which means in the worst case (where\n> > > > > publication is for ALL TABLES) we have to lock all the tables in the\n> > > > > database. We are not sure if that is good so the other alternative we\n> > > > > can pursue is to distribute invalidations in logical decoding\n> > > > > infrastructure [1] which has its downsides.\n> > > > >\n> > > > > Thoughts?\n> > > >\n> > > > Thank you for summarizing the problem and solutions!\n> > > >\n> > > > I think it's worth trying the idea of distributing invalidation\n> > > > messages, and we will see if there could be overheads or any further\n> > > > obstacles. IIUC this approach would resolve another issue we discussed\n> > > > before too[1].\n> > > >\n> > >\n> > > Yes, and we also discussed having a similar solution at the time when\n> > > that problem was reported. So, it is clear that even though locking\n> > > tables can work for commands alter ALTER PUBLICATION ... ADD TABLE\n> > > ..., we need a solution for distributing invalidations to the\n> > > in-progress transactions during logical decoding for other cases as\n> > > reported by you previously.\n> > >\n> > > Thanks for looking into this.\n> > >\n> >\n> > Thanks, I am working on to implement a solution for distributing\n> > invalidations. Will share a patch for the same.\n>\n> Created a patch for distributing invalidations.\n> Here we collect the invalidation messages for the current transaction\n> and distribute it to all the inprogress transactions, whenever we are\n> distributing the snapshots..Thoughts?\n\nIn the v7 patch, I am looping through the reorder buffer of the\ncurrent committed transaction and storing all invalidation messages in\na list. Then I am distributing those invalidations.\nBut I found that for a transaction we already store all the\ninvalidation messages (see [1]). 
So we don't need to loop through the\nreorder buffer and store the invalidations.\n\nI have modified the patch accordingly and attached the same.\n\n[1]: https://github.com/postgres/postgres/blob/7da1bdc2c2f17038f2ae1900be90a0d7b5e361e0/src/include/replication/reorderbuffer.h#L384\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Fri, 9 Aug 2024 16:50:31 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Thu, 8 Aug 2024 at 16:24, Shlok Kyal <[email protected]> wrote:\n>\n> On Wed, 31 Jul 2024 at 11:17, Shlok Kyal <[email protected]> wrote:\n> >\n> > On Wed, 31 Jul 2024 at 09:36, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 31, 2024 at 3:27 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Wed, Jul 24, 2024 at 9:53 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > On Wed, Jul 17, 2024 at 5:25 PM vignesh C <[email protected]> wrote:\n> > > > > >\n> > > > > > On Wed, 17 Jul 2024 at 11:54, Amit Kapila <[email protected]> wrote:\n> > > > > > >\n> > > > > > > On Tue, Jul 16, 2024 at 6:54 PM vignesh C <[email protected]> wrote:\n> > > > > > >\n> > > > > > > BTW, I noticed that we don't take any table-level locks for Create\n> > > > > > > Publication .. For ALL TABLES (and Drop Publication). Can that create\n> > > > > > > a similar problem? I haven't tested so not sure but even if there is a\n> > > > > > > problem for the Create case, it should lead to some ERROR like missing\n> > > > > > > publication.\n> > > > > >\n> > > > > > I tested these scenarios, and as you expected, it throws an error for\n> > > > > > the create publication case:\n> > > > > > 2024-07-17 14:50:01.145 IST [481526] 481526 ERROR: could not receive\n> > > > > > data from WAL stream: ERROR: publication \"pub1\" does not exist\n> > > > > > CONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change\n> > > > > > callback, associated LSN 0/1510CD8\n> > > > > > 2024-07-17 14:50:01.147 IST [481450] 481450 LOG: background worker\n> > > > > > \"logical replication apply worker\" (PID 481526) exited with exit code\n> > > > > > 1\n> > > > > >\n> > > > > > The steps for this process are as follows:\n> > > > > > 1) Create tables in both the publisher and subscriber.\n> > > > > > 2) On the publisher: Create a replication slot.\n> > > > > > 3) On the subscriber: Create a subscription using the slot created by\n> > > > > > the publisher.\n> 
> > > > > 4) On the publisher:\n> > > > > > 4.a) Session 1: BEGIN; INSERT INTO T1;\n> > > > > > 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES\n> > > > > > 4.c) Session 1: COMMIT;\n> > > > > >\n> > > > > > Since we are throwing out a \"publication does not exist\" error, there\n> > > > > > is no inconsistency issue here.\n> > > > > >\n> > > > > > However, an issue persists with DROP ALL TABLES publication, where\n> > > > > > data continues to replicate even after the publication is dropped.\n> > > > > > This happens because the open transaction consumes the invalidation,\n> > > > > > causing the publications to be revalidated using old snapshot. As a\n> > > > > > result, both the open transactions and the subsequent transactions are\n> > > > > > getting replicated.\n> > > > > >\n> > > > > > We can reproduce this issue by following these steps in a logical\n> > > > > > replication setup with an \"ALL TABLES\" publication:\n> > > > > > On the publisher:\n> > > > > > Session 1: BEGIN; INSERT INTO T1 VALUES (val1);\n> > > > > > In another session on the publisher:\n> > > > > > Session 2: DROP PUBLICATION\n> > > > > > Back in Session 1 on the publisher:\n> > > > > > COMMIT;\n> > > > > > Finally, in Session 1 on the publisher:\n> > > > > > INSERT INTO T1 VALUES (val2);\n> > > > > >\n> > > > > > Even after dropping the publication, both val1 and val2 are still\n> > > > > > being replicated to the subscriber. 
This means that both the\n> > > > > > in-progress concurrent transaction and the subsequent transactions are\n> > > > > > being replicated.\n> > > > > >\n> > > > > > I don't think locking all tables is a viable solution in this case, as\n> > > > > > it would require asking the user to refrain from performing any\n> > > > > > operations on any of the tables in the database while creating a\n> > > > > > publication.\n> > > > > >\n> > > > >\n> > > > > Indeed, locking all tables in the database to prevent concurrent DMLs\n> > > > > for this scenario also looks odd to me. The other alternative\n> > > > > previously suggested by Andres is to distribute catalog modifying\n> > > > > transactions to all concurrent in-progress transactions [1] but as\n> > > > > mentioned this could add an overhead. One possibility to reduce\n> > > > > overhead is that we selectively distribute invalidations for\n> > > > > catalogs-related publications but I haven't analyzed the feasibility.\n> > > > >\n> > > > > We need more opinions to decide here, so let me summarize the problem\n> > > > > and solutions discussed. As explained with an example in an email [1],\n> > > > > the problem related to logical decoding is that it doesn't process\n> > > > > invalidations corresponding to DDLs for the already in-progress\n> > > > > transactions. We discussed preventing DMLs in the first place when\n> > > > > concurrent DDLs like ALTER PUBLICATION ... ADD TABLE ... are in\n> > > > > progress. The solution discussed was to acquire\n> > > > > ShareUpdateExclusiveLock for all the tables being added via such\n> > > > > commands. Further analysis revealed that the same handling is required\n> > > > > for ALTER PUBLICATION ... ADD TABLES IN SCHEMA which means locking all\n> > > > > the tables in the specified schemas. 
Then DROP PUBLICATION also seems\n> > > > > to have similar symptoms which means in the worst case (where\n> > > > > publication is for ALL TABLES) we have to lock all the tables in the\n> > > > > database. We are not sure if that is good so the other alternative we\n> > > > > can pursue is to distribute invalidations in logical decoding\n> > > > > infrastructure [1] which has its downsides.\n> > > > >\n> > > > > Thoughts?\n> > > >\n> > > > Thank you for summarizing the problem and solutions!\n> > > >\n> > > > I think it's worth trying the idea of distributing invalidation\n> > > > messages, and we will see if there could be overheads or any further\n> > > > obstacles. IIUC this approach would resolve another issue we discussed\n> > > > before too[1].\n> > > >\n> > >\n> > > Yes, and we also discussed having a similar solution at the time when\n> > > that problem was reported. So, it is clear that even though locking\n> > > tables can work for commands alter ALTER PUBLICATION ... ADD TABLE\n> > > ..., we need a solution for distributing invalidations to the\n> > > in-progress transactions during logical decoding for other cases as\n> > > reported by you previously.\n> > >\n> > > Thanks for looking into this.\n> > >\n> >\n> > Thanks, I am working on to implement a solution for distributing\n> > invalidations. 
Will share a patch for the same.\n>\n> Created a patch for distributing invalidations.\n> Here we collect the invalidation messages for the current transaction\n> and distribute it to all the inprogress transactions, whenever we are\n> distributing the snapshots..Thoughts?\n\nSince we are applying invalidations to all in-progress transactions,\nthe publisher will only replicate half of the transaction data up to\nthe point of invalidation, while the remaining half will not be\nreplicated.\nEx:\nSession1:\nBEGIN;\nINSERT INTO tab_conc VALUES (1);\n\nSession2:\nALTER PUBLICATION regress_pub1 DROP TABLE tab_conc;\n\nSession1:\nINSERT INTO tab_conc VALUES (2);\nINSERT INTO tab_conc VALUES (3);\nCOMMIT;\n\nAfter the above the subscriber data looks like:\npostgres=# select * from tab_conc ;\n a\n---\n 1\n(1 row)\n\nYou can reproduce the issue using the attached test.\nI'm not sure if this behavior is ok. At present, we’ve replicated the\nfirst record within the same transaction, but the second and third\nrecords are being skipped. Would it be better to apply invalidations\nafter the transaction is underway?\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Thu, 15 Aug 2024 21:30:32 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
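Vignesh's repro above can be condensed into a toy timeline: once the distributed invalidation lands mid-transaction, the decoder's publication cache is rebuilt and the later changes of the same transaction stop matching the publication. Illustrative only; the cache handling here is a deliberate simplification of the real relcache/RelationSyncCache machinery.

```python
# Toy timeline for the partial-replication behaviour: changes decoded
# before the distributed invalidation use the old publication state,
# changes after it use the new (table dropped) state.

published = {"tab_conc"}              # catalog: tables in the publication

def decode(change, cache):
    # Replicate the change only if the cached publication covers the table.
    table, _ = change
    return change if table in cache else None

cache = set(published)                # decoder's cached publication state
out = []

out.append(decode(("tab_conc", 1), cache))   # INSERT 1 -> replicated

# ALTER PUBLICATION ... DROP TABLE tab_conc commits in another session;
# the distributed invalidation makes the in-progress txn rebuild its cache.
published.discard("tab_conc")
cache = set(published)

out.append(decode(("tab_conc", 2), cache))   # INSERT 2 -> skipped
out.append(decode(("tab_conc", 3), cache))   # INSERT 3 -> skipped

replicated = [c for c in out if c is not None]
print(replicated)  # [('tab_conc', 1)] -- only the first row reaches the subscriber
```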
{
"msg_contents": "On Thu, Aug 15, 2024 at 9:31 PM vignesh C <[email protected]> wrote:\n>\n> On Thu, 8 Aug 2024 at 16:24, Shlok Kyal <[email protected]> wrote:\n> >\n> > On Wed, 31 Jul 2024 at 11:17, Shlok Kyal <[email protected]> wrote:\n> > >\n> >\n> > Created a patch for distributing invalidations.\n> > Here we collect the invalidation messages for the current transaction\n> > and distribute it to all the inprogress transactions, whenever we are\n> > distributing the snapshots..Thoughts?\n>\n> Since we are applying invalidations to all in-progress transactions,\n> the publisher will only replicate half of the transaction data up to\n> the point of invalidation, while the remaining half will not be\n> replicated.\n> Ex:\n> Session1:\n> BEGIN;\n> INSERT INTO tab_conc VALUES (1);\n>\n> Session2:\n> ALTER PUBLICATION regress_pub1 DROP TABLE tab_conc;\n>\n> Session1:\n> INSERT INTO tab_conc VALUES (2);\n> INSERT INTO tab_conc VALUES (3);\n> COMMIT;\n>\n> After the above the subscriber data looks like:\n> postgres=# select * from tab_conc ;\n> a\n> ---\n> 1\n> (1 row)\n>\n> You can reproduce the issue using the attached test.\n> I'm not sure if this behavior is ok. At present, we’ve replicated the\n> first record within the same transaction, but the second and third\n> records are being skipped.\n>\n\nThis can happen even without a concurrent DDL if some of the tables in\nthe database are part of the publication and others are not. In such a\ncase inserts for publicized tables will be replicated but other\ninserts won't. Sending the partial data of the transaction isn't a\nproblem to me. 
Do you have any other concerns that I am missing?\n\n> Would it be better to apply invalidations\n> after the transaction is underway?\n>\n\nBut that won't fix the problem reported by Sawada-san in an email [1].\n\nBTW, we should do some performance testing by having a mix of DML and\nDDLs to see the performance impact of this patch.\n\n[1] - https://www.postgresql.org/message-id/CAD21AoAenVqiMjpN-PvGHL1N9DWnHSq673bfgr6phmBUzx=kLQ@mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Aug 2024 16:10:22 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Tue, 20 Aug 2024 at 16:10, Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 15, 2024 at 9:31 PM vignesh C <[email protected]> wrote:\n> >\n> > On Thu, 8 Aug 2024 at 16:24, Shlok Kyal <[email protected]> wrote:\n> > >\n> > > On Wed, 31 Jul 2024 at 11:17, Shlok Kyal <[email protected]> wrote:\n> > > >\n> > >\n> > > Created a patch for distributing invalidations.\n> > > Here we collect the invalidation messages for the current transaction\n> > > and distribute it to all the inprogress transactions, whenever we are\n> > > distributing the snapshots..Thoughts?\n> >\n> > Since we are applying invalidations to all in-progress transactions,\n> > the publisher will only replicate half of the transaction data up to\n> > the point of invalidation, while the remaining half will not be\n> > replicated.\n> > Ex:\n> > Session1:\n> > BEGIN;\n> > INSERT INTO tab_conc VALUES (1);\n> >\n> > Session2:\n> > ALTER PUBLICATION regress_pub1 DROP TABLE tab_conc;\n> >\n> > Session1:\n> > INSERT INTO tab_conc VALUES (2);\n> > INSERT INTO tab_conc VALUES (3);\n> > COMMIT;\n> >\n> > After the above the subscriber data looks like:\n> > postgres=# select * from tab_conc ;\n> > a\n> > ---\n> > 1\n> > (1 row)\n> >\n> > You can reproduce the issue using the attached test.\n> > I'm not sure if this behavior is ok. At present, we’ve replicated the\n> > first record within the same transaction, but the second and third\n> > records are being skipped.\n> >\n>\n> This can happen even without a concurrent DDL if some of the tables in\n> the database are part of the publication and others are not. In such a\n> case inserts for publicized tables will be replicated but other\n> inserts won't. Sending the partial data of the transaction isn't a\n> problem to me. Do you have any other concerns that I am missing?\n\nMy main concern was about sending only part of the data from a\ntransaction table and leaving out the rest. 
However, since this is\nhappening elsewhere as well, I'm okay with it.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 20 Aug 2024 17:49:25 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "> BTW, we should do some performance testing by having a mix of DML and\n> DDLs to see the performance impact of this patch.\n>\n> [1] - https://www.postgresql.org/message-id/CAD21AoAenVqiMjpN-PvGHL1N9DWnHSq673bfgr6phmBUzx=kLQ@mail.gmail.com\n>\n\nI did some performance testing and I found some performance impact for\nthe following case:\n\n1. Created a publisher, subscriber set up on a single table, say 'tab_conc1';\n2. Created a second publisher, subscriber set up on a single table, say 'tp';\n3. Created 'tcount' no. of tables. These tables are not part of any publication.\n4. There are two sessions running in parallel, let's say S1 and S2.\n5. Begin a transaction in S1.\n6. Now in a loop (this loop runs 100 times):\n S1: Insert a row in table 'tab_conc1'\n S1: Insert a row in all 'tcount' tables.\n S2: BEGIN; Alter publication for 2nd publication; COMMIT;\n The current logic in the patch will call the function\n'rel_sync_cache_publication_cb' during invalidation. This will\ninvalidate the cache for all the tables, i.e. the cache for 'tab_conc1'\nand all the 'tcount' tables will be invalidated.\n7. COMMIT the transaction in S1.\n\nThe performance in this case is:\nNo. of tables | With patch (in ms) | With head (in ms)\n-----------------------------------------------------------------------------\ntcount = 100 | 101376.4 | 101357.8\ntcount = 1000 | 994085.4 | 993471.4\n\nFor 100 tables the performance is slower by 0.018% and for 1000 tables\nit is slower by 0.06%.\nThese results are the average of 5 runs.\n\nOther than this I tested the following cases but did not find any\nperformance impact:\n1. with 'tcount = 10';\n2. with 'tcount = 0' and running the loop 1000 times.\n\nI have also attached the test script and the machine configurations on\nwhich performance testing was done.\nNext, I am planning to test solely on the logical decoding side and\nwill share the results.\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Fri, 30 Aug 2024 15:05:48 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Fri, Aug 30, 2024 at 3:06 PM Shlok Kyal <[email protected]> wrote:\n>\n> Next I am planning to test solely on the logical decoding side and\n> will share the results.\n>\n\nThanks, the next set of proposed tests makes sense to me. It will also\nbe useful to generate some worst-case scenarios where the number of\ninvalidations is more to see the distribution cost in such cases. For\nexample, Truncate/Drop a table with 100 or 1000 partitions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 2 Sep 2024 10:12:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 4:10 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 15, 2024 at 9:31 PM vignesh C <[email protected]> wrote:\n> > Since we are applying invalidations to all in-progress transactions,\n> > the publisher will only replicate half of the transaction data up to\n> > the point of invalidation, while the remaining half will not be\n> > replicated.\n> > Ex:\n> > Session1:\n> > BEGIN;\n> > INSERT INTO tab_conc VALUES (1);\n> >\n> > Session2:\n> > ALTER PUBLICATION regress_pub1 DROP TABLE tab_conc;\n> >\n> > Session1:\n> > INSERT INTO tab_conc VALUES (2);\n> > INSERT INTO tab_conc VALUES (3);\n> > COMMIT;\n> >\n> > After the above the subscriber data looks like:\n> > postgres=# select * from tab_conc ;\n> > a\n> > ---\n> > 1\n> > (1 row)\n> >\n> > You can reproduce the issue using the attached test.\n> > I'm not sure if this behavior is ok. At present, we’ve replicated the\n> > first record within the same transaction, but the second and third\n> > records are being skipped.\n> >\n>\n> This can happen even without a concurrent DDL if some of the tables in\n> the database are part of the publication and others are not. In such a\n> case inserts for publicized tables will be replicated but other\n> inserts won't. Sending the partial data of the transaction isn't a\n> problem to me. Do you have any other concerns that I am missing?\n>\n\nHi,\n\nI think that the partial data replication for one table is a bigger\nissue than the case of data being sent for a subset of the tables in\nthe transaction. This can lead to inconsistent data if the same row is\nupdated multiple times or deleted in the same transaction. 
In such a\ncase, if only the partial updates from the transaction are sent to the\nsubscriber, it might end up with data that was never visible on\nthe publisher side.\n\nHere is an example I tried with the patch v8-001:\n\nI created the following 2 tables on the publisher and the subscriber:\n\nCREATE TABLE delete_test(id int primary key, name varchar(100));\nCREATE TABLE update_test(id int primary key, name varchar(100));\n\nI added both the tables to the publication p on the publisher and\ncreated a subscription s on the subscriber.\n\nI ran 2 sessions on the publisher and did the following:\n\nSession 1:\nBEGIN;\nINSERT INTO delete_test VALUES(0, 'Nitin');\n\nSession 2:\nALTER PUBLICATION p DROP TABLE delete_test;\n\nSession 1:\nDELETE FROM delete_test WHERE id=0;\nCOMMIT;\n\nAfter the commit there should be no row left on the publisher.\nBut because the partial data was replicated, this is what the select\non the subscriber shows:\n\nSELECT * FROM delete_test;\n id | name\n----+-----------\n 0 | Nitin\n(1 row)\n\nI don't think the above is a common use case. But this is still an\nissue because the subscriber has data which never existed on the\npublisher.\n\nA similar issue can be seen with an update command.\n\nSession 1:\nBEGIN;\nINSERT INTO update_test VALUES(1, 'Chiranjiv');\n\nSession 2:\nALTER PUBLICATION p DROP TABLE update_test;\n\nSession 1:\nUPDATE update_test SET name='Eeshan' WHERE id=1;\nCOMMIT;\n\nAfter the commit, this is the state on the publisher:\nSELECT * FROM update_test;\n 1 | Eeshan\n(1 row)\n\nWhile this is the state on the subscriber:\nSELECT * FROM update_test;\n 1 | Chiranjiv\n(1 row)\n\nI think the update-during-a-transaction scenario might be more common\nthan deletion right after insertion. But both of these seem like real\nissues to consider. Please let me know if I'm missing something.\n\nThanks & Regards\nNitin Motiani\nGoogle\n\n\n",
"msg_date": "Mon, 2 Sep 2024 21:19:41 +0530",
"msg_from": "Nitin Motiani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
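The delete_test case above can likewise be reduced to a toy model showing how the subscriber ends up with a row whose committed state never existed on the publisher: the INSERT is decoded under the old publication cache, the DELETE under the new one. The apply logic below is a hypothetical, heavily simplified stand-in for the subscriber's apply worker.

```python
# Toy model of the delete_test divergence: INSERT replicated, DELETE
# skipped after the mid-transaction invalidation, so the subscriber
# keeps a row the publisher's committed state never had.

publisher = {}
subscriber = {}

def apply_remote(change):
    # Minimal stand-in for the subscriber-side apply of a decoded change.
    op, key, val = change
    if op == "INSERT":
        subscriber[key] = val
    elif op == "DELETE":
        subscriber.pop(key, None)

table_published = True

publisher[0] = "Nitin"                    # Session 1: INSERT (inside the txn)
if table_published:
    apply_remote(("INSERT", 0, "Nitin"))  # decoded while table still published

table_published = False                   # Session 2: ALTER PUBLICATION p DROP TABLE
del publisher[0]                          # Session 1: DELETE, then COMMIT
if table_published:
    apply_remote(("DELETE", 0, None))     # skipped: table no longer published

print(publisher, subscriber)  # {} {0: 'Nitin'}
```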
{
"msg_contents": "On Mon, Sep 2, 2024 at 9:19 PM Nitin Motiani <[email protected]> wrote:\n>\n> I think that the partial data replication for one table is a bigger\n> issue than the case of data being sent for a subset of the tables in\n> the transaction. This can lead to inconsistent data if the same row is\n> updated multiple times or deleted in the same transaction. In such a\n> case if only the partial updates from the transaction are sent to the\n> subscriber, it might end up with the data which was never visible on\n> the publisher side.\n>\n> Here is an example I tried with the patch v8-001 :\n>\n> I created following 2 tables on the publisher and the subscriber :\n>\n> CREATE TABLE delete_test(id int primary key, name varchar(100));\n> CREATE TABLE update_test(id int primary key, name varchar(100));\n>\n> I added both the tables to the publication p on the publisher and\n> created a subscription s on the subscriber.\n>\n> I run 2 sessions on the publisher and do the following :\n>\n> Session 1 :\n> BEGIN;\n> INSERT INTO delete_test VALUES(0, 'Nitin');\n>\n> Session 2 :\n> ALTER PUBLICATION p DROP TABLE delete_test;\n>\n> Session 1 :\n> DELETE FROM delete_test WHERE id=0;\n> COMMIT;\n>\n> After the commit there should be no new row created on the publisher.\n> But because the partial data was replicated, this is what the select\n> on the subscriber shows :\n>\n> SELECT * FROM delete_test;\n> id | name\n> ----+-----------\n> 0 | Nitin\n> (1 row)\n>\n> I don't think the above is a common use case. But this is still an\n> issue because the subscriber has the data which never existed on the\n> publisher.\n>\n\nI don't think that is the correct conclusion because the user has\nintentionally avoided sending part of the transaction changes. This\ncan happen in various ways without the patch as well. 
For example, if\nthe user has performed the ALTER in the same transaction.\n\nPublisher:\n=========\nBEGIN\npostgres=*# Insert into delete_test values(0, 'Nitin');\nINSERT 0 1\npostgres=*# Alter Publication pub1 drop table delete_test;\nALTER PUBLICATION\npostgres=*# Delete from delete_test where id=0;\nDELETE 1\npostgres=*# commit;\nCOMMIT\npostgres=# select * from delete_test;\n id | name\n----+------\n(0 rows)\n\nSubscriber:\n=========\npostgres=# select * from delete_test;\n id | name\n----+-------\n 0 | Nitin\n(1 row)\n\nThis can also happen when the user has published only 'inserts' but\nnot 'updates' or 'deletes'.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 5 Sep 2024 16:04:16 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Mon, 2 Sept 2024 at 10:12, Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Aug 30, 2024 at 3:06 PM Shlok Kyal <[email protected]> wrote:\n> >\n> > Next I am planning to test solely on the logical decoding side and\n> > will share the results.\n> >\n>\n> Thanks, the next set of proposed tests makes sense to me. It will also\n> be useful to generate some worst-case scenarios where the number of\n> invalidations is more to see the distribution cost in such cases. For\n> example, Truncate/Drop a table with 100 or 1000 partitions.\n>\n> --\n> With Regards,\n> Amit Kapila.\n\nHi,\n\nI did some performance testing solely on the logical decoding side and\nfound some degradation in performance, for the following testcase:\n1. Created a publisher on a single table, say 'tab_conc1';\n2. Created a second publisher on a single table say 'tp';\n4. two sessions are running in parallel, let's say S1 and S2.\n5. Begin a transaction in S1.\n6. Now in a loop (this loop runs 'count' times):\n S1: Insert a row in table 'tab_conc1'\n S2: BEGIN; Alter publication DROP/ ADD tp; COMMIT\n7. COMMIT the transaction in S1.\n8. run 'pg_logical_slot_get_binary_changes' to get the decoding changes.\n\nObservation:\nWith fix a new entry is added in decoding. During debugging I found\nthat this entry only comes when we do a 'INSERT' in Session 1 after we\ndo 'ALTER PUBLICATION' in another session in parallel (or we can say\ndue to invalidation). Also, I observed that this new entry is related\nto sending replica identity, attributes,etc as function\n'logicalrep_write_rel' is called.\n\nPerformance:\nWe see a performance degradation as we are sending new entries during\nlogical decoding. 
Results are an average of 5 runs.\n\ncount | Head (sec) | Fix (sec) | Degradation (%)\n------------------------------------------------------------------------------\n10000 | 1.298 | 1.574 | 21.26348228\n50000 | 22.892 | 24.997 | 9.195352088\n100000 | 88.602 | 93.759 | 5.820410374\n\nI have also attached the test script here.\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Mon, 9 Sep 2024 10:41:36 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Mon, 2 Sept 2024 at 10:12, Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Aug 30, 2024 at 3:06 PM Shlok Kyal <[email protected]> wrote:\n> >\n> > Next I am planning to test solely on the logical decoding side and\n> > will share the results.\n> >\n>\n> Thanks, the next set of proposed tests makes sense to me. It will also\n> be useful to generate some worst-case scenarios where the number of\n> invalidations is more to see the distribution cost in such cases. For\n> example, Truncate/Drop a table with 100 or 1000 partitions.\n>\n> --\n> With Regards,\n> Amit Kapila.\n\nAlso, I did testing with a table with partitions. To test for the\nscenario where the number of invalidations are more than distribution.\nFollowing is the test case:\n1. Created a publisher on a single table, say 'tconc_1';\n2. Created a second publisher on a partition table say 'tp';\n3. Created 'tcount' partitions for the table 'tp'.\n4. two sessions are running in parallel, let's say S1 and S2.\n5. Begin a transaction in S1.\n6. S1: Insert a row in table 'tconc_1'\n S2: BEGIN; TRUNCATE TABLE tp; COMMIT;\n With patch, this will add 'tcount * 3' invalidation messages to\ntransaction in session 1.\n S1: Insert a row in table 't_conc1'\n7. COMMIT the transaction in S1.\n8. run 'pg_logical_slot_get_binary_changes' to get the decoding changes.\n\nPerformance:\nWe see a degradation in performance. Results are an average of 5 runs.\n\ncount of partitions | Head (sec) | Fix (sec) | Degradation (%)\n-------------------------------------------------------------------------------------\n1000 | 0.114 | 0.118 | 3.50877193\n5000 | 0.502 | 0.522 | 3.984063745\n10000 | 1.012 | 1.024 | 1.185770751\n\nI have also attached the test script here. And will also do further testing.\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Mon, 9 Sep 2024 10:51:43 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
    "msg_contents": "On Friday, August 9, 2024 7:21 PM Shlok Kyal <[email protected]> wrote:\r\n\r\nHi,\r\n\r\n> \r\n> In the v7 patch, I am looping through the reorder buffer of the current committed\r\n> transaction and storing all invalidation messages in a list. Then I am\r\n> distributing those invalidations.\r\n> But I found that for a transaction we already store all the invalidation messages\r\n> (see [1]). So we don't need to loop through the reorder buffer and store the\r\n> invalidations.\r\n> \r\n> I have modified the patch accordingly and attached the same.\r\n\r\nI have tested this patch across various scenarios and did not find issues.\r\n\r\nI confirmed that changes are correctly replicated after adding the table or\r\nschema to the publication, and changes will not be replicated after removing\r\nthe table or schema from the publication. This behavior is consistent in both\r\nstreaming and non-streaming modes. Additionally, I verified that invalidations\r\noccurring within subtransactions are appropriately distributed.\r\n\r\nPlease refer to the attached ISOLATION tests which test the above cases.\r\nThis also makes me wonder whether it would be cheaper to write an ISOLATION test for this\r\nbug instead of building a real pub/sub cluster. But I am not against the current\r\ntests in the V8 patch as they can check the replicated data in a visible way.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Tue, 10 Sep 2024 04:25:24 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 4:04 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Sep 2, 2024 at 9:19 PM Nitin Motiani <[email protected]> wrote:\n> >\n> > I think that the partial data replication for one table is a bigger\n> > issue than the case of data being sent for a subset of the tables in\n> > the transaction. This can lead to inconsistent data if the same row is\n> > updated multiple times or deleted in the same transaction. In such a\n> > case if only the partial updates from the transaction are sent to the\n> > subscriber, it might end up with the data which was never visible on\n> > the publisher side.\n> >\n> > Here is an example I tried with the patch v8-001 :\n> >\n> > I created following 2 tables on the publisher and the subscriber :\n> >\n> > CREATE TABLE delete_test(id int primary key, name varchar(100));\n> > CREATE TABLE update_test(id int primary key, name varchar(100));\n> >\n> > I added both the tables to the publication p on the publisher and\n> > created a subscription s on the subscriber.\n> >\n> > I run 2 sessions on the publisher and do the following :\n> >\n> > Session 1 :\n> > BEGIN;\n> > INSERT INTO delete_test VALUES(0, 'Nitin');\n> >\n> > Session 2 :\n> > ALTER PUBLICATION p DROP TABLE delete_test;\n> >\n> > Session 1 :\n> > DELETE FROM delete_test WHERE id=0;\n> > COMMIT;\n> >\n> > After the commit there should be no new row created on the publisher.\n> > But because the partial data was replicated, this is what the select\n> > on the subscriber shows :\n> >\n> > SELECT * FROM delete_test;\n> > id | name\n> > ----+-----------\n> > 0 | Nitin\n> > (1 row)\n> >\n> > I don't think the above is a common use case. But this is still an\n> > issue because the subscriber has the data which never existed on the\n> > publisher.\n> >\n>\n> I don't think that is the correct conclusion because the user has\n> intentionally avoided sending part of the transaction changes. 
This\n> can happen in various ways without the patch as well. For example, if\n> the user has performed the ALTER in the same transaction.\n>\n> Publisher:\n> =========\n> BEGIN\n> postgres=*# Insert into delete_test values(0, 'Nitin');\n> INSERT 0 1\n> postgres=*# Alter Publication pub1 drop table delete_test;\n> ALTER PUBLICATION\n> postgres=*# Delete from delete_test where id=0;\n> DELETE 1\n> postgres=*# commit;\n> COMMIT\n> postgres=# select * from delete_test;\n> id | name\n> ----+------\n> (0 rows)\n>\n> Subscriber:\n> =========\n> postgres=# select * from delete_test;\n> id | name\n> ----+-------\n> 0 | Nitin\n> (1 row)\n>\n> This can also happen when the user has published only 'inserts' but\n> not 'updates' or 'deletes'.\n>\n\nThanks for the clarification. I didn't think of this case. The change\nseems fine if this can already happen.\n\nThanks & Regards\nNitin Motiani\nGoogle\n\n\n",
"msg_date": "Tue, 10 Sep 2024 14:20:50 +0530",
"msg_from": "Nitin Motiani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Mon, 9 Sept 2024 at 10:41, Shlok Kyal <[email protected]> wrote:\n>\n> On Mon, 2 Sept 2024 at 10:12, Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Aug 30, 2024 at 3:06 PM Shlok Kyal <[email protected]> wrote:\n> > >\n> > > Next I am planning to test solely on the logical decoding side and\n> > > will share the results.\n> > >\n> >\n> > Thanks, the next set of proposed tests makes sense to me. It will also\n> > be useful to generate some worst-case scenarios where the number of\n> > invalidations is more to see the distribution cost in such cases. For\n> > example, Truncate/Drop a table with 100 or 1000 partitions.\n> >\n> > --\n> > With Regards,\n> > Amit Kapila.\n>\n> Hi,\n>\n> I did some performance testing solely on the logical decoding side and\n> found some degradation in performance, for the following testcase:\n> 1. Created a publisher on a single table, say 'tab_conc1';\n> 2. Created a second publisher on a single table say 'tp';\n> 4. two sessions are running in parallel, let's say S1 and S2.\n> 5. Begin a transaction in S1.\n> 6. Now in a loop (this loop runs 'count' times):\n> S1: Insert a row in table 'tab_conc1'\n> S2: BEGIN; Alter publication DROP/ ADD tp; COMMIT\n> 7. COMMIT the transaction in S1.\n> 8. run 'pg_logical_slot_get_binary_changes' to get the decoding changes.\n>\n> Observation:\n> With fix a new entry is added in decoding. During debugging I found\n> that this entry only comes when we do a 'INSERT' in Session 1 after we\n> do 'ALTER PUBLICATION' in another session in parallel (or we can say\n> due to invalidation). Also, I observed that this new entry is related\n> to sending replica identity, attributes,etc as function\n> 'logicalrep_write_rel' is called.\n>\n> Performance:\n> We see a performance degradation as we are sending new entries during\n> logical decoding. 
Results are an average of 5 runs.\n>\n> count | Head (sec) | Fix (sec) | Degradation (%)\n> ------------------------------------------------------------------------------\n> 10000 | 1.298 | 1.574 | 21.26348228\n> 50000 | 22.892 | 24.997 | 9.195352088\n> 100000 | 88.602 | 93.759 | 5.820410374\n>\n> I have also attached the test script here.\n>\n\nFor the above case I tried to investigate the inconsistent degradation\nand found out that Serialization was happening for a large number of\n'count'. So, I tried adjusting 'logical_decoding_work_mem' to a large\nvalue, so that we can avoid serialization here. I ran the above\nperformance test again and got the following results:\n\ncount | Head (sec) | Fix (sec) | Degradation (%)\n-----------------------------------------------------------------------------------\n10000 | 0.415446 | 0.53596167 | 29.00874482\n50000 | 7.950266 | 10.37375567 | 30.48312685\n75000 | 17.192372 | 22.246715 | 29.39875312\n100000 | 30.555903 | 39.431542 | 29.04721552\n\n These results are an average of 3 runs. Here the degradation is\nconsistent around ~30%.\n\nThanks and Regards,\nShlok Kyal\n\n\n",
"msg_date": "Fri, 13 Sep 2024 10:57:17 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
    "msg_contents": "> In the v7 patch, I am looping through the reorder buffer of the\n> current committed transaction and storing all invalidation messages in\n> a list. Then I am distributing those invalidations.\n> But I found that for a transaction we already store all the\n> invalidation messages (see [1]). So we don't need to loop through the\n> reorder buffer and store the invalidations.\n>\n> I have modified the patch accordingly and attached the same.\n>\n> [1]: https://github.com/postgres/postgres/blob/7da1bdc2c2f17038f2ae1900be90a0d7b5e361e0/src/include/replication/reorderbuffer.h#L384\n\nHi,\n\nI tried to add changes to selectively invalidate the cache to reduce\nthe performance degradation during the distribution of invalidations.\n\nHere is the analysis for selective invalidation.\nObservation:\nCurrently, when there is a change in a publication, the cache related to\nall the tables is invalidated, including the ones that are not part of\nany publication and even tables of different publications. For\nexample, suppose pub1 includes tables t1 to t1000, while pub2 contains\njust table t1001. If pub2 is altered, even though it only has t1001,\nthis change will also invalidate all the tables t1 through t1000 in\npub1.\nSimilarly for a namespace, whenever we alter a schema or add/drop a\nschema to/from the publication, the cache related to all the tables is\ninvalidated, including the ones that are of a different schema. For\nexample, suppose pub1 includes tables t1 to t1000 in schema sc1, while\npub2 contains just table t1001 in schema sc2. If schema ‘sc2’ is\nchanged or if it is dropped from publication ‘pub2’, even though it\nonly has t1001, this change will invalidate all the tables t1 through\nt1000 in schema sc1.\nThe ‘rel_sync_cache_publication_cb’ function is called during the\nexecution of invalidation in both of the above cases. And\n‘rel_sync_cache_publication_cb’ invalidates all the tables in the\ncache.\n\nSolution:\n1. 
When we alter a publication using commands like ‘ALTER PUBLICATION\npub_name DROP TABLE table_name’, first all tables in the publications\nare invalidated using the function ‘rel_sync_cache_relation_cb’. Then\nagain the ‘rel_sync_cache_publication_cb’ function is called, which\ninvalidates all the tables. This happens because of the following\ncallback registered:\nCacheRegisterSyscacheCallback(PUBLICATIONRELMAP,\nrel_sync_cache_publication_cb, (Datum) 0);\n\nSo, I feel this second function call can be avoided. And I have\nincluded changes for the same in the patch. Now the behavior will be\nas follows:\nsuppose pub1 includes tables t1 to t1000, while pub2 contains just\ntable t1001. If pub2 is altered, it will only invalidate t1001.\n\n2. When we add/drop a schema to/from a publication using a command like\n‘ALTER PUBLICATION pub_name ADD TABLES IN SCHEMA schema_name’, first\nall tables in that schema are invalidated using\n‘rel_sync_cache_relation_cb’ and then again the\n‘rel_sync_cache_publication_cb’ function is called, which invalidates\nall the tables. This happens because of the following callback\nregistered:\nCacheRegisterSyscacheCallback(PUBLICATIONNAMESPACEMAP,\nrel_sync_cache_publication_cb, (Datum) 0);\n\nSo, I feel this second function call can be avoided. And I have\nincluded changes for the same in the patch. Now the behavior will be\nas follows:\nsuppose pub1 includes tables t1 to t1000 in schema sc1, while pub2\ncontains just table t1001 in schema sc2. If schema ‘sc2’ is dropped from\npublication ‘pub2’, it will only invalidate table t1001.\n\n3. 
When we alter a namespace using a command like ‘ALTER SCHEMA\nschema_name RENAME to new_schema_name’, all the tables in the cache are\ninvalidated, as ‘rel_sync_cache_publication_cb’ is called due to the\nfollowing registered callback:\nCacheRegisterSyscacheCallback(NAMESPACEOID,\nrel_sync_cache_publication_cb, (Datum) 0);\n\nSo, we added a new callback function ‘rel_sync_cache_namespacerel_cb’\nwhich will be called instead of function ‘rel_sync_cache_publication_cb’,\nand which invalidates only the cache of the tables which are part of that\nparticular namespace. For the new function the ‘namespace id’ is added\nin the Invalidation message.\n\nFor example, suppose namespace ‘sc1’ has tables t1 and t2 and namespace\n‘sc2’ has table t3. Then if we rename namespace ‘sc1’ to ‘sc_new’,\nonly the tables in sc1, i.e. tables t1 and t2, are invalidated.\n\n\nPerformance Comparison:\nI have run the same tests as shared in [1] and observed a significant\ndecrease in the degradation with the new changes. With selective\ninvalidation the degradation is around ~5%. These results are an average of\n3 runs.\n\ncount | Head (sec) | Fix (sec) | Degradation (%)\n-----------------------------------------------------------------------------------------\n10000 | 0.38842567 | 0.405057 | 4.281727827\n50000 | 7.22018834 | 7.605011334 | 5.329819333\n75000 | 15.627181 | 16.38659034 | 4.859541462\n100000 | 27.37910867 | 28.8636873 | 5.422304458\n\nI have attached the patches for the same:\nv9-0001 : distribute invalidation to inprogress transaction\nv9-0002: Selective invalidation\n\n[1]:https://www.postgresql.org/message-id/CANhcyEW4pq6%2BPO_eFn2q%3D23sgV1budN3y4SxpYBaKMJNADSDuA%40mail.gmail.com\n\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Thu, 26 Sep 2024 11:39:33 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "Dear Shlok,\r\n\r\n> Hi,\r\n> \r\n> I tried to add changes to selectively invalidate the cache to reduce\r\n> the performance degradation during the distribution of invalidations.\r\n\r\nThanks for improving the patch!\r\n\r\n>...\r\n> \r\n> Solution:\r\n> 1. When we alter a publication using commands like ‘ALTER PUBLICATION\r\n> pub_name DROP TABLE table_name’, first all tables in the publications\r\n> are invalidated using the function ‘rel_sync_cache_relation_cb’. Then\r\n> again ‘rel_sync_cache_publication_cb’ function is called which\r\n> invalidates all the tables.\r\n\r\nOn my environment, rel_sync_cache_publication_cb() was called first and invalidate\r\nall the entries, then rel_sync_cache_relation_cb() was called and the specified\r\nentry is invalidated - hence second is NO-OP.\r\n\r\n> This happens because of the following\r\n> callback registered:\r\n> CacheRegisterSyscacheCallback(PUBLICATIONRELMAP,\r\n> rel_sync_cache_publication_cb, (Datum) 0);\r\n\r\nBut even in this case, I could understand that you want to remove the\r\nrel_sync_cache_publication_cb() callback.\r\n\r\n> 2. When we add/drop a schema to/from a publication using command like\r\n> ‘ALTER PUBLICATION pub_name ADD TABLES in SCHEMA schema_name’, first\r\n> all tables in that schema are invalidated using\r\n> ‘rel_sync_cache_relation_cb’ and then again\r\n> ‘rel_sync_cache_publication_cb’ function is called which invalidates\r\n> all the tables.\r\n\r\nEven in this case, rel_sync_cache_publication_cb() was called first and then\r\nrel_sync_cache_relation_cb().\r\n\r\n> \r\n> 3. 
When we alter a namespace using command like ‘ALTER SCHEMA\r\n> schema_name RENAME to new_schema_name’ all the table in cache are\r\n> invalidated as ‘rel_sync_cache_publication_cb’ is called due to the\r\n> following registered callback:\r\n> CacheRegisterSyscacheCallback(NAMESPACEOID,\r\n> rel_sync_cache_publication_cb, (Datum) 0);\r\n>\r\n> So, we added a new callback function ‘rel_sync_cache_namespacerel_cb’\r\n> will be called instead of function ‘rel_sync_cache_publication_cb’ ,\r\n> which invalidates only the cache of the tables which are part of that\r\n> particular namespace. For the new function the ‘namespace id’ is added\r\n> in the Invalidation message.\r\n\r\nHmm, I feel this fix is too much. Unlike ALTER PUBLICATION statements, I think\r\nALTER SCHEMA is rarely executed at the production stage. However, this approach\r\nrequires adding a new cache callback system, which affects the entire postgres\r\nsystem; this is not very beneficial compared to the outcome. It should be discussed\r\non another thread to involve more people, and then we can add the improvement\r\nafter being accepted.\r\n\r\n> Performance Comparison:\r\n> I have run the same tests as shared in [1] and observed a significant\r\n> decrease in the degradation with the new changes. With selective\r\n> invalidation degradation is around ~5%. This results are an average of\r\n> 3 runs.\r\n\r\nIIUC, the executed workload did not contain ALTER SCHEMA command, so\r\nthird improvement did not contribute this improvement.\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 26 Sep 2024 11:53:07 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: long-standing data loss bug in initial sync of logical\n replication"
},
{
    "msg_contents": "Hi Kuroda-san,\n\nThanks for reviewing the patch.\n\n> > Solution:\n> > 1. When we alter a publication using commands like ‘ALTER PUBLICATION\n> > pub_name DROP TABLE table_name’, first all tables in the publications\n> > are invalidated using the function ‘rel_sync_cache_relation_cb’. Then\n> > again ‘rel_sync_cache_publication_cb’ function is called which\n> > invalidates all the tables.\n>\n> On my environment, rel_sync_cache_publication_cb() was called first and invalidate\n> all the entries, then rel_sync_cache_relation_cb() was called and the specified\n> entry is invalidated - hence second is NO-OP.\n>\n\nYou are correct. I made a silly mistake while writing the write-up.\nrel_sync_cache_publication_cb() is called first and invalidates all the\nentries, then rel_sync_cache_relation_cb() is called and the specified\nentry is invalidated.\n\n> > This happens because of the following\n> > callback registered:\n> > CacheRegisterSyscacheCallback(PUBLICATIONRELMAP,\n> > rel_sync_cache_publication_cb, (Datum) 0);\n>\n> But even in this case, I could understand that you want to remove the\n> rel_sync_cache_publication_cb() callback.\n\nYes, I think the rel_sync_cache_publication_cb() callback can be removed,\nas it is invalidating all the other tables as well (which are not in\nthis publication).\n\n> > 2. When we add/drop a schema to/from a publication using command like\n> > ‘ALTER PUBLICATION pub_name ADD TABLES in SCHEMA schema_name’, first\n> > all tables in that schema are invalidated using\n> > ‘rel_sync_cache_relation_cb’ and then again\n> > ‘rel_sync_cache_publication_cb’ function is called which invalidates\n> > all the tables.\n>\n> Even in this case, rel_sync_cache_publication_cb() was called first and then\n> rel_sync_cache_relation_cb().\n>\n\nYes, your observation is correct. rel_sync_cache_publication_cb() is\ncalled first and then rel_sync_cache_relation_cb().\n\n> >\n> > 3. 
When we alter a namespace using command like ‘ALTER SCHEMA\n> > schema_name RENAME to new_schema_name’ all the table in cache are\n> > invalidated as ‘rel_sync_cache_publication_cb’ is called due to the\n> > following registered callback:\n> > CacheRegisterSyscacheCallback(NAMESPACEOID,\n> > rel_sync_cache_publication_cb, (Datum) 0);\n> >\n> > So, we added a new callback function ‘rel_sync_cache_namespacerel_cb’\n> > will be called instead of function ‘rel_sync_cache_publication_cb’ ,\n> > which invalidates only the cache of the tables which are part of that\n> > particular namespace. For the new function the ‘namespace id’ is added\n> > in the Invalidation message.\n>\n> Hmm, I feel this fix is too much. Unlike ALTER PUBLICATION statements, I think\n> ALTER SCHEMA is rarely executed at the production stage. However, this approach\n> requires adding a new cache callback system, which affects the entire postgres\n> system; this is not very beneficial compared to the outcome. It should be discussed\n> on another thread to involve more people, and then we can add the improvement\n> after being accepted.\n>\nYes, I also agree with you. I have removed the changes in the updated patch.\n\n> > Performance Comparison:\n> > I have run the same tests as shared in [1] and observed a significant\n> > decrease in the degradation with the new changes. With selective\n> > invalidation degradation is around ~5%. This results are an average of\n> > 3 runs.\n>\n> IIUC, the executed workload did not contain ALTER SCHEMA command, so\n> third improvement did not contribute this improvement.\nI have removed the changes corresponding to the third improvement.\n\nI have addressed the comment for 0002 patch and attached the patches.\nAlso, I have moved the tests in the 0002 to 0001 patch.\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Fri, 27 Sep 2024 16:54:46 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "On Thu, 26 Sept 2024 at 11:39, Shlok Kyal <[email protected]> wrote:\n>\n> > In the v7 patch, I am looping through the reorder buffer of the\n> > current committed transaction and storing all invalidation messages in\n> > a list. Then I am distributing those invalidations.\n> > But I found that for a transaction we already store all the\n> > invalidation messages (see [1]). So we don't need to loop through the\n> > reorder buffer and store the invalidations.\n> >\n> > I have modified the patch accordingly and attached the same.\n> >\n> > [1]: https://github.com/postgres/postgres/blob/7da1bdc2c2f17038f2ae1900be90a0d7b5e361e0/src/include/replication/reorderbuffer.h#L384\n>\n> Hi,\n>\n> I tried to add changes to selectively invalidate the cache to reduce\n> the performance degradation during the distribution of invalidations.\n>\n> Here is the analysis for selective invalidation.\n> Observation:\n> Currently when there is a change in a publication, cache related to\n> all the tables is invalidated including the ones that are not part of\n> any publication and even tables of different publications. For\n> example, suppose pub1 includes tables t1 to t1000, while pub2 contains\n> just table t1001. If pub2 is altered, even though it only has t1001,\n> this change will also invalidate all the tables t1 through t1000 in\n> pub1.\n> Similarly for a namespace, whenever we alter a schema or we add/drop a\n> schema to the publication, cache related to all the tables is\n> invalidated including the ones that are on of different schema. For\n> example, suppose pub1 includes tables t1 to t1000 in schema sc1, while\n> pub2 contains just table t1001 in schema sc2. If schema ‘sc2’ is\n> changed or if it is dropped from publication ‘pub2’ even though it\n> only has t1001, this change will invalidate all the tables t1 through\n> t1000 in schema sc1.\n> ‘rel_sync_cache_publication_cb’ function is called during the\n> execution of invalidation in both above cases. 
And\n> ‘rel_sync_cache_publication_cb’ invalidates all the tables in the\n> cache.\n>\n> Solution:\n> 1. When we alter a publication using commands like ‘ALTER PUBLICATION\n> pub_name DROP TABLE table_name’, first all tables in the publications\n> are invalidated using the function ‘rel_sync_cache_relation_cb’. Then\n> again ‘rel_sync_cache_publication_cb’ function is called which\n> invalidates all the tables. This happens because of the following\n> callback registered:\n> CacheRegisterSyscacheCallback(PUBLICATIONRELMAP,\n> rel_sync_cache_publication_cb, (Datum) 0);\n>\n> So, I feel this second function call can be avoided. And I have\n> included changes for the same in the patch. Now the behavior will be\n> as:\n> suppose pub1 includes tables t1 to t1000, while pub2 contains just\n> table t1001. If pub2 is altered, it will only invalidate t1001.\n>\n> 2. When we add/drop a schema to/from a publication using command like\n> ‘ALTER PUBLICATION pub_name ADD TABLES in SCHEMA schema_name’, first\n> all tables in that schema are invalidated using\n> ‘rel_sync_cache_relation_cb’ and then again\n> ‘rel_sync_cache_publication_cb’ function is called which invalidates\n> all the tables. This happens because of the following callback\n> registered:\n> CacheRegisterSyscacheCallback(PUBLICATIONNAMESPACEMAP,\n> rel_sync_cache_publication_cb, (Datum) 0);\n>\n> So, I feel this second function call can be avoided. And I have\n> included changes for the same in the patch. Now the behavior will be\n> as:\n> suppose pub1 includes tables t1 to t1000 in schema sc1, while pub2\n> contains just table t1001 in schema sc2. If schema ‘sc2’ dropped from\n> publication ‘pub2’, it will only invalidate table t1001.\n>\n> 3. 
When we alter a namespace using command like ‘ALTER SCHEMA\n> schema_name RENAME to new_schema_name’ all the table in cache are\n> invalidated as ‘rel_sync_cache_publication_cb’ is called due to the\n> following registered callback:\n> CacheRegisterSyscacheCallback(NAMESPACEOID,\n> rel_sync_cache_publication_cb, (Datum) 0);\n>\n> So, we added a new callback function ‘rel_sync_cache_namespacerel_cb’\n> will be called instead of function ‘rel_sync_cache_publication_cb’ ,\n> which invalidates only the cache of the tables which are part of that\n> particular namespace. For the new function the ‘namespace id’ is added\n> in the Invalidation message.\n>\n> For example, if namespace ‘sc1’ has table t1 and t2 and a namespace\n> ‘sc2’ has table t3. Then if we rename namespace ‘sc1’ to ‘sc_new’.\n> Only tables in sc1 i.e. tables t1 and table t2 are invalidated.\n>\n>\n> Performance Comparison:\n> I have run the same tests as shared in [1] and observed a significant\n> decrease in the degradation with the new changes. With selective\n> invalidation degradation is around ~5%. This results are an average of\n> 3 runs.\n>\n> count | Head (sec) | Fix (sec) | Degradation (%)\n> -----------------------------------------------------------------------------------------\n> 10000 | 0.38842567 | 0.405057 | 4.281727827\n> 50000 | 7.22018834 | 7.605011334 | 5.329819333\n> 75000 | 15.627181 | 16.38659034 | 4.859541462\n> 100000 | 27.37910867 | 28.8636873 | 5.422304458\n>\n> I have attached the patch for the same\n> v9-0001 : distribute invalidation to inprogress transaction\n> v9-0002: Selective invalidation\n>\n> [1]:https://www.postgresql.org/message-id/CANhcyEW4pq6%2BPO_eFn2q%3D23sgV1budN3y4SxpYBaKMJNADSDuA%40mail.gmail.com\n>\n\nI have also prepared a bar chart for performance comparison between\nHEAD, 0001 patch and (0001+0002) patch and attached here.\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Mon, 30 Sep 2024 10:28:17 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long-standing data loss bug in initial sync of logical\n replication"
},
{
    "msg_contents": "Dear Shlok,\r\n\r\n> I have addressed the comment for 0002 patch and attached the patches.\r\n> Also, I have moved the tests in the 0002 to 0001 patch.\r\n\r\nThanks for updating the patch. The 0002 patch seems to remove cache invalidations\r\nfrom publication_invalidation_cb(). Related to it, I found an issue and have a concern.\r\n\r\n1.\r\nThe replication continues even after ALTER PUBLICATION RENAME is executed.\r\nFor example - assuming that a subscriber subscribes only \"pub\":\r\n\r\n```\r\npub=# INSERT INTO tab values (1);\r\nINSERT 0 1\r\npub=# ALTER PUBLICATION pub RENAME TO pub1;\r\nALTER PUBLICATION\r\npub=# INSERT INTO tab values (2);\r\nINSERT 0 1\r\n\r\nsub=# SELECT * FROM tab ; -- (2) should not be replicated however...\r\n a \r\n---\r\n 1\r\n 2\r\n(2 rows)\r\n```\r\n\r\nThis happens because 1) the ALTER PUBLICATION RENAME statement won't invalidate the\r\nrelation cache, and 2) publications are reloaded only when an invalid RelationSyncEntry\r\nis found. In the given example, the first INSERT creates the valid cache and the second\r\nINSERT reuses it. Therefore, the pubname-check is skipped.\r\n\r\nFor now, the actual renaming is done at AlterObjectRename_internal(), a generic\r\nfunction. I think we must implement a dedicated function for publications and must\r\ninvalidate relcaches there.\r\n\r\n2.\r\nSimilarly to the above, the relcache won't be invalidated when ALTER PUBLICATION\r\nOWNER TO is executed. This means that privilege checks may be ignored if the entry\r\nis still valid. Not sure, but is there a possibility this causes an inconsistency?\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED \r\n\r\n",
"msg_date": "Mon, 30 Sep 2024 10:03:42 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: long-standing data loss bug in initial sync of logical\n replication"
},
{
"msg_contents": "> 2.\r\n> Similarly to the above, the relcache won't be invalidated when ALTER\r\n> PUBLICATION\r\n> OWNER TO is executed. This means that privilege checks may be ignored if the\r\n> entry\r\n> is still valid. Not sure, but is there a possibility this causes an inconsistency?\r\n\r\nHmm, IIUC, the attribute pubowner is not used for now. The paragraph\r\n\"There are currently no privileges on publications....\" [1] may show the current\r\nstatus. However, to keep the current behavior, I suggest invalidating the relcache\r\nof pubrelations when the owner is altered.\r\n\r\n[1]: https://www.postgresql.org/docs/devel/logical-replication-security.html\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 30 Sep 2024 11:25:48 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: long-standing data loss bug in initial sync of logical\n replication"
}
] |
[
{
"msg_contents": "Right now, if allocation fails while growing a hashtable, it's left in\nan inconsistent state and can't be used again.\n\nPatch attached.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 17 Nov 2023 10:42:54 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "simplehash: preserve consistency in case of OOM"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 10:42:54 -0800, Jeff Davis wrote:\n> Right now, if allocation fails while growing a hashtable, it's left in\n> an inconsistent state and can't be used again.\n\nI'm not against allowing this - but I am curious, in which use cases is this\nuseful?\n\n\n> @@ -446,10 +459,11 @@ SH_CREATE(MemoryContext ctx, uint32 nelements, void *private_data)\n> \t/* increase nelements by fillfactor, want to store nelements elements */\n> \tsize = Min((double) SH_MAX_SIZE, ((double) nelements) / SH_FILLFACTOR);\n> \n> -\tSH_COMPUTE_PARAMETERS(tb, size);\n> +\tsize = SH_COMPUTE_SIZE(size);\n> \n> -\ttb->data = (SH_ELEMENT_TYPE *) SH_ALLOCATE(tb, sizeof(SH_ELEMENT_TYPE) * tb->size);\n> +\ttb->data = (SH_ELEMENT_TYPE *) SH_ALLOCATE(tb, sizeof(SH_ELEMENT_TYPE) * size);\n> \n> +\tSH_UPDATE_PARAMETERS(tb, size);\n> \treturn tb;\n> }\n\nMaybe add a comment explaining why it's important to update parameters after\nallocating?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Nov 2023 12:13:34 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simplehash: preserve consistency in case of OOM"
},
{
"msg_contents": "On Fri, 2023-11-17 at 12:13 -0800, Andres Freund wrote:\n> On 2023-11-17 10:42:54 -0800, Jeff Davis wrote:\n> > Right now, if allocation fails while growing a hashtable, it's left\n> > in\n> > an inconsistent state and can't be used again.\n> \n> I'm not against allowing this - but I am curious, in which use cases\n> is this\n> useful?\n\nI committed a cache for search_path (f26c2368dc), and afterwards got\nconcerned that I missed some potential OOM hazards. I separately posted\na patch to fix those (mostly by simplifying things, which in hindsight\nwas how it should have been done to begin with). Along the way, I also\nnoticed that simplehash itself is not safe in that case.\n\nI don't think there are other bugs in the system due to simplehash and\nOOM, because it's mainly used in the executor.\n\nPlease tell me if you think the use of simplehash for a search_path\ncache is the wrong tool for the job.\n\n> Maybe add a comment explaining why it's important to update\n> parameters after\n> allocating?\n\nWill do.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 17 Nov 2023 13:00:19 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simplehash: preserve consistency in case of OOM"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 12:13 PM Andres Freund <[email protected]> wrote:\n>\n> On 2023-11-17 10:42:54 -0800, Jeff Davis wrote:\n> > Right now, if allocation fails while growing a hashtable, it's left in\n> > an inconsistent state and can't be used again.\n\n+1 to the patch.\n\n> I'm not against allowing this - but I am curious, in which use cases is this\n> useful?\n\nI don't have an answer to that, but a guess would be when the server\nis dealing with memory pressure. In my view the patch has merit purely\non the grounds of increasing robustness.\n\n> > @@ -446,10 +459,11 @@ SH_CREATE(MemoryContext ctx, uint32 nelements, void *private_data)\n> > /* increase nelements by fillfactor, want to store nelements elements */\n> > size = Min((double) SH_MAX_SIZE, ((double) nelements) / SH_FILLFACTOR);\n> >\n> > - SH_COMPUTE_PARAMETERS(tb, size);\n> > + size = SH_COMPUTE_SIZE(size);\n> >\n> > - tb->data = (SH_ELEMENT_TYPE *) SH_ALLOCATE(tb, sizeof(SH_ELEMENT_TYPE) * tb->size);\n> > + tb->data = (SH_ELEMENT_TYPE *) SH_ALLOCATE(tb, sizeof(SH_ELEMENT_TYPE) * size);\n> >\n> > + SH_UPDATE_PARAMETERS(tb, size);\n> > return tb;\n> > }\n>\n> Maybe add a comment explaining why it's important to update parameters after\n> allocating?\n\nPerhaps something like this:\n\n+ /*\n+ * Update parameters _after_ allocation succeeds; prevent\n+ * bogus/corrupted state.\n+ */\n+ SH_UPDATE_PARAMETERS(tb, size);\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Fri, 17 Nov 2023 13:00:59 -0800",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simplehash: preserve consistency in case of OOM"
},
{
"msg_contents": "On 2023-11-17 13:00:19 -0800, Jeff Davis wrote:\n> Please tell me if you think the use of simplehash for a search_path\n> cache is the wrong tool for the job.\n\nNo, seems fine. I just was curious - as you said, the older existing users\nwon't ever care about this case.\n\n\n",
"msg_date": "Fri, 17 Nov 2023 13:22:31 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simplehash: preserve consistency in case of OOM"
}
] |
[
{
"msg_contents": "I had briefly experimented changing the hash table in guc.c to use\nsimplehash. It didn't offer any measurable speedup, but the API is\nslightly nicer.\n\nI thought I'd post the patch in case others thought this was a good\ndirection or nice cleanup.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 17 Nov 2023 11:02:31 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 11:02 AM Jeff Davis <[email protected]> wrote:\n>\n> I had briefly experimented changing the hash table in guc.c to use\n> simplehash. It didn't offer any measurable speedup, but the API is\n> slightly nicer.\n>\n> I thought I'd post the patch in case others thought this was a good\n> direction or nice cleanup.\n\nThis is not a comment on the patch itself, but since GUC operations\nare not typically considered performance or space sensitive, this\ncomment from simplehash.h makes a case against it:\n\n * It's probably not worthwhile to generate such a specialized\nimplementation\n * for hash tables that aren't performance or space sensitive.\n\nBut your argument of a nicer API might make a case for the patch.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Fri, 17 Nov 2023 13:22:57 -0800",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Fri, 2023-11-17 at 13:22 -0800, Gurjeet Singh wrote:\n> This is not a comment on the patch itself, but since GUC operations\n> are not typically considered performance or space sensitive,\n\nA \"SET search_path\" clause on a CREATE FUNCTION is a case for better\nperformance in guc.c, because it repeatedly sets and rolls back the\nsetting on each function invocation.\n\nUnfortunately, this patch doesn't really improve the performance. The\nreason the hash table in guc.c is slow is because of the case folding\nin both hashing and comparison. I might get around to fixing that,\nwhich could have a minor impact, and perhaps then the choice between\nhsearch/simplehash would matter.\n\n> this\n> comment from simplehash.h makes a case against it.\n> \n> * It's probably not worthwhile to generate such a specialized\n> implementation\n> * for hash tables that aren't performance or space sensitive.\n> \n> But your argument of a nicer API might make a case for the patch.\n\nYeah, that's what I was thinking. simplehash is newer and has a nicer\nAPI, so if we like it and want to move more code over, this is one\nstep. But if we are fine using both hsearch.h and simplehash.h for\noverlapping use cases indefinitely, then I'll drop this.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 17 Nov 2023 13:44:21 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Fri, 2023-11-17 at 13:22 -0800, Gurjeet Singh wrote:\n>> But your argument of a nicer API might make a case for the patch.\n\n> Yeah, that's what I was thinking. simplehash is newer and has a nicer\n> API, so if we like it and want to move more code over, this is one\n> step. But if we are fine using both hsearch.h and simplehash.h for\n> overlapping use cases indefinitely, then I'll drop this.\n\nI can't imagine wanting to convert *every* hashtable in the system\nto simplehash; the added code bloat would be unreasonable. So yeah,\nI think we'll have two mechanisms indefinitely. That's not to say\nthat we might not rewrite hsearch. But simplehash was never meant\nto be a universal solution.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Nov 2023 17:04:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 13:44:21 -0800, Jeff Davis wrote:\n> On Fri, 2023-11-17 at 13:22 -0800, Gurjeet Singh wrote:\n> > This is not a comment on the patch itself, but since GUC operations\n> > are not typically considered performance or space sensitive,\n\nI don't think that's quite right - we have a lot of GUCs and they're loaded in\neach connection. And there's set/reset around transactions etc. So even\nwithout search path stuff that Jeff mentioned, it could be worth optimizing\nthis.\n\n\n> Yeah, that's what I was thinking. simplehash is newer and has a nicer\n> API, so if we like it and want to move more code over, this is one\n> step. But if we are fine using both hsearch.h and simplehash.h for\n> overlapping use cases indefinitely, then I'll drop this.\n\nRight now there are use cases where simplehash isn't really usable (if stable\npointers to hash elements are needed and/or the entries are very large). I've\nbeen wondering about providing a layer ontop of simplehash, or an option to\nsimplehash, providing that though. That then could perhaps also implement\nruntime defined key sizes.\n\nI think this would be a completely fair thing to port over - whether it's\nworth it I don't quite know, but I'd not be against it on principle or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Nov 2023 14:08:30 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Fri, 2023-11-17 at 17:04 -0500, Tom Lane wrote:\n> I can't imagine wanting to convert *every* hashtable in the system\n> to simplehash; the added code bloat would be unreasonable. So yeah,\n> I think we'll have two mechanisms indefinitely. That's not to say\n> that we might not rewrite hsearch. But simplehash was never meant\n> to be a universal solution.\n\nOK, I will withdraw the patch until/unless it provides a concrete\nbenefit.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 17 Nov 2023 14:08:56 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 17:04:04 -0500, Tom Lane wrote:\n> Jeff Davis <[email protected]> writes:\n> > On Fri, 2023-11-17 at 13:22 -0800, Gurjeet Singh wrote:\n> >> But your argument of a nicer API might make a case for the patch.\n> \n> > Yeah, that's what I was thinking. simplehash is newer and has a nicer\n> > API, so if we like it and want to move more code over, this is one\n> > step. But if we are fine using both hsearch.h and simplehash.h for\n> > overlapping use cases indefinitely, then I'll drop this.\n> \n> I can't imagine wanting to convert *every* hashtable in the system\n> to simplehash; the added code bloat would be unreasonable.\n\nYea. And it's also just not suitable for everything. Stable pointers can be\nvery useful and some places have entries that are too large to be moved during\ncollisions. Chained hashtables have their place.\n\n\n> So yeah, I think we'll have two mechanisms indefinitely. That's not to say\n> that we might not rewrite hsearch.\n\nWe probably should. It's awkward to use, the code is very hard to follow, and\nit's really not very fast. Part of that is due to serving too many masters.\nI doubt it's good idea to use the same code for highly contended, partitioned,\nshared memory hashtables and many tiny local memory hashtables. The design\ngoals are just very different.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Nov 2023 14:17:05 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Fri, 2023-11-17 at 14:08 -0800, Andres Freund wrote:\n> I think this would be a completely fair thing to port over - whether\n> it's\n> worth it I don't quite know, but I'd not be against it on principle\n> or such.\n\nRight now I don't think it offers much. I'll see if I can solve the\ncase-folding slowness first, and then maybe it will be measurable.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 17 Nov 2023 14:19:48 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 14:08:56 -0800, Jeff Davis wrote:\n> On Fri, 2023-11-17 at 17:04 -0500, Tom Lane wrote:\n> > I can't imagine wanting to convert *every* hashtable in the system\n> > to simplehash; the added code bloat would be unreasonable. So yeah,\n> > I think we'll have two mechanisms indefinitely. That's not to say\n> > that we might not rewrite hsearch. But simplehash was never meant\n> > to be a universal solution.\n> \n> OK, I will withdraw the patch until/unless it provides a concrete\n> benefit.\n\nIt might already in the space domain:\n\nSELECT count(*), sum(total_bytes) total_bytes, sum(total_nblocks) total_nblocks, sum(free_bytes) free_bytes, sum(free_chunks) free_chunks, sum(used_bytes) used_bytes\nFROM pg_backend_memory_contexts\nWHERE name LIKE 'GUC%';\n\nHEAD:\n┌───────┬─────────────┬───────────────┬────────────┬─────────────┬────────────┐\n│ count │ total_bytes │ total_nblocks │ free_bytes │ free_chunks │ used_bytes │\n├───────┼─────────────┼───────────────┼────────────┼─────────────┼────────────┤\n│ 2 │ 57344 │ 5 │ 25032 │ 10 │ 32312 │\n└───────┴─────────────┴───────────────┴────────────┴─────────────┴────────────┘\n\nyour patch:\n┌───────┬─────────────┬───────────────┬────────────┬─────────────┬────────────┐\n│ count │ total_bytes │ total_nblocks │ free_bytes │ free_chunks │ used_bytes │\n├───────┼─────────────┼───────────────┼────────────┼─────────────┼────────────┤\n│ 1 │ 36928 │ 3 │ 12360 │ 3 │ 24568 │\n└───────┴─────────────┴───────────────┴────────────┴─────────────┴────────────┘\n\n\nHowever, it fares less well at larger number of GUCs, performance wise. At\nfirst I thought that that's largely because you aren't using SH_STORE_HASH.\nWith that, it's slower when creating a large number of GUCs, but a good bit\nfaster retrieving them. 
But that slowness didn't seem right.\n\n\nThen I noticed that memory usage was too large when creating many GUCs - a bit\nof debugging later, I figured out that that's due to guc_name_hash() being\nterrifyingly bad. There's no bit mixing whatsoever! Which leads to very large\nnumbers of hash conflicts - which simplehash tries to defend against a bit by\nmaking the table larger.\n\n(gdb) p guc_name_hash(\"andres.c2\")\n$14 = 3798554171\n(gdb) p guc_name_hash(\"andres.c3\")\n$15 = 3798554170\n\n\nFixing that makes simplehash always faster, but still doesn't win on memory\nusage at the upper end - the two pointers in GUCHashEntry make it too big.\n\n\nI think, independent of this patch, it might be worth requiring that hash\ntable lookups applied the transformation before the lookup. A comparison\nfunction this expensive is not great...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Nov 2023 15:27:12 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Fri, 2023-11-17 at 15:27 -0800, Andres Freund wrote:\n> At\n> first I thought that that's largely because you aren't using\n> SH_STORE_HASH.\n\nI might want to use that in the search_path cache, then. The lookup\nwasn't showing up much in the profile the last I checked, but I'll take\na second look.\n\n> Then I noticed that memory usage was too large when creating many\n> GUCs - a bit\n> of debugging later, I figured out that that's due to guc_name_hash()\n> being\n> terrifyingly bad. There's no bit mixing whatsoever!\n\nWow.\n\nIt seems like hash_combine() could be more widely used in other places,\ntoo? Here it seems like a worse problem because strings really need\nmixing, and maybe ExecHashGetHashValue doesn't. But it seems easier to\nuse hash_combine() everywhere so that we don't have to think about\nstrange cases.\n\n> I think, independent of this patch, it might be worth requiring that\n> hash\n> table lookups applied the transformation before the lookup. A\n> comparison\n> function this expensive is not great...\n\nThe requested name is already case-folded in most contexts. We can do a\nlookup first, and if that fails, case-fold and try again. I'll hack up\na patch -- I believe that would be measurable for the proconfigs.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 17 Nov 2023 16:01:31 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 16:01:31 -0800, Jeff Davis wrote:\n> On Fri, 2023-11-17 at 15:27 -0800, Andres Freund wrote:\n> > At\n> > first I thought that that's largely because you aren't using\n> > SH_STORE_HASH.\n>\n> I might want to use that in the search_path cache, then. The lookup\n> wasn't showing up much in the profile the last I checked, but I'll take\n> a second look.\n\nIt also matters for insertions, fwiw.\n\n\n> > Then I noticed that memory usage was too large when creating many\n> > GUCs - a bit\n> > of debugging later, I figured out that that's due to guc_name_hash()\n> > being\n> > terrifyingly bad. There's no bit mixing whatsoever!\n>\n> Wow.\n>\n> It seems like hash_combine() could be more widely used in other places,\n> too?\n\nI don't think hash_combine() alone helps that much - you need to actually use\na hash function for the values you are combining. Using a character value\nalone as a 32bit hash value unsurprisingly leads to a very poor distribution of bits\nset in hash values.\n\n\n> Here it seems like a worse problem because strings really need\n> mixing, and maybe ExecHashGetHashValue doesn't. But it seems easier to\n> use hash_combine() everywhere so that we don't have to think about\n> strange cases.\n\nYea.\n\n\n> > I think, independent of this patch, it might be worth requiring that\n> > hash\n> > table lookups applied the transformation before the lookup. A\n> > comparison\n> > function this expensive is not great...\n>\n> The requested name is already case-folded in most contexts. We can do a\n> lookup first, and if that fails, case-fold and try again. I'll hack up\n> a patch -- I believe that would be measurable for the proconfigs.\n\nI'd just always case fold before lookups. The expensive bit of the case\nfolding imo is that you need to do awkward things during hash lookups.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Nov 2023 16:10:03 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Hi,\n\nOn Fri, 2023-11-17 at 16:10 -0800, Andres Freund wrote:\n\n> > The requested name is already case-folded in most contexts. We can\n> > do a\n> > lookup first, and if that fails, case-fold and try again. I'll hack\n> > up\n> > a patch -- I believe that would be measurable for the proconfigs.\n> \n> I'd just always case fold before lookups. The expensive bit of the\n> case\n> folding imo is that you need to do awkward things during hash\n> lookups.\n\nAttached are a bunch of tiny patches and some perf numbers based on\nsimple test described here:\n\nhttps://www.postgresql.org/message-id/04c8592dbd694e4114a3ed87139a7a04e4363030.camel%40j-davis.com\n\n0001: Use simplehash (without SH_STORE_HASH)\n\n0002: fold before lookups\n\n0003: have gen->name_key alias gen->name in typical case. Saves\nallocations in typical case where the name is already folded.\n\n0004: second-chance lookup in hash table (avoids case-folding for\nalready-folded names)\n\n0005: Use SH_STORE_HASH\n\n(These are split out into tiny patches for perf measurement, some are\npretty obvious but I wanted to see the impact, if any.)\n\nNumbers below are cumulative (i.e. 0003 includes 0002 and 0001):\n master: 7899ms\n 0001: 7850\n 0002: 7958\n 0003: 7942\n 0004: 7549\n 0005: 7411\n\nI'm inclined toward all of these patches. I'll also look at adding\nSH_STORE_HASH for the search_path cache.\n\nLooks like we're on track to bring the overhead of SET search_path down\nto reasonable levels. Thank you!\n\nRegards,\n\tJeff Davis",
"msg_date": "Sun, 19 Nov 2023 14:54:41 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 5:54 AM Jeff Davis <[email protected]> wrote:\n>\n> Attached are a bunch of tiny patches and some perf numbers based on\n> simple test described here:\n>\n> https://www.postgresql.org/message-id/04c8592dbd694e4114a3ed87139a7a04e4363030.camel%40j-davis.com\n\nI tried taking I/O out, like this, thinking the times would be less variable:\n\ncat bench.sql\nselect 1 from generate_series(1,500000) x(x), lateral (SELECT\ninc_ab(x)) a offset 10000000;\n\n(with turbo off)\npgbench -n -T 30 -f bench.sql -M prepared\n\nmaster:\nlatency average = 643.625 ms\n0001-0005:\nlatency average = 607.354 ms\n\n...about 5.5% less time, similar to what Jeff found.\n\nI get a noticeable regression in 0002, though, and I think I see why:\n\n guc_name_hash(const char *name)\n {\n- uint32 result = 0;\n+ const unsigned char *bytes = (const unsigned char *)name;\n+ int blen = strlen(name);\n\nThe strlen call required for hashbytes() is not free. The lack of\nmixing in the (probably inlined after 0001) previous hash function can\nremedied directly, as in the attached:\n\n0001-0002 only:\nlatency average = 670.059 ms\n\n0001-0002, plus revert hashbytes, add finalizer:\nlatency average = 656.810 ms\n\n-#define SH_EQUAL(tb, a, b) (guc_name_compare(a, b) == 0)\n+#define SH_EQUAL(tb, a, b) (strcmp(a, b) == 0)\n\nLikewise, I suspect calling out to the C library is going to throw\naway some of the gains that were won by not needing to downcase all\nthe time, but I haven't dug deeper.",
"msg_date": "Tue, 21 Nov 2023 16:42:55 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Tue, 2023-11-21 at 16:42 +0700, John Naylor wrote:\n> The strlen call required for hashbytes() is not free.\n\nShould we have a hash_string() that's like hash_bytes() but checks for\nthe NUL terminator itself?\n\nThat wouldn't be inlinable, but it would save on the strlen() call. It\nmight benefit some other callers?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 21 Nov 2023 09:00:25 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 12:00 AM Jeff Davis <[email protected]> wrote:\n>\n> On Tue, 2023-11-21 at 16:42 +0700, John Naylor wrote:\n> > The strlen call required for hashbytes() is not free.\n>\n> Should we have a hash_string() that's like hash_bytes() but checks for\n> the NUL terminator itself?\n>\n> That wouldn't be inlinable, but it would save on the strlen() call. It\n> might benefit some other callers?\n\nWe do have string_hash(), which...calls strlen. :-)\n\nThinking some more, I'm not quite comfortable with the number of\nplaces in these patches that have to know about the pre-downcased\nstrings, or whether we need that in the first place. If lower case is\ncommon enough to optimize for, it seems the equality function can just\ncheck strict equality on the char and only on mismatch try downcasing\nbefore returning false. Doing our own function would allow the\ncompiler to inline it, or at least keep it on the same page. Further,\nthe old hash function shouldn't need to branch to do the same\ndowncasing, since hashing is lossy anyway. In the keyword hashes, we\njust do \"*ch |= 0x20\", which downcases letters and turns underscores to\nDEL. I can take a stab at that later.\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:09:12 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "I wrote:\n\n> Thinking some more, I'm not quite comfortable with the number of\n> places in these patches that have to know about the pre-downcased\n> strings, or whether we need that in the first place. If lower case is\n> common enough to optimize for, it seems the equality function can just\n> check strict equality on the char and only on mismatch try downcasing\n> before returning false. Doing our own function would allow the\n> compiler to inline it, or at least keep it on the same page. Further,\n> the old hash function shouldn't need to branch to do the same\n> downcasing, since hashing is lossy anyway. In the keyword hashes, we\n> just do \"*ch |= 0x20\", which downcases letters and turns underscores to\n> DEL. I can take a stab at that later.\n\nv4 is a quick POC for that. I haven't verified that it's correct for\nthe case where the probe and the entry don't match, but if it isn't, it\nshould be easy to fix. I also didn't bother with\nSH_STORE_HASH in my testing.\n\n0001 adds the murmur32 finalizer -- we should do that regardless of\nanything else in this thread.\n0002 is just Jeff's 0001\n0003 adds an equality function that downcases lazily, and teaches the\nhash function about the 0x20 trick.\n\nmaster:\nlatency average = 581.765 ms\n\nv3 0001-0005:\nlatency average = 544.576 ms\n\nv4 0001-0003:\nlatency average = 547.489 ms\n\nThis gives similar results with a tiny amount of code (excluding the\nsimplehash conversion). I didn't check if the compiler inlined these\nfunctions, but we can hint it if necessary. We could use the new\nequality function in all the call sites that currently test for\n\"guc_name_compare() == 0\", in which case it might not end up inlined,\nbut that's probably okay.\n\nWe could also try to improve the hash function's collision behavior by\ncollecting the bytes on a uint64 and calling our new murmur64 before\nreturning the lower half, but that's speculative.",
"msg_date": "Wed, 22 Nov 2023 20:06:34 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-21 16:42:55 +0700, John Naylor wrote:\n> I get a noticeable regression in 0002, though, and I think I see why:\n> \n> guc_name_hash(const char *name)\n> {\n> - uint32 result = 0;\n> + const unsigned char *bytes = (const unsigned char *)name;\n> + int blen = strlen(name);\n> \n> The strlen call required for hashbytes() is not free. The lack of\n> mixing in the (probably inlined after 0001) previous hash function can\n> remedied directly, as in the attached:\n\nI doubt this is a good hashfunction. For short strings, sure, but after\nthat... I don't think it makes sense to reduce the internal state of a hash\nfunction to something this small.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Nov 2023 12:50:30 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-11-21 16:42:55 +0700, John Naylor wrote:\n>> The strlen call required for hashbytes() is not free. The lack of\n>> mixing in the (probably inlined after 0001) previous hash function can\n>> remedied directly, as in the attached:\n\n> I doubt this is a good hashfunction. For short strings, sure, but after\n> that... I don't think it makes sense to reduce the internal state of a hash\n> function to something this small.\n\nGUC names are just about always short, though, so I'm not sure you've\nmade your point? At worst, maybe this with 64-bit state instead of 32?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Nov 2023 15:56:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-22 15:56:21 -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-11-21 16:42:55 +0700, John Naylor wrote:\n> >> The strlen call required for hashbytes() is not free. The lack of\n> >> mixing in the (probably inlined after 0001) previous hash function can\n> >> remedied directly, as in the attached:\n>\n> > I doubt this is a good hashfunction. For short strings, sure, but after\n> > that... I don't think it makes sense to reduce the internal state of a hash\n> > function to something this small.\n>\n> GUC names are just about always short, though, so I'm not sure you've\n> made your point?\n\nWith short I meant <= 6 characters (32 / 5 = 6.x). After that you're\noverwriting bits that you previously set, without dispersing the \"overwritten\"\nbits throughout the hash state.\n\nIt's pretty easy to create conflicts this way, even just on paper. E.g. I\nthink abcdefgg and cbcdefgw would have the same hash, because the accumulated\nvalue passed to murmurhash32 is the same.\n\nThe fact that this happens when a large part of the string is the same\nis bad, because it makes it more likely that prefixed strings trigger such\nconflicts, and they're obviously common with GUC strings.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Nov 2023 13:22:21 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-11-22 15:56:21 -0500, Tom Lane wrote:\n>> GUC names are just about always short, though, so I'm not sure you've\n>> made your point?\n\n> With short I meant <= 6 characters (32 / 5 = 6.x). After that you're\n> overwriting bits that you previously set, without dispersing the \"overwritten\"\n> bits throughout the hash state.\n\nI'm less than convinced about the \"overwrite\" part:\n\n+\t\t/* Merge into hash ... not very bright, but it needn't be */\n+\t\tresult = pg_rotate_left32(result, 5);\n+\t\tresult ^= (uint32) ch;\n\nRotating a 32-bit value 5 bits at a time doesn't result in successive\ncharacters lining up exactly, and even once they do, XOR is not\n\"overwrite\". I'm pretty dubious that we need something better than this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Nov 2023 16:27:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-22 16:27:56 -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-11-22 15:56:21 -0500, Tom Lane wrote:\n> >> GUC names are just about always short, though, so I'm not sure you've\n> >> made your point?\n>\n> > With short I meant <= 6 characters (32 / 5 = 6.x). After that you're\n> > overwriting bits that you previously set, without dispersing the \"overwritten\"\n> > bits throughout the hash state.\n>\n> I'm less than convinced about the \"overwrite\" part:\n>\n> +\t\t/* Merge into hash ... not very bright, but it needn't be */\n> +\t\tresult = pg_rotate_left32(result, 5);\n> +\t\tresult ^= (uint32) ch;\n>\n> Rotating a 32-bit value 5 bits at a time doesn't result in successive\n> characters lining up exactly, and even once they do, XOR is not\n> \"overwrite\".\n\nI didn't know what word to use, hence the air quotes. Yes, xor doesn't just\nset the bits to the right hand side in, but it just affects data on a per-bit\nbasis, which easily can be cancelled out.\n\n\nMy understanding of writing hash functions is that every added bit mixed in\nshould have a ~50% chance of causing each other bit to flip. The proposed\nfunction obviously doesn't get there.\n\nIt's worth noting that the limited range of the input values means that\nthere's a lot of bias toward some bits being set ('a' to 'z' all start with\n0b011).\n\n\n> I'm pretty dubious that we need something better than this.\n\nWell, we know that the current attempt at a dedicated hashfunctions for this\ndoes result in substantial amounts of conflicts. And it's hard to understand\nsuch cases when you hit them, so I think it's better to avoid exposing\nourselves to such dangers, without a distinct need.\n\nAnd I don't really see the need here to risk it, even if we are somewhat\nconfident it's fine.\n\nIf, which I mildly doubt, we can't afford to call murmurhash32 for every\ncharacter, we could just call it for 32/5 input characters together. 
Or we\ncould just load up to 8 characters into a 64bit integer, and call\nmurmurhash64.\n\nSomething roughly like\n\nuint64 result = 0;\n\nwhile (*name)\n{\n uint64 value = 0;\n\n for (int i = 0; i < 8 && *name; i++)\n {\n char ch = *name++;\n\n value = (value << 8) | (uint8) ch;\n }\n\n result = hash_combine64(result, murmurhash64(value));\n}\n\nThe hash_combine use isn't quite right either, we should use the full\naccumulator state of a proper hash function, but it seems very unlikely to\nmatter here.\n\n\nThe fact that string_hash() is slow due to the strlen(), which causes us to\nprocess the input twice and which is optimized to also handle very long\nstrings which typically string_hash() doesn't encounter, seems problematic far\nbeyond this case. We use string_hash() in a *lot* of places, and that strlen()\ndoes regularly show up in profiles. We should fix that.\n\nThe various hash functions being external functions also show up in a bunch\nof profiles too. It's particularly ridiculous for cases like tag_hash(),\nwhere the caller typically knows the length, but calls a routine in a\ndifferent translation unit, which obviously can't be optimized for a specific\nlength.\n\n\nI think we ought to adjust our APIs around this:\n\n1) The accumulator state of the hash functions should be exposed, so one can\n accumulate values into the hash state, without reducing the internal state\n to a single 32/64 bit variable.\n\n2) For callers that know the length of data, we should use a static inline\n hash function, rather than an external function call. This should include\n special cased inline functions for adding 32/64bit of data to the hash\n state.\n\n Perhaps with a bit of logic to *not* use the inline version if the hashed\n input is long (and thus the call overhead doesn't matter). 
Something like\n if (__builtin_constant_p(len) && len < 128)\n /* call inline implementation */\n else\n /* call out of line implementation, not worth the code bloat */\n\n\nWe know that hash functions should have the split into init/process\ndata*/finish steps, as e.g. evidenced by pg_crc.h/pg_crc32.h.\n\n\nWith something like that, you could write a function that lowercases\ncharacters inline without incurring unnecessary overhead.\n\n hash32_state hs;\n\n hash32_init(&hs);\n\n while (*name)\n {\n char ch = *name++;\n\n /* crappy lowercase for this situation */\n ch |= 0x20;\n\n hash32_process_byte(&hs, ch);\n }\n\n return hash32_finish(&hs);\n\nPerhaps with some additional optimization for processing the input string in\n32/64 bit quantities.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:34:32 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
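The init/process/finish shape sketched in the message above can be made concrete in a few lines. A minimal sketch, where the hash32_* names come from the sketch in the message, while the FNV-1a mixing constants and the murmur-style finalizer are stand-in assumptions (not what the actual patches use):

```c
#include <assert.h>
#include <stdint.h>

/* Incremental 32-bit hash state, in the shape sketched above.
 * Mixing is plain FNV-1a here, as a stand-in. */
typedef struct hash32_state
{
	uint32_t	h;
} hash32_state;

static inline void
hash32_init(hash32_state *hs)
{
	hs->h = 2166136261u;		/* FNV-1a offset basis */
}

static inline void
hash32_process_byte(hash32_state *hs, unsigned char ch)
{
	hs->h ^= ch;
	hs->h *= 16777619u;			/* FNV prime */
}

static inline uint32_t
hash32_finish(hash32_state *hs)
{
	/* murmur-style final avalanche */
	uint32_t	h = hs->h;

	h ^= h >> 16;
	h *= 0x85ebca6bu;
	h ^= h >> 13;
	return h;
}

/* Lowercase-as-we-go hashing of a GUC name, per the sketch above. */
static uint32_t
guc_name_hash_sketch(const char *name)
{
	hash32_state hs;

	hash32_init(&hs);
	while (*name)
	{
		unsigned char ch = (unsigned char) *name++;

		ch |= 0x20;				/* crappy lowercase for this situation */
		hash32_process_byte(&hs, ch);
	}
	return hash32_finish(&hs);
}
```

With this split, the caller folds case inline without a second pass over the string, and no 32-bit-at-a-time reduction of the internal state happens until the finish step.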
{
"msg_contents": "On Thu, Nov 23, 2023 at 5:34 AM Andres Freund <[email protected]> wrote:\n\n> It's worth noting that the limited range of the input values means that\n> there's a lot of bias toward some bits being set ('a' to 'z' all start with\n> 0b011).\n\nWe can take advantage of the limited range with a single additional\ninstruction: After \"ch |= 0x20\", do \"ch -= ('a' - 1)\". That'll shrink\nletters and underscores to the range [1,31], which fits in 5 bits.\n(Other characters are much less common in a guc name). That increases\nrandomness and allows 12 chars to be xor'd in before the first bits\nrotate around.\n\n> If, which I mildly doubt, we can't afford to call murmurhash32 for every\n> character, we could just call it for 32/5 input characters together. Or we\n> could just load up to 8 characters into an 64bit integer, can call\n> murmurhash64.\n\nI'll play around with this idea, as well.\n\n> The fact that string_hash() is slow due to the strlen(), which causes us to\n> process the input twice and which is optimized to also handle very long\n> strings which typically string_hash() doesn't encounter, seems problematic far\n> beyond this case. We use string_hash() in a *lot* of places, and that strlen()\n> does regularly show up in profiles. We should fix that.\n\n+1\n\n> I think we ought to adjust our APIs around this:\n\n> 1) The accumulator state of the hash functions should be exposed, so one can\n> accumulate values into the hash state, without reducing the internal state\n> to a single 32/64 bit variable.\n\nIf so, it might make sense to vendor a small, suitably licensed hash\nfunction that already has these APIs.\n\nWhile on the subject, it'd be good to have a clear separation between\nin-memory and on-disk usage, so we can make breaking changes in the\nformer.\n\n\n",
"msg_date": "Thu, 23 Nov 2023 17:41:08 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
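The two-instruction fold described in the message above — `ch |= 0x20`, then `ch -= ('a' - 1)` — can be checked in isolation. A small sketch (the function name here is made up for illustration):

```c
#include <assert.h>

/* Fold-and-shrink from the message above: lowercase with |= 0x20, then
 * subtract ('a' - 1) so letters land in [1,26]; '_' (0x5f) folds to
 * 0x7f and lands on 31, so common GUC-name characters fit in 5 bits. */
static inline unsigned char
guc_char_fold(unsigned char ch)
{
	ch |= 0x20;
	ch -= 'a' - 1;
	return ch;
}
```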
{
"msg_contents": "Attached is a rough start with Andres's earlier ideas, to get\nsomething concrete out there.\n\nI took a look around at other implementations a bit. Many modern hash\nfunctions use MUM-style hashing, which typically uses 128-bit\narithmetic. Even if they already have an incremental interface and\nhave a compatible license, it seems a bit too much work to adopt just\nfor a couple string use cases. Might be useful elsewhere, though, but\nthat's off topic.\n\nHowever, I did find a couple hash functions that are much simpler to\nadapt to a bytewise interface, pass SMHasher, and are decently fast on\nshort inputs:\n\n- fast-hash, MIT licensed, and apparently has some use in software [1]\n- MX3, CC0 license (looking around, seems controversial for a code\nlicense, so didn't go further). [2] Seems to be a for-fun project, but\nthe accompanying articles are very informative on how to develop these\nthings.\n\nAfter wacking fast-hash around, it doesn't really resemble the\noriginal much, and if for some reason we went as far as switching out\nthe mixing/final functions, it may as well be called completely\noriginal work. I thought it best to start with something whose mixing\nbehavior passes SMHasher, and hopefully preserve that property.\n\nNote that the combining and final steps share most of their arithmetic\noperations. This may have been done on purpose to minimize binary\nsize, but I didn't check. Also, it incorporates input length into the\ncalculation. Since we don't know the length of C strings up front, I\nthrew that out for now. It'd be possible to track the length as we go\nand incorporate something into the final step. The hard part is\nverifying it hasn't lost any quality.\n\nv5-0001 puts fash-hash as-is into a new header, named in a way to\nconvey in-memory use e.g. hash tables.\n\nv5-0002 does the minimal to allow dynash to use this for string_hash,\ninlined but still calling strlen.\n\nv5-0003 shows one way to do a incremental interface. 
It might be okay\nfor simplehash with fixed length keys, but seems awkward for strings.\n\nv5-0004 shows a bytewise incremental interface, with implementations\nfor dynahash (getting rid of strlen) and guc hash.\n\n[1] https://code.google.com/archive/p/fast-hash/\n[2] https://github.com/jonmaiga/mx3",
"msg_date": "Wed, 29 Nov 2023 20:31:21 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
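For reference, upstream fast-hash [1] is quite compact. A condensed sketch follows; as an assumption, the word loads here go through memcpy to stay alignment-safe (the upstream code casts the buffer to uint64_t* directly and builds the tail byte-by-byte, which this matches on little-endian machines):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Condensed sketch of fast-hash's 64-bit variant (MIT licensed, [1]). */
static inline uint64_t
fh_mix(uint64_t h)
{
	h ^= h >> 23;
	h *= UINT64_C(0x2127599bf4325c37);
	h ^= h >> 47;
	return h;
}

static uint64_t
fasthash64(const void *buf, size_t len, uint64_t seed)
{
	const uint64_t m = UINT64_C(0x880355f21e6d1965);
	const unsigned char *p = (const unsigned char *) buf;
	uint64_t	h = seed ^ (len * m);	/* input length mixed in up front */
	uint64_t	v;

	for (; len >= 8; len -= 8, p += 8)
	{
		memcpy(&v, p, 8);		/* alignment-safe word load */
		h = (h ^ fh_mix(v)) * m;
	}
	if (len > 0)
	{
		v = 0;
		memcpy(&v, p, len);		/* remaining 1..7 bytes */
		h = (h ^ fh_mix(v)) * m;
	}
	return fh_mix(h);
}
```

Note how the length is baked into the starting state on the first line of fasthash64 — this is exactly the part that has to be dropped or reworked when hashing C strings whose length isn't known up front, and the combining step and the finalizer share the same fh_mix arithmetic.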
{
"msg_contents": "On 29/11/2023 15:31, John Naylor wrote:\n> However, I did find a couple hash functions that are much simpler to\n> adapt to a bytewise interface, pass SMHasher, and are decently fast on\n> short inputs:\n> \n> - fast-hash, MIT licensed, and apparently has some use in software [1]\n> - MX3, CC0 license (looking around, seems controversial for a code\n> license, so didn't go further). [2] Seems to be a for-fun project, but\n> the accompanying articles are very informative on how to develop these\n> things.\n> \n> After wacking fast-hash around, it doesn't really resemble the\n> original much, and if for some reason we went as far as switching out\n> the mixing/final functions, it may as well be called completely\n> original work. I thought it best to start with something whose mixing\n> behavior passes SMHasher, and hopefully preserve that property.\n\nI didn't understand what you meant by the above. Did you wack around \nfast-hash, or who did? Who switched mixing/final functions; compared to \nwhat? The version you have in the patch matches the implementation in \nsmhasher, did you mean that the smhasher author changed it compared to \nthe original?\n\nIn any case, +1 on the implementation you had in the patch at a quick \nglance.\n\nLet's also replace the partial murmurhash implementations we have in \nhashfn.h with this. It's a very similar algorithm, and we don't need two.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 29 Nov 2023 16:59:37 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 9:59 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> I didn't understand what you meant by the above. Did you wack around\n> fast-hash, or who did?\n\nI turned it into an init/accum/final style (shouldn't affect the\nresult), and took out the input length from the calculation (will\naffect the result and I'll look into putting it back some other way).\n\n> Who switched mixing/final functions; compared to\n> what?\n\nSorry for the confusion. I didn't change those, I was speaking hypothetically.\n\n> In any case, +1 on the implementation you had in the patch at a quick\n> glance.\n>\n> Let's also replace the partial murmurhash implementations we have in\n> hashfn.h with this. It's a very similar algorithm, and we don't need two.\n\nThanks for taking a look! For small fixed-sized values, it's common to\nspecial-case a murmur-style finalizer regardless of the algorithm for\nlonger inputs. Syscache combines multiple hashes for multiple keys, so\nit's probably worth it to avoid adding cycles there.\n\n\n",
"msg_date": "Wed, 29 Nov 2023 23:26:21 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 8:31 PM John Naylor <[email protected]> wrote:\n>\n> Attached is a rough start with Andres's earlier ideas, to get\n> something concrete out there.\n\nWhile looking at the assembly out of curiosity, I found a couple bugs\nin the split API that I've fixed locally.\n\nI think the path forward is:\n\n- performance measurements with both byte-at-a-time and\nword-at-a-time, once I make sure they're fixed\n- based on the above decide which one is best for guc_name_hash\n- clean up hash function implementation\n- test with with a new guc_name_compare (using what we learned from my\nguc_name_eq) and see how well we do with keeping dynahash vs.\nsimplehash\n\nSeparately, for string_hash:\n\n- run SMHasher and see about reincorporating length in the\ncalculation. v5 should be a clear improvement in collision behavior\nover the current guc_name_hash, but we need to make sure it's at least\nas good as hash_bytes, and ideally not lose anything compared to\nstandard fast_hash.\n\n\n",
"msg_date": "Sat, 2 Dec 2023 15:35:24 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, 2023-11-29 at 20:31 +0700, John Naylor wrote:\n> v5-0001 puts fash-hash as-is into a new header, named in a way to\n> convey in-memory use e.g. hash tables.\n> \n> v5-0002 does the minimal to allow dynash to use this for string_hash,\n> inlined but still calling strlen.\n> \n> v5-0003 shows one way to do a incremental interface. It might be okay\n> for simplehash with fixed length keys, but seems awkward for strings.\n> \n> v5-0004 shows a bytewise incremental interface, with implementations\n> for dynahash (getting rid of strlen) and guc hash.\n\nI'm trying to follow the distinctions you're making between dynahash\nand simplehash -- are you saying it's easier to do incremental hashing\nwith dynahash, and if so, why?\n\nIf I understood what Andres was saying, the exposed hash state would be\nuseful for writing a hash function like guc_name_hash(). But whether we\nuse simplehash or dynahash is a separate question, right?\n\nAlso, while the |= 0x20 is a nice trick for lowercasing, did we decide\nthat it's better than my approach in patch 0004 here:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nwhich optimizes exact hits (most GUC names are already folded) before\ntrying case folding?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sun, 03 Dec 2023 13:16:20 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Mon, Dec 4, 2023 at 4:16 AM Jeff Davis <[email protected]> wrote:\n> I'm trying to follow the distinctions you're making between dynahash\n> and simplehash -- are you saying it's easier to do incremental hashing\n> with dynahash, and if so, why?\n\nThat's a good thing to clear up. This thread has taken simplehash as a\nstarting point from the very beginning. It initially showed no\nimprovement, and then we identified problems with the hashing and\nequality computations. The latter seem like independently commitable\nimprovements, so I'm curious if they help on their own, even if we\nstill need to switch to simplehash as a last step to meet your\nperformance goals.\n\n> If I understood what Andres was saying, the exposed hash state would be\n> useful for writing a hash function like guc_name_hash().\n\n From my point of view, it would at least be useful for C-strings,\nwhere we don't have the length available up front.\n\nAside from that, we have multiple places that compute full 32-bit\nhashes on multiple individual values, and then combine them with\nvarious ad-hoc ways. It could be worth exploring whether an\nincremental interface would be better in those places on a\ncase-by-case basis.\n\n(If Andres had something else in mind, I'll let him address that.)\n\n> But whether we\n> use simplehash or dynahash is a separate question, right?\n\nRight, the table implementation should treat the hash function as a\nblack box. Think of the incremental API as lower-level building blocks\nfor building hash functions.\n\n> Also, while the |= 0x20 is a nice trick for lowercasing, did we decide\n> that it's better than my approach in patch 0004 here:\n>\n> https://www.postgresql.org/message-id/[email protected]\n>\n> which optimizes exact hits (most GUC names are already folded) before\n> trying case folding?\n\nNote there were two aspects there: hashing and equality. 
I demonstrated in\n\nhttps://www.postgresql.org/message-id/CANWCAZbQ30O9j-bEZ_1zVCyKPpSjwbE4u19cSDDBJ%3DTYrHvPig%40mail.gmail.com\n\n... in v4-0003 that the equality function can be optimized for\nalready-folded names (and in fact measured almost equally) using way,\nway, way less code.\n\n\n",
"msg_date": "Mon, 4 Dec 2023 12:12:06 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Mon, 2023-12-04 at 12:12 +0700, John Naylor wrote:\n> That's a good thing to clear up. This thread has taken simplehash as\n> a\n> starting point from the very beginning. It initially showed no\n> improvement, and then we identified problems with the hashing and\n> equality computations. The latter seem like independently commitable\n> improvements, so I'm curious if they help on their own, even if we\n> still need to switch to simplehash as a last step to meet your\n> performance goals.\n\nThere's already a patch to use simplehash, and the API is a bit\ncleaner, and there's a minor performance improvement. It seems fairly\nnon-controversial -- should I just proceed with that patch?\n\n> > If I understood what Andres was saying, the exposed hash state\n> > would be\n> > useful for writing a hash function like guc_name_hash().\n> \n> From my point of view, it would at least be useful for C-strings,\n> where we don't have the length available up front.\n\nThat's good news.\n\nBy the way, is there any reason that we would need hash_bytes(s,\nstrlen(s)) == cstring_hash(s)?\n\n> > Also, while the |= 0x20 is a nice trick for lowercasing, did we\n> > decide\n> > that it's better than my approach in patch 0004 here:\n> > \n> > https://www.postgresql.org/message-id/[email protected]\n> > \n> > which optimizes exact hits (most GUC names are already folded)\n> > before\n> > trying case folding?\n> \n> Note there were two aspects there: hashing and equality. I\n> demonstrated in\n> \n> https://www.postgresql.org/message-id/CANWCAZbQ30O9j-bEZ_1zVCyKPpSjwbE4u19cSDDBJ%3DTYrHvPig%40mail.gmail.com\n> \n> ... 
in v4-0003 that the equality function can be optimized for\n> already-folded names (and in fact measured almost equally) using way,\n> way, way less code.\n\nThinking in terms of API layers, there are two approaches: (a) make the\nhash and equality functions aware of the case-insensitivity, as we\ncurrently do; or (b) make it the caller's responsibility to do case\nfolding, and the hash and equality functions are based on exact\nequality.\n\nEach approach has its own optimization techniques. In (a), we can use\nthe |= 0x20 trick, and for equality do a memcmp() check first. In (b),\nthe caller can first try lookup of the key in whatever form is\nprovided, and only if that fails, case-fold it and try again.\n\nAs a tangential point, we may eventually want to provide a more\ninternationalized definition of \"case insensitive\" for GUC names. That\nwould be slightly easier with (b) than with (a), but we can cross that\nbridge if and when we come to it.\n\nIt seems you are moving toward (a) whereas my patches moved toward (b).\nI am fine with either approach but I wanted to clarify which approach\nwe are using.\n\nIn the abstract, I kind of like approach (b) because we don't need to\nbe as special/clever with the hash functions. We would still want the\nfaster hash for C-strings, but that's general and helps all callers.\nBut you're right that it's more code, and that's not great.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 04 Dec 2023 10:57:27 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Tue, Dec 5, 2023 at 1:57 AM Jeff Davis <[email protected]> wrote:\n>\n> On Mon, 2023-12-04 at 12:12 +0700, John Naylor wrote:\n\n> There's already a patch to use simplehash, and the API is a bit\n> cleaner, and there's a minor performance improvement. It seems fairly\n> non-controversial -- should I just proceed with that patch?\n\nI won't object if you want to commit that piece now, but I hesitate to\ncall it a performance improvement on its own.\n\n- The runtime measurements I saw reported were well within the noise level.\n- The memory usage starts out better, but with more entries is worse.\n\n> > From my point of view, it would at least be useful for C-strings,\n> > where we don't have the length available up front.\n>\n> That's good news.\n>\n> By the way, is there any reason that we would need hash_bytes(s,\n> strlen(s)) == cstring_hash(s)?\n\n\"git grep cstring_hash\" found nothing, so not sure what you're asking.\n\n> Each approach has its own optimization techniques. In (a), we can use\n> the |= 0x20 trick, and for equality do a memcmp() check first.\n\nI will assume you are referring to semantics, but on the odd chance\nreaders take this to mean the actual C library call, that wouldn't be\nan optimization, that'd be a pessimization.\n\n> As a tangential point, we may eventually want to provide a more\n> internationalized definition of \"case insensitive\" for GUC names. That\n> would be slightly easier with (b) than with (a), but we can cross that\n> bridge if and when we come to it.\n\nThe risk/reward ratio seems pretty bad.\n\n> It seems you are moving toward (a) whereas my patches moved toward (b).\n> I am fine with either approach but I wanted to clarify which approach\n> we are using.\n\nI will make my case:\n\n> In the abstract, I kind of like approach (b) because we don't need to\n> be as special/clever with the hash functions.\n\nIn the abstract, I consider (b) to be a layering violation. 
As a\nconsequence, the cleverness in (b) is not confined to one or two\nplaces, but is smeared over a whole bunch of places. I find it hard to\nfollow.\n\nConcretely, it also adds another pointer to the element struct. That's\nnot good for a linear open-addressing array, which simplehash has.\n\nFurther, remember the equality function is important as well. In v3,\nit was \"strcmp(a,b)==0\", which is a holdover from the dynahash API.\nOne of the advantages of the simplehash API is that we can 1) use an\nequality function, which should be slightly cheaper than a full\ncomparison function, and 2) we have the option to inline it. (It\ndoesn't make sense in turn, to jump to a shared lib page and invoke an\nindirect function call.) Once we've done that, it's already \"special\",\nso it's not a stretch to make it do what we want to begin with. If a\nnicer API is important, why not use it?\n\n\n",
"msg_date": "Wed, 6 Dec 2023 07:39:00 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
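The inline equality function discussed above might look like the following sketch. This is a hypothetical reconstruction for illustration, not the v4-0003 code: it assumes the probe key has already been case-folded, and folds the stored name on the fly (ASCII only):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical inline equality for GUC names: 'probe' is assumed to be
 * already case-folded; 'stored' is folded one character at a time. */
static inline bool
guc_name_eq(const char *probe, const char *stored)
{
	while (*probe && *stored)
	{
		char		ch = *stored;

		if (ch >= 'A' && ch <= 'Z')
			ch += 'a' - 'A';
		if (*probe != ch)
			return false;
		probe++;
		stored++;
	}
	return *probe == *stored;	/* true only if both reached the NUL */
}
```

Being static inline, a sketch like this can be handed to simplehash as its equality hook and compiled into the specialized table code, rather than reached through an indirect call into a shared library.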
{
"msg_contents": "On Wed, 2023-12-06 at 07:39 +0700, John Naylor wrote:\n> \"git grep cstring_hash\" found nothing, so not sure what you're\n> asking.\n\nSorry, I meant string_hash(). Your v5-0002 changes the way hashing\nworks for cstrings, and that means it's no longer equivalent to\nhash_bytes with strlen. That's probably fine, but someone might assume\nthat they are equivalent.\n\n> \n> In the abstract, I consider (b) to be a layering violation. As a\n> consequence, the cleverness in (b) is not confined to one or two\n> places, but is smeared over a whole bunch of places. I find it hard\n> to\n> follow.\n\nOK. I am fine with (a).\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 06 Dec 2023 08:48:15 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 11:48 PM Jeff Davis <[email protected]> wrote:\n>\n> On Wed, 2023-12-06 at 07:39 +0700, John Naylor wrote:\n> > \"git grep cstring_hash\" found nothing, so not sure what you're\n> > asking.\n>\n> Sorry, I meant string_hash(). Your v5-0002 changes the way hashing\n> works for cstrings, and that means it's no longer equivalent to\n> hash_bytes with strlen. That's probably fine, but someone might assume\n> that they are equivalent.\n\nThat's a good point. It might be best to leave string_hash where it is\nand remove the comment that it's the default. Then the new function (I\nlike the name cstring_hash) can live in dynahash.c where it's obvious\nwhat \"default\" means.\n\n\n",
"msg_date": "Thu, 7 Dec 2023 08:38:03 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, 2023-11-29 at 20:31 +0700, John Naylor wrote:\n> Attached is a rough start with Andres's earlier ideas, to get\n> something concrete out there.\n\nThe implementation of string hash in 0004 forgot to increment 'buf'.\n\nI tested using the new hash function APIs for my search path cache, and\nthere's a significant speedup for cases not benefiting from a86c61c9ee.\nIt's enough that we almost don't need a86c61c9ee. So a definite +1 to\nthe new APIs.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 08 Dec 2023 12:32:27 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "\nI committed 867dd2dc87, which means my use case for a fast GUC hash\ntable (quickly setting proconfigs) is now solved.\n\nAndres mentioned that it could still be useful to reduce overhead in a\nfew other places:\n\nhttps://postgr.es/m/[email protected]\n\nHow should we evaluate GUC hash table performance optimizations? Just\nmicrobenchmarks, or are there end-to-end tests where the costs are\nshowing up?\n\n(As I said in another email, I think the hash function APIs justify\nthemselves regardless of improvements to the GUC hash table.)\n\nOn Wed, 2023-12-06 at 07:39 +0700, John Naylor wrote:\n> > There's already a patch to use simplehash, and the API is a bit\n> > cleaner, and there's a minor performance improvement. It seems\n> > fairly\n> > non-controversial -- should I just proceed with that patch?\n> \n> I won't object if you want to commit that piece now, but I hesitate\n> to\n> call it a performance improvement on its own.\n> \n> - The runtime measurements I saw reported were well within the noise\n> level.\n> - The memory usage starts out better, but with more entries is worse.\n\nI suppose I'll wait until there's a reason to commit it, then.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 08 Dec 2023 12:34:59 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Sat, Dec 9, 2023 at 3:32 AM Jeff Davis <[email protected]> wrote:\n>\n> On Wed, 2023-11-29 at 20:31 +0700, John Naylor wrote:\n> > Attached is a rough start with Andres's earlier ideas, to get\n> > something concrete out there.\n>\n> The implementation of string hash in 0004 forgot to increment 'buf'.\n\nYeah, that was one of the bugs I mentioned. In v6, I fixed it so we\nget the right answer.\n\n0001 pure copy of fasthash upstream\n0002 keeps the originals for validation, and then re-implements them\nusing the new incremental interfaces\n0003 adds UINT64CONST. After writing this I saw that murmur64 didn't\nhave UINT64CONST (and obviously no buildfarm member complained), so\nprobably not needed.\n0004 Assert that the original and incrementalized versions give the\nsame answer. This requires the length to be known up front.\n0005 Demo with pgstat_hash_hash_key, which currently runs 3 finalizers\njoined with hash_combine. Might shave a few cycles.\n0006 Add bytewise interface for C strings.\n\n0007 Use it in guc_name_hash\n0008 Teach guc_name_cmp to case fold lazily\n\nI'll test these two and see if there's a detectable difference. Then\neach of these:\n\n0009 Jeff's conversion to simplehash\n0010 Use an inline equality function for guc nam. hash\n0011/12 An experiment to push case-folding down inside fasthash. It's\nnot great looking, but I'm curious if it makes a difference.\n\n0013 Get rid of strlen in dynahash with default string hashing. I'll\nhold on to this and start a new thread, because it's off-topic and has\nsome open questions.\n\nI haven't tested yet, but I want to see what CI thinks.\n\n> I tested using the new hash function APIs for my search path cache, and\n> there's a significant speedup for cases not benefiting from a86c61c9ee.\n> It's enough that we almost don't need a86c61c9ee. So a definite +1 to\n> the new APIs.\n\nDo you have a new test?",
"msg_date": "Sat, 9 Dec 2023 18:52:55 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Sat, 2023-12-09 at 18:52 +0700, John Naylor wrote:\n> > I tested using the new hash function APIs for my search path cache,\n> > and\n> > there's a significant speedup for cases not benefiting from\n> > a86c61c9ee.\n> > It's enough that we almost don't need a86c61c9ee. So a definite +1\n> > to\n> > the new APIs.\n> \n> Do you have a new test?\n\nStill using the same basic test here:\n\nhttps://www.postgresql.org/message-id/04c8592dbd694e4114a3ed87139a7a04e4363030.camel%40j-davis.com\n\nWhat I did is:\n\n a. add your v5 patches\n b. disable optimization in a86c61c9ee\n c. add attached patch to use new hash APIs\n\nI got a slowdown between (a) and (b), and then (c) closed the gap about\nhalfway. It started to get close to test noise at that point -- I could\nget some better numbers out of it if it's helpful.\n\nAlso, what I'm doing in the attached path is using part of the key as\nthe seed. Is that a good idea or should the seed be zero or come from\nsomewhere else?\n\nRegards,\n\tJeff Davis",
"msg_date": "Sat, 09 Dec 2023 11:18:15 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Sun, Dec 10, 2023 at 2:18 AM Jeff Davis <[email protected]> wrote:\n>\n> On Sat, 2023-12-09 at 18:52 +0700, John Naylor wrote:\n> > > I tested using the new hash function APIs for my search path cache,\n> > > and\n> > > there's a significant speedup for cases not benefiting from\n> > > a86c61c9ee.\n> > > It's enough that we almost don't need a86c61c9ee. So a definite +1\n> > > to\n> > > the new APIs.\n\nInteresting, thanks for testing! SearchPathCache is a better starting\npoint than dynahash for removing strlen calls anyway -- it's more\nlocalized, uses simplehash, and we can test it with at-hand tests.\n\n> > Do you have a new test?\n>\n> Still using the same basic test here:\n>\n> https://www.postgresql.org/message-id/04c8592dbd694e4114a3ed87139a7a04e4363030.camel%40j-davis.com\n>\n> What I did is:\n>\n> a. add your v5 patches\n> b. disable optimization in a86c61c9ee\n> c. add attached patch to use new hash APIs\n\nOf course, the CF bot doesn't know this, so it crashed and burned\nbefore I had a chance to check how v6 did. I'm attaching v7 which just\nimproves commit messages for reviewing, and gets rid of git whitespace\nerrors.\n\nMy local branch of master is still at 457428d9e99b6 from Dec 4. That's\nbefore both a86c61c9ee (Optimize SearchPathCache by saving the last\nentry.) and 867dd2dc87 (Cache opaque handle for GUC option to avoid\nrepeasted lookups.). My plan was to keep testing against Dec. 4, but\nlike you I'm not sure if there is a better GUC test to do now.\n\n> I got a slowdown between (a) and (b), and then (c) closed the gap about\n> halfway. It started to get close to test noise at that point -- I could\n> get some better numbers out of it if it's helpful.\n\nWe can also try (c) with using the \"chunked\" interface. Also note your\npatch may no longer apply on top of v6 or v7.\n\n> Also, what I'm doing in the attached path is using part of the key as\n> the seed. 
Is that a good idea or should the seed be zero or come from\n> somewhere else?\n\nI think whether to use part of the key as a seed is a judgment call.\nSee this part in resowner.c:\n\n/*\n * Most resource kinds store a pointer in 'value', and pointers are unique\n * all on their own. But some resources store plain integers (Files and\n * Buffers as of this writing), so we want to incorporate the 'kind' in\n * the hash too, otherwise those resources will collide a lot. But\n * because there are only a few resource kinds like that - and only a few\n * resource kinds to begin with - we don't need to work too hard to mix\n * 'kind' into the hash. Just add it with hash_combine(), it perturbs the\n * result enough for our purposes.\n */\n#if SIZEOF_DATUM == 8\n return hash_combine64(murmurhash64((uint64) value), (uint64) kind);\n\nGiven these comments, I'd feel free to use the \"kind\" as the seed if I\nwere writing this with fasthash.\n\nThe caller-provided seed can probably be zero unless we have a good\nreason to, like the above, but with the incremental interface there is\nan issue:\n\nhs->hash = seed ^ (len * UINT64CONST(0x880355f21e6d1965));\n\nPassing length 0 will wipe out the internal seed here, and that can't be good.\n\n1) We could by convention pass \"1\" as the length for strings. That\ncould be a macro like\n\n#define FH_UNKNOWN_LENGTH 1\n\n...and maybe Assert(len != 0 || seed != 0)\n\nOr 2) we could detect zero and force it to be one, but it's best if\nthe compiler can always constant-fold that branch. Future work may\ninvalidate that assumption.",
"msg_date": "Sun, 10 Dec 2023 13:26:31 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "I wrote:\n\n> On Sun, Dec 10, 2023 at 2:18 AM Jeff Davis <[email protected]> wrote:\n> >\n> > On Sat, 2023-12-09 at 18:52 +0700, John Naylor wrote:\n> > > > I tested using the new hash function APIs for my search path cache,\n> > > > and\n> > > > there's a significant speedup for cases not benefiting from\n> > > > a86c61c9ee.\n> > > > It's enough that we almost don't need a86c61c9ee. So a definite +1\n> > > > to\n> > > > the new APIs.\n>\n> Interesting, thanks for testing! SearchPathCache is a better starting\n> point than dynahash for removing strlen calls anyway -- it's more\n> localized, uses simplehash, and we can test it with at-hand tests.\n\nSince I had to fix a misalignment in the original to keep ubsan from\ncrashing CI anyway (v8-0005), I thought I'd take the experiment with\nsearch path cache and put the temporary validation of the hash\nfunction output in there (v8-0004). I had to finagle a bit to get the\nbytewise interface to give the same answer as the original, but that's\nokay: The bytewise interface is intended for when we don't know the\nlength up front (and therefore the internal seed can't be tweaked with\nthe length), but it's nice to make sure nothing's broken.\n\nThere is also a chunkwise version for search path cache. That might be\na little faster. Perf testing can be done as is, because I put the\nvalidation in assert builds only.\n\nI've left out the GUC stuff for now, just want to get CI green again.",
"msg_date": "Sun, 10 Dec 2023 21:57:04 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Sun, Dec 10, 2023 at 2:18 AM Jeff Davis <[email protected]> wrote:\n>\n> On Sat, 2023-12-09 at 18:52 +0700, John Naylor wrote:\n> > > I tested using the new hash function APIs for my search path cache,\n> > > and\n> > > there's a significant speedup for cases not benefiting from\n> > > a86c61c9ee.\n> > > It's enough that we almost don't need a86c61c9ee. So a definite +1\n> > > to\n> > > the new APIs.\n> >\n> > Do you have a new test?\n>\n> Still using the same basic test here:\n>\n> https://www.postgresql.org/message-id/04c8592dbd694e4114a3ed87139a7a04e4363030.camel%40j-davis.com\n>\n> What I did is:\n>\n> a. add your v5 patches\n> b. disable optimization in a86c61c9ee\n> c. add attached patch to use new hash APIs\n>\n> I got a slowdown between (a) and (b), and then (c) closed the gap about\n> halfway. It started to get close to test noise at that point -- I could\n> get some better numbers out of it if it's helpful.\n\nI tried my variant of the same test [1] (but only 20 seconds per run),\nwhich uses pgbench to take the average of a few dozen runs, and\ndoesn't use table I/O (when doing that, it's best to pre-warm the\nbuffers to reduce noise).\n\npgbench -n -T 20 -f bench.sql -M prepared\n(done three times and take the median, with turbo off)\n\n* master at 457428d9e99b6b from Dec 4:\nlatency average = 571.413 ms\n\n* v8 (bytewise hash):\nlatency average = 588.942 ms\n\nThis regression is a bit surprising, since there is no strlen call,\nand it uses roleid as a seed without a round of mixing (not sure if we\nshould do that, but just trying to verify results).\n\n* v8 with chunked interface:\nlatency average = 555.688 ms\n\nThis starts to improve things for me.\n\n* v8 with chunked, and return lower 32 bits of full 64-bit hash:\nlatency average = 556.324 ms\n\nThis is within the noise level. 
There doesn't seem to be much downside\nof using a couple cycles for fasthash's 32-bit reduction.\n\n* revert back to master from Dec 4 and then cherry pick a86c61c9ee\n(save last entry of SearchPathCache)\nlatency average = 545.747 ms\n\nSo chunked incremental hashing gets within ~2% of that, which is nice.\nIt seems we should use that when removing strlen, when convenient.\n\nUpdated next steps:\n* Investigate whether/how to incorporate final length into the\ncalculation when we don't have the length up front.\n* Add some desperately needed explanatory comments.\n* Use this in some existing cases where it makes sense.\n* Get back to GUC hash and dynahash.\n\n[1] https://www.postgresql.org/message-id/CANWCAZY7Cr-GdUhrCLoR4%2BJGLChTb0pQxq9ZPi1KTLs%2B_KDFqg%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 12 Dec 2023 12:22:38 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "I wrote:\n>\n> * v8 with chunked interface:\n> latency average = 555.688 ms\n>\n> This starts to improve things for me.\n>\n> * v8 with chunked, and return lower 32 bits of full 64-bit hash:\n> latency average = 556.324 ms\n>\n> This is within the noise level. There doesn't seem to be much downside\n> of using a couple cycles for fasthash's 32-bit reduction.\n>\n> * revert back to master from Dec 4 and then cherry pick a86c61c9ee\n> (save last entry of SearchPathCache)\n> latency average = 545.747 ms\n>\n> So chunked incremental hashing gets within ~2% of that, which is nice.\n> It seems we should use that when removing strlen, when convenient.\n>\n> Updated next steps:\n> * Investigate whether/how to incorporate final length into the\n> calculation when we don't have the length up front.\n> * Add some desperately needed explanatory comments.\n> * Use this in some existing cases where it makes sense.\n> * Get back to GUC hash and dynahash.\n\nFor #1 here, I cloned SMHasher and was dismayed at the complete lack\nof documentation, but after some poking around, found how to run the\ntests, using the 32-bit hash to save time. It turns out that the input\nlength is important. I've attached two files of results -- \"nolen\"\nmeans stop using the initial length to tweak the internal seed. As you\ncan see, there are 8 failures. \"pluslen\" means I then incorporated the\nlength within the finalizer. This *does* pass SMHasher, so that's\ngood. (of course this way can't produce the same hash as when we know\nthe length up front, but that's not important). The attached shows how\nthat would work, further whacking around and testing with Jeff's\nprototype for the search path cache hash table. I'll work on code\ncomments and get it polished.",
"msg_date": "Fri, 15 Dec 2023 08:20:12 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "I wrote:\n> Updated next steps:\n\n> * Add some desperately needed explanatory comments.\n\nThere is a draft of this in v10-0001. I also removed the validation\nscaffolding and ran pgindent. This could use some review and/or\nbikeshedding, in particular on the name hashfn_unstable.h. I also\nconsidered *_volatile.h or *_inmemory.h, but nothing stands out as\nmore clear.\n\n> * Use this in some existing cases where it makes sense.\n\nFor now just two:\nv10-0002 is Jeff's change to the search path cache, but with the\nchunked interface that I found to be faster.\nv10-0003 is a copy of something buried in an earlier version: use in\npgstat. Looks nicer, but not yet tested.",
"msg_date": "Mon, 18 Dec 2023 13:39:02 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Mon, 2023-12-18 at 13:39 +0700, John Naylor wrote:\n> For now just two:\n> v10-0002 is Jeff's change to the search path cache, but with the\n> chunked interface that I found to be faster.\n\nDid you consider specializing for the case of an aligned pointer? If\nit's a string (c string or byte string) it's almost always going to be\naligned, right?\n\nI hacked up a patch (attached). I lost track of which benchmark we're\nusing to test the performance, but when I test in a loop it seems\nsubstantially faster.\n\nIt reads past the NUL byte, but only to the next alignment boundary,\nwhich I think is OK (though I think I'd need to fix the patch for when\nmaxalign < 8).\n\nRegards,\n\tJeff Davis",
"msg_date": "Mon, 18 Dec 2023 23:32:30 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},

{
    "msg_contents": "On Tue, Dec 19, 2023 at 2:32 PM Jeff Davis <[email protected]> wrote:\n>\n> On Mon, 2023-12-18 at 13:39 +0700, John Naylor wrote:\n> > For now just two:\n> > v10-0002 is Jeff's change to the search path cache, but with the\n> > chunked interface that I found to be faster.\n>\n> Did you consider specializing for the case of an aligned pointer? If\n> it's a string (c string or byte string) it's almost always going to be\n> aligned, right?\n\nThat wasn't the next place I thought to look (that would be the strcmp\ncall), but something like this could be worthwhile.\n\nIf we went this far, I'd like to get more use out of it than one call\nsite. I think a few other places have as their hash key a string along\nwith other values, so maybe we can pass an initialized hash state for\nstrings separately from combining in the other values. Dynahash will\nstill need to deal with truncation, so would need duplicate coding,\nbut I'm guessing with that truncation check it makes an optimization\nlike you propose even more worthwhile.\n\n> I hacked up a patch (attached). I lost track of which benchmark we're\n> using to test the performance, but when I test in a loop it seems\n> substantially faster.\n\nThat's interesting. Note that there is no need for a new\nfasthash_accum64(), since we can do\n\nfasthash_accum(&hs, buf, FH_SIZEOF_ACCUM);\n\n...and the compiler should elide the switch statement.\n\n> It reads past the NUL byte, but only to the next alignment boundary,\n> which I think is OK (though I think I'd need to fix the patch for when\n> maxalign < 8).\n\nSeems like it, on both accounts.\n\n\n",
"msg_date": "Tue, 19 Dec 2023 16:23:29 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Tue, 2023-12-19 at 16:23 +0700, John Naylor wrote:\n> That wasn't the next place I thought to look (that would be the\n> strcmp\n> call), but something like this could be worthwhile.\n\nThe reason I looked here is that the inner while statement (to find the\nchunk size) looked out of place and possibly slow, and there's a\nbitwise trick we can use instead.\n\nMy original test case is a bit too \"macro\" of a benchmark at this\npoint, so I'm not sure it's a good guide for these individual micro-\noptimizations.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 19 Dec 2023 12:23:06 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
    "msg_contents": "On Wed, Dec 20, 2023 at 3:23 AM Jeff Davis <[email protected]> wrote:\n>\n> On Tue, 2023-12-19 at 16:23 +0700, John Naylor wrote:\n> > That wasn't the next place I thought to look (that would be the\n> > strcmp\n> > call), but something like this could be worthwhile.\n>\n> The reason I looked here is that the inner while statement (to find the\n> chunk size) looked out of place and possibly slow, and there's a\n> bitwise trick we can use instead.\n\nThere are other bit tricks we can use. In v11-0005, just for fun, I\ntranslated a couple more into C from\n\nhttps://github.com/openbsd/src/blob/master/lib/libc/arch/amd64/string/strlen.S",
"msg_date": "Wed, 20 Dec 2023 13:48:22 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 1:48 PM John Naylor <[email protected]> wrote:\n>\n> On Wed, Dec 20, 2023 at 3:23 AM Jeff Davis <[email protected]> wrote:\n> >\n> > The reason I looked here is that the inner while statement (to find the\n> > chunk size) looked out of place and possibly slow, and there's a\n> > bitwise trick we can use instead.\n>\n> There are other bit tricks we can use. In v11-0005 Just for fun, I\n> translated a couple more into C from\n>\n> https://github.com/openbsd/src/blob/master/lib/libc/arch/amd64/string/strlen.S\n\nI wanted to see if this gets us anything so ran a couple microbenchmarks.\n\n0001-0003 are same as earlier\n0004 takes Jeff's idea and adds in an optimization from NetBSD's\nstrlen (I said OpenBSD earlier, but it goes back further). I added\nstub code to simulate big-endian when requested at compile time, but a\nlater patch removes it. Since it benched well, I made the extra effort\nto generalize it for other callers. After adding to the hash state, it\nreturns the length so the caller can pass it to the finalizer.\n0005 is the benchmark (not for commit) -- I took the parser keyword\nlist and added enough padding to make every string aligned when the\nwhole thing is copied to an alloc'd area.\n\nEach of the bench_*.sql files named below are just running the\nsimilarly-named function, all with the same argument, e.g. 
\"select *\nfrom bench_pgstat_hash_fh(100000);\", so not attached.\n\nStrings:\n\n-- strlen + hash_bytes\npgbench -n -T 20 -f bench_hash_bytes.sql -M prepared | grep latency\nlatency average = 1036.732 ms\n\n-- word-at-a-time hashing, with bytewise lookahead\npgbench -n -T 20 -f bench_cstr_unaligned.sql -M prepared | grep latency\nlatency average = 664.632 ms\n\n-- word-at-a-time for both hashing and lookahead (Jeff's aligned\ncoding plus a technique from NetBSD strlen)\npgbench -n -T 20 -f bench_cstr_aligned.sql -M prepared | grep latency\nlatency average = 436.701 ms\n\nSo, the fully optimized aligned case is worth it if it's convenient.\n\n0006 adds a byteswap for big-endian so we can reuse little endian\ncoding for the lookahead.\n\n0007 - I also wanted to put numbers to 0003 (pgstat hash). While the\nmotivation for that was cleanup, I had a hunch it would shave cycles\nand take up less binary space. It does on both accounts:\n\n-- 3x murmur + hash_combine\npgbench -n -T 20 -f bench_pgstat_orig.sql -M prepared | grep latency\nlatency average = 333.540 ms\n\n-- fasthash32 (simple call, no state setup and final needed for a single value)\npgbench -n -T 20 -f bench_pgstat_fh.sql -M prepared | grep latency\nlatency average = 277.591 ms\n\n0008 - We can optimize the tail load when it's 4 bytes -- to save\nloads, shifts, and OR's. My compiler can't figure this out for the\npgstat hash, with its fixed 4-byte tail. It's pretty simple and should\nhelp other cases:\n\npgbench -n -T 20 -f bench_pgstat_fh.sql -M prepared | grep latency\nlatency average = 226.113 ms",
"msg_date": "Tue, 26 Dec 2023 15:00:34 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Tue, Dec 26, 2023 at 4:01 PM John Naylor <[email protected]> wrote:\n>\n> 0001-0003 are same as earlier\n> 0004 takes Jeff's idea and adds in an optimization from NetBSD's\n> strlen (I said OpenBSD earlier, but it goes back further). I added\n> stub code to simulate big-endian when requested at compile time, but a\n> later patch removes it. Since it benched well, I made the extra effort\n> to generalize it for other callers. After adding to the hash state, it\n> returns the length so the caller can pass it to the finalizer.\n> 0005 is the benchmark (not for commit) -- I took the parser keyword\n> list and added enough padding to make every string aligned when the\n> whole thing is copied to an alloc'd area.\n>\n> Each of the bench_*.sql files named below are just running the\n> similarly-named function, all with the same argument, e.g. \"select *\n> from bench_pgstat_hash_fh(100000);\", so not attached.\n>\n> Strings:\n>\n> -- strlen + hash_bytes\n> pgbench -n -T 20 -f bench_hash_bytes.sql -M prepared | grep latency\n> latency average = 1036.732 ms\n>\n> -- word-at-a-time hashing, with bytewise lookahead\n> pgbench -n -T 20 -f bench_cstr_unaligned.sql -M prepared | grep latency\n> latency average = 664.632 ms\n>\n> -- word-at-a-time for both hashing and lookahead (Jeff's aligned\n> coding plus a technique from NetBSD strlen)\n> pgbench -n -T 20 -f bench_cstr_aligned.sql -M prepared | grep latency\n> latency average = 436.701 ms\n>\n> So, the fully optimized aligned case is worth it if it's convenient.\n>\n> 0006 adds a byteswap for big-endian so we can reuse little endian\n> coding for the lookahead.\n>\n> 0007 - I also wanted to put numbers to 0003 (pgstat hash). While the\n> motivation for that was cleanup, I had a hunch it would shave cycles\n> and take up less binary space. 
It does on both accounts:\n>\n> -- 3x murmur + hash_combine\n> pgbench -n -T 20 -f bench_pgstat_orig.sql -M prepared | grep latency\n> latency average = 333.540 ms\n>\n> -- fasthash32 (simple call, no state setup and final needed for a single value)\n> pgbench -n -T 20 -f bench_pgstat_fh.sql -M prepared | grep latency\n> latency average = 277.591 ms\n>\n> 0008 - We can optimize the tail load when it's 4 bytes -- to save\n> loads, shifts, and OR's. My compiler can't figure this out for the\n> pgstat hash, with its fixed 4-byte tail. It's pretty simple and should\n> help other cases:\n>\n> pgbench -n -T 20 -f bench_pgstat_fh.sql -M prepared | grep latency\n> latency average = 226.113 ms\n\n\n--- /dev/null\n+++ b/contrib/bench_hash/bench_hash.c\n@@ -0,0 +1,103 @@\n+/*-------------------------------------------------------------------------\n+ *\n+ * bench_hash.c\n+ *\n+ * Copyright (c) 2023, PostgreSQL Global Development Group\n+ *\n+ * IDENTIFICATION\n+ * src/test/modules/bench_hash/bench_hash.c\n+ *\n+ *-------------------------------------------------------------------------\n+ */\nyou added this module to contrib module (root/contrib), your intention\n(i guess) is to add in root/src/test/modules.\nlater I saw \"0005 is the benchmark (not for commit)\".\n\n\n--- /dev/null\n+++ b/src/include/common/hashfn_unstable.h\n@@ -0,0 +1,213 @@\n+/*\n+Building blocks for creating fast inlineable hash functions. The\n+unstable designation is in contrast to hashfn.h, which cannot break\n+compatibility because hashes can be writen to disk and so must produce\n+the same hashes between versions.\n+\n+ *\n+ * Portions Copyright (c) 2018-2023, PostgreSQL Global Development Group\n+ *\n+ * src/include/common/hashfn_unstable.c\n+ */\n+\nhere should be \"src/include/common/hashfn_unstable.h\". typo: `writen`\n\nIn pgbench, I use --no-vacuum --time=20 -M prepared\nMy local computer is slow. 
but here is the test results:\n\nselect * from bench_cstring_hash_aligned(100000); 7318.893 ms\nselect * from bench_cstring_hash_unaligned(100000); 10383.173 ms\nselect * from bench_pgstat_hash(100000); 4474.989 ms\nselect * from bench_pgstat_hash_fh(100000); 9192.245 ms\nselect * from bench_string_hash(100000); 2048.008 ms\n\n\n",
"msg_date": "Tue, 2 Jan 2024 07:55:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
    "msg_contents": "On Tue, Jan 2, 2024 at 6:56 AM jian he <[email protected]> wrote:\n>\n> My local computer is slow. but here is the test results:\n>\n> select * from bench_cstring_hash_aligned(100000); 7318.893 ms\n> select * from bench_cstring_hash_unaligned(100000); 10383.173 ms\n> select * from bench_pgstat_hash(100000); 4474.989 ms\n> select * from bench_pgstat_hash_fh(100000); 9192.245 ms\n> select * from bench_string_hash(100000); 2048.008 ms\n\nThis presents a 2x to 5x slowdown, so I'm skeptical this is typical --\nwhat kind of platform is this? For starters, what CPU and compiler?\n\n\n",
"msg_date": "Wed, 3 Jan 2024 21:12:46 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, Jan 3, 2024 at 10:12 PM John Naylor <[email protected]> wrote:\n>\n> On Tue, Jan 2, 2024 at 6:56 AM jian he <[email protected]> wrote:\n> >\n> > My local computer is slow. but here is the test results:\n> >\n> > select * from bench_cstring_hash_aligned(100000); 7318.893 ms\n> > select * from bench_cstring_hash_unaligned(100000); 10383.173 ms\n> > select * from bench_pgstat_hash(100000); 4474.989 ms\n> > select * from bench_pgstat_hash_fh(100000); 9192.245 ms\n> > select * from bench_string_hash(100000); 2048.008 ms\n>\n> This presents a 2x to 5x slowdown, so I'm skeptical this is typical --\n> what kind of platform is. For starters, what CPU and compiler?\n\nI still cannot git apply your patch cleanly. in\nhttp://cfbot.cputube.org/ i cannot find your patch.\n( so, it might be that I test based on incomplete information).\nbut only hashfn_unstable.h influences bench_hash/bench_hash.c.\n\nso I attached the whole patch that I had git applied, that is the\nchanges i applied for the following tests.\nhow I test using pgbench:\npgbench --no-vacuum --time=20 --file\n/home/jian/tmp/bench_cstring_hash_aligned.sql -M prepared | grep\nlatency\n\nThe following is tested with another machine, also listed machine spec below.\nI tested 3 times, the results is very similar as following:\nselect * from bench_cstring_hash_aligned(100000); 4705.686 ms\nselect * from bench_cstring_hash_unaligned(100000); 6835.753 ms\nselect * from bench_pgstat_hash(100000); 2678.978 ms\nselect * from bench_pgstat_hash_fh(100000); 6199.017 ms\nselect * from bench_string_hash(100000); 847.699 ms\n\nsrc6=# select version();\n version\n--------------------------------------------------------------------\n PostgreSQL 17devel on x86_64-linux, compiled by gcc-11.4.0, 64-bit\n(1 row)\n\njian@jian:~/Desktop/pg_src/src6/postgres$ gcc --version\ngcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nCopyright (C) 2021 Free Software Foundation, Inc.\nThis is free software; see the source for copying 
conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\nlscpu:\nArchitecture: x86_64\n CPU op-mode(s): 32-bit, 64-bit\n Address sizes: 46 bits physical, 48 bits virtual\n Byte Order: Little Endian\nCPU(s): 20\n On-line CPU(s) list: 0-19\nVendor ID: GenuineIntel\n Model name: Intel(R) Core(TM) i5-14600K\n CPU family: 6\n Model: 183\n Thread(s) per core: 2\n Core(s) per socket: 14\n Socket(s): 1\n Stepping: 1\n CPU max MHz: 5300.0000\n CPU min MHz: 800.0000\n BogoMIPS: 6988.80\n Flags: fpu vme de pse tsc msr pae mce cx8 apic sep\nmtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht\ntm pbe syscall nx pdpe1gb rdtscp l\n m constant_tsc art arch_perfmon pebs bts\nrep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq\npni pclmulqdq dtes64 monitor ds_cpl vm\n x smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm\nsse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx\nf16c rdrand lahf_lm abm 3dnowprefetc\n h cpuid_fault ssbd ibrs ibpb stibp\nibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase\ntsc_adjust bmi1 avx2 smep bmi2 erms invpcid\n rdseed adx smap clflushopt clwb intel_pt\nsha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni\ndtherm ida arat pln pts hwp hwp_notify hw\n p_act_window hwp_epp hwp_pkg_req hfi umip pku\nospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm\nmd_clear serialize pconfig arch_l\n br ibt flush_l1d arch_capabilities\nVirtualization features:\n Virtualization: VT-x\nCaches (sum of all):\n L1d: 544 KiB (14 instances)\n L1i: 704 KiB (14 instances)\n L2: 20 MiB (8 instances)\n L3: 24 MiB (1 instance)\nNUMA:\n NUMA node(s): 1\n NUMA node0 CPU(s): 0-19\nVulnerabilities:\n Gather data sampling: Not affected\n Itlb multihit: Not affected\n L1tf: Not affected\n Mds: Not affected\n Meltdown: Not affected\n Mmio stale data: Not affected\n Retbleed: Not affected\n Spec rstack overflow: Not affected\n Spec store bypass: Mitigation; 
Speculative Store Bypass disabled via prctl\n Spectre v1: Mitigation; usercopy/swapgs barriers and\n__user pointer sanitization\n Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional,\nRSB filling, PBRSB-eIBRS SW sequence\n Srbds: Not affected\n Tsx async abort: Not affected\n\njian@jian:~/Desktop/pg_src/src6/postgres$ git log\ncommit bbbf8cd54a05ad5c92e79c96133f219e80fad77c (HEAD -> master)\nAuthor: jian he <[email protected]>\nDate: Thu Jan 4 10:32:39 2024 +0800\n\n bench_hash contrib module\n\ncommit c5385929593dd8499cfb5d85ac322e8ee1819fd4\nAuthor: Peter Eisentraut <[email protected]>\nDate: Fri Dec 29 18:01:53 2023 +0100\n\n Make all Perl warnings fatal",
"msg_date": "Thu, 4 Jan 2024 11:01:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Thu, Jan 4, 2024 at 10:01 AM jian he <[email protected]> wrote:\n>\n> I still cannot git apply your patch cleanly. in\n\nI don't know why you're using that -- the git apply man page even says\n\n\"Use git-am(1) to create commits from patches generated by\ngit-format-patch(1) and/or received by email.\"\n\nOr, if that fails, use \"patch\".\n\n> http://cfbot.cputube.org/ i cannot find your patch.\n> ( so, it might be that I test based on incomplete information).\n> but only hashfn_unstable.h influences bench_hash/bench_hash.c.\n>\n> so I attached the whole patch that I had git applied, that is the\n> changes i applied for the following tests.\n\nWell, aside from the added text-editor detritus, it looks like this\nhas everything except v11-0008, without which I still get improvement\nfor the pgstat hash.\n\n> Model name: Intel(R) Core(TM) i5-14600K\n\n> The following is tested with another machine, also listed machine spec below.\n> I tested 3 times, the results is very similar as following:\n> select * from bench_cstring_hash_aligned(100000); 4705.686 ms\n> select * from bench_cstring_hash_unaligned(100000); 6835.753 ms\n> select * from bench_pgstat_hash(100000); 2678.978 ms\n> select * from bench_pgstat_hash_fh(100000); 6199.017 ms\n> select * from bench_string_hash(100000); 847.699 ms\n\nI was fully prepared to believe something like 32-bit Arm would have\ndifficulty with 64-bit shifts/multiplies etc., but this makes no sense\nat all. In this test, on my machine, HEAD's pgstat_hash is 3x faster\nthan HEAD's \"strlen + hash_bytes\", but for you it's 3x slower. To\nimprove reproducibility, I've added the .sql files and a bench script\nto v13. 
I invite you to run bench_hash.sh and see if that changes\nanything.\n\nv13 also\n- adds an assert that aligned and unaligned C string calculations give\nthe same result\n- properly mixes roleid in the namespace hash, since it's now\nconvenient to do so (0005 is an alternate method)\n- removes the broken makefile from the benchmark (not for commit anyway)",
"msg_date": "Fri, 5 Jan 2024 17:54:21 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 6:54 PM John Naylor <[email protected]> wrote:\n>\n> On Thu, Jan 4, 2024 at 10:01 AM jian he <[email protected]> wrote:\n> >\n> > I still cannot git apply your patch cleanly. in\n>\n> I don't know why you're using that -- the git apply man page even says\n>\n> \"Use git-am(1) to create commits from patches generated by\n> git-format-patch(1) and/or received by email.\"\n>\n> Or, if that fails, use \"patch\".\n>\n> > http://cfbot.cputube.org/ i cannot find your patch.\n> > ( so, it might be that I test based on incomplete information).\n> > but only hashfn_unstable.h influences bench_hash/bench_hash.c.\n> >\n> > so I attached the whole patch that I had git applied, that is the\n> > changes i applied for the following tests.\n>\n> Well, aside from the added text-editor detritus, it looks like this\n> has everything except v11-0008, without which I still get improvement\n> for the pgstat hash.\n>\n> > Model name: Intel(R) Core(TM) i5-14600K\n>\n> > The following is tested with another machine, also listed machine spec below.\n> > I tested 3 times, the results is very similar as following:\n> > select * from bench_cstring_hash_aligned(100000); 4705.686 ms\n> > select * from bench_cstring_hash_unaligned(100000); 6835.753 ms\n> > select * from bench_pgstat_hash(100000); 2678.978 ms\n> > select * from bench_pgstat_hash_fh(100000); 6199.017 ms\n> > select * from bench_string_hash(100000); 847.699 ms\n>\n> I was fully prepared to believe something like 32-bit Arm would have\n> difficulty with 64-bit shifts/multiplies etc., but this makes no sense\n> at all. In this test, on my machine, HEAD's pgstat_hash is 3x faster\n> than HEAD's \"strlen + hash_bytes\", but for you it's 3x slower. To\n> improve reproducibility, I've added the .sql files and a bench script\n> to v13. 
I invite you to run bench_hash.sh and see if that changes\n> anything.\n\ngit apply has a verbose option.\nalso personally I based on vscode editor, the color to view the changes.\n\njian@jian:~/Desktop/pg_src/src4/postgres$ git apply\n$PATCHES/v13-0006-Add-benchmarks-for-hashing.patch\n/home/jian/Downloads/patches/v13-0006-Add-benchmarks-for-hashing.patch:81:\nindent with spaces.\n if (/^PG_KEYWORD\\(\"(\\w+)\"/)\n/home/jian/Downloads/patches/v13-0006-Add-benchmarks-for-hashing.patch:82:\nindent with spaces.\n {\n/home/jian/Downloads/patches/v13-0006-Add-benchmarks-for-hashing.patch:87:\nindent with spaces.\n }\n/home/jian/Downloads/patches/v13-0006-Add-benchmarks-for-hashing.patch:89:\ntrailing whitespace.\n\n/home/jian/Downloads/patches/v13-0006-Add-benchmarks-for-hashing.patch:92:\ntrailing whitespace.\n\nwarning: squelched 11 whitespace errors\nwarning: 16 lines add whitespace errors.\n\n\njian@jian:~/Desktop/pg_src/src4/postgres$ bash runbench.sh\nselect * from bench_string_hash(100000);\n\nlatency average = 875.482 ms\nselect * from bench_cstring_hash_unaligned(100000);\nlatency average = 6539.231 ms\nselect * from bench_cstring_hash_aligned(100000);\nlatency average = 4401.278 ms\nselect * from bench_pgstat_hash(100000);\nlatency average = 2679.732 ms\nselect * from bench_pgstat_hash_fh(100000);\n\nlatency average = 5711.012 ms\njian@jian:~/Desktop/pg_src/src4/postgres$ bash runbench.sh\nselect * from bench_string_hash(100000);\n\nlatency average = 874.261 ms\nselect * from bench_cstring_hash_unaligned(100000);\nlatency average = 6538.874 ms\nselect * from bench_cstring_hash_aligned(100000);\nlatency average = 4400.546 ms\nselect * from bench_pgstat_hash(100000);\nlatency average = 2682.013 ms\nselect * from bench_pgstat_hash_fh(100000);\n\nlatency average = 5709.815 ms\n\nmeson:\n\nmeson setup ${BUILD} \\\n -Dprefix=${PG_PREFIX} \\\n -Dpgport=5459 \\\n -Dplperl=enabled \\\n -Dplpython=enabled \\\n -Dssl=openssl \\\n -Dldap=enabled \\\n -Dlibxml=enabled 
\\\n -Dlibxslt=enabled \\\n -Duuid=e2fs \\\n -Dzstd=enabled \\\n -Dlz4=enabled \\\n -Dsystemd=enabled \\\n -Dcassert=true \\\n -Db_coverage=true \\\n -Dicu=enabled \\\n -Dbuildtype=debug \\\n -Dwerror=true \\\n -Dc_args='-Wunused-variable\n -Wuninitialized\n-Werror=maybe-uninitialized\n -Wreturn-type\n -DWRITE_READ_PARSE_PLAN_TREES\n -DCOPY_PARSE_PLAN_TREES\n -DREALLOCATE_BITMAPSETS\n -DRAW_EXPRESSION_COVERAGE_TEST -fno-omit-frame-pointer' \\\n -Ddocs_pdf=disabled \\\n -Dllvm=disabled \\\n -Ddocs_pdf=disabled\n\n\n",
"msg_date": "Fri, 5 Jan 2024 19:58:31 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 6:58 PM jian he <[email protected]> wrote:\n> -Dcassert=true \\\n\n> -Dbuildtype=debug \\\n\nThese probably don't matter much for this test, but these should be\noff for any performance testing.\n\n> -DWRITE_READ_PARSE_PLAN_TREES\n> -DCOPY_PARSE_PLAN_TREES\n> -DREALLOCATE_BITMAPSETS\n> -DRAW_EXPRESSION_COVERAGE_TEST\n\nI'd guess it was was of these, which should likewise be off as well.\n\n\n",
"msg_date": "Sat, 6 Jan 2024 08:03:55 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Sat, Jan 6, 2024 at 9:04 AM John Naylor <[email protected]> wrote:\n>\n> On Fri, Jan 5, 2024 at 6:58 PM jian he <[email protected]> wrote:\n> > -Dcassert=true \\\n>\n> > -Dbuildtype=debug \\\n>\n> These probably don't matter much for this test, but these should be\n> off for any performance testing.\n>\n> > -DWRITE_READ_PARSE_PLAN_TREES\n> > -DCOPY_PARSE_PLAN_TREES\n> > -DREALLOCATE_BITMAPSETS\n> > -DRAW_EXPRESSION_COVERAGE_TEST\n>\n> I'd guess it was was of these, which should likewise be off as well.\n\nThanks for pointing it out.\nmeson setup ${BUILD} \\\n -Dprefix=${PG_PREFIX} \\\n -Dpgport=5459 \\\n -Dplperl=enabled \\\n -Dplpython=enabled \\\n -Dssl=openssl \\\n -Dldap=enabled \\\n -Dlibxml=enabled \\\n -Dlibxslt=enabled \\\n -Duuid=e2fs \\\n -Dzstd=enabled \\\n -Dlz4=enabled \\\n -Dsystemd=enabled \\\n -Dicu=enabled \\\n -Dbuildtype=release \\\n -Ddocs_pdf=disabled \\\n -Dllvm=disabled \\\n -Ddocs_pdf=disabled\n\nnow the results:\n\njian@jian:~/Desktop/pg_src/src4/postgres$ bash\n/home/jian/Desktop/pg_src/src4/postgres/runbench.sh\nselect * from bench_string_hash(100000);\n\nlatency average = 145.021 ms\nselect * from bench_cstring_hash_unaligned(100000);\nlatency average = 100.829 ms\nselect * from bench_cstring_hash_aligned(100000);\nlatency average = 100.606 ms\nselect * from bench_pgstat_hash(100000);\nlatency average = 96.140 ms\nselect * from bench_pgstat_hash_fh(100000);\n\nlatency average = 62.784 ms\njian@jian:~/Desktop/pg_src/src4/postgres$ bash\n/home/jian/Desktop/pg_src/src4/postgres/runbench.sh\nselect * from bench_string_hash(100000);\n\nlatency average = 147.782 ms\nselect * from bench_cstring_hash_unaligned(100000);\nlatency average = 101.179 ms\nselect * from bench_cstring_hash_aligned(100000);\nlatency average = 101.219 ms\nselect * from bench_pgstat_hash(100000);\nlatency average = 96.357 ms\nselect * from bench_pgstat_hash_fh(100000);\n\nlatency average = 62.902 ms\n\n\n",
"msg_date": "Sat, 6 Jan 2024 10:01:27 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Sat, Jan 6, 2024 at 9:01 AM jian he <[email protected]> wrote:\n>\n> latency average = 147.782 ms\n> select * from bench_cstring_hash_unaligned(100000);\n> latency average = 101.179 ms\n> select * from bench_cstring_hash_aligned(100000);\n> latency average = 101.219 ms\n\nThanks for testing again! This looks closer to my results. It doesn't\nshow improvement for the aligned case, but it's not worse, either.\n\nThere is still some polishing to be done, mostly on comments/examples,\nbut I think it's mostly there. I'll return to it by next week.\n\n\n",
"msg_date": "Mon, 8 Jan 2024 09:43:39 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "Hi John,\n\nOn Mon, Jan 8, 2024 at 10:44 AM John Naylor <[email protected]> wrote:\n>\n> On Sat, Jan 6, 2024 at 9:01 AM jian he <[email protected]> wrote:\n> >\n> > latency average = 147.782 ms\n> > select * from bench_cstring_hash_unaligned(100000);\n> > latency average = 101.179 ms\n> > select * from bench_cstring_hash_aligned(100000);\n> > latency average = 101.219 ms\n>\n> Thanks for testing again! This looks closer to my results. It doesn't\n> show improvement for the aligned case, but it's not worse, either.\n>\n> There is still some polishing to be done, mostly on comments/examples,\n> but I think it's mostly there. I'll return to it by next week.\n>\n>\n\n+ * Portions Copyright (c) 2018-2023, PostgreSQL Global Development Group\n\nA kind reminder, it's already 2024 :)\n\nI'm also curious why the 2018, is there any convention for that?\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Mon, 8 Jan 2024 15:24:40 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Mon, Jan 8, 2024 at 2:24 PM Junwang Zhao <[email protected]> wrote:\n>\n> + * Portions Copyright (c) 2018-2023, PostgreSQL Global Development Group\n>\n> A kind reminder, it's already 2024 :)\n>\n> I'm also curious why the 2018, is there any convention for that?\n\nThe convention I followed was \"blind copy-paste\", but the first year\nis supposed to be when the file entered the repo. Thanks, will fix.\n\n\n",
"msg_date": "Mon, 8 Jan 2024 16:43:05 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "I spent some time rewriting the comments and a couple other cosmetic\nchanges, and squashed into two patches: the second one has the\noptimized string hashing. They each have still just one demo use case.\nIt looks pretty close to commitable, but I'll leave this up for a few\ndays in case anyone wants to have another look.\n\nAfter this first step is out of the way, we can look into using this\nmore widely, including dynahash and the GUC hash.",
"msg_date": "Wed, 17 Jan 2024 14:15:11 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On 17/01/2024 09:15, John Naylor wrote:\n> /*\n> * hashfn_unstable.h\n> *\n> * Building blocks for creating fast inlineable hash functions. The\n> * unstable designation is in contrast to hashfn.h, which cannot break\n> * compatibility because hashes can be written to disk and so must produce\n> * the same hashes between versions.\n> *\n> * The functions in this file are not guaranteed to be stable between\n> * versions, and may differ by hardware platform.\n\nThese paragraphs sound a bit awkward. It kind of buries the lede, the \n\"these functions are not guaranteed to be stable\" part, to the bottom.\n\nMaybe something like:\n\n\"\nBuilding blocks for creating fast inlineable hash functions. The \nfunctions in this file are not guaranteed to be stable between versions, \nand may differ by hardware platform. Hence they must not be used in \nindexes or other on-disk structures. See hashfn.h if you need stability.\n\"\n\ntypo: licencse\n\nOther than that, LGTM.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 17 Jan 2024 16:54:48 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 9:54 PM Heikki Linnakangas <[email protected]> wrote:\n\n> Maybe something like:\n>\n> \"\n> Building blocks for creating fast inlineable hash functions. The\n> functions in this file are not guaranteed to be stable between versions,\n> and may differ by hardware platform. Hence they must not be used in\n> indexes or other on-disk structures. See hashfn.h if you need stability.\n> \"\n>\n> typo: licencse\n>\n> Other than that, LGTM.\n\nPushed that way, thanks! After fixing another typo in big endian\nbuilds, an s390x member reported green, so I think that aspect is\nworking now. I'll come back to follow-up topics shortly.\n\n\n",
"msg_date": "Fri, 19 Jan 2024 14:27:11 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On 19/01/2024 09:27, John Naylor wrote:\n> Pushed that way, thanks! After fixing another typo in big endian\n> builds, an s390x member reported green, so I think that aspect is\n> working now. I'll come back to follow-up topics shortly.\n\nThanks! I started to look at how to use this, and I have some questions. \nI'd like to replace this murmurhash ussage in resowner.c with this:\n\n> \t/*\n> \t * Most resource kinds store a pointer in 'value', and pointers are unique\n> \t * all on their own. But some resources store plain integers (Files and\n> \t * Buffers as of this writing), so we want to incorporate the 'kind' in\n> \t * the hash too, otherwise those resources will collide a lot. But\n> \t * because there are only a few resource kinds like that - and only a few\n> \t * resource kinds to begin with - we don't need to work too hard to mix\n> \t * 'kind' into the hash. Just add it with hash_combine(), it perturbs the\n> \t * result enough for our purposes.\n> \t */\n> #if SIZEOF_DATUM == 8\n> \treturn hash_combine64(murmurhash64((uint64) value), (uint64) kind);\n> #else\n> \treturn hash_combine(murmurhash32((uint32) value), (uint32) kind);\n> #endif\n\nThe straightforward replacement would be:\n\n fasthash_state hs;\n\n fasthash_init(&hs, sizeof(Datum), 0);\n fasthash_accum(&hs, (char *) &kind, sizeof(ResourceOwnerDesc *));\n fasthash_accum(&hs, (char *) &value, sizeof(Datum));\n return fasthash_final32(&hs, 0);\n\nBut I wonder if it would be OK to abuse the 'seed' and 'tweak' \nparameters to the init and final functions instead, like this:\n\n fasthash_state hs;\n\n fasthash_init(&hs, sizeof(Datum), (uint64) kind);\n return fasthash_final32(&hs, (uint64) value);\n\nI couldn't find any guidance on what properties the 'seed' and 'tweak' \nhave, compared to just accumulating the values with accum. Anyone know?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 19 Jan 2024 18:54:12 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Fri, 2024-01-19 at 14:27 +0700, John Naylor wrote:\n> Pushed that way, thanks!\n\nThank you.\n\nOne post-commit question on 0aba255440: why do\nhaszero64(pg_bswap64(chunk)) rather than just haszero64(chunk)? How\ndoes byteswapping affect whether a zero byte exists or not?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 19 Jan 2024 13:38:33 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Fri, 2024-01-19 at 13:38 -0800, Jeff Davis wrote:\n> One post-commit question on 0aba255440: why do\n> haszero64(pg_bswap64(chunk)) rather than just haszero64(chunk)? How\n> does byteswapping affect whether a zero byte exists or not?\n\nI missed that it was used later when finding the rightmost one\nposition.\n\nThe placement of the comment was slightly confusing. Is:\n\n haszero64(pg_bswap64(chunk)) == pg_bswap64(haszero64(chunk))\n\n? If so, perhaps we can do the byte swapping outside of the loop, which\nmight save a few cycles on longer strings and would be more readable.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 19 Jan 2024 16:13:18 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Fri, Jan 19, 2024 at 11:54 PM Heikki Linnakangas <[email protected]> wrote:\n\n> Thanks! I started to look at how to use this, and I have some questions.\n> I'd like to replace this murmurhash ussage in resowner.c with this:\n>\n> > /*\n> > * Most resource kinds store a pointer in 'value', and pointers are unique\n> > * all on their own. But some resources store plain integers (Files and\n> > * Buffers as of this writing), so we want to incorporate the 'kind' in\n> > * the hash too, otherwise those resources will collide a lot. But\n> > * because there are only a few resource kinds like that - and only a few\n> > * resource kinds to begin with - we don't need to work too hard to mix\n> > * 'kind' into the hash. Just add it with hash_combine(), it perturbs the\n> > * result enough for our purposes.\n> > */\n> > #if SIZEOF_DATUM == 8\n> > return hash_combine64(murmurhash64((uint64) value), (uint64) kind);\n> > #else\n> > return hash_combine(murmurhash32((uint32) value), (uint32) kind);\n> > #endif\n>\n> The straightforward replacement would be:\n>\n> fasthash_state hs;\n>\n> fasthash_init(&hs, sizeof(Datum), 0);\n> fasthash_accum(&hs, (char *) &kind, sizeof(ResourceOwnerDesc *));\n> fasthash_accum(&hs, (char *) &value, sizeof(Datum));\n> return fasthash_final32(&hs, 0);\n\nThat would give the fullest mixing possible, more than currently.\n\n> But I wonder if it would be OK to abuse the 'seed' and 'tweak'\n> parameters to the init and final functions instead, like this:\n>\n> fasthash_state hs;\n>\n> fasthash_init(&hs, sizeof(Datum), (uint64) kind);\n> return fasthash_final32(&hs, (uint64) value);\n\nThis would go in the other direction, and sacrifice some quality for\nspeed. The fasthash finalizer is pretty short -- XMX, where X is\n\"right shift and XOR\" and M is \"multiply\". In looking at some other\nhash functions, it seems that's often done only if the input has\nalready had some mixing. 
The Murmur finalizer has the shape XMXMX, and\nthat seems to be the preferred way to get good mixing on a single\nregister-sized value. For that reason, hash functions whose main loop\nis designed for long inputs will often skip that for small inputs and\njust go straight to a Murmur-style finalizer. Fasthash doesn't do\nthat, so for a small input it ends up doing XMXM then XMX, which is a\nlittle more expensive.\n\n> I couldn't find any guidance on what properties the 'seed' and 'tweak'\n> have, compared to just accumulating the values with accum. Anyone know?\n\nIn Postgres, I only know of one use of a seed parameter, to create two\nindependent hash functions from hash_bytes_uint32_extended(), in\nbrin-bloom indexes. I think that's the more typical use for a seed.\nSince there was no guidance with the existing hash functions, and it's\na widespread concept, I didn't feel the need to put any here. We could\nchange that.\n\nI modeled the finalizer tweak on one of the finalizers in xxHash that\nalso used it only for the input length. Length is used as a tiebreaker\nwhere otherwise it will often not collide anyway, so it seems that's\nhow we should think about using it elsewhere. There is a comment above\nfasthash_final64 mentioning that the tweak is used for length when\nthat is not known ahead of time, but it might be good to generalize\nthat, and maybe put it somewhere more prominent. With that in mind,\nI'm not sure \"value\" is a good fit for the tweak. \"kind\" is sort of\nin the middle because IIUC it doesn't matter at all for pointer\nvalues, but it's important for other kinds, which would commonly\ncollide.\n\nIf I were to change from murmur64, I'd probably go in between the two\nextremes mentioned earlier, and mix \"value\" normally and pass \"kind\"\nas the seed:\n\n fasthash_state hs;\n\n fasthash_init(&hs, sizeof(Datum), kind);\n fasthash_accum(&hs, (char *) &value, sizeof(Datum));\n return fasthash_final32(&hs, 0);\n\n\n",
"msg_date": "Sat, 20 Jan 2024 12:50:47 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
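For reference, the finalizer shapes compared in the message above can be written out concretely. The function below is the widely published fmix64 step from MurmurHash3 (the XMXMX shape); it is shown only as an illustration of the shape being discussed, not as code from the patch.

```c
#include <stdint.h>

/* Murmur-style XMXMX finalizer (fmix64 from MurmurHash3): xor-shift,
 * multiply, xor-shift, multiply, xor-shift.  The second multiply round
 * is what makes this shape preferred for mixing a single
 * register-sized value that has had no prior mixing. */
static inline uint64_t
murmur_fmix64(uint64_t h)
{
	h ^= h >> 33;
	h *= 0xff51afd7ed558ccdULL;
	h ^= h >> 33;
	h *= 0xc4ceb9fe1a85ec53ULL;
	h ^= h >> 33;
	return h;
}
```

By contrast, fasthash's finalizer stops after one multiply (XMX), which is cheaper but, as the message notes, leans on the accumulation rounds having already mixed the input.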
{
"msg_contents": "On Sat, Jan 20, 2024 at 7:13 AM Jeff Davis <[email protected]> wrote:\n>\n> On Fri, 2024-01-19 at 13:38 -0800, Jeff Davis wrote:\n> > One post-commit question on 0aba255440: why do\n> > haszero64(pg_bswap64(chunk)) rather than just haszero64(chunk)? How\n> > does byteswapping affect whether a zero byte exists or not?\n>\n> I missed that it was used later when finding the rightmost one\n> position.\n>\n> The placement of the comment was slightly confusing. Is:\n>\n> haszero64(pg_bswap64(chunk)) == pg_bswap64(haszero64(chunk))\n>\n> ? If so, perhaps we can do the byte swapping outside of the loop, which\n> might save a few cycles on longer strings and would be more readable.\n\nThe above identity is not true for this haszero64 macro. I phrased it\nas \"The rest of the bytes are indeterminate\", but that's not very\nclear. It can only be true if it set bytes for all and only those\nbytes where the input had zeros. In the NetBSD strlen source, there is\na comment telling of a way to do this:\n\n~(((x & 0x7f....7f) + 0x7f....7f) | (x | 0x7f....7f))\n\nhttps://github.com/NetBSD/src/blob/trunk/common/lib/libc/arch/x86_64/string/strlen.S\n\n(They don't actually use it of course, since x86_64 is little-endian)\n From the commentary there, it sounds like 1 or 2 more instructions.\nOne unmentioned assumption I had was that the byte swap would be a\nsingle instruction on all platforms where we care about performance\n(*). If that's not the case, we could switch to the above macro for\nbig-endian machines. It'd be less readable since we'd then need an\nadditional #ifdef for counting leading, rather than trailing zeros\n(that would avoid byte-swapping entirely). Either way, I'm afraid\nbig-endian is stuck doing a bit of extra work somewhere. 
That work\ncould be amortized by doing a quick check in the loop and afterwards\ncompletely redoing the zero check (or a bytewise check same as the\nunaligned path), but that would penalize short strings.\n\n(*) 32-bit platforms don't take this path, but mamba's build failed\nbecause the previously-misspelled symbol was still in the source file.\nWe could also #ifdef around the whole aligned-path function, although\nit's redundant.\n\nI hope this makes it more clear. Maybe the comment could use some work.\n\n\n",
"msg_date": "Sat, 20 Jan 2024 13:48:53 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
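The two zero-byte detection formulas debated above can be compared side by side. This is an illustrative sketch, not the committed PostgreSQL code: haszero64 matches the cheap macro discussed in the thread, and __builtin_ctzll (GCC/Clang) stands in for the portable bitscan helper.

```c
#include <stdint.h>

/* Cheap SWAR test: nonzero iff 'v' contains at least one zero byte.
 * The subtraction borrows across byte lanes, so flag bits above the
 * lowest-order zero byte are indeterminate -- only the rightmost set
 * flag reliably locates a zero.  That is why the terminator search
 * wants the chunk in little-endian byte order (hence pg_bswap64 on
 * big-endian machines). */
static inline uint64_t
haszero64(uint64_t v)
{
	return (v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL;
}

/* NetBSD-style exact variant: the high bit of a lane is set iff that
 * byte is zero.  No cross-lane carries (each per-lane sum maxes out
 * at 0xFE), at the cost of an extra instruction or two. */
static inline uint64_t
haszero64_exact(uint64_t v)
{
	return ~(((v & 0x7f7f7f7f7f7f7f7fULL) + 0x7f7f7f7f7f7f7f7fULL) |
			 (v | 0x7f7f7f7f7f7f7f7fULL));
}

/* Index (0-7) of the first zero byte in memory order, assuming the
 * chunk is in little-endian byte order. */
static inline int
first_zero_byte_le(uint64_t v)
{
	return __builtin_ctzll(haszero64(v)) / 8;
}
```

On big-endian hardware the choices are therefore to byteswap the chunk before applying haszero64, or to use the exact variant and count leading rather than trailing zeros, as the message weighs.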
{
"msg_contents": "On Sat, 2024-01-20 at 13:48 +0700, John Naylor wrote:\n> The above identity is not true for this haszero64 macro.\n\nI see.\n\n> I hope this makes it more clear. Maybe the comment could use some\n> work.\n\nYes, thank you. I don't think we need to change the algorithm.\n\nAfter having stepped away from this work for a couple weeks and\nreturning to it, I think the comments and/or naming could be more\nclear. We first use the result of haszero64() as a boolean to break out\nof the loop, but then later use it in a more interesting way to count\nthe number of remaining bytes.\n\nPerhaps you can take the comment out of the loop and just describe the\nalgorithm we're using, and make a note that we have to byteswap first.\n\"Indeterminate\" could be explained briefly as well.\n\nThese are minor comments.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 20 Jan 2024 17:06:36 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "I wrote:\n> fasthash_init(&hs, sizeof(Datum), kind);\n> fasthash_accum(&hs, (char *) &value, sizeof(Datum));\n> return fasthash_final32(&hs, 0);\n\nIt occurred to me that it's strange to have two places that length can\nbe passed. That was a side effect of the original, which used length\nto both know how many bytes to read, and to modify the internal seed.\nWith the incremental API, it doesn't make sense to pass the length (or\na dummy macro) up front -- with a compile-time fixed length, it can't\npossibly break a tie, so it's just noise.\n\n0001 removes the length from initialization in the incremental\ninterface. The standalone functions use length directly the same as\nbefore, but after initialization. Thoughts?\n\nAlso, the fasthash_accum call is a bit verbose, because it's often\nused in a loop with varlen input. For register-sized values, I think\nit's simpler to say this, as done in the search path cache, so maybe a\ncomment to that effect would be helpful:\n\nhs.accum = value;\nfasthash_combine(&hs);\n\nI noticed that we already have a more recent, stronger 64-bit mixer\nthan murmur64: splitmix64, in pg_prng.c. We could put that, as well as\na better 4-byte mixer [1] in hashfn_unstable.h, for in-memory use.\nMaybe with names like \"hash_4bytes\" etc. so it's not tied to a\nspecific implementation. I see one simplehash case that can use it,\neven if the resowner hash table gets rid of it.\n\n0002 and 0003 use fasthash for dynahash and GUC hash, respectively.\nThese cannot use the existing cstring hashing directly because of\ntruncation and case-folding, respectively. (Some simplehash uses can,\nbut that can come later)\n\nOn Sun, Jan 21, 2024 at 8:06 AM Jeff Davis <[email protected]> wrote:\n>\n> After having stepped away from this work for a couple weeks and\n> returning to it, I think the comments and/or naming could be more\n> clear. 
We first use the result of haszero64() as a boolean to break out\n> of the loop, but then later use it in a more interesting way to count\n> the number of remaining bytes.\n>\n> Perhaps you can take the comment out of the loop and just describe the\n> algorithm we're using, and make a note that we have to byteswap first.\n> \"Indeterminate\" could be explained briefly as well.\n\nv15-0004 is a stab at that. As an idea, it also renames zero_bytes_le\nto zero_byte_low to reflect the effect better. There might be some\nother comment edits needed to explain usage, so I plan to hold on to\nthis for later. Let me know what you think.\n\n[1] Examples of both in\nhttps://www.boost.org/doc/libs/1_84_0/boost/container_hash/detail/hash_mix.hpp",
"msg_date": "Mon, 22 Jan 2024 09:03:38 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
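As a concrete point of comparison for the splitmix64 suggestion above, here is a sketch of just its mixing step, using the standard splitmix64 constants (the stream-increment part of the generator in pg_prng.c is omitted):

```c
#include <stdint.h>

/* Mixing step of splitmix64 (standard constants).  Same XMXMX shape
 * as the Murmur finalizer, which makes it a candidate for a generic
 * "hash a register-sized value" helper. */
static inline uint64_t
splitmix64_mix(uint64_t x)
{
	x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
	x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
	return x ^ (x >> 31);
}
```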
{
"msg_contents": "On Mon, 2024-01-22 at 09:03 +0700, John Naylor wrote:\n> v15-0004 is a stab at that. As an idea, it also renames zero_bytes_le\n> to zero_byte_low to reflect the effect better. There might be some\n> other comment edits needed to explain usage, so I plan to hold on to\n> this for later. Let me know what you think.\n\n0004 looks good to me. No urgency so feel free to hold it until a\nconvenient time.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sun, 21 Jan 2024 20:16:27 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Sun, 21 Jan 2024 at 03:06, Jeff Davis <[email protected]> wrote:\n> Yes, thank you. I don't think we need to change the algorithm.\n\nJumping in here at a random point just to share my findings from\npoking around this on and off. I am concentrating here on cstring\nhashing as that is the most complicated one.\n\nOne thing that caught my eye in testing was that the unaligned cstring\ncode was unexpectedly faster for short strings (3-18B uniform\ndistribution). Looking into it the cause was fasthash_accum() called\nin the final iteration. In the unaligned case compiler (clang-15)\nunrolled the inner loop which allowed it to jump directly into the\ncorrect place in the switch. In the unaligned case clang decided to\nuse a data dependent jump which then mispredicts all of the time.\n\nBut given that we know the data length and we have it in a register\nalready, it's easy enough to just mask out data past the end with a\nshift. See patch 1. Performance benefit is about 1.5x Measured on a\nsmall test harness that just hashes and finalizes an array of strings,\nwith a data dependency between consecutive hashes (next address\ndepends on the previous hash output).\n\nUnaligned case can actually take advantage of the same trick as the\naligned case, it just has to shuffle the data from two consecutive\nwords before applying the combine function. Patch 2 implements this.\nIt makes the unaligned case almost as fast as the aligned one, both on\nshort and long strings. 10% benefit on short strings, 50% on long\nones.\n\nNot sure if the second one is worth the extra code. A different\napproach would be to use the simple word at a time hashing for the\nunaligned case too and handle word accesses that straddle a page\nboundary as a special case. Obviously this only makes sense for\nplatforms that support unaligned access. On x86 unaligned access\nwithin a cache line is basically free, and across cache lines is only\nslightly more expensive. 
On benchmarks calling the aligned code on\nunaligned strings only has a 5% penalty on long strings, short ones\nare indistinguishable.\n\nI also took a look at using SIMD for implementing the hash using the\nsame aligned access + shuffle trick. The good news is that the\nshuffling works well enough that neither it nor checking for string\nend are the longest chain. The bad news is that the data load,\nalignment, zero finding and masking form a big dependency chain on the\nfirst iteration. Mixing and finalization is even worse, fasthash uses\n64bit imul instruction that has a 3 cycle latency, the iteration to\niteration chain is imul + xor, for 4 cycles or 2 B/cycle (in practice\na bit less due to ALU port contention). In SIMD registers there is no\n64bit multiply, and 32 bit multiply has a terrible 10 cycle latency on\nIntel. AES instructions are an interesting option, but it seems that 2\nare needed for good enough mixing, at 4 cycles each, we again end up\nat 2B/cycle. Finalization needs another 3 AES instructions, a shuffle\nand a xor fold to pass SMHasher, for 17 cycles. The mix latency issue\ncould be worked around by doing more mixing in parallel, potentially\nup to 8x faster, but this does not help short strings at all and would\nmake the code way bigger. SIMD code does use fewer instructions so it\ninterleaves better with nearby code that is not dependent on it, not\nsure if that matters anywhere.\n\nThe short version is that for very long (4k+) strings the attached\nSIMD code is 35% faster, for short strings it is 35% slower, and this\nis very much x86-64-v3 only and would need a fallback when AVX and\nAES-NI are not available. Basically a dead end for the use cases this\nhash function is used for.\n\nRegards,\nAnts Aasma",
"msg_date": "Mon, 29 Jan 2024 23:12:55 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
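The tail-masking idea described above can be sketched as follows. This is an illustration of the technique, not the code in the posted patch; it assumes a little-endian chunk whose first 'len' bytes are the valid ones.

```c
#include <stdint.h>

/* Keep only the low-order 'len' bytes of a little-endian chunk by
 * masking with a shift, instead of re-reading the tail byte by byte
 * through a data-dependent switch.  len must be 1..8; len == 0 would
 * shift by 64, which is undefined behavior in C. */
static inline uint64_t
mask_tail_le(uint64_t chunk, int len)
{
	return chunk & (~UINT64_C(0) >> (8 * (8 - len)));
}
```

Because the length is already in a register after the zero-byte scan, this keeps the final accumulation branch-free, avoiding the mispredicting indirect jump described in the message.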
{
"msg_contents": "On Tue, Jan 30, 2024 at 4:13 AM Ants Aasma <[email protected]> wrote:\n> But given that we know the data length and we have it in a register\n> already, it's easy enough to just mask out data past the end with a\n> shift. See patch 1. Performance benefit is about 1.5x Measured on a\n> small test harness that just hashes and finalizes an array of strings,\n> with a data dependency between consecutive hashes (next address\n> depends on the previous hash output).\n\nInteresting work! I've taken this idea and (I'm guessing, haven't\ntested) improved it by re-using an intermediate step for the\nconditional, simplifying the creation of the mask, and moving the\nbitscan out of the longest dependency chain. Since you didn't attach\nthe test harness, would you like to run this and see how it fares?\n(v16-0001 is same as your 0001, and v16-0002 builds upon it.) I plan\nto test myself as well, but since your test tries to model true\nlatency, I'm more interested in that one.\n\n> Not sure if the second one is worth the extra code.\n\nI'd say it's not worth optimizing the case we think won't be taken\nanyway. I also like having a simple path to assert against.",
"msg_date": "Tue, 30 Jan 2024 17:04:20 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Tue, 30 Jan 2024 at 12:04, John Naylor <[email protected]> wrote:\n>\n> On Tue, Jan 30, 2024 at 4:13 AM Ants Aasma <[email protected]> wrote:\n> > But given that we know the data length and we have it in a register\n> > already, it's easy enough to just mask out data past the end with a\n> > shift. See patch 1. Performance benefit is about 1.5x Measured on a\n> > small test harness that just hashes and finalizes an array of strings,\n> > with a data dependency between consecutive hashes (next address\n> > depends on the previous hash output).\n>\n> Interesting work! I've taken this idea and (I'm guessing, haven't\n> tested) improved it by re-using an intermediate step for the\n> conditional, simplifying the creation of the mask, and moving the\n> bitscan out of the longest dependency chain. Since you didn't attach\n> the test harness, would you like to run this and see how it fares?\n> (v16-0001 is same as your 0001, and v16-0002 builds upon it.) I plan\n> to test myself as well, but since your test tries to model true\n> latency, I'm more interested in that one.\n\nIt didn't calculate the same result because the if (mask) condition\nwas incorrect. Changed it to if (chunk & 0xFF) and removed the right\nshift from the mask. It seems to be half a nanosecond faster, but as I\ndon't have a machine set up for microbenchmarking it's quite close to\nmeasurement noise.\n\nI didn't post the harness as it's currently so messy to be near\nuseless to others. But if you'd like to play around, I can tidy it up\na bit and post it.\n\n> > Not sure if the second one is worth the extra code.\n>\n> I'd say it's not worth optimizing the case we think won't be taken\n> anyway. I also like having a simple path to assert against.\n\nAgreed.\n\nAs an addendum, I couldn't resist trying out using 256bit vectors with\ntwo parallel AES hashes running, unaligned loads with special casing\npage boundary straddling loads. Requires -march=x86-64-v3 -maes. 
About\n20% faster than fasthash on short strings, 2.2x faster on 4k strings.\nRight now requires 4 bytes alignment (uses vpmaskmovd), but could be\nmade to work with any alignment.\n\nRegards,\nAnts Aasma",
"msg_date": "Tue, 30 Jan 2024 14:51:24 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 7:51 PM Ants Aasma <[email protected]> wrote:\n>\n> It didn't calculate the same result because the if (mask) condition\n> was incorrect. Changed it to if (chunk & 0xFF) and removed the right\n> shift from the mask.\n\nYes, you're quite right.\n\n> It seems to be half a nanosecond faster, but as I\n> don't have a machine set up for microbenchmarking it's quite close to\n> measurement noise.\n\nWith my \"throughput-ush\" test, they look good:\n\npgbench -n -T 20 -f bench_cstr_aligned.sql -M prepared | grep latency\n\nmaster:\nlatency average = 490.722 ms\n\n(Ants Aantsma) v-17 0001:\nlatency average = 385.263 ms\n\nv17 0001+0002:\nlatency average = 339.506 ms\n\n> I didn't post the harness as it's currently so messy to be near\n> useless to others. But if you'd like to play around, I can tidy it up\n> a bit and post it.\n\nI'd be curious, thanks.",
"msg_date": "Fri, 2 Feb 2024 16:21:01 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "I wrote:\n>\n> It occurred to me that it's strange to have two places that length can\n> be passed. That was a side effect of the original, which used length\n> to both know how many bytes to read, and to modify the internal seed.\n> With the incremental API, it doesn't make sense to pass the length (or\n> a dummy macro) up front -- with a compile-time fixed length, it can't\n> possibly break a tie, so it's just noise.\n\nThis was a wart, so pushed removing initial length from the incremental API.\n\nOn Mon, Jan 22, 2024 at 11:16 AM Jeff Davis <[email protected]> wrote:\n>\n> On Mon, 2024-01-22 at 09:03 +0700, John Naylor wrote:\n> > v15-0004 is a stab at that. As an idea, it also renames zero_bytes_le\n> > to zero_byte_low to reflect the effect better. There might be some\n> > other comment edits needed to explain usage, so I plan to hold on to\n> > this for later. Let me know what you think.\n>\n> 0004 looks good to me. No urgency so feel free to hold it until a\n> convenient time.\n\nThanks for looking, I pushed this along with an expanded explanation of usage.\n\n> 0002 and 0003 use fasthash for dynahash and GUC hash, respectively.\n> These cannot use the existing cstring hashing directly because of\n> truncation and case-folding, respectively. (Some simplehash uses can,\n> but that can come later)\n\nI've re-attached these as well as a cleaned-up version of the tail\noptimization. For the CF entry, the GUC hash function in this form\nmight only be necessary if we went ahead with simple hash. We don't\nyet have a new benchmark to show if that's still worthwhile after\n867dd2dc87 improved the one upthread.\n\nFor dynahash, one tricky part seems to be the comment about the\ndefault and when it was an assertion error. I've tried to reword this,\nbut maybe needs work. When that's in shape, I'll incorporate removing\nother strlen calls.",
"msg_date": "Tue, 6 Feb 2024 14:59:52 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On 22.01.24 03:03, John Naylor wrote:\n> I wrote:\n>> fasthash_init(&hs, sizeof(Datum), kind);\n>> fasthash_accum(&hs, (char *) &value, sizeof(Datum));\n>> return fasthash_final32(&hs, 0);\n> It occurred to me that it's strange to have two places that length can\n> be passed. That was a side effect of the original, which used length\n> to both know how many bytes to read, and to modify the internal seed.\n> With the incremental API, it doesn't make sense to pass the length (or\n> a dummy macro) up front -- with a compile-time fixed length, it can't\n> possibly break a tie, so it's just noise.\n> \n> 0001 removes the length from initialization in the incremental\n> interface. The standalone functions use length directly the same as\n> before, but after initialization. Thoughts?\n\nUnrelated related issue: src/include/common/hashfn_unstable.h currently \ncauses warnings from cpluspluscheck:\n\n/tmp/cirrus-ci-build/src/include/common/hashfn_unstable.h: In function \n‘int fasthash_accum_cstring_unaligned(fasthash_state*, const char*)’:\n/tmp/cirrus-ci-build/src/include/common/hashfn_unstable.h:201:20: \nwarning: comparison of integer expressions of different signedness: \n‘int’ and ‘long unsigned int’ [-Wsign-compare]\n 201 | while (chunk_len < FH_SIZEOF_ACCUM && str[chunk_len] != '\\0')\n | ^\n\nand a few more like that.\n\nI think it would be better to declare various int variables and \narguments as size_t instead. Even if you don't actually need the larger \nrange, it would make it more self-documenting.\n\n\n\n",
"msg_date": "Wed, 7 Feb 2024 16:41:38 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, Feb 7, 2024 at 10:41 PM Peter Eisentraut <[email protected]> wrote:\n>\n> /tmp/cirrus-ci-build/src/include/common/hashfn_unstable.h: In function\n> ‘int fasthash_accum_cstring_unaligned(fasthash_state*, const char*)’:\n> /tmp/cirrus-ci-build/src/include/common/hashfn_unstable.h:201:20:\n> warning: comparison of integer expressions of different signedness:\n> ‘int’ and ‘long unsigned int’ [-Wsign-compare]\n> 201 | while (chunk_len < FH_SIZEOF_ACCUM && str[chunk_len] != '\\0')\n> | ^\n>\n> and a few more like that.\n>\n> I think it would be better to declare various int variables and\n> arguments as size_t instead. Even if you don't actually need the larger\n> range, it would make it more self-documenting.\n\nThanks for the report! I can reproduce and have pushed that change.\n\n\n",
"msg_date": "Thu, 8 Feb 2024 10:11:53 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 5:04 PM John Naylor <[email protected]> wrote:\n>\n> On Tue, Jan 30, 2024 at 4:13 AM Ants Aasma <[email protected]> wrote:\n> > But given that we know the data length and we have it in a register\n> > already, it's easy enough to just mask out data past the end with a\n> > shift. See patch 1. Performance benefit is about 1.5x Measured on a\n> > small test harness that just hashes and finalizes an array of strings,\n> > with a data dependency between consecutive hashes (next address\n> > depends on the previous hash output).\n>\n> Interesting work! I've taken this idea and (I'm guessing, haven't\n> tested) improved it by re-using an intermediate step for the\n> conditional, simplifying the creation of the mask, and moving the\n> bitscan out of the longest dependency chain.\n\nThis needed a rebase, and is now 0001. I plan to push this soon.\n\nI also went and looked at the simplehash instances and found a few\nthat would be easy to switch over. Rather than try to figure out which\ncould benefit from shaving cycles, I changed all the string hashes,\nand one more, in 0002 so they can act as examples.\n\n0003 uses fasthash for resowner, as suggested by Heikki upthread. Now\nmurmur64 has no callers, but it (or similar *) could be used in\npg_dump/common.c for hashing CatalogId (8 bytes).\n\nCommit 42a1de3013 added a new use for string_hash, but I can't tell\nfrom a quick glance whether it uses the truncation, so I'm going to\ntake a closer look before re-attaching the proposed dynahash change\nagain.\n\n* some examples here:\nhttps://www.boost.org/doc/libs/1_84_0/boost/container_hash/detail/hash_mix.hpp",
"msg_date": "Tue, 5 Mar 2024 17:30:16 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 5:30 PM John Naylor <[email protected]> wrote:\n>\n> On Tue, Jan 30, 2024 at 5:04 PM John Naylor <[email protected]> wrote:\n> >\n> > On Tue, Jan 30, 2024 at 4:13 AM Ants Aasma <[email protected]> wrote:\n> > > But given that we know the data length and we have it in a register\n> > > already, it's easy enough to just mask out data past the end with a\n> > > shift. See patch 1. Performance benefit is about 1.5x Measured on a\n> > > small test harness that just hashes and finalizes an array of strings,\n> > > with a data dependency between consecutive hashes (next address\n> > > depends on the previous hash output).\n> >\n> > Interesting work! I've taken this idea and (I'm guessing, haven't\n> > tested) improved it by re-using an intermediate step for the\n> > conditional, simplifying the creation of the mask, and moving the\n> > bitscan out of the longest dependency chain.\n>\n> This needed a rebase, and is now 0001. I plan to push this soon.\n\nI held off on this because CI was failing, but it wasn't because of this.\n\n> I also went and looked at the simplehash instances and found a few\n> that would be easy to switch over. Rather than try to figure out which\n> could benefit from shaving cycles, I changed all the string hashes,\n> and one more, in 0002 so they can act as examples.\n\nThis was the culprit. The search path cache didn't trigger this when\nit went in, but it seems for frontend a read past the end of malloc\nfails -fsantize=address. By the same token, I'm guessing the only\nreason this didn't fail for backend is because almost all strings\nyou'd want to use as a hash key won't use a malloc'd external block.\n\nI found that adding __attribute__((no_sanitize_address)) to\nfasthash_accum_cstring_aligned() passes CI. While this kind of\nexception is warned against (for good reason), I think it's fine here\ngiven that glibc and NetBSD, and probably others, do something similar\nfor optimized strlen(). Before I write the proper macro for that, are\nthere any objections? Better ideas?\n\n> Commit 42a1de3013 added a new use for string_hash, but I can't tell\n> from a quick glance whether it uses the truncation, so I'm going to\n> take a closer look before re-attaching the proposed dynahash change\n> again.\n\nAfter looking, I think the thing to do here is create a\nhashfn_unstable.c file for global functions:\n- hash_string() to replace all those duplicate definitions of\nhash_string_pointer() in all the frontend code\n- hash_string_with_limit() for dynahash and dshash.\n\n\n",
"msg_date": "Wed, 20 Mar 2024 14:26:55 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, 2024-03-20 at 14:26 +0700, John Naylor wrote:\n> This was the culprit. The search path cache didn't trigger this when\n> it went in, but it seems for frontend a read past the end of malloc\n> fails -fsantize=address. By the same token, I'm guessing the only\n> reason this didn't fail for backend is because almost all strings\n> you'd want to use as a hash key won't use a malloc'd external block.\n> \n> I found that adding __attribute__((no_sanitize_address)) to\n> fasthash_accum_cstring_aligned() passes CI. While this kind of\n> exception is warned against (for good reason), I think it's fine here\n> given that glibc and NetBSD, and probably others, do something\n> similar\n> for optimized strlen(). Before I write the proper macro for that, are\n> there any objections? Better ideas?\n\nIt appears that the spelling no_sanitize_address is deprecated in\nclang[1] in favor of 'no_sanitize(\"address\")'. It doesn't appear to be\ndeprecated in gcc[2].\n\nAside from that, +1.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://clang.llvm.org/docs/AddressSanitizer.html#disabling-instrumentation-with-attribute-no-sanitize-address\n[2] https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html\n\n\n",
"msg_date": "Wed, 20 Mar 2024 09:01:54 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 11:01 PM Jeff Davis <[email protected]> wrote:\n>\n> > I found that adding __attribute__((no_sanitize_address)) to\n> > fasthash_accum_cstring_aligned() passes CI. While this kind of\n> > exception is warned against (for good reason), I think it's fine here\n> > given that glibc and NetBSD, and probably others, do something\n> > similar\n> > for optimized strlen(). Before I write the proper macro for that, are\n> > there any objections? Better ideas?\n>\n> It appears that the spelling no_sanitize_address is deprecated in\n> clang[1] in favor of 'no_sanitize(\"address\")'. It doesn't appear to be\n> deprecated in gcc[2].\n\nThanks for the pointers! In v20-0001, I've drafted checking this\nspelling first, since pg_attribute_no_sanitize_alignment has a similar\nversion check. Then it checks for no_sanitize_address using\n__has_attribute, which goes back to gcc 5. That's plenty for the\nbuildfarm and CI, and I'm not sure it's worth expending additional\neffort to cover more cases. (A similar attribute exists for MSVC in\ncase it comes up.)\n\nv21-0003 adds a new file hashfn_unstable.c for convenience functions\nand converts all the duplicate frontend uses of hash_string_pointer.\n\nThis will be where a similar hash_string_with_len will live for\ndynahash/dshash, which I tested some time ago. I haven't decided whether\nto merge that earlier work here or keep it in a separate patch, but\nregardless of how 0003 ends up I'd like to push 0001/0002 shortly.",
"msg_date": "Wed, 27 Mar 2024 13:44:10 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Wed, 2024-03-27 at 13:44 +0700, John Naylor wrote:\n> Thanks for the pointers! In v20-0001, I've drafted checking thes\n> spelling first, since pg_attribute_no_sanitize_alignment has a\n> similar\n> version check. Then it checks for no_sanitize_address using\n> __has_attribute, which goes back to gcc 5. That's plenty for the\n> buildfarm and CI, and I'm not sure it's worth expending additional\n> effort to cover more cases. (A similar attribute exists for MSVC in\n> case it comes up.)\n\n0001 looks good to me, thank you.\n\n> v21-0003 adds a new file hashfn_unstable.c for convenience functions\n> and converts all the duplicate frontend uses of hash_string_pointer.\n\nWhy not make hash_string() inline, too? I'm fine with it either way,\nI'm just curious why you went to the trouble to create a new .c file so\nit didn't have to be inlined.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 27 Mar 2024 22:37:41 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 12:37 PM Jeff Davis <[email protected]> wrote:\n>\n> > v21-0003 adds a new file hashfn_unstable.c for convenience functions\n> > and converts all the duplicate frontend uses of hash_string_pointer.\n>\n> Why not make hash_string() inline, too? I'm fine with it either way,\n> I'm just curious why you went to the trouble to create a new .c file so\n> it didn't have to be inlined.\n\nYeah, it's a bit strange looking in isolation, and I'm not sure I'll\ngo that route. When I was thinking of this, I also had dynahash and\ndshash in mind, which do indirect calls, even if the function is\ndefined in the same file. That would still work with an inline\ndefinition in the header, just duplicated in the different translation\nunits. Maybe that's not worth worrying about, since I imagine use\ncases with indirect calls will remain rare.\n\n\n",
"msg_date": "Sun, 31 Mar 2024 11:00:15 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 5:30 PM John Naylor <[email protected]> wrote:\n>\n> On Tue, Jan 30, 2024 at 5:04 PM John Naylor <[email protected]> wrote:\n> >\n> > On Tue, Jan 30, 2024 at 4:13 AM Ants Aasma <[email protected]> wrote:\n> > > But given that we know the data length and we have it in a register\n> > > already, it's easy enough to just mask out data past the end with a\n> > > shift. See patch 1. Performance benefit is about 1.5x Measured on a\n> > > small test harness that just hashes and finalizes an array of strings,\n> > > with a data dependency between consecutive hashes (next address\n> > > depends on the previous hash output).\n> >\n> > Interesting work! I've taken this idea and (I'm guessing, haven't\n> > tested) improved it by re-using an intermediate step for the\n> > conditional, simplifying the creation of the mask, and moving the\n> > bitscan out of the longest dependency chain.\n>\n> This needed a rebase, and is now 0001. I plan to push this soon.\n\nI pushed but had to revert -- my version (and I believe both) failed\nto keep the invariant that the aligned and unaligned must result in\nthe same hash. It's clear to me how to fix, but I've injured my strong\nhand and won't be typing much in for a cuople days. I'll prioritize\nthe removal of strlen calls for v17, since the optimization can wait\nand there is also a valgrind issue I haven't looked into.\n\n\n",
"msg_date": "Tue, 2 Apr 2024 10:27:24 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 4:13 AM Ants Aasma <[email protected]> wrote:\n>\n> But given that we know the data length and we have it in a register\n> already, it's easy enough to just mask out data past the end with a\n> shift. See patch 1. Performance benefit is about 1.5x Measured on a\n> small test harness that just hashes and finalizes an array of strings,\n> with a data dependency between consecutive hashes (next address\n> depends on the previous hash output).\n\nI pushed this with a couple cosmetic adjustments, after fixing the\nendianness issue. I'm not sure why valgrind is fine with this way, and\nthe other ways I tried forming the (little-endian) mask raised errors.\nIn addition to \"zero_byte_low | (zero_byte_low - 1)\", I tried\n\"~zero_byte_low & (zero_byte_low - 1)\" and \"zero_byte_low ^\n(zero_byte_low - 1)\" to no avail.\n\nOn Thu, Mar 28, 2024 at 12:37 PM Jeff Davis <[email protected]> wrote:\n> 0001 looks good to me, thank you.\n>\n> > v21-0003 adds a new file hashfn_unstable.c for convenience functions\n> > and converts all the duplicate frontend uses of hash_string_pointer.\n>\n> Why not make hash_string() inline, too? I'm fine with it either way,\n> I'm just curious why you went to the trouble to create a new .c file so\n> it didn't have to be inlined.\n\nThanks for looking! I pushed these, with hash_string() inlined.\n\nI've attached (not reindented for clarity) an update of something\nmentioned a few times already -- removing strlen calls for dynahash\nand dshash string keys. I'm not quite sure how the comments should be\nupdated about string_hash being deprecated to call directly. This\npatch goes further and semi-deprecates calling it at all, so these\ncomments seem a bit awkward now.",
"msg_date": "Sun, 7 Apr 2024 08:40:15 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change GUC hashtable to use simplehash?"
}
] |
[
{
"msg_contents": "Hi,\n\nRight now we use PANIC for very different kinds of errors.\n\nSometimes for errors that are persistent, where crash-restarting and trying\nagain won't help:\n ereport(PANIC,\n (errmsg(\"could not locate a valid checkpoint record\")));\nor\n ereport(PANIC,\n (errmsg(\"online backup was canceled, recovery cannot continue\")));\n\n\n\nSometimes for errors that could be transient, e.g. when running out of space\nwhile trying to write out WAL:\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not write to file \\\"%s\\\": %m\", tmppath)));\n(the ERROR is often promoted to a PANIC due to critical sections).\nor\nereport(PANIC,\n (errcode_for_file_access(),\n errmsg(\"could not write to log file \\\"%s\\\" at offset %u, length %zu: %m\",\n xlogfname, startoffset, nleft)));\n\n\nSometimes for \"should never happen\" checks that are important enough to check\nin production builds:\n elog(PANIC, \"stuck spinlock detected at %s, %s:%d\",\nor\n elog(PANIC, \"failed to re-find shared proclock object\");\n\n\nI have two main issues with this:\n\n1) If core dumps are allowed, we trigger core dumps for all of these. That's\n good for \"should never happen\" type of errors like a stuck spinlock. But it\n makes no sense for things like the on-disk state being wrong at startup, or\n running out of space while writing WAL - if anything, it might make that\n worse!\n\n It's very useful to be able to collect dumps for crashes in production, but\n it's not useful to generate thousands of identical cores because crash\n recovery fails with out-of-space and we retry over and over.\n\n\n2) For errors where crash-restarting won't fix anything, using PANIC doesn't\n allow postmaster to distinguish between an error that should lead\n postmaster to exit itself (after killing other processes, obviously) and\n the normal crash restart cycle.\n\n\nI've been trying to do some fleet wide analyses of the causes of crashes, but\nhaving core dumps for lots of stuff that aren't crashes, often repeated many\ntimes, makes that much harder. Filtering out abort()s and just looking at\nsegfaults filters out far too much.\n\n\nI don't quite know what we should do. But the current situation decidedly\ndoesn't seem great.\n\nMaybe we could have:\n- PANIC_BUG - triggers abort() followed by a crash restart cycle\n to be used for things like a stuck spinlock\n- PANIC_RETRY - causes a crash restart cycle, no core dump\n to be used for things like ENOSPC during WAL writes\n- PANIC_EXIT - causes postmaster to exit(1)\n to be used for things where retrying won't help, like\n \"requested recovery stop point is before consistent recovery point\"\n\n\nOne could argue that some of the PANICs that want to just shut down the server\nshould instead be FATALs, with knowledge in postmaster about which/when such\nerrors should trigger exiting. We do have something like this for the startup\nprocess, but only when errors happen \"early enough\", and without being able to\ndistinguish between \"retryable\" and \"should exit\" type errors. But ISTM that\nthat requires adding more and more knowledge to postmaster.c, instead of\nleaving it with the code that raises the error.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Nov 2023 14:29:11 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "PANIC serves too many masters"
},
{
"msg_contents": "Hi,\n\nOn Sat, 2023-11-18 at 14:29 -0800, Andres Freund wrote:\n> I don't quite know what we should do. But the current situation\n> decidedly\n> doesn't seem great.\n\nAgreed. Better classification is nice, but it also requires more\ndiscipline and it might not always be obvious which category something\nfits in. What about an error loop resulting in:\n\n ereport(PANIC, (errmsg_internal(\"ERRORDATA_STACK_SIZE exceeded\")));\n\nWe'd want a core file, but I don't think we want to restart in that\ncase, right?\n\n\nAlso, can we do a change like this incrementally by updating a few\nPANIC sites at a time? Is it fine to leave plain PANICs in place for\nthe foreseeable future, or do you want all of them to eventually move?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 20 Nov 2023 13:39:03 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PANIC serves too many masters"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Sat, 2023-11-18 at 14:29 -0800, Andres Freund wrote:\n>> I don't quite know what we should do. But the current situation\n>> decidedly\n>> doesn't seem great.\n\n> Agreed.\n\n+1\n\n> Better classification is nice, but it also requires more\n> discipline and it might not always be obvious which category something\n> fits in. What about an error loop resulting in:\n> ereport(PANIC, (errmsg_internal(\"ERRORDATA_STACK_SIZE exceeded\")));\n> We'd want a core file, but I don't think we want to restart in that\n> case, right?\n\nWhy not restart? There's no strong reason to assume this will\nrepeat.\n\nIt might be worth having some independent logic in the postmaster\nthat causes it to give up after too many crashes in a row. But with\nmany/most of these call sites, by definition we're not too sure what\nis wrong.\n\n> Also, can we do a change like this incrementally by updating a few\n> PANIC sites at a time? Is it fine to leave plain PANICs in place for\n> the foreseeable future, or do you want all of them to eventually move?\n\nI'd be inclined to keep PANIC with its current meaning, and\nincrementally change call sites where we decide that's not the\nbest behavior. I think those will be a minority, maybe a small\nminority. (PANIC_EXIT had darn well better be a small minority.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Nov 2023 17:12:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PANIC serves too many masters"
},
{
"msg_contents": "On Mon, 2023-11-20 at 17:12 -0500, Tom Lane wrote:\n> I'd be inclined to keep PANIC with its current meaning, and\n> incrementally change call sites where we decide that's not the\n> best behavior. I think those will be a minority, maybe a small\n> minority. (PANIC_EXIT had darn well better be a small minority.)\n\nIs the error level the right way to express what we want to happen? It\nseems like what we really want is to decide on the behavior, i.e.\nrestart or not, and generate core or not. That could be done a\ndifferent way, like:\n\n ereport(PANIC,\n (errmsg(\"could not locate a valid checkpoint record\"),\n errabort(false),errrestart(false)));\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 20 Nov 2023 14:48:27 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PANIC serves too many masters"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> Is the error level the right way to express what we want to happen? It\n> seems like what we really want is to decide on the behavior, i.e.\n> restart or not, and generate core or not. That could be done a\n> different way, like:\n\n> ereport(PANIC,\n> (errmsg(\"could not locate a valid checkpoint record\"),\n> errabort(false),errrestart(false)));\n\nYeah, I was wondering about that too. It feels to me that\nPANIC_EXIT is an error level (even more severe than PANIC).\nBut maybe \"no core dump please\" should be conveyed separately,\nsince it's just a minor adjustment that doesn't fundamentally\nchange what happens. It's plausible that you'd want a core,\nor not want one, for different cases that all seem to require\nPANIC_EXIT.\n\n(Need a better name than PANIC_EXIT. OMIGOD?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Nov 2023 17:55:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PANIC serves too many masters"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-20 17:55:32 -0500, Tom Lane wrote:\n> Jeff Davis <[email protected]> writes:\n> > Is the error level the right way to express what we want to happen? It\n> > seems like what we really want is to decide on the behavior, i.e.\n> > restart or not, and generate core or not. That could be done a\n> > different way, like:\n> \n> > ereport(PANIC,\n> > (errmsg(\"could not locate a valid checkpoint record\"),\n> > errabort(false),errrestart(false)));\n> \n> Yeah, I was wondering about that too. It feels to me that\n> PANIC_EXIT is an error level (even more severe than PANIC).\n> But maybe \"no core dump please\" should be conveyed separately,\n> since it's just a minor adjustment that doesn't fundamentally\n> change what happens.\n\nI guess I was thinking of an error level because that'd be easier to search\nfor in logs. It seems reasonable to want to specifically search for errors\nthat cause core dumps, since IMO they should all be \"should never happen\" kind\nof paths.\n\n\n> It's plausible that you'd want a core,\n> or not want one, for different cases that all seem to require\n> PANIC_EXIT.\n\nI can't immediately think of a case where you'd want PANIC_EXIT but also want\na core dump? In my mental model to use PANIC_EXIT we'd need to have a decent\nunderstanding that the situation isn't going to change after crash-restart -\nin which case a core dump presumably isn't interesting?\n\n\n> (Need a better name than PANIC_EXIT. OMIGOD?)\n\nCRITICAL?\n\n\nI agree with the point made upthread that we'd want to leave PANIC around, it's\nnot realistic to annotate everything, and then there's obviously also\nextensions (although I hope there aren't many PANICs in extensions).\n\nIf that weren't the case, something like this could make sense:\n\nPANIC: crash-restart\nCRITICAL: crash-shutdown\nBUG: crash-restart, abort()\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Nov 2023 15:35:18 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PANIC serves too many masters"
}
] |
[
{
"msg_contents": "Hi,\n\nOn linux, many filesystems default to remounting themselves read-only when\nmetadata IO fails. I.e. one common reaction to disks failing is a previously\nread-write filesystem becoming read-only.\n\nWhen e.g. trying to create a file on such a filesystem, errno is set to\nEROFS. Writing with pre-existing FDs seems to mostly generate EIO.\n\nIn errcode_for_file_access(), we map EROFS to\nERRCODE_INSUFFICIENT_PRIVILEGE. An error code that's used very widely for many\nother purposes.\n\nBecause it is so widely used, just searching for log messages with an\nERRCODE_INSUFFICIENT_PRIVILEGE sqlstate isn't promising, obviously stuff like\n ERROR: permission denied to set parameter \\\"%s\\\"\nisn't interesting.\n\nNor is EROFS a question of insufficient privileges - the filesystem is read\nonly, even root would not be permitted to write.\n\n\nI think ERRCODE_IO_ERROR would be more appropriate than\nERRCODE_INSUFFICIENT_PRIVILEGE, but not exactly great.\n\nThe only real downside would be a slightly odd sqlstate for postmaster's\ncreation of a lock file. If a tablespace were mounted read-only, IO_ERROR\nactually seems fine.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Nov 2023 14:59:18 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "errcode_for_file_access() maps EROFS to INSUFFICIENT_PRIVILEGE"
}
] |
[
{
"msg_contents": "Hi,\n\nWe currently provide no way to learn about a postgres instance having\ncorruption than searching the logs for corruption events than matching by\nsqlstate, for ERRCODE_DATA_CORRUPTED and ERRCODE_INDEX_CORRUPTED.\n\nUnfortunately, there is a case of such an sqlstate that's not at all indicating\ncorruption, namely REINDEX CONCURRENTLY when the index is invalid:\n\n if (!indexRelation->rd_index->indisvalid)\n ereport(WARNING,\n (errcode(ERRCODE_INDEX_CORRUPTED),\n errmsg(\"cannot reindex invalid index \\\"%s.%s\\\" concurrently, skipping\",\n get_namespace_name(get_rel_namespace(cellOid)),\n get_rel_name(cellOid))));\n\nThe only thing required to get to this is an interrupted CREATE INDEX\nCONCURRENTLY, which I don't think can be fairly characterized as \"corruption\".\n\nISTM something like ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE would be more\nappropriate?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Nov 2023 15:09:58 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "reindexing an invalid index should not use ERRCODE_INDEX_CORRUPTED"
},
{
"msg_contents": "On Sat, Nov 18, 2023 at 03:09:58PM -0800, Andres Freund wrote:\n> We currently provide no way to learn about a postgres instance having\n> corruption than searching the logs for corruption events than matching by\n> sqlstate, for ERRCODE_DATA_CORRUPTED and ERRCODE_INDEX_CORRUPTED.\n> \n> Unfortunately, there is a case of such an sqlstate that's not at all indicating\n> corruption, namely REINDEX CONCURRENTLY when the index is invalid:\n> \n> if (!indexRelation->rd_index->indisvalid)\n> ereport(WARNING,\n> (errcode(ERRCODE_INDEX_CORRUPTED),\n> errmsg(\"cannot reindex invalid index \\\"%s.%s\\\" concurrently, skipping\",\n> get_namespace_name(get_rel_namespace(cellOid)),\n> get_rel_name(cellOid))));\n> \n> The only thing required to get to this is an interrupted CREATE INDEX\n> CONCURRENTLY, which I don't think can be fairly characterized as \"corruption\".\n> \n> ISTM something like ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE would be more\n> appropriate?\n\n+1, that's a clear improvement.\n\nThe \"cannot\" part of the message is also inaccurate, and it's not clear to me\nwhy we have this specific restriction at all. REINDEX INDEX CONCURRENTLY\naccepts such indexes, so I doubt it's an implementation gap. Since an INVALID\nindex often duplicates some valid index, I could see an argument that\nreindexing INVALID indexes as part of a table-level REINDEX is wanted less\noften than not. But that argument would be just as pertinent to REINDEX TABLE\n(w/o CONCURRENTLY), which doesn't impose this restriction. Hmmm.\n\n\n",
"msg_date": "Sat, 18 Nov 2023 16:32:36 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindexing an invalid index should not use\n ERRCODE_INDEX_CORRUPTED"
},
{
"msg_contents": "On Sat, Nov 18, 2023 at 04:32:36PM -0800, Noah Misch wrote:\n> On Sat, Nov 18, 2023 at 03:09:58PM -0800, Andres Freund wrote:\n>> Unfortunately, there is a case of such an sqlstate that's not at all indicating\n>> corruption, namely REINDEX CONCURRENTLY when the index is invalid:\n>> \n>> if (!indexRelation->rd_index->indisvalid)\n>> ereport(WARNING,\n>> (errcode(ERRCODE_INDEX_CORRUPTED),\n>> errmsg(\"cannot reindex invalid index \\\"%s.%s\\\" concurrently, skipping\",\n>> get_namespace_name(get_rel_namespace(cellOid)),\n>> get_rel_name(cellOid))));\n>> \n>> The only thing required to get to this is an interrupted CREATE INDEX\n>> CONCURRENTLY, which I don't think can be fairly characterized as \"corruption\".\n>> \n>> ISTM something like ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE would be more\n>> appropriate?\n> \n> +1, that's a clear improvement.\n\nThe same thing can be said a couple of lines above where the code uses\nERRCODE_FEATURE_NOT_SUPPORTED but your suggestion of\nERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE would be better.\n\nWould the attached be OK for you?\n\n> The \"cannot\" part of the message is also inaccurate, and it's not clear to me\n> why we have this specific restriction at all. REINDEX INDEX CONCURRENTLY\n> accepts such indexes, so I doubt it's an implementation gap.\n\nIf you would reword that, what would you change?\n\n> Since an INVALID\n> index often duplicates some valid index, I could see an argument that\n> reindexing INVALID indexes as part of a table-level REINDEX is wanted less\n> often than not.\n\nThe argument behind this restriction is that repeated interruptions of\na table-level REINDEX CONCURRENTLY would bloat the entire relation in\nindex entries if invalid entries are rebuilt. This was discussed back\non the original thread back in 2019, around here:\nhttps://www.postgresql.org/message-id/[email protected]\n--\nMichael",
"msg_date": "Wed, 6 Dec 2023 15:17:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindexing an invalid index should not use\n ERRCODE_INDEX_CORRUPTED"
},
{
"msg_contents": "On Wed, Dec 06, 2023 at 03:17:12PM +0900, Michael Paquier wrote:\n> On Sat, Nov 18, 2023 at 04:32:36PM -0800, Noah Misch wrote:\n> > On Sat, Nov 18, 2023 at 03:09:58PM -0800, Andres Freund wrote:\n> >> Unfortunately, there is a case of such an sqlstate that's not at all indicating\n> >> corruption, namely REINDEX CONCURRENTLY when the index is invalid:\n> >> \n> >> if (!indexRelation->rd_index->indisvalid)\n> >> ereport(WARNING,\n> >> (errcode(ERRCODE_INDEX_CORRUPTED),\n> >> errmsg(\"cannot reindex invalid index \\\"%s.%s\\\" concurrently, skipping\",\n> >> get_namespace_name(get_rel_namespace(cellOid)),\n> >> get_rel_name(cellOid))));\n> >> \n> >> The only thing required to get to this is an interrupted CREATE INDEX\n> >> CONCURRENTLY, which I don't think can be fairly characterized as \"corruption\".\n> >> \n> >> ISTM something like ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE would be more\n> >> appropriate?\n> > \n> > +1, that's a clear improvement.\n> \n> The same thing can be said a couple of lines above where the code uses\n> ERRCODE_FEATURE_NOT_SUPPORTED but your suggestion of\n> ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE would be better.\n> \n> Would the attached be OK for you?\n\nOkay.\n\n> > The \"cannot\" part of the message is also inaccurate, and it's not clear to me\n> > why we have this specific restriction at all. REINDEX INDEX CONCURRENTLY\n> > accepts such indexes, so I doubt it's an implementation gap.\n> \n> If you would reword that, what would you change?\n\nI'd do \"skipping reindex of invalid index \\\"%s.%s\\\"\". If one wanted more,\nerrhint(\"Use DROP INDEX or REINDEX INDEX.\") would fit.\n\n\n",
"msg_date": "Wed, 6 Dec 2023 16:33:33 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindexing an invalid index should not use\n ERRCODE_INDEX_CORRUPTED"
},
{
"msg_contents": "On Wed, Dec 06, 2023 at 04:33:33PM -0800, Noah Misch wrote:\n> On Wed, Dec 06, 2023 at 03:17:12PM +0900, Michael Paquier wrote:\n>>> The \"cannot\" part of the message is also inaccurate, and it's not clear to me\n>>> why we have this specific restriction at all. REINDEX INDEX CONCURRENTLY\n>>> accepts such indexes, so I doubt it's an implementation gap.\n>> \n>> If you would reword that, what would you change?\n> \n> I'd do \"skipping reindex of invalid index \\\"%s.%s\\\"\". If one wanted more,\n\nIn line with vacuum.c, that sounds like a good idea at the end.\n\n> errhint(\"Use DROP INDEX or REINDEX INDEX.\") would fit.\n\nI'm OK with this suggestion as well.\n--\nMichael",
"msg_date": "Thu, 7 Dec 2023 10:32:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindexing an invalid index should not use\n ERRCODE_INDEX_CORRUPTED"
},
{
"msg_contents": "Hi,\n\nThis looks good to me!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Dec 2023 17:40:44 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: reindexing an invalid index should not use\n ERRCODE_INDEX_CORRUPTED"
},
{
"msg_contents": "On Wed, Dec 06, 2023 at 05:40:44PM -0800, Andres Freund wrote:\n> This looks good to me!\n\nCool. I've applied this one, then.\n--\nMichael",
"msg_date": "Thu, 7 Dec 2023 14:29:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindexing an invalid index should not use\n ERRCODE_INDEX_CORRUPTED"
}
] |
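The outcome of the thread above — a table-level REINDEX CONCURRENTLY skips invalid indexes with a warning ("skipping reindex of invalid index ...") rather than reporting corruption, because rebuilding leftovers of interrupted CREATE INDEX CONCURRENTLY would only bloat the relation — can be modeled outside the server. Everything below (the struct, the function, the names) is an illustrative stand-in, not PostgreSQL's actual implementation; only the skip-with-warning behavior mirrors the discussion:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-in for an index catalog entry. */
typedef struct DemoIndex
{
    const char *name;
    bool        indisvalid;     /* mirrors pg_index.indisvalid */
} DemoIndex;

/*
 * Collect the indexes a table-level concurrent reindex would rebuild:
 * invalid entries are skipped with a notice instead of being treated
 * as corruption.  Returns the number of indexes kept in out[].
 */
static size_t
collect_reindex_targets(const DemoIndex *indexes, size_t n,
                        const DemoIndex **out)
{
    size_t      kept = 0;

    for (size_t i = 0; i < n; i++)
    {
        if (!indexes[i].indisvalid)
        {
            /* in the server this is a WARNING, not an ERROR */
            fprintf(stderr, "skipping reindex of invalid index \"%s\"\n",
                    indexes[i].name);
            continue;
        }
        out[kept++] = &indexes[i];
    }
    return kept;
}
```

Note that, as Noah points out, REINDEX INDEX CONCURRENTLY on a single named index still accepts invalid indexes; the skip only applies to the table-level form.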
[
{
"msg_contents": "Hi hackers, \n\nI hope this message finds you well. I am reaching out to seek guidance on a specific aspect of PostgreSQL's index scanning functionality.\n\nI am currently working on a vector search extension for postgres, where I need to generate bitmaps based on filter conditions during an index scan. The goal is to optimize the query performance by efficiently identifying the rows that meet the given criteria.\n\nThe query plan looks like this\n> Index Scan using products_feature_idx on products (cost=0.00..27.24 rows=495 width=12)\n> Order By: (feature <-> '[0.5, 0.5, 0.5]'::vector)\n> Filter: ((price > '0.2'::double precision) AND (price <= '0.7'::double precision))\n\nWe have a custom index for the order by clause on the feature column. Now we want to utilize the index on other columns like price column. We want to access the bitmap of price column's filter condition in the feature column index. Is there any way I can achieve this goal?\n\nAny help or guidance is appreciated!\n\nThanks.\nJinjing Zhou\n\n\nHi hackers, I hope this message finds you well. I am reaching out to seek guidance on a specific aspect of PostgreSQL's index scanning functionality.I am currently working on a vector search extension for postgres, where I need to generate bitmaps based on filter conditions during an index scan. The goal is to optimize the query performance by efficiently identifying the rows that meet the given criteria.The query plan looks like thisIndex Scan using products_feature_idx on products (cost=0.00..27.24 rows=495 width=12) Order By: (feature <-> '[0.5, 0.5, 0.5]'::vector) Filter: ((price > '0.2'::double precision) AND (price <= '0.7'::double precision))We have a custom index for the order by clause on the feature column. Now we want to utilize the index on other columns like price column. We want to access the bitmap of price column's filter condition in the feature column index. 
Is there any way I can achieve this goal?Any help or guidance is appreciated!Thanks.Jinjing Zhou",
"msg_date": "Mon, 20 Nov 2023 01:19:33 +0800",
"msg_from": "\"Jinjing Zhou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inquiry on Generating Bitmaps from Filter Conditions in Index Scans"
},
{
"msg_contents": "\n\nOn 11/19/23 18:19, Jinjing Zhou wrote:\n> Hi hackers, \n> \n> I hope this message finds you well. I am reaching out to seek guidance\n> on a specific aspect of PostgreSQL's index scanning functionality.\n> \n> I am currently working on a vector search extension for postgres, where\n> I need to generate bitmaps based on filter conditions during an index\n> scan. The goal is to optimize the query performance by efficiently\n> identifying the rows that meet the given criteria.\n> \n> The query plan looks like this\n> \n> Index Scan using products_feature_idx on products (cost=0.00..27.24\n> rows=495 width=12)\n> Order By: (feature <-> '[0.5, 0.5, 0.5]'::vector)\n> Filter: ((price > '0.2'::double precision) AND (price <=\n> '0.7'::double precision))\n> \n> \n> We have a custom index for the order by clause on the feature column.\n> Now we want to utilize the index on other columns like price column. We\n> want to access the bitmap of price column's filter condition in the\n> feature column index. Is there any way I can achieve this goal?\n> \n> Any help or guidance is appreciated!\n> \n\nI suppose you'll need to give more details about what exactly are you\ntrying to achieve, what you tried, maybe some code examples, etc. Your\nquestion is quite vague, and it's unclear what \"bitmaps generated on\nfilter conditions\" or \"custom index\" means.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 20 Nov 2023 11:51:47 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inquiry on Generating Bitmaps from Filter Conditions in Index\n Scans"
},
{
"msg_contents": "On Mon, 20 Nov 2023 at 09:30, Jinjing Zhou <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> I hope this message finds you well. I am reaching out to seek guidance on a specific aspect of PostgreSQL's index scanning functionality.\n>\n> I am currently working on a vector search extension for postgres, where I need to generate bitmaps based on filter conditions during an index scan. The goal is to optimize the query performance by efficiently identifying the rows that meet the given criteria.\n>\n> The query plan looks like this\n>\n> Index Scan using products_feature_idx on products (cost=0.00..27.24 rows=495 width=12)\n> Order By: (feature <-> '[0.5, 0.5, 0.5]'::vector)\n> Filter: ((price > '0.2'::double precision) AND (price <= '0.7'::double precision))\n>\n>\n> We have a custom index for the order by clause on the feature column. Now we want to utilize the index on other columns like price column. We want to access the bitmap of price column's filter condition in the feature column index. Is there any way I can achieve this goal?\n\nIf you mean \"I'd like to use bitmaps generated by combining filter\nresults from index A, B, and C for (pre-)filtering the ordered index\nlookups in index D\",\nthen there is no current infrastructure to do this. 
Bitmap scans\ncurrently generate a data structure that is not indexable, and can\nthus not be used efficiently to push an index's generated bitmap into\nanother bitmap's scans.\n\nThere are efforts to improve the data structures we use for storing\nTIDs during vacuum [0] which could extend to the TID bitmap structure,\nbut even then we'd need some significant effort to rewire Postgres'\ninternals to push down the bitmap filters; and that is even under the\nassumption that pushing the bitmap down into the index AM is more\nefficient than doing the merges above the index AM and then re-sorting\nthe data.\n\nSo, in short, it's not currently available in community PostgreSQL.\nYou could probably create a planner hook + custom executor node that\ndoes this, but it won't be able to use much of the features available\ninside PostgreSQL.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/flat/CANWCAZbrZ58-w1W_3pg-0tOfbx8K41_n_03_0ndGV78hJWswBA%2540mail.gmail.com\n\n\n",
"msg_date": "Mon, 20 Nov 2023 12:32:47 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inquiry on Generating Bitmaps from Filter Conditions in Index\n Scans"
},
{
"msg_contents": "Thanks a lot! This is exactly what I'm asking. We've tried the CustomScanAPI at https://github.com/tensorchord/pgvecto.rs/pull/126, but met error with \"variable not found in subplan target list\". We're still investigating the root cause and thanks for your guidance!\n\nBest\nJinjing Zhou\n> From: \"Matthias van de Meent\"<[email protected]>\n> Date: Mon, Nov 20, 2023, 19:33\n> Subject: Re: Inquiry on Generating Bitmaps from Filter Conditions in Index Scans\n> To: \"Jinjing Zhou\"<[email protected]>\n> Cc: \"[email protected]\"<[email protected]>\n> On Mon, 20 Nov 2023 at 09:30, Jinjing Zhou <[email protected]> wrote:\n> >\n> > Hi hackers,\n> >\n> > I hope this message finds you well. I am reaching out to seek guidance on a specific aspect of PostgreSQL's index scanning functionality.\n> >\n> > I am currently working on a vector search extension for postgres, where I need to generate bitmaps based on filter conditions during an index scan. The goal is to optimize the query performance by efficiently identifying the rows that meet the given criteria.\n> >\n> > The query plan looks like this\n> >\n> > Index Scan using products_feature_idx on products (cost=0.00..27.24 rows=495 width=12)\n> > Order By: (feature <-> '[0.5, 0.5, 0.5]'::vector)\n> > Filter: ((price > '0.2'::double precision) AND (price <= '0.7'::double precision))\n> >\n> >\n> > We have a custom index for the order by clause on the feature column. Now we want to utilize the index on other columns like price column. We want to access the bitmap of price column's filter condition in the feature column index. Is there any way I can achieve this goal?\n> \n> If you mean \"I'd like to use bitmaps generated by combining filter\n> results from index A, B, and C for (pre-)filtering the ordered index\n> lookups in index D\",\n> then there is no current infrastructure to do this. 
Bitmap scans\n> currently generate a data structure that is not indexable, and can\n> thus not be used efficiently to push an index's generated bitmap into\n> another bitmap's scans.\n> \n> There are efforts to improve the data structures we use for storing\n> TIDs during vacuum [0] which could extend to the TID bitmap structure,\n> but even then we'd need some significant effort to rewire Postgres'\n> internals to push down the bitmap filters; and that is even under the\n> assumption that pushing the bitmap down into the index AM is more\n> efficient than doing the merges above the index AM and then re-sorting\n> the data.\n> \n> So, in short, it's not currently available in community PostgreSQL.\n> You could probably create a planner hook + custom executor node that\n> does this, but it won't be able to use much of the features available\n> inside PostgreSQL.\n> \n> Kind regards,\n> \n> Matthias van de Meent\n> \n> [0] https://www.postgresql.org/message-id/flat/CANWCAZbrZ58-w1W_3pg-0tOfbx8K41_n_03_0ndGV78hJWswBA%2540mail.gmail.com\n\nThanks a lot! This is exactly what I'm asking. We've tried the CustomScanAPI at https://github.com/tensorchord/pgvecto.rs/pull/126, but met error with \"variable not found in subplan target list\". We're still investigating the root cause and thanks for your guidance!BestJinjing ZhouFrom: \"Matthias van de Meent\"<[email protected]>Date: Mon, Nov 20, 2023, 19:33Subject: Re: Inquiry on Generating Bitmaps from Filter Conditions in Index ScansTo: \"Jinjing Zhou\"<[email protected]>Cc: \"[email protected]\"<[email protected]>On Mon, 20 Nov 2023 at 09:30, Jinjing Zhou <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> I hope this message finds you well. I am reaching out to seek guidance on a specific aspect of PostgreSQL's index scanning functionality.\n>\n> I am currently working on a vector search extension for postgres, where I need to generate bitmaps based on filter conditions during an index scan. 
The goal is to optimize the query performance by efficiently identifying the rows that meet the given criteria.\n>\n> The query plan looks like this\n>\n> Index Scan using products_feature_idx on products (cost=0.00..27.24 rows=495 width=12)\n> Order By: (feature <-> '[0.5, 0.5, 0.5]'::vector)\n> Filter: ((price > '0.2'::double precision) AND (price <= '0.7'::double precision))\n>\n>\n> We have a custom index for the order by clause on the feature column. Now we want to utilize the index on other columns like price column. We want to access the bitmap of price column's filter condition in the feature column index. Is there any way I can achieve this goal?\n\nIf you mean \"I'd like to use bitmaps generated by combining filter\nresults from index A, B, and C for (pre-)filtering the ordered index\nlookups in index D\",\nthen there is no current infrastructure to do this. Bitmap scans\ncurrently generate a data structure that is not indexable, and can\nthus not be used efficiently to push an index's generated bitmap into\nanother bitmap's scans.\n\nThere are efforts to improve the data structures we use for storing\nTIDs during vacuum [0] which could extend to the TID bitmap structure,\nbut even then we'd need some significant effort to rewire Postgres'\ninternals to push down the bitmap filters; and that is even under the\nassumption that pushing the bitmap down into the index AM is more\nefficient than doing the merges above the index AM and then re-sorting\nthe data.\n\nSo, in short, it's not currently available in community PostgreSQL.\nYou could probably create a planner hook + custom executor node that\ndoes this, but it won't be able to use much of the features available\ninside PostgreSQL.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/flat/CANWCAZbrZ58-w1W_3pg-0tOfbx8K41_n_03_0ndGV78hJWswBA%2540mail.gmail.com",
"msg_date": "Mon, 20 Nov 2023 20:20:23 +0800",
"msg_from": "\"Jinjing Zhou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inquiry on Generating Bitmaps from Filter Conditions in Index\n Scans"
},
{
"msg_contents": "Thanks. Our project is at https://github.com/tensorchord/pgvecto.rs. A custom index is implemented for the vector similarity search, which implements `amgettuples` with direction support to provide candidates for the order by clause. \n\nAnd we want to inject the filter condition using bitmap into the amgettuples process, instead of checking the tuples one by one to accelerate the whole process. \n\nBest\nJinjing Zhou\n> From: \"Tomas Vondra\"<[email protected]>\n> Date: Mon, Nov 20, 2023, 18:52\n> Subject: Re: Inquiry on Generating Bitmaps from Filter Conditions in Index Scans\n> To: \"Jinjing Zhou\"<[email protected]>, \"[email protected]\"<[email protected]>\n> On 11/19/23 18:19, Jinjing Zhou wrote:\n> > Hi hackers, \n> > \n> > I hope this message finds you well. I am reaching out to seek guidance\n> > on a specific aspect of PostgreSQL's index scanning functionality.\n> > \n> > I am currently working on a vector search extension for postgres, where\n> > I need to generate bitmaps based on filter conditions during an index\n> > scan. The goal is to optimize the query performance by efficiently\n> > identifying the rows that meet the given criteria.\n> > \n> > The query plan looks like this\n> > \n> > Index Scan using products_feature_idx on products (cost=0.00..27.24\n> > rows=495 width=12)\n> > Order By: (feature <-> '[0.5, 0.5, 0.5]'::vector)\n> > Filter: ((price > '0.2'::double precision) AND (price <=\n> > '0.7'::double precision))\n> > \n> > \n> > We have a custom index for the order by clause on the feature column.\n> > Now we want to utilize the index on other columns like price column. We\n> > want to access the bitmap of price column's filter condition in the\n> > feature column index. Is there any way I can achieve this goal?\n> > \n> > Any help or guidance is appreciated!\n> > \n> \n> I suppose you'll need to give more details about what exactly are you\n> trying to achieve, what you tried, maybe some code examples, etc. 
Your\n> question is quite vague, and it's unclear what \"bitmaps generated on\n> filter conditions\" or \"custom index\" means.\n> \n> regards\n> \n> -- \n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\nThanks. Our project is at https://github.com/tensorchord/pgvecto.rs. A custom index is implemented for the vector similarity search, which implements `amgettuples` with direction support to provide candidates for the order by clause. And we want to inject the filter condition using bitmap into the amgettuples process, instead of checking the tuples one by one to accelerate the whole process. BestJinjing ZhouFrom: \"Tomas Vondra\"<[email protected]>Date: Mon, Nov 20, 2023, 18:52Subject: Re: Inquiry on Generating Bitmaps from Filter Conditions in Index ScansTo: \"Jinjing Zhou\"<[email protected]>, \"[email protected]\"<[email protected]>On 11/19/23 18:19, Jinjing Zhou wrote:\n> Hi hackers, \n> \n> I hope this message finds you well. I am reaching out to seek guidance\n> on a specific aspect of PostgreSQL's index scanning functionality.\n> \n> I am currently working on a vector search extension for postgres, where\n> I need to generate bitmaps based on filter conditions during an index\n> scan. The goal is to optimize the query performance by efficiently\n> identifying the rows that meet the given criteria.\n> \n> The query plan looks like this\n> \n> Index Scan using products_feature_idx on products (cost=0.00..27.24\n> rows=495 width=12)\n> Order By: (feature <-> '[0.5, 0.5, 0.5]'::vector)\n> Filter: ((price > '0.2'::double precision) AND (price <=\n> '0.7'::double precision))\n> \n> \n> We have a custom index for the order by clause on the feature column.\n> Now we want to utilize the index on other columns like price column. We\n> want to access the bitmap of price column's filter condition in the\n> feature column index. 
Is there any way I can achieve this goal?\n> \n> Any help or guidance is appreciated!\n> \n\nI suppose you'll need to give more details about what exactly are you\ntrying to achieve, what you tried, maybe some code examples, etc. Your\nquestion is quite vague, and it's unclear what \"bitmaps generated on\nfilter conditions\" or \"custom index\" means.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 20 Nov 2023 20:22:22 +0800",
"msg_from": "\"Jinjing Zhou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inquiry on Generating Bitmaps from Filter Conditions in Index\n Scans"
}
] |
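As Matthias notes in the thread above, core PostgreSQL has no infrastructure for pushing a filter bitmap into another index's ordered scan, so an extension would have to implement the idea itself. A minimal standalone sketch of that idea — intersect a TID-style membership bitmap with distance-ordered candidates (as `amgettuples` would yield them) before returning the top k — might look like the following. All names here are hypothetical, and real TIDs are (block, offset) pairs rather than plain integers:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy bitmap covering 256 row slots (4 x 64-bit words). */
typedef struct DemoBitmap
{
    uint64_t    words[4];
} DemoBitmap;

static void
demo_bitmap_set(DemoBitmap *bm, unsigned row)
{
    bm->words[row / 64] |= (uint64_t) 1 << (row % 64);
}

static bool
demo_bitmap_member(const DemoBitmap *bm, unsigned row)
{
    return (bm->words[row / 64] >> (row % 64)) & 1;
}

/*
 * Given candidates already ordered by distance, keep the first k that
 * pass the filter bitmap.  This is the "prefilter the ordered index
 * lookups" step the thread discusses; the bitmap itself would come
 * from evaluating the filter condition (e.g. on the price column).
 */
static size_t
demo_prefilter_topk(const unsigned *ordered, size_t n,
                    const DemoBitmap *filter, unsigned *out, size_t k)
{
    size_t      kept = 0;

    for (size_t i = 0; i < n && kept < k; i++)
        if (demo_bitmap_member(filter, ordered[i]))
            out[kept++] = ordered[i];
    return kept;
}
```

Whether this beats filtering tuples one by one above the index AM depends, as the thread says, on the selectivity of the filter and the cost of re-sorting — the sketch only shows the mechanism, not that it is a win.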
[
{
"msg_contents": "Hi,\n\nWhile reviewing another patch I was looking at the walsender's static\nfunction CreateReplicationSlot\n\nI found that the current logic seemed to have some unnecessary if/else\nchecking which can be simplified.\n\n~~\n\nTo summarise:\n\nCURRENT\nif (cmd->kind == REPLICATION_KIND_PHYSICAL)\n{\n ...\n}\nelse\n{\n ...\n}\nif (cmd->kind == REPLICATION_KIND_LOGICAL)\n{\n ...\n}\nelse if (cmd->kind == REPLICATION_KIND_PHYSICAL && reserve_wal)\n{\n ...\n}\n\n\nSUGGESTION\nif (cmd->kind == REPLICATION_KIND_PHYSICAL)\n{\n ...\n if (reserve_wal)\n {\n ...\n }\n}\nelse /* REPLICATION_KIND_LOGICAL */\n{\n ...\n}\n\n~~~\n\nPSA a small patch for making this change.\n\n(I ran make check-world after this change and it was successful)\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 20 Nov 2023 18:01:42 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simplify if/else logic of walsender CreateReplicationSlot"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 06:01:42PM +1100, Peter Smith wrote:\n> While reviewing another patch I was looking at the walsender's static\n> function CreateReplicationSlot\n> \n> I found that the current logic seemed to have some unnecessary if/else\n> checking which can be simplified.\n\nGood idea. What you are suggesting here improves the readability of\nthis code, so +1.\n--\nMichael",
"msg_date": "Mon, 20 Nov 2023 17:07:38 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify if/else logic of walsender CreateReplicationSlot"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 05:07:38PM +0900, Michael Paquier wrote:\n> Good idea. What you are suggesting here improves the readability of\n> this code, so +1.\n\nAnd applied this one, thanks!\n--\nMichael",
"msg_date": "Tue, 21 Nov 2023 13:57:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify if/else logic of walsender CreateReplicationSlot"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 3:57 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Nov 20, 2023 at 05:07:38PM +0900, Michael Paquier wrote:\n> > Good idea. What you are suggesting here improves the readability of\n> > this code, so +1.\n>\n> And applied this one, thanks!\n\nThanks for pushing.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 21 Nov 2023 16:52:32 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simplify if/else logic of walsender CreateReplicationSlot"
}
] |
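The simplification applied in this thread collapses two separate `cmd->kind` checks into a single physical/logical branch, with the `reserve_wal` step nested inside the physical arm. A toy function with the same control-flow shape can make the before/after difference concrete — the enum and return codes below are invented for illustration only; the real `CreateReplicationSlot` builds slots rather than returning a label:

```c
#include <stdbool.h>

typedef enum DemoSlotKind
{
    DEMO_KIND_PHYSICAL,
    DEMO_KIND_LOGICAL
} DemoSlotKind;

typedef enum DemoPath
{
    PATH_PHYSICAL = 0,
    PATH_PHYSICAL_RESERVE = 1,
    PATH_LOGICAL = 2
} DemoPath;

/*
 * Same branch structure as the SUGGESTION in the thread: one test of
 * the kind, with the reserve_wal handling inside the physical branch,
 * instead of re-testing the kind in a second if/else chain.
 */
static DemoPath
demo_create_slot_path(DemoSlotKind kind, bool reserve_wal)
{
    if (kind == DEMO_KIND_PHYSICAL)
    {
        /* ... physical slot setup ... */
        if (reserve_wal)
        {
            /* ... reserve WAL ... */
            return PATH_PHYSICAL_RESERVE;
        }
        return PATH_PHYSICAL;
    }
    else                        /* DEMO_KIND_LOGICAL */
    {
        /* ... logical slot setup ... */
        return PATH_LOGICAL;
    }
}
```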
[
{
"msg_contents": "Although it's not performance-critical, I think it just makes sense to break\nthe loop in replorigin_session_setup() as soon as we've found the origin.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Mon, 20 Nov 2023 10:07:53 +0100",
"msg_from": "Antonin Houska <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stop the search once replication origin is found"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 2:36 PM Antonin Houska <[email protected]> wrote:\n>\n> Although it's not performance-critical, I think it just makes sense to break\n> the loop in replorigin_session_setup() as soon as we've found the origin.\n>\n\nYour proposal sounds reasonable to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Nov 2023 16:36:05 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stop the search once replication origin is found"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 4:36 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Nov 20, 2023 at 2:36 PM Antonin Houska <[email protected]> wrote:\n> >\n> > Although it's not performance-critical, I think it just makes sense to break\n> > the loop in replorigin_session_setup() as soon as we've found the origin.\n> >\n>\n> Your proposal sounds reasonable to me.\n>\n\nPushed, thanks for the patch!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:18:58 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stop the search once replication origin is found"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 7:49 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Nov 20, 2023 at 4:36 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Nov 20, 2023 at 2:36 PM Antonin Houska <[email protected]> wrote:\n> > >\n> > > Although it's not performance-critical, I think it just makes sense to break\n> > > the loop in replorigin_session_setup() as soon as we've found the origin.\n> > >\n> >\n> > Your proposal sounds reasonable to me.\n> >\n>\n> Pushed, thanks for the patch!\n>\n> --\n\nHi,\n\nWhile reviewing the replorigin_session_setup() fix [1] that was pushed\nyesterday, I saw that some nearby code in that same function might\nbenefit from some refactoring.\n\nI'm not sure if you want to modify it or not, but FWIW I think the\ncode can be tidied by making the following changes:\n\n~~~\n\n1.\n else if (curstate->acquired_by != 0 && acquired_by == 0)\n {\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_IN_USE),\n errmsg(\"replication origin with ID %d is already\nactive for PID %d\",\n curstate->roident, curstate->acquired_by)));\n }\n\n1a. AFAICT the above code doesn't need to be else/if\n\n1b. 
The brackets are unnecessary for a single statement.\n\n~~~\n\n2.\nif (session_replication_state == NULL && free_slot == -1)\nereport(ERROR,\n(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),\nerrmsg(\"could not find free replication state slot for replication\norigin with ID %d\",\nnode),\nerrhint(\"Increase max_replication_slots and try again.\")));\nelse if (session_replication_state == NULL)\n{\n/* initialize new slot */\nsession_replication_state = &replication_states[free_slot];\nAssert(session_replication_state->remote_lsn == InvalidXLogRecPtr);\nAssert(session_replication_state->local_lsn == InvalidXLogRecPtr);\nsession_replication_state->roident = node;\n}\n\nThe above code can be improved by combining within a single check for\nsession_replication_state NULL.\n\n~~~\n\n3.\nThere are some unnecessary double-blank lines.\n\n~~~\n\n4.\n /* ok, found slot */\n session_replication_state = curstate;\n break;\n\nQUESTION: That 'session_replication_state' is a global variable, but\nthere is more validation logic that comes *after* this assignment\nwhich might decide there was some problem and cause an ereport or\nelog. In practice, maybe it makes no difference, but it did seem\nslightly dubious to me to assign to a global before determining\neverything is OK. Thoughts?\n\n~~~\n\nAnyway, PSA a patch for the 1-3 above.\n\n======\n[1] https://www.postgresql.org/message-id/flat/2694.1700471273%40antos\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 23 Nov 2023 09:32:32 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stop the search once replication origin is found"
}
] |
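Both the committed change and Peter's follow-up review concern the scan over `replication_states[]`: stop as soon as the origin is found (the early `break`), while remembering the first free slot for the case where it is not found. A standalone sketch of that loop shape follows — the struct is a stand-in, since the real `ReplicationState` also carries LSNs, a lock, and the `acquired_by` PID that drives the error cases discussed above:

```c
#include <stddef.h>

typedef struct DemoOriginState
{
    int         roident;        /* 0 means the slot is unused */
} DemoOriginState;

/*
 * Scan the state array for origin `node`, returning its slot index at
 * the first match (no need to keep scanning), or else the index of the
 * first free slot so the caller can initialize it, or -1 if the array
 * is full (the "increase max_replication_slots" error in the server).
 */
static int
demo_find_origin_slot(const DemoOriginState *states, size_t n, int node)
{
    int         free_slot = -1;

    for (size_t i = 0; i < n; i++)
    {
        if (states[i].roident == node)
            return (int) i;     /* found: stop the search here */
        if (states[i].roident == 0 && free_slot == -1)
            free_slot = (int) i;
    }
    return free_slot;
}
```

Separating the search from the subsequent validation, as in this sketch, also sidesteps the concern Peter raises about assigning to the global `session_replication_state` before all checks have passed.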
[
{
"msg_contents": "Hello\n\nLet me assume that there is a table T with columns a, b, c, d, e, f, g, and h.\n\nIf one wants to select data from all the columns except d and e, then one has to write\n\nSELECT a, b, c, f, g, h\nFROM T;\n\ninstead of writing\n\nSELECT ALL BUT (d, e)\nFROM T;\n\nor something similar (perhaps by using keywords EXCEPT or EXCLUDE).\n\nThe more a table has columns, the more one has to write the column names.\n\nThere are systems that support this kind of shorthand syntax in SQL:\n\nBigQuery: https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select-modifiers<https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select-modifiers>\n\nDatabricks: https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-qry-select.html#syntax\n\nDuckDB: https://duckdb.org/docs/sql/query_syntax/select\n\nSnowflake:https://stephenallwright.com/select-columns-except-snowflake/\n\nI think that such syntax would be useful and if more and more DBMS-s start to offer it, then perhaps one day it will be in the SQL standard as well.\n\nWhat do you think, is it something that could be added to PostgreSQL?\n\nPeople are interested of this feature. 
The following links are just some examples:\nhttp://www.postgresonline.com/journal/archives/41-How-to-SELECT-ALL-EXCEPT-some-columns-in-a-table.html\n\nhttps://stackoverflow.com/questions/729197/exclude-a-column-using-select-except-columna-from-tablea\n\nhttps://dba.stackexchange.com/questions/1957/sql-select-all-columns-except-some\n\nhttps://www.reddit.com/r/SQL/comments/15x97kw/sql_is_there_a_way_to_just_exclude_1_column_in/\n\n\nBest regards\nErki Eessaar\n\n\n\n\n\n\n\n\n\n\nHello\n\n\n\n\nLet me assume that there is a table T with columns a, b, c, d, e, f, g, and h.\n\n\n\n\nIf one wants to select data from all the columns except d and e, then one has to write\n\n\n\n\nSELECT a, b, c, f, g, h\n\nFROM T;\n\n\n\n\ninstead of writing \n\n\n\n\nSELECT ALL BUT (d, e)\n\nFROM T;\n\n\n\n\nor something similar (perhaps by using keywords EXCEPT or EXCLUDE).\n\n\n\n\nThe more a table has columns, the more one has to write the column names.\n\n\n\n\nThere are systems that support this kind of shorthand syntax in SQL:\n\n\n\nBigQuery: https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select-modifiers\n\n\n\nDatabricks:\n\nhttps://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-qry-select.html#syntax\n\n\n\nDuckDB:\n\nhttps://duckdb.org/docs/sql/query_syntax/select\n\n\nSnowflake:https://stephenallwright.com/select-columns-except-snowflake/\n\n\n\nI think that such syntax would be useful and if more and more DBMS-s start to offer\n it, then perhaps one day it will be in the SQL standard as well.\n\n\nWhat do you think, is it something that could be added to PostgreSQL? \n\n\nPeople are interested of this feature. 
The following links are just some examples:\nhttp://www.postgresonline.com/journal/archives/41-How-to-SELECT-ALL-EXCEPT-some-columns-in-a-table.html\n\n\nhttps://stackoverflow.com/questions/729197/exclude-a-column-using-select-except-columna-from-tablea\n\n\n\nhttps://dba.stackexchange.com/questions/1957/sql-select-all-columns-except-some\n\n\nhttps://www.reddit.com/r/SQL/comments/15x97kw/sql_is_there_a_way_to_just_exclude_1_column_in/\n\n\n\n\n\nBest regards\nErki Eessaar",
"msg_date": "Mon, 20 Nov 2023 09:52:28 +0000",
"msg_from": "Erki Eessaar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Perhaps a possible new feature to a future PostgreSQL release"
},
{
"msg_contents": "On Mon, 2023-11-20 at 09:52 +0000, Erki Eessaar wrote:\n> Let me assume that there is a table T with columns a, b, c, d, e, f, g, and h.\n> \n> If one wants to select data from all the columns except d and e, then one has to write\n> \n> SELECT a, b, c, f, g, h\n> FROM T;\n> \n> instead of writing \n> \n> SELECT ALL BUT (d, e)\n> FROM T;\n> \n> or something similar (perhaps by using keywords EXCEPT or EXCLUDE).\n\nThis has been discussed before (repeatedly); see for example\nhttps://www.postgresql.org/message-id/flat/CANcm6wbR3EG7t-G%3DTxy64Yt8nR6YbpzFRuTewJQ%2BkCq%3DrZ8M2A%40mail.gmail.com\n\nAll previous attempts went nowhere.\n\n\n> I think that such syntax would be useful and if more and more DBMS-s start to\n> offer it, then perhaps one day it will be in the SQL standard as well.\n\nOne of the reasons *against* the feature is that the SQL standard committee\nmight one day come up with a feature like that using a syntax that conflicts\nwith whatever we introduced on our own.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 20 Nov 2023 11:18:06 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perhaps a possible new feature to a future PostgreSQL release"
}
] |
[
{
"msg_contents": "Hi hackers\n\nI've been looking into ways to reduce the overhead we're having in pqcomm\nand I'd like to propose a small patch to modify how socket_putmessage works.\n\nCurrently socket_putmessage copies any input data into the pqcomm send\nbuffer (PqSendBuffer) and the size of this buffer is 8K. When the send\nbuffer gets full, it's flushed and we continue to copy more data into the\nsend buffer until we have no data left to be sent.\nSince the send buffer is flushed whenever it's full, I think we are safe to\nsay that if the size of input data is larger than the buffer size, which is\n8K, then the buffer will be flushed at least once (or more, depending on the\ninput size) to store all the input data.\n\nThe proposed change modifies socket_putmessage to send any data larger than\n8K immediately without copying it into the send buffer. Assuming that the\nsend buffer would be flushed anyway due to reaching its limit, the patch\njust gets rid of the copy part which seems unnecessary and sends data\nwithout waiting.\n\nThis change affects places where pq_putmessage is used such as\npg_basebackup, COPY TO, walsender etc.\n\nI did some experiments to see how the patch performs.\nFirstly, I loaded ~5GB data into a table [1], then ran \"COPY test TO\nSTDOUT\". Here are perf results of both the patch and HEAD\n\nHEAD:\n- 94,13% 0,22% postgres postgres [.] DoCopyTo\n - 93,90% DoCopyTo\n - 91,80% CopyOneRowTo\n + 47,35% CopyAttributeOutText\n - 26,49% CopySendEndOfRow\n - 25,97% socket_putmessage\n - internal_putbytes\n - 24,38% internal_flush\n + secure_write\n + 1,47% memcpy (inlined)\n + 14,69% FunctionCall1Coll\n + 1,94% appendBinaryStringInfo\n + 0,75% MemoryContextResetOnly\n + 1,54% table_scan_getnextslot (inlined)\n\nPatch:\n- 94,40% 0,30% postgres postgres [.] 
DoCopyTo\n - 94,11% DoCopyTo\n - 92,41% CopyOneRowTo\n + 51,20% CopyAttributeOutText\n - 20,87% CopySendEndOfRow\n - 20,45% socket_putmessage\n - internal_putbytes\n - 18,50% internal_flush (inlined)\n internal_flush_buffer\n + secure_write\n + 1,61% memcpy (inlined)\n + 17,36% FunctionCall1Coll\n + 1,33% appendBinaryStringInfo\n + 0,93% MemoryContextResetOnly\n + 1,36% table_scan_getnextslot (inlined)\n\nThe patch brings a ~5% gain in socket_putmessage.\n\nAlso timed the pg_basebackup like:\ntime pg_basebackup -p 5432 -U replica_user -X none -c fast --no-manifest\n-D test\n\nHEAD:\nreal 0m10,040s\nuser 0m0,768s\nsys 0m7,758s\n\nPatch:\nreal 0m8,882s\nuser 0m0,699s\nsys 0m6,980s\n\nIt seems ~11% faster in this specific case.\n\nI'd appreciate any feedback/thoughts.\n\n\n[1]\nCREATE TABLE test(id int, name text, time TIMESTAMP);\nINSERT INTO test (id, name, time) SELECT i AS id, repeat('dummy', 100) AS\nname, NOW() AS time FROM generate_series(1, 100000000) AS i;\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 20 Nov 2023 15:21:58 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Flushing large data immediately in pqcomm"
},
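The behaviour Melih proposes, bypassing PqSendBuffer for payloads that would overflow it anyway, can be modelled in a few lines. This is only a toy sketch of the control flow, not the actual C implementation: the class name `SendBuffer` is invented, and the 8192-byte constant stands in for PqSendBufferSize.

```python
BUFFER_SIZE = 8192  # stands in for PqSendBufferSize

class SendBuffer:
    """Toy model of the patched socket_putmessage path: payloads at least
    as large as the send buffer are written out directly instead of being
    copied into it first."""

    def __init__(self):
        self.buf = bytearray()
        self.sends = []  # one entry per simulated secure_write() call

    def flush(self):
        if self.buf:
            self.sends.append(bytes(self.buf))
            self.buf.clear()

    def putmessage(self, payload: bytes):
        if len(payload) >= BUFFER_SIZE:
            # Patched path: the buffer would overflow anyway, so flush any
            # pending bytes and send the payload without copying it.
            self.flush()
            self.sends.append(bytes(payload))
        else:
            # Unpatched path: copy into the buffer, flushing first if the
            # payload would not fit in the remaining space.
            if len(self.buf) + len(payload) > BUFFER_SIZE:
                self.flush()
            self.buf += payload
```

Note that this sketch also exhibits the pattern Heikki objects to below: a large payload arriving while the buffer holds a few pending bytes produces one tiny write followed by one large one.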
{
"msg_contents": "On 20/11/2023 14:21, Melih Mutlu wrote:\n> Hi hackers\n> \n> I've been looking into ways to reduce the overhead we're having in \n> pqcomm and I'd like to propose a small patch to modify how \n> socket_putmessage works.\n> \n> Currently socket_putmessage copies any input data into the pqcomm send \n> buffer (PqSendBuffer) and the size of this buffer is 8K. When the send \n> buffer gets full, it's flushed and we continue to copy more data into \n> the send buffer until we have no data left to be sent.\n> Since the send buffer is flushed whenever it's full, I think we are safe \n> to say that if the size of input data is larger than the buffer size, \n> which is 8K, then the buffer will be flushed at least once (or more, \n> depending on the input size) to store all the input data.\n\nAgreed, that's silly.\n\n> Proposed change modifies socket_putmessage to send any data larger than \n> 8K immediately without copying it into the send buffer. Assuming that \n> the send buffer would be flushed anyway due to reaching its limit, the \n> patch just gets rid of the copy part which seems unnecessary and sends \n> data without waiting.\n\nIf there's already some data in PqSendBuffer, I wonder if it would be \nbetter to fill it up with data, flush it, and then send the rest of the \ndata directly. Instead of flushing the partial data first. I'm afraid \nthat you'll make a tiny call to secure_write(), followed by a large one, \nthen a tiny one again, and so forth. Especially when socket_putmessage \nitself writes the msgtype and len, which are tiny, before the payload.\n\nPerhaps we should invent a new pq_putmessage() function that would take \nan input buffer with 5 bytes of space reserved before the payload. \npq_putmessage() could then fill in the msgtype and len bytes in the \ninput buffer and send that directly. 
(Not wedded to that particular API, \nbut something that would have the same effect)\n\n> This change affects places where pq_putmessage is used such as \n> pg_basebackup, COPY TO, walsender etc.\n> \n> I did some experiments to see how the patch performs.\n> Firstly, I loaded ~5GB data into a table [1], then ran \"COPY test TO \n> STDOUT\". Here are perf results of both the patch and HEAD > ...\n> The patch brings a ~5% gain in socket_putmessage.\n> \n> [1]\n> CREATE TABLE test(id int, name text, time TIMESTAMP);\n> INSERT INTO test (id, name, time) SELECT i AS id, repeat('dummy', 100) \n> AS name, NOW() AS time FROM generate_series(1, 100000000) AS i;\n\nI'm surprised by these results, because each row in that table is < 600 \nbytes. PqSendBufferSize is 8kB, so the optimization shouldn't kick in in \nthat test. Am I missing something?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 29 Jan 2024 18:12:39 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
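Heikki's idea of reserving header space in the caller's buffer can be sketched briefly. This is a hypothetical Python model of the suggested API: the function name and signature are invented, and only the header layout (1-byte message type plus a 4-byte length that counts itself and the payload) comes from the actual frontend/backend wire protocol.

```python
import struct

HEADER = 5  # 1-byte msgtype + 4-byte length, as on the wire

def putmessage_reserved(msgtype: bytes, buf: bytearray) -> bytes:
    """Fill msgtype and length into the 5 bytes reserved at the start of
    `buf`, so the whole message can be handed to the socket in one piece.

    `buf` must begin with HEADER spare bytes followed by the payload."""
    payload_len = len(buf) - HEADER
    buf[0:1] = msgtype
    # The length field counts itself plus the payload, per the protocol.
    struct.pack_into(">I", buf, 1, payload_len + 4)
    return bytes(buf)

# Caller builds the payload with header space already reserved:
msg = bytearray(HEADER) + b"hello"
wire = putmessage_reserved(b"d", msg)  # 'd' = CopyData
```

The point of the shape is that the tiny header and the large payload land in one contiguous buffer, so a single `secure_write()`-style call suffices.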
{
"msg_contents": "On Mon, Jan 29, 2024 at 11:12 AM Heikki Linnakangas <[email protected]> wrote:\n> Agreed, that's silly.\n\n+1.\n\n> If there's already some data in PqSendBuffer, I wonder if it would be\n> better to fill it up with data, flush it, and then send the rest of the\n> data directly. Instead of flushing the partial data first. I'm afraid\n> that you'll make a tiny call to secure_write(), followed by a large one,\n> then a tine one again, and so forth. Especially when socket_putmessage\n> itself writes the msgtype and len, which are tiny, before the payload.\n>\n> Perhaps we should invent a new pq_putmessage() function that would take\n> an input buffer with 5 bytes of space reserved before the payload.\n> pq_putmessage() could then fill in the msgtype and len bytes in the\n> input buffer and send that directly. (Not wedded to that particular API,\n> but something that would have the same effect)\n\nI share the concern; I'm not sure about the best solution. I wonder if\nit would be useful to have pq_putmessagev() in the style of writev()\net al. Or maybe what we need is secure_writev().\n\nI also wonder if the threshold for sending data directly should be\nsmaller than the buffer size, and/or whether it should depend on the\nbuffer being empty. If we have an 8kB buffer that currently has\nnothing in it, and somebody writes 2kB, I suspect it might be wrong to\ncopy that into the buffer. If the same buffer had 5kB used and 3kB\nfree, copying sounds a lot more likely to work out. The goal here is\nprobably to condense sequences of short messages into a single\ntransmission while sending long messages individually. I'm just not\nquite sure what heuristic would do that most effectively.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Jan 2024 12:48:32 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Hi Heikki,\n\nHeikki Linnakangas <[email protected]>, 29 Oca 2024 Pzt, 19:12 tarihinde şunu\nyazdı:\n\n> > Proposed change modifies socket_putmessage to send any data larger than\n> > 8K immediately without copying it into the send buffer. Assuming that\n> > the send buffer would be flushed anyway due to reaching its limit, the\n> > patch just gets rid of the copy part which seems unnecessary and sends\n> > data without waiting.\n>\n> If there's already some data in PqSendBuffer, I wonder if it would be\n> better to fill it up with data, flush it, and then send the rest of the\n> data directly. Instead of flushing the partial data first. I'm afraid\n> that you'll make a tiny call to secure_write(), followed by a large one,\n> then a tine one again, and so forth. Especially when socket_putmessage\n> itself writes the msgtype and len, which are tiny, before the payload.\n>\n\nI agree that I could do better there without flushing twice for both\nPqSendBuffer and\ninput data. PqSendBuffer always has some data, even if it's tiny, since\nmsgtype and len are added.\n\n\n> Perhaps we should invent a new pq_putmessage() function that would take\n> an input buffer with 5 bytes of space reserved before the payload.\n> pq_putmessage() could then fill in the msgtype and len bytes in the\n> input buffer and send that directly. (Not wedded to that particular API,\n> but something that would have the same effect)\n>\n\nI thought about doing this. The reason why I didn't was because I think\nthat such a change would require adjusting all input buffers wherever\npq_putmessage is called, and I did not want to touch that many different\nplaces. 
These places where we need pq_putmessage might not be that many\nthough, I'm not sure.\n\n\n>\n> > This change affects places where pq_putmessage is used such as\n> > pg_basebackup, COPY TO, walsender etc.\n> >\n> > I did some experiments to see how the patch performs.\n> > Firstly, I loaded ~5GB data into a table [1], then ran \"COPY test TO\n> > STDOUT\". Here are perf results of both the patch and HEAD > ...\n> > The patch brings a ~5% gain in socket_putmessage.\n> >\n> > [1]\n> > CREATE TABLE test(id int, name text, time TIMESTAMP);\n> > INSERT INTO test (id, name, time) SELECT i AS id, repeat('dummy', 100)\n> > AS name, NOW() AS time FROM generate_series(1, 100000000) AS i;\n>\n> I'm surprised by these results, because each row in that table is < 600\n> bytes. PqSendBufferSize is 8kB, so the optimization shouldn't kick in in\n> that test. Am I missing something?\n>\n\nYou're absolutely right. I made a silly mistake there. I also think that\nthe way I did perf analysis does not make much sense, even if one row of\nthe table is greater than 8kB.\nHere are some quick timing results after being sure that it triggers this\npatch's optimization. I need to think more on how to profile this with\nperf. 
I hope to share proper results soon.\n\nI just added a bit more zeros [1] and ran [2] (hopefully measured the\ncorrect thing)\n\nHEAD:\nreal 2m48,938s\nuser 0m9,226s\nsys 1m35,342s\n\nPatch:\nreal 2m40,690s\nuser 0m8,492s\nsys 1m31,001s\n\n[1]\n INSERT INTO test (id, name, time) SELECT i AS id, repeat('dummy', 10000)\nAS name, NOW() AS time FROM generate_series(1, 1000000) AS i;\n\n[2]\n rm /tmp/dummy && echo 3 | sudo tee /proc/sys/vm/drop_caches && time psql\n-d postgres -c \"COPY test TO STDOUT;\" > /tmp/dummy\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 30 Jan 2024 20:41:30 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Hi Robert,\n\nRobert Haas <[email protected]>, 29 Oca 2024 Pzt, 20:48 tarihinde şunu\nyazdı:\n\n> > If there's already some data in PqSendBuffer, I wonder if it would be\n> > better to fill it up with data, flush it, and then send the rest of the\n> > data directly. Instead of flushing the partial data first. I'm afraid\n> > that you'll make a tiny call to secure_write(), followed by a large one,\n> > then a tiny one again, and so forth. Especially when socket_putmessage\n> > itself writes the msgtype and len, which are tiny, before the payload.\n> >\n> > Perhaps we should invent a new pq_putmessage() function that would take\n> > an input buffer with 5 bytes of space reserved before the payload.\n> > pq_putmessage() could then fill in the msgtype and len bytes in the\n> > input buffer and send that directly. (Not wedded to that particular API,\n> > but something that would have the same effect)\n>\n> I share the concern; I'm not sure about the best solution. I wonder if\n> it would be useful to have pq_putmessagev() in the style of writev()\n> et al. Or maybe what we need is secure_writev().\n>\n\nI thought about using writev() for not only pq_putmessage() but\npq_putmessage_noblock() too. Currently, pq_putmessage_noblock()\nrepallocs PqSendBuffer\nand copies the input buffer, which can easily be larger than 8kB, into\nPqSendBuffer. I\nalso discussed it with Thomas off-list. The thing is that I believe we\nwould need secure_writev() with SSL/GSS cases handled properly. I'm just\nnot sure if the effort would be worthwhile considering what we gain from it.\n\n\n> I also wonder if the threshold for sending data directly should be\n> smaller than the buffer size, and/or whether it should depend on the\n> buffer being empty.\n\n\nYou might be right. I'm not sure what the ideal threshold would be.\n\n\n> If we have an 8kB buffer that currently has\n> nothing in it, and somebody writes 2kB, I suspect it might be wrong to\n> copy that into the buffer. 
If the same buffer had 5kB used and 3kB\n> free, copying sounds a lot more likely to work out. The goal here is\n> probably to condense sequences of short messages into a single\n> transmission while sending long messages individually. I'm just not\n> quite sure what heuristic would do that most effectively.\n>\n\nSounds like it's difficult to come up with a heuristic that would work well\nenough for most cases.\nOne thing with sending data instead of copying it if the buffer is empty is\nthat initially the buffer is empty. I believe it will stay empty forever if\nwe do not copy anything when the buffer is empty. We can maybe simply set\nthe threshold to the buffer size/2 (4kB) and hope that will work better. Or\ncopy the data only if it fits into the remaining space in the buffer. What\ndo you think?\n\n\nAn additional note while I mentioned pq_putmessage_noblock(), I've been\ntesting sending input data immediately in pq_putmessage_noblock() without\nblocking and copy the data into PqSendBuffer only if the socket would block\nand cannot send it. Unfortunately, I don't have strong numbers to\ndemonstrate any improvement in perf or timing yet. But I still like to know\nwhat would you think about it?\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 30 Jan 2024 20:58:05 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 12:58 PM Melih Mutlu <[email protected]> wrote:\n> Sounds like it's difficult to come up with a heuristic that would work well enough for most cases.\n> One thing with sending data instead of copying it if the buffer is empty is that initially the buffer is empty. I believe it will stay empty forever if we do not copy anything when the buffer is empty. We can maybe simply set the threshold to the buffer size/2 (4kB) and hope that will work better. Or copy the data only if it fits into the remaining space in the buffer. What do you think?\n>\n> An additional note while I mentioned pq_putmessage_noblock(), I've been testing sending input data immediately in pq_putmessage_noblock() without blocking and copy the data into PqSendBuffer only if the socket would block and cannot send it. Unfortunately, I don't have strong numbers to demonstrate any improvement in perf or timing yet. But I still like to know what would you think about it?\n\nI think this is an area where it's very difficult to foresee on\ntheoretical grounds what will be right in practice. The problem is\nthat the best algorithm probably depends on what usage patterns are\ncommon in practice. I think one common usage pattern will be a bunch\nof roughly equal-sized messages in a row, like CopyData or DataRow\nmessages -- but those messages won't have a consistent width. It would\nprobably be worth testing what behavior you see in such cases -- start\nwith say a stream of 100 byte messages and then gradually increase and\nsee how the behavior evolves.\n\nBut you can also have other patterns, with messages of different sizes\ninterleaved. In the case of FE-to-BE traffic, the extended query\nprotocol might be a good example of that: the Parse message could be\nquite long, or not, but the Bind Describe Execute Sync messages that\nfollow are probably all short. That case doesn't arise in this\ndirection, but I can't think exactly of what cases that do. 
It seems\nlike someone would need to play around and try some different cases\nand maybe log the sizes of the secure_write() calls with various\nalgorithms, and then try to figure out what's best. For example, if\nthe alternating short-write, long-write behavior that Heikki mentioned\nis happening, and I do think that particular thing is a very real\nrisk, then you haven't got it figured out yet...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Jan 2024 13:48:34 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Tue, 30 Jan 2024 at 19:48, Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jan 30, 2024 at 12:58 PM Melih Mutlu <[email protected]> wrote:\n> > Sounds like it's difficult to come up with a heuristic that would work well enough for most cases.\n> > One thing with sending data instead of copying it if the buffer is empty is that initially the buffer is empty. I believe it will stay empty forever if we do not copy anything when the buffer is empty. We can maybe simply set the threshold to the buffer size/2 (4kB) and hope that will work better. Or copy the data only if it fits into the remaining space in the buffer. What do you think?\n> >\n> > An additional note while I mentioned pq_putmessage_noblock(), I've been testing sending input data immediately in pq_putmessage_noblock() without blocking and copy the data into PqSendBuffer only if the socket would block and cannot send it. Unfortunately, I don't have strong numbers to demonstrate any improvement in perf or timing yet. But I still like to know what would you think about it?\n>\n> I think this is an area where it's very difficult to foresee on\n> theoretical grounds what will be right in practice\n\nI agree that it's hard to prove that such heuristics will always be\nbetter in practice than the status quo. But I feel like we shouldn't\nlet perfect be the enemy of good here. I think one approach that is a clear\nimprovement over the status quo is:\n1. If the buffer is empty AND the data we are trying to send is larger\nthan the buffer size, then don't use the buffer.\n2. If not, fill up the buffer first (just like we do now) then send\nthat. And if the left over data is then still larger than the buffer,\nthen now the buffer is empty so 1. applies.\n\n\n",
"msg_date": "Wed, 31 Jan 2024 00:38:51 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
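Jelte's two rules translate into a short loop. This hypothetical Python model only shows the control flow (rule 1 bypassing the buffer, rule 2 filling and flushing it); the class name and the 8192-byte constant standing in for PqSendBufferSize are assumptions, not pqcomm code.

```python
BUFFER_SIZE = 8192  # stands in for PqSendBufferSize

class RefinedSendBuffer:
    """Toy model of the two-rule heuristic quoted above."""

    def __init__(self):
        self.buf = bytearray()
        self.sends = []  # one entry per simulated secure_write() call

    def flush(self):
        if self.buf:
            self.sends.append(bytes(self.buf))
            self.buf.clear()

    def putmessage(self, payload: bytes):
        data = memoryview(payload)
        while data:
            if not self.buf and len(data) >= BUFFER_SIZE:
                # Rule 1: empty buffer and at least a buffer's worth of
                # data left -> bypass the buffer entirely.
                self.sends.append(bytes(data))
                return
            # Rule 2: top the buffer up and flush it when full; any large
            # remainder then meets rule 1 with an empty buffer.
            room = BUFFER_SIZE - len(self.buf)
            self.buf += data[:room]
            data = data[room:]
            if len(self.buf) == BUFFER_SIZE:
                self.flush()
```

With 100 bytes already buffered, a 20000-byte message comes out as one 8192-byte write (buffer topped up and flushed) followed by one 11908-byte direct write, which is the behaviour the refinement intends.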
{
"msg_contents": "On Tue, Jan 30, 2024 at 6:39 PM Jelte Fennema-Nio <[email protected]> wrote:\n> I agree that it's hard to prove that such heuristics will always be\n> better in practice than the status quo. But I feel like we shouldn't\n> let perfect be the enemy of good here.\n\nSure, I agree.\n\n> I one approach that is a clear\n> improvement over the status quo is:\n> 1. If the buffer is empty AND the data we are trying to send is larger\n> than the buffer size, then don't use the buffer.\n> 2. If not, fill up the buffer first (just like we do now) then send\n> that. And if the left over data is then still larger than the buffer,\n> then now the buffer is empty so 1. applies.\n\nThat seems like it might be a useful refinement of Melih Mutlu's\noriginal proposal, but consider a message stream that consists of\nmessages exactly 8kB in size. If that message stream begins when the\nbuffer is empty, all messages are sent directly. If it begins when\nthere are any number of bytes in the buffer, we buffer every message\nforever. That's kind of an odd artifact, but maybe it's fine in\npractice. I say again that it's good to test out a bunch of scenarios\nand see what shakes out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jan 2024 12:22:48 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Wed, 31 Jan 2024 at 18:23, Robert Haas <[email protected]> wrote:\n> That's kind of an odd artifact, but maybe it's fine in\n> practice.\n\nI agree it's an odd artifact, but it's not a regression over the\nstatus quo. Achieving that was the intent of my suggestion: A change\nthat improves some cases, but regresses nowhere.\n\n> I say again that it's good to test out a bunch of scenarios\n> and see what shakes out.\n\nTesting a bunch of scenarios to find a good one sounds like a good\nidea, which can probably give us a more optimal heuristic. But it also\nsounds like a lot of work, and probably results in a lot of\ndiscussion. That extra effort might mean that we're not going to\ncommit any change for PG17 (or even at all). If so, then I'd rather\nhave a modest improvement from my refinement of Melih's proposal, than\nnone at all.\n\n\n",
"msg_date": "Wed, 31 Jan 2024 18:49:40 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Wed, Jan 31, 2024 at 12:49 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Testing a bunch of scenarios to find a good one sounds like a good\n> idea, which can probably give us a more optimal heuristic. But it also\n> sounds like a lot of work, and probably results in a lot of\n> discussion. That extra effort might mean that we're not going to\n> commit any change for PG17 (or even at all). If so, then I'd rather\n> have a modest improvement from my refinement of Melih's proposal, than\n> none at all.\n\nPersonally, I don't think it's likely that anything will get committed\nhere without someone doing more legwork than I've seen on the thread\nso far. I don't have any plan to pick up this patch anyway, but if I\nwere thinking about it, I would abandon the idea unless I were\nprepared to go test a bunch of stuff myself. I agree with the core\nidea of this work, but not with the idea that the bar is as low as \"if\nit can't lose relative to today, it's good enough.\"\n\nOf course, another committer may see it differently.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jan 2024 13:27:51 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Robert Haas <[email protected]>, 31 Oca 2024 Çar, 20:23 tarihinde şunu\nyazdı:\n\n> On Tue, Jan 30, 2024 at 6:39 PM Jelte Fennema-Nio <[email protected]>\n> wrote:\n> > I agree that it's hard to prove that such heuristics will always be\n> > better in practice than the status quo. But I feel like we shouldn't\n> > let perfect be the enemy of good here.\n>\n> Sure, I agree.\n>\n> > I one approach that is a clear\n> > improvement over the status quo is:\n> > 1. If the buffer is empty AND the data we are trying to send is larger\n> > than the buffer size, then don't use the buffer.\n> > 2. If not, fill up the buffer first (just like we do now) then send\n> > that. And if the left over data is then still larger than the buffer,\n> > then now the buffer is empty so 1. applies.\n>\n> That seems like it might be a useful refinement of Melih Mutlu's\n> original proposal, but consider a message stream that consists of\n> messages exactly 8kB in size. If that message stream begins when the\n> buffer is empty, all messages are sent directly. If it begins when\n> there are any number of bytes in the buffer, we buffer every message\n> forever. That's kind of an odd artifact, but maybe it's fine in\n> practice. I say again that it's good to test out a bunch of scenarios\n> and see what shakes out.\n>\n\nIsn't this already the case? Imagine sending exactly 8kB messages, the\nfirst pq_putmessage() call will buffer 8kB. Any call after this point\nsimply sends a 8kB message already buffered from the previous call and\nbuffers a new 8kB message. Only difference here is we keep the message in\nthe buffer for a while instead of sending it directly. In theory, the\nproposed idea should not bring any difference in the number of flushes and\nthe size of data we send in each time, but can remove unnecessary copies to\nthe buffer in this case. 
I guess the behaviour is also the same with or\nwithout the patch in case the buffer already has some bytes.\n\nRobert Haas <[email protected]> wrote on Wed, 31 Jan 2024 at 21:28:\n\n> Personally, I don't think it's likely that anything will get committed\n> here without someone doing more legwork than I've seen on the thread\n> so far. I don't have any plan to pick up this patch anyway, but if I\n> were thinking about it, I would abandon the idea unless I were\n> prepared to go test a bunch of stuff myself. I agree with the core\n> idea of this work, but not with the idea that the bar is as low as \"if\n> it can't lose relative to today, it's good enough.\"\n>\n\nYou're right and I'm open to doing more legwork. I'd also appreciate any\nsuggestion about how to test this properly and/or useful scenarios to test.\nThat would be really helpful.\n\nI understand that I should provide more/better analysis around this change\nto prove that it doesn't hurt (hopefully) but improves some cases even\nthough not all the cases. That may even help us to find a better approach\nthan what's already proposed. Just to clarify, I don't think anyone here\nsuggests that the bar should be at \"if it can't lose relative to today,\nit's good enough\". IMHO \"a change that improves some cases, but regresses\nnowhere\" does not translate to that.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Wed, 31 Jan 2024 22:23:07 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Wed, Jan 31, 2024 at 2:23 PM Melih Mutlu <[email protected]> wrote:\n>> That seems like it might be a useful refinement of Melih Mutlu's\n>> original proposal, but consider a message stream that consists of\n>> messages exactly 8kB in size. If that message stream begins when the\n>> buffer is empty, all messages are sent directly. If it begins when\n>> there are any number of bytes in the buffer, we buffer every message\n>> forever. That's kind of an odd artifact, but maybe it's fine in\n>> practice. I say again that it's good to test out a bunch of scenarios\n>> and see what shakes out.\n>\n> Isn't this already the case? Imagine sending exactly 8kB messages, the first pq_putmessage() call will buffer 8kB. Any call after this point simply sends a 8kB message already buffered from the previous call and buffers a new 8kB message. Only difference here is we keep the message in the buffer for a while instead of sending it directly. In theory, the proposed idea should not bring any difference in the number of flushes and the size of data we send in each time, but can remove unnecessary copies to the buffer in this case. I guess the behaviour is also the same with or without the patch in case the buffer has already some bytes.\n\nYes, it's never worse than today in terms of number of buffer flushes,\nbut it doesn't feel like great behavior, either. Users tend not to\nlike it when the behavior of an algorithm depends heavily on\nincidental factors that shouldn't really be relevant, like whether the\nbuffer starts with 1 byte in it or 0 at the beginning of a long\nsequence of messages. They see the performance varying \"for no reason\"\nand they dislike it. They don't say \"even the bad performance is no\nworse than earlier versions so it's fine.\"\n\n> You're right and I'm open to doing more legwork. I'd also appreciate any suggestion about how to test this properly and/or useful scenarios to test. 
That would be really helpful.\n\nI think experimenting to see whether the long-short-long-short\nbehavior that Heikki postulated emerges in practice would be a really\ngood start.\n\nAnother experiment that I think would be interesting is: suppose you\ncreate a patch that sends EVERY message without buffering and compare\nthat to master. My naive expectation would be that this will lose if\nyou pump short messages through that connection and win if you pump\nlong messages through that connection. Is that true? If yes, at what\npoint do we break even on performance? Does it depend on whether the\nconnection is local or over a network? Does it depend on whether it's\nwith or without SSL? Does it depend on Linux vs. Windows vs.\nwhateverBSD? What happens if you twiddle the 8kB buffer size up or,\nsay, down to just below the Ethernet frame size?\n\nI think that what we really want to understand here is under what\ncircumstances the extra layer of buffering is a win vs. being a loss.\nIf all the stuff I just mentioned doesn't really matter and the answer\nis, say, that an 8kB buffer is great and the breakpoint where extra\nbuffering makes sense is also 8kB, and that's consistent regardless of\nother variables, then your algorithm or Jelte's variant or something\nof that nature is probably just right. But if it turns out, say, that\nthe extra buffering is only a win for sub-1kB messages, that would be\nrather nice to know before we finalize the approach. Also, if it turns\nout that the answer differs dramatically based on whether you're using\na UNIX socket or TCP, that would also be nice to know before\nfinalizing an algorithm.\n\n> I understand that I should provide more/better analysis around this change to prove that it doesn't hurt (hopefully) but improves some cases even though not all the cases. That may even help us to find a better approach than what's already proposed. 
Just to clarify, I don't think anyone here suggests that the bar should be at \"if it can't lose relative to today, it's good enough\". IMHO \"a change that improves some cases, but regresses nowhere\" does not translate to that.\n\nWell, I thought those were fairly similar sentiments, so maybe I'm not\nquite understanding the statement in the way it was meant.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jan 2024 14:57:35 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-31 14:57:35 -0500, Robert Haas wrote:\n> > You're right and I'm open to doing more legwork. I'd also appreciate any\n> > suggestion about how to test this properly and/or useful scenarios to\n> > test. That would be really helpful.\n>\n> I think experimenting to see whether the long-short-long-short\n> behavior that Heikki postulated emerges in practice would be a really\n> good start.\n>\n> Another experiment that I think would be interesting is: suppose you\n> create a patch that sends EVERY message without buffering and compare\n> that to master. My naive expectation would be that this will lose if\n> you pump short messages through that connection and win if you pump\n> long messages through that connection. Is that true? If yes, at what\n> point do we break even on performance? Does it depend on whether the\n> connection is local or over a network? Does it depend on whether it's\n> with or without SSL? Does it depend on Linux vs. Windows vs.\n> whateverBSD? What happens if you twiddle the 8kB buffer size up or,\n> say, down to just below the Ethernet frame size?\n\nI feel like you're setting too high a bar for something that can be a\npretty clear improvement on its own, without a downside. The current behaviour\nis pretty absurd, doing all this research across all platforms isn't going to\ndisprove that - and it's a lot of work. ISTM we can analyze this without\ntaking concrete hardware into account easily enough.\n\n\nOne thing that I haven't seen mentioned here that's relevant around using\nsmall buffers: Postgres uses TCP_NODELAY and has to do so. That means doing\ntiny sends can hurt substantially.\n\n\n> I think that what we really want to understand here is under what\n> circumstances the extra layer of buffering is a win vs. being a loss.\n\nIt's quite easy to see that doing no buffering isn't viable - we end up with\ntiny tiny TCP packets, one for each send(). 
And then there's the syscall\noverhead.\n\n\nHere's a quickly thrown together benchmark using netperf. First with -D, which\ninstructs it to use TCP_NODELAY, as we do.\n\n10gbit network, remote host:\n\n$ (fields=\"request_size,throughput\"; echo \"$fields\";for i in $(seq 0 16); do s=$((2**$i));netperf -P0 -t TCP_STREAM -l1 -H alap5-10gbe -- -r $s,$s -D 1 -o \"$fields\";done)|column -t -s,\n\nrequest_size throughput\n1 22.73\n2 45.77\n4 108.64\n8 225.78\n16 560.32\n32 1035.61\n64 2177.91\n128 3604.71\n256 5878.93\n512 9334.70\n1024 9031.13\n2048 9405.35\n4096 9334.60\n8192 9275.33\n16384 9406.29\n32768 9385.52\n65536 9399.40\n\n\nlocalhost:\nrequest_size throughput\n1 2.76\n2 5.10\n4 9.89\n8 20.51\n16 43.42\n32 87.13\n64 173.72\n128 343.70\n256 647.89\n512 1328.79\n1024 2550.14\n2048 4998.06\n4096 9482.06\n8192 17130.76\n16384 29048.02\n32768 42106.33\n65536 48579.95\n\nI'm slightly baffled by the poor performance of localhost with tiny packet\nsizes. Ah, I see - it's the NODELAY; without that:\n\nlocalhost:\n1 32.02\n2 60.58\n4 114.32\n8 262.71\n16 558.42\n32 1053.66\n64 2099.39\n128 3815.60\n256 6566.19\n512 11751.79\n1024 18976.11\n2048 27222.99\n4096 33838.07\n8192 38219.60\n16384 39146.37\n32768 44784.98\n65536 44214.70\n\n\nNODELAY triggers many more context switches, because there's immediately data\navailable for the receiving side. Whereas with a real network the interrupts get\ncoalesced.\n\n\nI think that's pretty clear evidence that we need buffering. But I think we\ncan probably be smarter than we are right now, and than what's been proposed\nin the patch. Because of TCP_NODELAY we shouldn't send a tiny buffer on its\nown, it may trigger sending a small TCP packet, which is quite inefficient.\n\n\nWhile not perfect - e.g. 
because networks might use jumbo packets / large MTUs\nand we don't know how many outstanding bytes there are locally, I think a\ndecent heuristic could be to always try to send at least one packet worth of\ndata at once (something like ~1400 bytes), even if that requires copying some\nof the input data. It might not be sent on its own, but it should make it\nreasonably unlikely to end up with tiny tiny packets.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jan 2024 19:24:42 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Wed, Jan 31, 2024 at 10:24 PM Andres Freund <[email protected]> wrote:\n> While not perfect - e.g. because networks might use jumbo packets / large MTUs\n> and we don't know how many outstanding bytes there are locally, I think a\n> decent heuristic could be to always try to send at least one packet worth of\n> data at once (something like ~1400 bytes), even if that requires copying some\n> of the input data. It might not be sent on its own, but it should make it\n> reasonably unlikely to end up with tiny tiny packets.\n\nI think that COULD be a decent heuristic but I think it should be\nTESTED, including against the ~3 or so other heuristics proposed on\nthis thread, before we make a decision.\n\nI literally mentioned the Ethernet frame size as one of the things\nthat we should test whether it's relevant in the exact email to which\nyou're replying, and you replied by proposing that as a heuristic, but\nalso criticizing me for wanting more research before we settle on\nsomething. Are we just supposed to assume that your heuristic is\nbetter than the others proposed here without testing anything, or,\nlike, what? I don't think this needs to be a completely exhaustive or\nexhausting process, but I think trying a few different things out and\nseeing what happens is smart.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Feb 2024 10:52:22 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Thu, Feb 1, 2024 at 10:52 AM Robert Haas <[email protected]> wrote:\n> On Wed, Jan 31, 2024 at 10:24 PM Andres Freund <[email protected]> wrote:\n> > While not perfect - e.g. because networks might use jumbo packets / large MTUs\n> > and we don't know how many outstanding bytes there are locally, I think a\n> > decent heuristic could be to always try to send at least one packet worth of\n> > data at once (something like ~1400 bytes), even if that requires copying some\n> > of the input data. It might not be sent on its own, but it should make it\n> > reasonably unlikely to end up with tiny tiny packets.\n>\n> I think that COULD be a decent heuristic but I think it should be\n> TESTED, including against the ~3 or so other heuristics proposed on\n> this thread, before we make a decision.\n>\n> I literally mentioned the Ethernet frame size as one of the things\n> that we should test whether it's relevant in the exact email to which\n> you're replying, and you replied by proposing that as a heuristic, but\n> also criticizing me for wanting more research before we settle on\n> something. Are we just supposed to assume that your heuristic is\n> better than the others proposed here without testing anything, or,\n> like, what? I don't think this needs to be a completely exhaustive or\n> exhausting process, but I think trying a few different things out and\n> seeing what happens is smart.\n\nThere was probably a better way to phrase this email ... the sentiment\nis sincere, but there was almost certainly a way of writing it that\ndidn't sound like I'm super-annoyed.\n\nApologies for that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Feb 2024 15:02:57 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-01 15:02:57 -0500, Robert Haas wrote:\n> On Thu, Feb 1, 2024 at 10:52 AM Robert Haas <[email protected]> wrote:\n> There was probably a better way to phrase this email ... the sentiment\n> is sincere, but there was almost certainly a way of writing it that\n> didn't sound like I'm super-annoyed.\n\nNP - I could have phrased mine better as well...\n\n\n> > On Wed, Jan 31, 2024 at 10:24 PM Andres Freund <[email protected]> wrote:\n> > > While not perfect - e.g. because networks might use jumbo packets / large MTUs\n> > > and we don't know how many outstanding bytes there are locally, I think a\n> > > decent heuristic could be to always try to send at least one packet worth of\n> > > data at once (something like ~1400 bytes), even if that requires copying some\n> > > of the input data. It might not be sent on its own, but it should make it\n> > > reasonably unlikely to end up with tiny tiny packets.\n> >\n> > I think that COULD be a decent heuristic but I think it should be\n> > TESTED, including against the ~3 or so other heuristics proposed on\n> > this thread, before we make a decision.\n> >\n> > I literally mentioned the Ethernet frame size as one of the things\n> > that we should test whether it's relevant in the exact email to which\n> > you're replying, and you replied by proposing that as a heuristic, but\n> > also criticizing me for wanting more research before we settle on\n> > something.\n\nI mentioned the frame size thing because afaict nobody in the thread had\nmentioned our use of TCP_NODELAY (which basically forces the kernel to send\nout data immediately instead of waiting for further data to be sent). Without\nthat it'd be a lot less problematic to occasionally send data in small\nincrements inbetween larger sends. Nor would packet sizes be as relevant.\n\n\n> > Are we just supposed to assume that your heuristic is better than the\n> > others proposed here without testing anything, or, like, what? 
I don't\n> > think this needs to be a completely exhaustive or exhausting process, but\n> > I think trying a few different things out and seeing what happens is\n> > smart.\n\nI wasn't trying to say that my heuristic necessarily is better. What I was\ntrying to get at - and expressed badly - was that I doubt that testing can get\nus all that far here. It's not too hard to test the effects of our buffering\nwith regards to syscall overhead, but once you actually take network effects\ninto account it gets quite hard. Bandwidth, latency, the specific network\nhardware and operating systems involved all play a significant role. Given\nhow, uh, naive our current approach is, I think analyzing the situation from\nfirst principles and then doing some basic validation of the results makes\nmore sense.\n\nSeparately, I think we shouldn't aim for perfect here. It's obviously\nextremely inefficient to send a larger amount of data by memcpy()ing and\nsend()ing it in 8kB chunks. As mentioned by several folks upthread, we can\nimprove upon that without having worse behaviour than today. Medium-long term\nI suspect we're going to want to use asynchronous network interfaces, in\ncombination with zero-copy sending, which requires larger changes. Not that\nrelevant for things like query results, quite relevant for base backups etc.\n\n\nIt's perhaps also worth mentioning that the small send buffer isn't great for\nSSL performance, the encryption overhead increases when sending in small\nchunks.\n\n\nI hacked up Melih's patch to send the pending data together with the first bit\nof the large \"to be sent\" data and also added a patch to increase\nSINK_BUFFER_LENGTH by 16x. 
With a 12GB database I tested the time for\n pg_basebackup -c fast -Ft --compress=none -Xnone -D - -d \"$conn\" > /dev/null\n\n                               time via\ntest                  unix    tcp     tcp+ssl\nmaster                6.305s  9.436s  15.596s\nmaster-larger-buffer  6.535s  9.453s  15.208s\npatch                 5.900s  7.465s  13.634s\npatch-larger-buffer   5.233s  5.439s  11.730s\n\n\nThe increase when using tcp is pretty darn impressive. If I had remembered in\ntime to disable manifest checksums, the win would have been even bigger.\n\n\nThe bottleneck for SSL is that it still ends up with ~16kB sends, not sure\nwhy.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Feb 2024 14:38:27 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Hi hackers,\r\n\r\nI did some experiments with this patch, after previous discussions. This\r\nprobably does not answer all questions, but would be happy to do more if\r\nneeded.\r\n\r\nFirst, I updated the patch according to what suggested here [1]. PSA v2.\r\nI tweaked the master branch a bit to not allow any buffering. I compared\r\nHEAD, this patch and no buffering at all.\r\nI also added a simple GUC to control PqSendBufferSize, this change only\r\nallows to modify the buffer size and should not have any impact on\r\nperformance.\r\n\r\nI again ran the COPY TO STDOUT command and timed it. AFAIU COPY sends data\r\nrow by row, and I tried running the command under different scenarios with\r\ndifferent # of rows and row sizes. You can find the test script attached\r\n(see test.sh).\r\nAll timings are in ms.\r\n\r\n1- row size = 100 bytes, # of rows = 1000000\r\n┌───────────┬────────────┬──────┬──────┬──────┬──────┬──────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ HEAD │ 1036 │ 998 │ 940 │ 910 │ 894 │ 874 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ patch │ 1107 │ 1032 │ 980 │ 957 │ 917 │ 909 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ no buffer │ 6230 │ 6125 │ 6282 │ 6279 │ 6255 │ 6221 │\r\n└───────────┴────────────┴──────┴──────┴──────┴──────┴──────┘\r\n\r\n2- row size = half of the rows are 1KB and rest is 10KB , # of rows =\r\n1000000\r\n┌───────────┬────────────┬───────┬───────┬───────┬───────┬───────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ HEAD │ 25197 │ 23414 │ 20612 │ 19206 │ 18334 │ 18033 │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ patch │ 19843 │ 19889 │ 19798 │ 19129 │ 18578 │ 18260 │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ no buffer │ 23752 │ 23565 │ 
23602 │ 23622 │ 23541 │ 23599 │\r\n└───────────┴────────────┴───────┴───────┴───────┴───────┴───────┘\r\n\r\n3- row size = half of the rows are 1KB and rest is 1MB , # of rows = 1000\r\n┌───────────┬────────────┬──────┬──────┬──────┬──────┬──────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ HEAD │ 3137 │ 2937 │ 2687 │ 2551 │ 2456 │ 2465 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ patch │ 2399 │ 2390 │ 2402 │ 2415 │ 2417 │ 2422 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ no buffer │ 2417 │ 2414 │ 2429 │ 2418 │ 2435 │ 2404 │\r\n└───────────┴────────────┴──────┴──────┴──────┴──────┴──────┘\r\n\r\n4- row size = all rows are 1MB , # of rows = 1000\r\n┌───────────┬────────────┬──────┬──────┬──────┬──────┬──────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ HEAD │ 6113 │ 5764 │ 5281 │ 5009 │ 4885 │ 4872 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ patch │ 4759 │ 4754 │ 4754 │ 4758 │ 4782 │ 4805 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ no buffer │ 4756 │ 4774 │ 4793 │ 4766 │ 4770 │ 4774 │\r\n└───────────┴────────────┴──────┴──────┴──────┴──────┴──────┘\r\n\r\nSome quick observations:\r\n1- Even though I expect both the patch and HEAD behave similarly in case of\r\nsmall data (case 1: 100 bytes), the patch runs slightly slower than HEAD.\r\n2- In cases where the data does not fit into the buffer, the patch starts\r\nperforming better than HEAD. For example, in case 2, patch seems faster\r\nuntil the buffer size exceeds the data length. When the buffer size is set\r\nto something larger than 10KB (16KB/32KB in this case), there is again a\r\nslight performance loss with the patch as in case 1.\r\n3- With large row sizes (i.e. 
sizes that do not fit into the buffer) not\r\nbuffering at all starts performing better than HEAD. Similarly the patch\r\nperforms better too as it stops buffering if data does not fit into the\r\nbuffer.\r\n\r\n\r\n\r\n[1]\r\nhttps://www.postgresql.org/message-id/CAGECzQTYUhnC1bO%3DzNiSpUgCs%3DhCYxVHvLD2doXNx3My6ZAC2w%40mail.gmail.com\r\n\r\n\r\nThanks,\r\n-- \r\nMelih Mutlu\r\nMicrosoft",
"msg_date": "Thu, 14 Mar 2024 14:22:21 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 7:22 AM Melih Mutlu <[email protected]> wrote:\n> 1- Even though I expect both the patch and HEAD behave similarly in case of small data (case 1: 100 bytes), the patch runs slightly slower than HEAD.\n\nI wonder why this happens. It seems like maybe something that could be fixed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 08:12:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Thu, 14 Mar 2024 at 12:22, Melih Mutlu <[email protected]> wrote:\n> I did some experiments with this patch, after previous discussions\n\nOne thing I noticed is that the buffer sizes don't seem to matter much\nin your experiments, even though Andres's expectation was that 1400\nwould be better. I think I know the reason for that:\n\nafaict from your test.sh script you connect psql over localhost or\nmaybe even a unix socket to postgres. Neither of those would have an\nMTU of 1500. You'd probably want to do those tests over an actual\nnetwork or at least change the MTU of the loopback interface. e.g. my\n\"lo\" interface mtu is 65536 by default:\n\n❯ ip a\n1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN\ngroup default qlen 1000\n link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n inet 127.0.0.1/8 scope host lo\n valid_lft forever preferred_lft forever\n\n\n",
"msg_date": "Thu, 14 Mar 2024 13:30:19 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On 14/03/2024 13:22, Melih Mutlu wrote:\n> @@ -1282,14 +1283,32 @@ internal_putbytes(const char *s, size_t len)\n> \t\t\tif (internal_flush())\n> \t\t\t\treturn EOF;\n> \t\t}\n> -\t\tamount = PqSendBufferSize - PqSendPointer;\n> -\t\tif (amount > len)\n> -\t\t\tamount = len;\n> -\t\tmemcpy(PqSendBuffer + PqSendPointer, s, amount);\n> -\t\tPqSendPointer += amount;\n> -\t\ts += amount;\n> -\t\tlen -= amount;\n> +\n> +\t\t/*\n> +\t\t * If the buffer is empty and data length is larger than the buffer\n> +\t\t * size, send it without buffering. Otherwise, put as much data as\n> +\t\t * possible into the buffer.\n> +\t\t */\n> +\t\tif (!pq_is_send_pending() && len >= PqSendBufferSize)\n> +\t\t{\n> +\t\t\tint start = 0;\n> +\n> +\t\t\tsocket_set_nonblocking(false);\n> +\t\t\tif (internal_flush_buffer(s, &start, (int *)&len))\n> +\t\t\t\treturn EOF;\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tamount = PqSendBufferSize - PqSendPointer;\n> +\t\t\tif (amount > len)\n> +\t\t\t\tamount = len;\n> +\t\t\tmemcpy(PqSendBuffer + PqSendPointer, s, amount);\n> +\t\t\tPqSendPointer += amount;\n> +\t\t\ts += amount;\n> +\t\t\tlen -= amount;\n> +\t\t}\n> \t}\n> +\n> \treturn 0;\n> }\n\nTwo small bugs:\n\n- the \"(int *) &len)\" cast is not ok, and will break visibly on \nbig-endian systems where sizeof(int) != sizeof(size_t).\n\n- If internal_flush_buffer() cannot write all the data in one call, it \nupdates 'start' for how much it wrote, and leaves 'end' unchanged. You \nthrow the updated 'start' value away, and will send the same data again \non next iteration.\n\nNot a correctness issue, but instead of pq_is_send_pending(), I think it \nwould be better to check \"PqSendStart == PqSendPointer\" directly, or \ncall socket_is_send_pending() directly here. pq_is_send_pending() does \nthe same, but it's at a higher level of abstraction.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 14 Mar 2024 14:45:59 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Thu, 14 Mar 2024 at 13:12, Robert Haas <[email protected]> wrote:\n>\n> On Thu, Mar 14, 2024 at 7:22 AM Melih Mutlu <[email protected]> wrote:\n> > 1- Even though I expect both the patch and HEAD behave similarly in case of small data (case 1: 100 bytes), the patch runs slightly slower than HEAD.\n>\n> I wonder why this happens. It seems like maybe something that could be fixed.\n\nsome wild guesses:\n1. maybe it's the extra call overhead of the new internal_flush\nimplementation. What happens if you make that an inline function?\n2. maybe swap these conditions around (the call seems heavier than a\nsimple comparison): !pq_is_send_pending() && len >= PqSendBufferSize\n\nBTW, the improvements for the larger rows are awesome!\n\n\n",
"msg_date": "Thu, 14 Mar 2024 14:03:19 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Fri, 15 Mar 2024 at 02:03, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Thu, 14 Mar 2024 at 13:12, Robert Haas <[email protected]> wrote:\n> >\n> > On Thu, Mar 14, 2024 at 7:22 AM Melih Mutlu <[email protected]> wrote:\n> > > 1- Even though I expect both the patch and HEAD behave similarly in case of small data (case 1: 100 bytes), the patch runs slightly slower than HEAD.\n> >\n> > I wonder why this happens. It seems like maybe something that could be fixed.\n>\n> some wild guesses:\n> 1. maybe it's the extra call overhead of the new internal_flush\n> implementation. What happens if you make that an inline function?\n> 2. maybe swap these conditions around (the call seems heavier than a\n> simple comparison): !pq_is_send_pending() && len >= PqSendBufferSize\n\nI agree these are both worth trying. For #2, I wonder if the\npq_is_send_pending() call is even worth checking at all. It seems to\nme that the internal_flush_buffer() code will just do nothing if\nnothing is pending. Also, isn't there almost always going to be\nsomething pending when the \"len >= PqSendBufferSize\" condition is met?\n We've just added the msgtype and number of bytes to the buffer which\nis 5 bytes. If the previous message was also more than\nPqSendBufferSize, then the buffer is likely to have 5 bytes due to the\nprevious flush, otherwise isn't it a 1 in 8192 chance that the buffer\nis empty?\n\nIf that fails to resolve the regression, maybe it's worth memcpy()ing\nenough bytes out of the message to fill the buffer then flush it and\ncheck if we still have > PqSendBufferSize remaining and skip the\nmemcpy() for the rest. That way there are no small flushes of just 5\nbytes and only ever the possibility of reducing the flushes as no\npattern should cause the number of flushes to increase.\n\nDavid\n\n\n",
"msg_date": "Thu, 21 Mar 2024 10:54:22 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Fri, 15 Mar 2024 at 01:46, Heikki Linnakangas <[email protected]> wrote:\n> - the \"(int *) &len)\" cast is not ok, and will break visibly on\n> big-endian systems where sizeof(int) != sizeof(size_t).\n\nI think fixing this requires adjusting the signature of\ninternal_flush_buffer() to use size_t instead of int. That also\nmeans that PqSendStart and PqSendPointer must also become size_t, or\ninternal_flush() must add local size_t variables to pass to\ninternal_flush_buffer and assign these back again to the global after\nthe call. Upgrading the globals might be the cleaner option.\n\nDavid\n\n\n",
"msg_date": "Thu, 21 Mar 2024 10:57:34 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "David Rowley <[email protected]>, 21 Mar 2024 Per, 00:54 tarihinde şunu\nyazdı:\n\n> On Fri, 15 Mar 2024 at 02:03, Jelte Fennema-Nio <[email protected]>\n> wrote:\n> >\n> > On Thu, 14 Mar 2024 at 13:12, Robert Haas <[email protected]> wrote:\n> > >\n> > > On Thu, Mar 14, 2024 at 7:22 AM Melih Mutlu <[email protected]>\n> wrote:\n> > > > 1- Even though I expect both the patch and HEAD behave similarly in\n> case of small data (case 1: 100 bytes), the patch runs slightly slower than\n> HEAD.\n> > >\n> > > I wonder why this happens. It seems like maybe something that could be\n> fixed.\n> >\n> > some wild guesses:\n> > 1. maybe it's the extra call overhead of the new internal_flush\n> > implementation. What happens if you make that an inline function?\n> > 2. maybe swap these conditions around (the call seems heavier than a\n> > simple comparison): !pq_is_send_pending() && len >= PqSendBufferSize\n>\n> I agree these are both worth trying. For #2, I wonder if the\n> pq_is_send_pending() call is even worth checking at all. It seems to\n> me that the internal_flush_buffer() code will just do nothing if\n> nothing is pending. Also, isn't there almost always going to be\n> something pending when the \"len >= PqSendBufferSize\" condition is met?\n> We've just added the msgtype and number of bytes to the buffer which\n> is 5 bytes. If the previous message was also more than\n> PqSendBufferSize, then the buffer is likely to have 5 bytes due to the\n> previous flush, otherwise isn't it a 1 in 8192 chance that the buffer\n> is empty?\n>\n> If that fails to resolve the regression, maybe it's worth memcpy()ing\n> enough bytes out of the message to fill the buffer then flush it and\n> check if we still have > PqSendBufferSize remaining and skip the\n> memcpy() for the rest. 
That way there are no small flushes of just 5\n> bytes and only ever the possibility of reducing the flushes as no\n> pattern should cause the number of flushes to increase.\n>\n\nIn len > PqSendBufferSize cases, the buffer should be filled as much as\npossible if we're sure that it will be flushed at some point. Otherwise we\nmight end up with small flushes. The cases where we're sure that the buffer\nwill be flushed is when the buffer is not empty. If it's empty, there is no\nneed to fill it unnecessarily as it might cause an additional flush. AFAIU\nfrom what you said, we shouldn't be worried about such a case since it's\nunlikely to have the buffer empty due to the first 5 bytes. I guess the\nonly case where the buffer can be empty is when the buffer has\nPqSendBufferSize-5\nbytes from previous messages and adding 5 bytes of the current message will\nflush the buffer. I'm not sure if removing the check may cause any\nregression in any case, but it's just there to be safe.\n\nWhat if I do a simple comparison like PqSendStart == PqSendPointer instead\nof calling pq_is_send_pending() as Heikki suggested, then this check should\nnot hurt that much. Right? Does that make sense?\n\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Thu, 21 Mar 2024 03:24:48 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Thu, 21 Mar 2024 at 13:24, Melih Mutlu <[email protected]> wrote:\n> What if I do a simple comparison like PqSendStart == PqSendPointer instead of calling pq_is_send_pending() as Heikki suggested, then this check should not hurt that much. Right? Does that make sense?\n\nAs I understand the code, there's no problem calling\ninternal_flush_buffer() when the buffer is empty and I suspect that if\nwe're sending a few buffers with \"len > PqSendBufferSize\" that it's\njust so unlikely that the buffer is empty that we should just do the\nfunction call and let internal_flush_buffer() handle doing nothing if\nthe buffer really is empty. I think the chances of\ninternal_flush_buffer() having to do exactly nothing here is less than\n1 in 8192, so I just don't think the check is worthwhile. The reason\nI don't think the odds are exactly 1 in 8192 is because if we're\nsending a large number of bytes then it will be common that the buffer\nwill contain exactly 5 bytes due to the previous flush and command\nprefix just having been added.\n\nIt's worth testing both, however. I might be wrong. Performance is\nhard to predict. It would be good to see your test.sh script run with\nand without the PqSendStart == PqSendPointer condition.\n\nDavid\n\n\n",
"msg_date": "Thu, 21 Mar 2024 13:45:06 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Thu, 21 Mar 2024 at 01:45, David Rowley <[email protected]> wrote:\n> As I understand the code, there's no problem calling\n> internal_flush_buffer() when the buffer is empty and I suspect that if\n> we're sending a few buffers with \"len > PqSendBufferSize\" that it's\n> just so unlikely that the buffer is empty that we should just do the\n> function call and let internal_flush_buffer() handle doing nothing if\n> the buffer really is empty. I think the chances of\n> internal_flush_buffer() having to do exactly nothing here is less than\n> 1 in 8192, so I just don't think the check is worthwhile.\n\nI think you're missing the exact case that we're trying to improve\nhere: Calls to internal_putbytes with a very large len, e.g. 1MB.\nWith the new code the buffer will be empty ~50% of the time (not less\nthan 1 in 8192) with such large buffers, because the flow that will\nhappen:\n\n1. We check len > PqSendBufferSize. There are some bytes in the buffer\ne.g. the 5 bytes of the msgtype. So we fill up the buffer, but have\nmany bytes left in len.\n2. We loop again, because len is not 0.\n3. We flush the buffer (at the top of the loop) because the buffer is full.\n4. We check len > PqSendBufferSize. Now the buffer is empty, so we\ncall internal_flush_buffer directly\n\nAs you can see we check len > PqSendBufferSize twice (in step 1. and\nstep 4.), and 1 out of 2 times it returns 0\n\nTo be clear, the code is done this way so our behaviour would only\never be better than the status-quo, and cause no regressions. For\ninstance, flushing the 5 byte header separately and then flushing the\nfull input buffer might result in more IP packets being sent in total\nin some cases due to our TCP_NODELAY.\n\n\n",
"msg_date": "Thu, 21 Mar 2024 10:44:17 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Thu, 21 Mar 2024 at 01:24, Melih Mutlu <[email protected]> wrote:\n> What if I do a simple comparison like PqSendStart == PqSendPointer instead of calling pq_is_send_pending()\n\nYeah, that sounds worth trying out. So the new suggestions to fix the\nperf issues on small message sizes would be:\n\n1. add \"inline\" to internal_flush function\n2. replace pq_is_send_pending() with PqSendStart == PqSendPointer\n3. (optional) swap the order of PqSendStart == PqSendPointer and len\n>= PqSendBufferSize\n\n\n",
"msg_date": "Thu, 21 Mar 2024 10:58:40 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
    "msg_contents": "On Thu, 21 Mar 2024 at 22:44, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Thu, 21 Mar 2024 at 01:45, David Rowley <[email protected]> wrote:\n> > As I understand the code, there's no problem calling\n> > internal_flush_buffer() when the buffer is empty and I suspect that if\n> > we're sending a few buffers with \"len > PqSendBufferSize\" that it's\n> > just so unlikely that the buffer is empty that we should just do the\n> > function call and let internal_flush_buffer() handle doing nothing if\n> > the buffer really is empty. I think the chances of\n> > internal_flush_buffer() having to do exactly nothing here is less than\n> > 1 in 8192, so I just don't think the check is worthwhile.\n>\n> I think you're missing the exact case that we're trying to improve\n> here: Calls to internal_putbytes with a very large len, e.g. 1MB.\n> With the new code the buffer will be empty ~50% of the time (not less\n> than 1 in 8192) with such large buffers, because the flow that will\n> happen:\n\nIt was the code I misread. I understand what the aim is. I failed to\nnotice the while loop in internal_putbytes(). So what I mentioned\nabout trying to fill the buffer before flushing already happens. I\nnow agree with the PqSendStart == PqSendPointer test. I'd say since\nthe reported regression was with 100 byte rows that testing \"len >=\nPqSendBufferSize\" before PqSendStart == PqSendPointer makes sense.\n\nDavid\n\n\n",
"msg_date": "Fri, 22 Mar 2024 00:41:56 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]>, 14 Mar 2024 Per, 15:46 tarihinde şunu\nyazdı:\n\n> On 14/03/2024 13:22, Melih Mutlu wrote:\n> > @@ -1282,14 +1283,32 @@ internal_putbytes(const char *s, size_t len)\n> > if (internal_flush())\n> > return EOF;\n> > }\n> > - amount = PqSendBufferSize - PqSendPointer;\n> > - if (amount > len)\n> > - amount = len;\n> > - memcpy(PqSendBuffer + PqSendPointer, s, amount);\n> > - PqSendPointer += amount;\n> > - s += amount;\n> > - len -= amount;\n> > +\n> > + /*\n> > + * If the buffer is empty and data length is larger than\n> the buffer\n> > + * size, send it without buffering. Otherwise, put as much\n> data as\n> > + * possible into the buffer.\n> > + */\n> > + if (!pq_is_send_pending() && len >= PqSendBufferSize)\n> > + {\n> > + int start = 0;\n> > +\n> > + socket_set_nonblocking(false);\n> > + if (internal_flush_buffer(s, &start, (int *)&len))\n> > + return EOF;\n> > + }\n> > + else\n> > + {\n> > + amount = PqSendBufferSize - PqSendPointer;\n> > + if (amount > len)\n> > + amount = len;\n> > + memcpy(PqSendBuffer + PqSendPointer, s, amount);\n> > + PqSendPointer += amount;\n> > + s += amount;\n> > + len -= amount;\n> > + }\n> > }\n> > +\n> > return 0;\n> > }\n>\n> Two small bugs:\n>\n> - the \"(int *) &len)\" cast is not ok, and will break visibly on\n> big-endian systems where sizeof(int) != sizeof(size_t).\n>\n> - If internal_flush_buffer() cannot write all the data in one call, it\n> updates 'start' for how much it wrote, and leaves 'end' unchanged. You\n> throw the updated 'start' value away, and will send the same data again\n> on next iteration.\n>\n\nThere are two possible options for internal_flush_buffer() in\ninternal_putbytes() case:\n1- Write all the data and return 0. We don't need start or end of the data\nin this case.\n2- Cannot write all and return EOF. In this case internal_putbytes() also\nreturns EOF immediately and does not really retry. 
There will be no next\niteration.\n\nIf it was non-blocking, then we may need to keep the new value. But I think\nwe do not need the updated start value in both cases here. What do you\nthink?\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Fri, 22 Mar 2024 02:07:56 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Hi,\r\n\r\nPSA v3.\r\n\r\nJelte Fennema-Nio <[email protected]>, 21 Mar 2024 Per, 12:58 tarihinde\r\nşunu yazdı:\r\n\r\n> On Thu, 21 Mar 2024 at 01:24, Melih Mutlu <[email protected]> wrote:\r\n> > What if I do a simple comparison like PqSendStart == PqSendPointer\r\n> instead of calling pq_is_send_pending()\r\n>\r\n> Yeah, that sounds worth trying out. So the new suggestions to fix the\r\n> perf issues on small message sizes would be:\r\n>\r\n> 1. add \"inline\" to internal_flush function\r\n> 2. replace pq_is_send_pending() with PqSendStart == PqSendPointer\r\n> 3. (optional) swap the order of PqSendStart == PqSendPointer and len\r\n> >= PqSendBufferSize\r\n>\r\n\r\nI did all of the above changes and it seems like those resolved the\r\nregression issue.\r\nSince the previous results were with unix sockets, I share here the results\r\nof v3 when using unix sockets for comparison.\r\nSharing only the case where all messages are 100 bytes, since this was when\r\nthe regression was most visible.\r\n\r\nrow size = 100 bytes, # of rows = 1000000\r\n┌───────────┬────────────┬──────┬──────┬──────┬──────┬──────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ HEAD │ 1106 │ 1006 │ 947 │ 920 │ 899 │ 888 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ patch │ 1094 │ 997 │ 943 │ 913 │ 894 │ 881 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ no buffer │ 6389 │ 6195 │ 6214 │ 6271 │ 6325 │ 6211 │\r\n└───────────┴────────────┴──────┴──────┴──────┴──────┴──────┘\r\n\r\nDavid Rowley <[email protected]>, 21 Mar 2024 Per, 00:57 tarihinde şunu\r\nyazdı:\r\n\r\n> On Fri, 15 Mar 2024 at 01:46, Heikki Linnakangas <[email protected]> wrote:\r\n> > - the \"(int *) &len)\" cast is not ok, and will break visibly on\r\n> > big-endian systems where sizeof(int) != sizeof(size_t).\r\n>\r\n> I think fixing this requires adjusting the signature of\r\n> 
internal_flush_buffer() to use size_t instead of int. That also\r\n> means that PqSendStart and PqSendPointer must also become size_t, or\r\n> internal_flush() must add local size_t variables to pass to\r\n> internal_flush_buffer and assign these back again to the global after\r\n> the call. Upgrading the globals might be the cleaner option.\r\n>\r\n> David\r\n\r\n\r\nThis is done too.\r\n\r\nI actually tried to test it over a real network for a while. However, I\r\ncouldn't get reliable-enough numbers with both HEAD and the patch due to\r\nnetwork related issues.\r\nI've decided to go with Jelte's suggestion [1] which is decreasing MTU of\r\nthe loopback interface to 1500 and using localhost.\r\n\r\nHere are the results:\r\n\r\n1- row size = 100 bytes, # of rows = 1000000\r\n┌───────────┬────────────┬───────┬───────┬───────┬───────┬───────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ HEAD │ 1351 │ 1233 │ 1074 │ 988 │ 944 │ 916 │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ patch │ 1369 │ 1232 │ 1073 │ 981 │ 928 │ 907 │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ no buffer │ 14949 │ 14533 │ 14791 │ 14864 │ 14612 │ 14751 │\r\n└───────────┴────────────┴───────┴───────┴───────┴───────┴───────┘\r\n\r\n2- row size = half of the rows are 1KB and rest is 10KB , # of rows =\r\n1000000\r\n┌───────────┬────────────┬───────┬───────┬───────┬───────┬───────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ HEAD │ 37212 │ 31372 │ 25520 │ 21980 │ 20311 │ 18864 │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ patch │ 23006 │ 23127 │ 23147 │ 22229 │ 20367 │ 19155 │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ no buffer │ 30725 │ 31090 │ 30917 │ 30796 │ 30984 │ 30813 
│\r\n└───────────┴────────────┴───────┴───────┴───────┴───────┴───────┘\r\n\r\n3- row size = half of the rows are 1KB and rest is 1MB , # of rows = 1000\r\n┌───────────┬────────────┬──────┬──────┬──────┬──────┬──────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ HEAD │ 4296 │ 3713 │ 3040 │ 2711 │ 2528 │ 2449 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ patch │ 2401 │ 2411 │ 2404 │ 2374 │ 2395 │ 2408 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ no buffer │ 2399 │ 2403 │ 2408 │ 2389 │ 2402 │ 2403 │\r\n└───────────┴────────────┴──────┴──────┴──────┴──────┴──────┘\r\n\r\n4- row size = all rows are 1MB , # of rows = 1000\r\n┌───────────┬────────────┬──────┬──────┬──────┬──────┬──────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ HEAD │ 8335 │ 7370 │ 6017 │ 5368 │ 5009 │ 4843 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ patch │ 4711 │ 4722 │ 4708 │ 4693 │ 4724 │ 4717 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ no buffer │ 4704 │ 4712 │ 4746 │ 4728 │ 4709 │ 4730 │\r\n└───────────┴────────────┴──────┴──────┴──────┴──────┴──────┘\r\n\r\n\r\n[1]\r\nhttps://www.postgresql.org/message-id/CAGECzQQMktuTj8ijJgBRXCwLEqfJyAFxg1h7rCTej-6%3DcR0r%3DQ%40mail.gmail.com\r\n\r\nThanks,\r\n-- \r\nMelih Mutlu\r\nMicrosoft",
"msg_date": "Fri, 22 Mar 2024 02:45:52 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
    "msg_contents": "On Fri, 22 Mar 2024 at 12:46, Melih Mutlu <[email protected]> wrote:\n> I did all of the above changes and it seems like those resolved the regression issue.\n\nThanks for adjusting the patch. The numbers do look better, but on\nlooking at your test.sh script from [1], I see:\n\nmeson setup --buildtype debug -Dcassert=true\n--prefix=\"$DESTDIR/usr/local/pgsql\" $DESTDIR && \\\n\ncan you confirm if the test was done in debug with casserts on? If\nso, it would be much better to have asserts off and have\n-Dbuildtype=release.\n\nI'm planning to run some benchmarks tomorrow. My thoughts are that\nthe patch allows the memcpy() to be skipped without adding any\nadditional buffer flushes and demonstrates a good performance increase\nin various scenarios from doing so. I think that is a satisfactory\ngoal. If I don't see any issues from reviewing and benchmarking\ntomorrow, I'd like to commit this.\n\nRobert, I understand you'd like a bit more from this patch. I'm\nwondering if you're planning on blocking another committer from going\nahead with this? Or if you have a reason why the current state of the\npatch is not a meaningful enough improvement that would justify\npossibly not getting any improvements in this area for PG17?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAGPVpCSX8bTF61ZL9jOgh1AaY3bgsWnQ6J7WmJK4TV0f2LPnJQ%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 28 Mar 2024 00:39:29 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
    "msg_contents": "On Wed, Mar 27, 2024 at 7:39 AM David Rowley <[email protected]> wrote:\n> Robert, I understand you'd like a bit more from this patch. I'm\n> wondering if you planning on blocking another committer from going\n> ahead with this? Or if you have a reason why the current state of the\n> patch is not a meaningful enough improvement that would justify\n> possibly not getting any improvements in this area for PG17?\n\nSo, I think that the first version of the patch, when it got a big\nchunk of data, would just flush whatever was already in the buffer and\nthen send the rest without copying. The current version, as I\nunderstand it, only does that if the buffer is empty; otherwise, it\ncopies as much data as it can into the partially-filled buffer. I\nthink that change addresses most of my concern about the approach; the\nold way could, I believe, lead to an increased total number of flushes\nwith the right usage pattern, but I don't believe that's possible with\nthe revised approach. I do kind of wonder whether there is some more\nfine-tuning of the approach that would improve things further, but I\nrealize that we have very limited time to figure this out, and there's\nno sense letting the perfect be the enemy of the good.\n\nSo in short... no, I don't have big concerns at this point. Melih's\nlatest benchmarks look fairly promising to me, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 11:54:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
    "msg_contents": "On Wed, Mar 27, 2024 at 14:39 David Rowley <[email protected]> wrote:\n\n> On Fri, 22 Mar 2024 at 12:46, Melih Mutlu <[email protected]> wrote:\n> > I did all of the above changes and it seems like those resolved the\n> regression issue.\n>\n> Thanks for adjusting the patch. The numbers do look better, but on\n> looking at your test.sh script from [1], I see:\n>\n> meson setup --buildtype debug -Dcassert=true\n> --prefix=\"$DESTDIR/usr/local/pgsql\" $DESTDIR && \\\n>\n> can you confirm if the test was done in debug with casserts on? If\n> so, it would be much better to have asserts off and have\n> -Dbuildtype=release.\n\n\nYes, previous numbers were with --buildtype debug -Dcassert=true. I can\nshare new numbers with release build and asserts off soon.\n\nThanks,\nMelih",
"msg_date": "Thu, 28 Mar 2024 22:44:12 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
    "msg_contents": "On Wed, Mar 27, 2024 at 18:54 Robert Haas <[email protected]> wrote:\n\n> On Wed, Mar 27, 2024 at 7:39 AM David Rowley <[email protected]> wrote:\n> > Robert, I understand you'd like a bit more from this patch. I'm\n> > wondering if you planning on blocking another committer from going\n> > ahead with this? Or if you have a reason why the current state of the\n> > patch is not a meaningful enough improvement that would justify\n> > possibly not getting any improvements in this area for PG17?\n>\n> So, I think that the first version of the patch, when it got a big\n> chunk of data, would just flush whatever was already in the buffer and\n> then send the rest without copying.\n\n\nCorrect.\n\nThe current version, as I\n> understand it, only does that if the buffer is empty; otherwise, it\n> copies data as much data as it can into the partially-filled buffer.\n\n\nYes, currently it should fill and flush the buffer first, if it’s not\nalready empty. Only then it sends the rest without copying.\n\nThanks,\nMelih",
"msg_date": "Thu, 28 Mar 2024 22:47:24 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Hi,\r\n\r\nMelih Mutlu <[email protected]>, 28 Mar 2024 Per, 22:44 tarihinde şunu\r\nyazdı:\r\n>\r\n> On Wed, Mar 27, 2024 at 14:39 David Rowley <[email protected]> wrote:\r\n>>\r\n>> On Fri, 22 Mar 2024 at 12:46, Melih Mutlu <[email protected]> wrote:\r\n>> can you confirm if the test was done in debug with casserts on? If\r\n>> so, it would be much better to have asserts off and have\r\n>> -Dbuildtype=release.\r\n>\r\n>\r\n> Yes, previous numbers were with --buildtype debug -Dcassert=true. I can\r\nshare new numbers with release build and asserts off soon.\r\n\r\nWhile testing the patch without --buildtype debug -Dcassert=true, I felt\r\nlike there was still a slight regression. I changed internal_flush() to an\r\ninline function, results look better this way.\r\n\r\n\r\n1- row size = 100 bytes, # of rows = 1000000\r\n┌───────────┬────────────┬───────┬───────┬───────┬───────┬───────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ HEAD │ 861 │ 765 │ 612 │ 521 │ 477 │ 480 │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ patch │ 869 │ 766 │ 612 │ 519 │ 482 │ 467 │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ no buffer │ 13978 │ 13746 │ 13909 │ 13956 │ 13920 │ 13895 │\r\n└───────────┴────────────┴───────┴───────┴───────┴───────┴───────┘\r\n\r\n2- row size = half of the rows are 1KB and rest is 10KB , # of rows =\r\n1000000\r\n┌───────────┬────────────┬───────┬───────┬───────┬───────┬───────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ HEAD │ 30195 │ 26455 │ 17338 │ 14562 │ 12844 │ 11652 │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ patch │ 14744 │ 15830 │ 15697 │ 14273 │ 12794 │ 11652 │\r\n├───────────┼────────────┼───────┼───────┼───────┼───────┼───────┤\r\n│ no buffer │ 24054 │ 23992 │ 24162 │ 23951 │ 
23901 │ 23925 │\r\n└───────────┴────────────┴───────┴───────┴───────┴───────┴───────┘\r\n\r\n3- row size = half of the rows are 1KB and rest is 1MB , # of rows = 1000\r\n┌───────────┬────────────┬──────┬──────┬──────┬──────┬──────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ HEAD │ 3546 │ 3029 │ 2373 │ 2032 │ 1873 │ 1806 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ patch │ 1715 │ 1723 │ 1724 │ 1731 │ 1729 │ 1709 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ no buffer │ 1749 │ 1748 │ 1742 │ 1744 │ 1757 │ 1744 │\r\n└───────────┴────────────┴──────┴──────┴──────┴──────┴──────┘\r\n\r\n4- row size = all rows are 1MB , # of rows = 1000\r\n┌───────────┬────────────┬──────┬──────┬──────┬──────┬──────┐\r\n│ │ 1400 bytes │ 2KB │ 4KB │ 8KB │ 16KB │ 32KB │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ HEAD │ 7089 │ 5987 │ 4697 │ 4048 │ 3737 │ 3523 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ patch │ 3438 │ 3411 │ 3400 │ 3416 │ 3399 │ 3429 │\r\n├───────────┼────────────┼──────┼──────┼──────┼──────┼──────┤\r\n│ no buffer │ 3432 │ 3432 │ 3416 │ 3424 │ 3378 │ 3429 │\r\n└───────────┴────────────┴──────┴──────┴──────┴──────┴──────┘\r\n\r\nThanks,\r\n-- \r\nMelih Mutlu\r\nMicrosoft",
"msg_date": "Thu, 4 Apr 2024 14:08:45 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 13:08, Melih Mutlu <[email protected]> wrote:\n> I changed internal_flush() to an inline function, results look better this way.\n\nIt seems you also change internal_flush_buffer to be inline (but only\nin the actual function definition, not declaration at the top). I\ndon't think inlining internal_flush_buffer should be necessary to\navoid the perf regressions, i.e. internal_flush is adding extra\nindirection compared to master and is only a single line, so that one\nmakes sense to inline.\n\nOther than that the code looks good to me.\n\nThe new results look great.\n\nOne thing that is quite interesting about these results is that\nincreasing the buffer size results in even better performance (by\nquite a bit). I don't think we can easily choose a perfect number, as\nit seems to be a trade-off between memory usage and perf. But allowing\npeople to configure it through a GUC like in your second patchset\nwould be quite useful I think, especially because larger buffers could\nbe configured for connections that would benefit most for it (e.g.\nreplication connections or big COPYs).\n\nI think your add-pq_send_buffer_size-GUC.patch is essentially what we\nwould need there but it would need some extra changes to actually be\nmerge-able:\n1. needs docs\n2. rename PQ_SEND_BUFFER_SIZE (at least make it not UPPER_CASE, but\nmaybe also remove the PQ_ prefix)\n3. It's marked as PGC_USERSET, but there's no logic to grow/shrink it\nafter initial allocation\n\n\n",
"msg_date": "Thu, 4 Apr 2024 15:34:24 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Jelte Fennema-Nio <[email protected]>, 4 Nis 2024 Per, 16:34 tarihinde\nşunu yazdı:\n\n> On Thu, 4 Apr 2024 at 13:08, Melih Mutlu <[email protected]> wrote:\n> > I changed internal_flush() to an inline function, results look better\n> this way.\n>\n> It seems you also change internal_flush_buffer to be inline (but only\n> in the actual function definition, not declaration at the top). I\n> don't think inlining internal_flush_buffer should be necessary to\n> avoid the perf regressions, i.e. internal_flush is adding extra\n> indirection compared to master and is only a single line, so that one\n> makes sense to inline.\n>\n\nRight. It was a mistake, forgot to remove that. Fixed it in v5.\n\n\n\n> Other than that the code looks good to me.\n>\n> The new results look great.\n>\n> One thing that is quite interesting about these results is that\n> increasing the buffer size results in even better performance (by\n> quite a bit). I don't think we can easily choose a perfect number, as\n> it seems to be a trade-off between memory usage and perf. But allowing\n> people to configure it through a GUC like in your second patchset\n> would be quite useful I think, especially because larger buffers could\n> be configured for connections that would benefit most for it (e.g.\n> replication connections or big COPYs).\n>\n> I think your add-pq_send_buffer_size-GUC.patch is essentially what we\n> would need there but it would need some extra changes to actually be\n> merge-able:\n> 1. needs docs\n> 2. rename PQ_SEND_BUFFER_SIZE (at least make it not UPPER_CASE, but\n> maybe also remove the PQ_ prefix)\n> 3. It's marked as PGC_USERSET, but there's no logic to grow/shrink it\n> after initial allocation\n>\n\nI agree that the GUC patch requires more work to be in good shape. I\ncreated that for testing purposes. 
But if we decide to make the buffer size\ncustomizable, then I'll start polishing up that patch and address your\nsuggestions.\n\nOne case that could benefit from increased COPY performance is table sync\nof logical replication. It might make sense letting users to configure\nbuffer size to speed up table sync. I'm not sure what kind of problems this\nGUC would bring though.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Thu, 4 Apr 2024 17:28:35 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Fri, 5 Apr 2024 at 03:28, Melih Mutlu <[email protected]> wrote:\n>\n> Jelte Fennema-Nio <[email protected]>, 4 Nis 2024 Per, 16:34 tarihinde şunu yazdı:\n>>\n>> On Thu, 4 Apr 2024 at 13:08, Melih Mutlu <[email protected]> wrote:\n>> > I changed internal_flush() to an inline function, results look better this way.\n>>\n>> It seems you also change internal_flush_buffer to be inline (but only\n>> in the actual function definition, not declaration at the top). I\n>> don't think inlining internal_flush_buffer should be necessary to\n>> avoid the perf regressions, i.e. internal_flush is adding extra\n>> indirection compared to master and is only a single line, so that one\n>> makes sense to inline.\n>\n> Right. It was a mistake, forgot to remove that. Fixed it in v5.\n\nI don't see any issues with v5, so based on the performance numbers\nshown on this thread for the latest patch, it would make sense to push\nit. The problem is, I just can't recreate the performance numbers.\n\nI've tried both on my AMD 3990x machine and an Apple M2 with a script\nsimilar to the test.sh from above. 
I mostly just stripped out the\nbuffer size stuff and adjusted the timing code to something that would\nwork with mac.\n\nThe script runs each copy 30 times and takes the average time,\nreported here in seconds.\n\nWith AMD 3990x:\n\nmaster\nRun 100 100 5000000: 1.032264113 sec\nRun 1024 10240 200000: 1.016229105 sec\nRun 1024 1048576 2000: 1.242267116 sec\nRun 1048576 1048576 1000: 1.245425089 sec\n\nv5\nRun 100 100 5000000: 1.068543053 sec\nRun 1024 10240 200000: 1.026298571 sec\nRun 1024 1048576 2000: 1.231169669 sec\nRun 1048576 1048576 1000: 1.236355567 sec\n\nWith the M2 mini:\n\nmaster\nRun 100 100 5000000: 1.167851249 sec\nRun 1024 10240 200000: 1.962466987 sec\nRun 1024 1048576 2000: 2.052836275 sec\nRun 1048576 1048576 1000: 2.057908066 sec\n\nv5\nRun 100 100 5000000: 1.149636571 sec\nRun 1024 10240 200000: 2.158487741 sec\nRun 1024 1048576 2000: 2.046627068 sec\nRun 1048576 1048576 1000: 2.039329068 sec\n\n From looking at the perf reports, the top function is:\n\n 57.62% postgres [.] CopyAttributeOutText\n\nI messed around with trying to speed up the string escaping in that\nfunction with the attached hacky patch and got the following on the\nAMD 3990x machine:\n\nCopyAttributeOutText_speedup.patch.txt\nRun 100 100 5000000: 0.821673910\nRun 1024 10240 200000: 0.546632147\nRun 1024 1048576 2000: 0.848492694\nRun 1048576 1048576 1000: 0.840870293\n\nI don't think we could actually do this unless we modified the output\nfunction API to have it somehow output the number of bytes. The patch\nmay look beyond the NUL byte with pg_lfind8, which I don't think is\nsafe.\n\nDoes anyone else want to try the attached script on the v5 patch to\nsee if their numbers are better?\n\nDavid",
"msg_date": "Sat, 6 Apr 2024 14:34:17 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Sat, 6 Apr 2024 at 03:34, David Rowley <[email protected]> wrote:\n> Does anyone else want to try the attached script on the v5 patch to\n> see if their numbers are better?\n\nOn my machine (i9-10900X, in Ubuntu 22.04 on WSL on Windows) v5\nconsistently beats master by ~0.25 seconds:\n\nmaster:\nRun 100 100 5000000: 1.948975205\nRun 1024 10240 200000: 3.039986587\nRun 1024 1048576 2000: 2.444176276\nRun 1048576 1048576 1000: 2.475328596\n\nv5:\nRun 100 100 5000000: 1.997170909\nRun 1024 10240 200000: 3.057802598\nRun 1024 1048576 2000: 2.199449857\nRun 1048576 1048576 1000: 2.210328762\n\nThe first two runs are pretty much equal, and I ran your script a few\nmore times and this seems like just random variance (sometimes v5 wins\nthose, sometimes master does always quite close to each other). But\nthe last two runs v5 consistently wins.\n\nWeird that on your machines you don't see a difference. Are you sure\nyou didn't make a silly mistake, like not restarting postgres or\nsomething?\n\n\n",
"msg_date": "Sat, 6 Apr 2024 12:16:58 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Sat, 6 Apr 2024 at 23:17, Jelte Fennema-Nio <[email protected]> wrote:\n> Weird that on your machines you don't see a difference. Are you sure\n> you didn't make a silly mistake, like not restarting postgres or\n> something?\n\nI'm sure. I spent quite a long time between the AMD and an Apple m2 trying.\n\nI did see the same regression as you on the smaller numbers. I\nexperimented with the attached which macro'ifies internal_flush() and\npg_noinlines internal_flush_buffer.\n\nCan you try that to see if it gets rid of the regression on the first two tests?\n\nDavid",
"msg_date": "Sun, 7 Apr 2024 01:51:03 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-06 14:34:17 +1300, David Rowley wrote:\n> I don't see any issues with v5, so based on the performance numbers\n> shown on this thread for the latest patch, it would make sense to push\n> it. The problem is, I just can't recreate the performance numbers.\n>\n> I've tried both on my AMD 3990x machine and an Apple M2 with a script\n> similar to the test.sh from above. I mostly just stripped out the\n> buffer size stuff and adjusted the timing code to something that would\n> work with mac.\n\nI think there are a few issues with the test script leading to not seeing a\ngain:\n\n1) I think using the textual protocol, with the text datatype, will make it\n harder to spot differences. That's a lot of overhead.\n\n2) Afaict the test is connecting over the unix socket, I think we expect\n bigger wins for tcp\n\n3) Particularly the larger string is bottlenecked due to pglz compression in\n toast.\n\n\nWhere I had noticed the overhead of the current approach badly, was streaming\nout basebackups. Which is all binary, of course.\n\n\nI added WITH BINARY, SET STORAGE EXTERNAL and tested both unix socket and\nlocalhost. I also reduced row counts and iteration counts, because I am\nimpatient, and I don't think it matters much here. 
Attached the modified\nversion.\n\n\nOn a dual xeon Gold 5215, turbo boost disabled, server pinned to one core,\nscript pinned to another:\n\n\nunix:\n\nmaster:\nRun 100 100 1000000: 0.058482377\nRun 1024 10240 100000: 0.120909810\nRun 1024 1048576 2000: 0.153027916\nRun 1048576 1048576 1000: 0.154953512\n\nv5:\nRun 100 100 1000000: 0.058760126\nRun 1024 10240 100000: 0.118831396\nRun 1024 1048576 2000: 0.124282503\nRun 1048576 1048576 1000: 0.123894962\n\n\nlocalhost:\n\nmaster:\nRun 100 100 1000000: 0.067088000\nRun 1024 10240 100000: 0.170894273\nRun 1024 1048576 2000: 0.230346632\nRun 1048576 1048576 1000: 0.230336078\n\nv5:\nRun 100 100 1000000: 0.067144036\nRun 1024 10240 100000: 0.167950948\nRun 1024 1048576 2000: 0.135167027\nRun 1048576 1048576 1000: 0.135347867\n\n\nThe perf difference for 1MB via TCP is really impressive.\n\nThe small regression for small results is still kinda visible, I haven't yet\ntested the patch downthread.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 6 Apr 2024 13:21:27 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Sat, 6 Apr 2024 at 22:21, Andres Freund <[email protected]> wrote:\n> The small regression for small results is still kinda visible, I haven't yet\n> tested the patch downthread.\n\nThanks a lot for the faster test script, I'm also impatient. I still\nsaw the small regression with David his patch. Here's a v6 where I\nthink it is now gone. I added inline to internal_put_bytes too. I\nthink that helped especially because for two calls to\ninternal_put_bytes len is a constant (1 and 4) that is smaller than\nPqSendBufferSize. So for those calls the compiler can now statically\neliminate the new codepath because \"len >= PqSendBufferSize\" is known\nto be false at compile time.\n\nAlso I incorporated all of Ranier his comments.",
"msg_date": "Sun, 7 Apr 2024 00:45:31 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-07 00:45:31 +0200, Jelte Fennema-Nio wrote:\n> On Sat, 6 Apr 2024 at 22:21, Andres Freund <[email protected]> wrote:\n> > The small regression for small results is still kinda visible, I haven't yet\n> > tested the patch downthread.\n> \n> Thanks a lot for the faster test script, I'm also impatient. I still\n> saw the small regression with David his patch. Here's a v6 where I\n> think it is now gone. I added inline to internal_put_bytes too. I\n> think that helped especially because for two calls to\n> internal_put_bytes len is a constant (1 and 4) that is smaller than\n> PqSendBufferSize. So for those calls the compiler can now statically\n> eliminate the new codepath because \"len >= PqSendBufferSize\" is known\n> to be false at compile time.\n\nNice.\n\n\n> Also I incorporated all of Ranier his comments.\n\nChanging the global vars to size_t seems mildly bogus to me. All it's\nachieving is to use slightly more memory. It also just seems unrelated to the\nchange.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Apr 2024 18:39:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Sun, 7 Apr 2024 at 08:21, Andres Freund <[email protected]> wrote:\n> I added WITH BINARY, SET STORAGE EXTERNAL and tested both unix socket and\n> localhost. I also reduced row counts and iteration counts, because I am\n> impatient, and I don't think it matters much here. Attached the modified\n> version.\n\nThanks for the script. I'm able to reproduce the speedup with your script.\n\nI looked over the patch again and ended up making internal_flush an\ninline function rather than a macro. I compared the assembly produced\nfrom each and it's the same with the exception of the label names\nbeing different.\n\nI've now pushed the patch.\n\nOne thing that does not seem ideal is having to cast away the\nconst-ness of the buffer in internal_flush_buffer(). Previously this\nwasn't an issue as we always copied the buffer and passed that to\nsecure_write(). I wonder if it's worth seeing if we can keep this\nbuffer constant all the way to the socket write.\n\nThat seems to require modifying the following function signatures:\nsecure_write(), be_tls_write(), be_gssapi_write(). That's not an area\nI'm familiar with, however.\n\nDavid\n\n\n",
"msg_date": "Sun, 7 Apr 2024 21:33:52 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Sun, 7 Apr 2024 at 03:39, Andres Freund <[email protected]> wrote:\n> Changing the global vars to size_t seems mildly bogus to me. All it's\n> achieving is to use slightly more memory. It also just seems unrelated to the\n> change.\n\nI took a closer look at this. I agree that changing PqSendBufferSize\nto size_t is unnecessary: given the locations that it is used I see no\nrisk of overflow anywhere. Changing the type of PqSendPointer and\nPqSendStart is needed though, because (as described by Heiki and David\nupthread) the argument type of internal_flush_buffer is size_t*. So if\nyou actually pass int* there, and the sizes are not the same then you\nwill start writing out of bounds. And because internal_flush_buffer is\nintroduced in this patch, it is related to this change.\n\nThis is what David just committed too.\n\nHowever, the \"required\" var actually should be of size_t to avoid\noverflow if len is larger than int even without this change. So\nattached is a tiny patch that does that.",
"msg_date": "Sun, 7 Apr 2024 12:04:48 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Sun, 7 Apr 2024 at 22:05, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Sun, 7 Apr 2024 at 03:39, Andres Freund <[email protected]> wrote:\n> > Changing the global vars to size_t seems mildly bogus to me. All it's\n> > achieving is to use slightly more memory. It also just seems unrelated to the\n> > change.\n>\n> I took a closer look at this. I agree that changing PqSendBufferSize\n> to size_t is unnecessary: given the locations that it is used I see no\n> risk of overflow anywhere. Changing the type of PqSendPointer and\n> PqSendStart is needed though, because (as described by Heiki and David\n> upthread) the argument type of internal_flush_buffer is size_t*. So if\n> you actually pass int* there, and the sizes are not the same then you\n> will start writing out of bounds. And because internal_flush_buffer is\n> introduced in this patch, it is related to this change.\n>\n> This is what David just committed too.\n>\n> However, the \"required\" var actually should be of size_t to avoid\n> overflow if len is larger than int even without this change. So\n> attached is a tiny patch that does that.\n\nLooking at the code in socket_putmessage_noblock(), I don't understand\nwhy it's ok for PqSendBufferSize to be int but \"required\" must be\nsize_t. There's a line that does \"PqSendBufferSize = required;\". It\nkinda looks like they both should be size_t. Am I missing something\nthat you've thought about?\n\nDavid\n\n\n",
"msg_date": "Mon, 8 Apr 2024 00:40:52 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Em sáb., 6 de abr. de 2024 às 22:39, Andres Freund <[email protected]>\nescreveu:\n\n> Hi,\n>\n> On 2024-04-07 00:45:31 +0200, Jelte Fennema-Nio wrote:\n> > On Sat, 6 Apr 2024 at 22:21, Andres Freund <[email protected]> wrote:\n> > > The small regression for small results is still kinda visible, I\n> haven't yet\n> > > tested the patch downthread.\n> >\n> > Thanks a lot for the faster test script, I'm also impatient. I still\n> > saw the small regression with David his patch. Here's a v6 where I\n> > think it is now gone. I added inline to internal_put_bytes too. I\n> > think that helped especially because for two calls to\n> > internal_put_bytes len is a constant (1 and 4) that is smaller than\n> > PqSendBufferSize. So for those calls the compiler can now statically\n> > eliminate the new codepath because \"len >= PqSendBufferSize\" is known\n> > to be false at compile time.\n>\n> Nice.\n>\n>\n> > Also I incorporated all of Ranier his comments.\n>\n> Changing the global vars to size_t seems mildly bogus to me. All it's\n> achieving is to use slightly more memory. It also just seems unrelated to\n> the\n> change.\n>\nI don't agree with this thought.\nActually size_t uses 4 bytes of memory than int, right.\nBut mixing up int and size_t is a sure way to write non-portable code.\nAnd the compilers will start showing messages such as \" signed/unsigned\nmismatch\".\n\nThe global vars PqSendPointer and PqSendStart were changed in the v5 patch,\nso for the sake of style and consistency, I understand that it is better\nnot to mix the types.\n\nThe compiler will promote PqSendBufferSize to size_t in all comparisons.\n\nAnd finally the correct type to deal with char * variables is size_t.\n\nBest regards,\nRanier Vilela\n\n\n> Greetings,\n>\n> Andres Freund\n>\n\nEm sáb., 6 de abr. 
de 2024 às 22:39, Andres Freund <[email protected]> escreveu:Hi,\n\nOn 2024-04-07 00:45:31 +0200, Jelte Fennema-Nio wrote:\n> On Sat, 6 Apr 2024 at 22:21, Andres Freund <[email protected]> wrote:\n> > The small regression for small results is still kinda visible, I haven't yet\n> > tested the patch downthread.\n> \n> Thanks a lot for the faster test script, I'm also impatient. I still\n> saw the small regression with David his patch. Here's a v6 where I\n> think it is now gone. I added inline to internal_put_bytes too. I\n> think that helped especially because for two calls to\n> internal_put_bytes len is a constant (1 and 4) that is smaller than\n> PqSendBufferSize. So for those calls the compiler can now statically\n> eliminate the new codepath because \"len >= PqSendBufferSize\" is known\n> to be false at compile time.\n\nNice.\n\n\n> Also I incorporated all of Ranier his comments.\n\nChanging the global vars to size_t seems mildly bogus to me. All it's\nachieving is to use slightly more memory. It also just seems unrelated to the\nchange.I don't agree with this thought.Actually size_t uses 4 bytes of memory than int, right.But \nmixing up int and size_t is a sure way to write non-portable code.\n\nAnd the compilers will start showing messages such as \"\nsigned/unsigned mismatch\".The global vars PqSendPointer and PqSendStart were changed in the v5 patch, so for the sake of style and consistency, I understand that it is better not to mix the types.The compiler will promote PqSendBufferSize to size_t in all comparisons.And finally the correct type to deal with char * variables is size_t.Best regards,Ranier Vilela\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 7 Apr 2024 09:42:54 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "David Rowley <[email protected]>, 6 Nis 2024 Cmt, 04:34 tarihinde şunu\nyazdı:\n\n> Does anyone else want to try the attached script on the v5 patch to\n> see if their numbers are better?\n>\n\nI'm seeing the below results with your script on my machine (). I ran it\nseveral times, results were almost similar each time.\n\nmaster:\nRun 100 100 5000000: 1.627905512\nRun 1024 10240 200000: 1.603231684\nRun 1024 1048576 2000: 2.962812352\nRun 1048576 1048576 1000: 2.940766748\n\nv5:\nRun 100 100 5000000: 1.611508155\nRun 1024 10240 200000: 1.603505596\nRun 1024 1048576 2000: 2.727241937\nRun 1048576 1048576 1000: 2.721268988\n\nDavid Rowley <[email protected]>, 6 Nis 2024 Cmt, 04:34 tarihinde şunu yazdı:\nDoes anyone else want to try the attached script on the v5 patch to\nsee if their numbers are better?I'm seeing the below results with your script on my machine (). I ran it several times, results were almost similar each time.master:Run 100 100 5000000: 1.627905512Run 1024 10240 200000: 1.603231684Run 1024 1048576 2000: 2.962812352Run 1048576 1048576 1000: 2.940766748v5:Run 100 100 5000000: 1.611508155Run 1024 10240 200000: 1.603505596Run 1024 1048576 2000: 2.727241937Run 1048576 1048576 1000: 2.721268988",
"msg_date": "Mon, 8 Apr 2024 00:56:56 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Sun, 7 Apr 2024 at 14:41, David Rowley <[email protected]> wrote:\n> Looking at the code in socket_putmessage_noblock(), I don't understand\n> why it's ok for PqSendBufferSize to be int but \"required\" must be\n> size_t. There's a line that does \"PqSendBufferSize = required;\". It\n> kinda looks like they both should be size_t. Am I missing something\n> that you've thought about?\n\n\nYou and Ranier are totally right (I missed this assignment). Attached is v8.",
"msg_date": "Mon, 8 Apr 2024 12:42:23 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Em seg., 8 de abr. de 2024 às 07:42, Jelte Fennema-Nio <[email protected]>\nescreveu:\n\n> On Sun, 7 Apr 2024 at 14:41, David Rowley <[email protected]> wrote:\n> > Looking at the code in socket_putmessage_noblock(), I don't understand\n> > why it's ok for PqSendBufferSize to be int but \"required\" must be\n> > size_t. There's a line that does \"PqSendBufferSize = required;\". It\n> > kinda looks like they both should be size_t. Am I missing something\n> > that you've thought about?\n>\n>\n> You and Ranier are totally right (I missed this assignment). Attached is\n> v8.\n>\n+1\nLGTM.\n\nbest regards,\nRanier Vilela\n\nEm seg., 8 de abr. de 2024 às 07:42, Jelte Fennema-Nio <[email protected]> escreveu:On Sun, 7 Apr 2024 at 14:41, David Rowley <[email protected]> wrote:\n> Looking at the code in socket_putmessage_noblock(), I don't understand\n> why it's ok for PqSendBufferSize to be int but \"required\" must be\n> size_t. There's a line that does \"PqSendBufferSize = required;\". It\n> kinda looks like they both should be size_t. Am I missing something\n> that you've thought about?\n\n\nYou and Ranier are totally right (I missed this assignment). Attached is v8.+1LGTM.best regards,Ranier Vilela",
"msg_date": "Mon, 8 Apr 2024 09:05:56 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "On Sun, 7 Apr 2024 at 11:34, David Rowley <[email protected]> wrote:\n> That seems to require modifying the following function signatures:\n> secure_write(), be_tls_write(), be_gssapi_write(). That's not an area\n> I'm familiar with, however.\n\nAttached is a new patchset where 0003 does exactly that. The only\nplace where we need to cast to non-const is for GSS, but that seems\nfine (commit message has more details).\n\nI also added patch 0002, which is a small addition to the function\ncomment of internal_flush_buffer that seemed useful to me to\ndifferentiate it from internal_flush (feel free to ignore/rewrite).",
"msg_date": "Mon, 8 Apr 2024 14:27:44 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
},
{
"msg_contents": "Em seg., 8 de abr. de 2024 às 09:27, Jelte Fennema-Nio <[email protected]>\nescreveu:\n\n> On Sun, 7 Apr 2024 at 11:34, David Rowley <[email protected]> wrote:\n> > That seems to require modifying the following function signatures:\n> > secure_write(), be_tls_write(), be_gssapi_write(). That's not an area\n> > I'm familiar with, however.\n>\n> Attached is a new patchset where 0003 does exactly that. The only\n> place where we need to cast to non-const is for GSS, but that seems\n> fine (commit message has more details).\n>\n+1.\nLooks ok to me.\nThe GSS pointer *ptr, is already cast to char * where it is needed,\nso the code is already correct.\n\nbest regards,\nRanier Vilela\n\nEm seg., 8 de abr. de 2024 às 09:27, Jelte Fennema-Nio <[email protected]> escreveu:On Sun, 7 Apr 2024 at 11:34, David Rowley <[email protected]> wrote:\n> That seems to require modifying the following function signatures:\n> secure_write(), be_tls_write(), be_gssapi_write(). That's not an area\n> I'm familiar with, however.\n\nAttached is a new patchset where 0003 does exactly that. The only\nplace where we need to cast to non-const is for GSS, but that seems\nfine (commit message has more details).+1.Looks ok to me.The GSS pointer *ptr, is already cast to char * where it is needed,so the code is already correct.best regards,Ranier Vilela",
"msg_date": "Tue, 9 Apr 2024 09:02:55 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Flushing large data immediately in pqcomm"
}
] |
[
{
"msg_contents": "Hi,\n\nI've found a race condition in libpq. It is about the initialization of\nthe my_bio_methods static variable in fe-secure-openssl.c, which is not\nprotected by any lock. The race condition may make the initialization of\nthe connection fail, and as an additional weird consequence, it might\ncause openssl call close(0), so stdin of the client application gets\nclosed.\n\nI've prepared a patch to protect the initialization of my_bio_methods\nfrom the race. This is my first patch submission to the postgresql\nproject, so I hope I didn't miss anything. Any comments and suggestions\nare of course very welcome.\n\nI also prepared a testcase. In the testcase tarball, there is a patch\nthat adds sleeps at the right positions to make the close(0) problem\noccur practically always. It also includes comments to explain how the\nrace can end up calling close(0).\n\nConcerning the patch, it is only tested on Linux. I'm unsure about\nwhether the simple initialization of the mutex would work nowadays also\non Windows or whether the more complicated initialization also to be\nfound for the ssl_config_mutex in the same source file needs to be used.\nLet me know whether I should adapt that.\n\nWe discovered the problem with release 11.5, but the patch and the \ntestcase are against the master branch.\n\nRegards,\nWilli\n\n-- \n___________________________________________________\n\nDr. Willi Mann | Principal Software Engineer, Tech Lead PQL\n\nCelonis SE | Theresienstrasse 6 | 80333 Munich, Germany\nF: +4989416159679\[email protected] | www.celonis.com | LinkedIn | Twitter | Xing\n\nAG Munich HRB 225439 | Management: Martin Klenk, Bastian Nominacher, \nAlexander Rinke",
"msg_date": "Mon, 20 Nov 2023 14:37:39 +0100",
"msg_from": "Willi Mann <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] fix race condition in libpq (related to ssl connections)"
},
{
"msg_contents": "Hi,\n\n> I've found a race condition in libpq. It is about the initialization of\n> the my_bio_methods static variable in fe-secure-openssl.c, which is not\n> protected by any lock. The race condition may make the initialization of\n> the connection fail, and as an additional weird consequence, it might\n> cause openssl call close(0), so stdin of the client application gets\n> closed.\n\nThanks for the patch. Interestingly enough we have PQisthreadsafe()\n[1], but it's current implementation is:\n\n```\n/* libpq is thread-safe? */\nint\nPQisthreadsafe(void)\n{\n return true;\n}\n```\n\nI wonder if we should just document that libpq is thread safe as of PG\nv??? and deprecate PQisthreadsafe() at some point. Currently the\ndocumentation gives an impression that the library may or may not be\nthread safe depending on the circumstances.\n\n> I've prepared a patch to protect the initialization of my_bio_methods\n> from the race. This is my first patch submission to the postgresql\n> project, so I hope I didn't miss anything. Any comments and suggestions\n> are of course very welcome.\n>\n> I also prepared a testcase. In the testcase tarball, there is a patch\n> that adds sleeps at the right positions to make the close(0) problem\n> occur practically always. It also includes comments to explain how the\n> race can end up calling close(0).\n>\n> Concerning the patch, it is only tested on Linux. I'm unsure about\n> whether the simple initialization of the mutex would work nowadays also\n> on Windows or whether the more complicated initialization also to be\n> found for the ssl_config_mutex in the same source file needs to be used.\n> Let me know whether I should adapt that.\n\nPlease add the patch to the nearest open commit fest [2]. The patch\nwill be automatically picked up by cfbot [3] and tested on different\nplatforms. 
Also this way it will not be lost among other patches.\n\nThe code looks OK but I would appreciate a second opinion from cfbot.\nAlso maybe a few comments before my_BIO_methods_init_mutex and/or\npthread_mutex_lock would be appropriate. Personally I am inclined to\nthink that the automatic test in this particular case is redundant.\n\n[1]: https://www.postgresql.org/docs/current/libpq-threading.html\n[2]: https://commitfest.postgresql.org/\n[3]: http://cfbot.cputube.org/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 21 Nov 2023 12:14:16 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix race condition in libpq (related to ssl connections)"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 12:14:16PM +0300, Aleksander Alekseev wrote:\n\nThanks for the report, Willi, and the test case! Thanks Aleksander\nfor the reply.\n\n> I wonder if we should just document that libpq is thread safe as of PG\n> v??? and deprecate PQisthreadsafe() at some point. Currently the\n> documentation gives an impression that the library may or may not be\n> thread safe depending on the circumstances.\n\nBecause --disable-thread-safe has been removed recently in\n68a4b58eca03. The routine could be marked as deprecated on top of\nsaying that it always returns 1 for 17~.\n\n> Please add the patch to the nearest open commit fest [2]. The patch\n> will be automatically picked up by cfbot [3] and tested on different\n> platforms. Also this way it will not be lost among other patches.\n> \n> The code looks OK but I would appreciate a second opinion from cfbot.\n> Also maybe a few comments before my_BIO_methods_init_mutex and/or\n> pthread_mutex_lock would be appropriate. Personally I am inclined to\n> think that the automatic test in this particular case is redundant.\n\nI am not really convinced that we require a second mutex here, as\nthere is always a concern with inter-locking changes. I may be\nmissing something, of course, but BIO_s_socket() is just a pointer to\na set of callbacks hidden in bss_sock.c with BIO_meth_new() and\nBIO_get_new_index() assigning some centralized data to handle the\nmethods in a controlled way in OpenSSL. We only case about\ninitializing once for the sake of libpq's threads, so wouldn't it be\nbetter to move my_BIO_s_socket() in pgtls_init() where the\ninitialization of the BIO methods would be protected by\nssl_config_mutex? That's basically what Willi means in his first\nmessage, no?\n--\nMichael",
"msg_date": "Wed, 22 Nov 2023 10:43:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix race condition in libpq (related to ssl connections)"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 12:14:16PM +0300, Aleksander Alekseev wrote:\n> Please add the patch to the nearest open commit fest [2]. The patch\n> will be automatically picked up by cfbot [3] and tested on different\n> platforms. Also this way it will not be lost among other patches.\n\nI have noticed that this was not tracked yet, so I have added an entry\nhere:\nhttps://commitfest.postgresql.org/46/4670/\n\nWilli, note that this requires a PostgreSQL community account, and it\ndoes not seem like you have one, or I would have added you as author\n;)\n--\nMichael",
"msg_date": "Wed, 22 Nov 2023 10:48:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix race condition in libpq (related to ssl connections)"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 2:44 PM Michael Paquier <[email protected]> wrote:\n> On Tue, Nov 21, 2023 at 12:14:16PM +0300, Aleksander Alekseev wrote:\n> > I wonder if we should just document that libpq is thread safe as of PG\n> > v??? and deprecate PQisthreadsafe() at some point. Currently the\n> > documentation gives an impression that the library may or may not be\n> > thread safe depending on the circumstances.\n>\n> Because --disable-thread-safe has been removed recently in\n> 68a4b58eca03. The routine could be marked as deprecated on top of\n> saying that it always returns 1 for 17~.\n\nSee also commit ce0b0fa3 \"Doc: Adjust libpq docs about thread safety.\"\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:58:15 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix race condition in libpq (related to ssl connections)"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 02:58:15PM +1300, Thomas Munro wrote:\n> On Wed, Nov 22, 2023 at 2:44 PM Michael Paquier <[email protected]> wrote:\n>> Because --disable-thread-safe has been removed recently in\n>> 68a4b58eca03. The routine could be marked as deprecated on top of\n>> saying that it always returns 1 for 17~.\n> \n> See also commit ce0b0fa3 \"Doc: Adjust libpq docs about thread safety.\"\n\nSure, I've noticed that in the docs, but declaring it as deprecated is\na different topic, and we don't actually use this term in the docs for\nthis routine, no?\n--\nMichael",
"msg_date": "Fri, 24 Nov 2023 14:45:06 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix race condition in libpq (related to ssl connections)"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 10:43:32AM +0900, Michael Paquier wrote:\n> I am not really convinced that we require a second mutex here, as\n> there is always a concern with inter-locking changes. I may be\n> missing something, of course, but BIO_s_socket() is just a pointer to\n> a set of callbacks hidden in bss_sock.c with BIO_meth_new() and\n> BIO_get_new_index() assigning some centralized data to handle the\n> methods in a controlled way in OpenSSL.\n\nI was looking at the origin of this one, and this is an issue coming\ndown to 8bb14cdd33de that has removed the ssl_config_mutex taken in\npgtls_open_client() when we called my_SSL_set_fd(). The commit has\naccidentally missed that path with the static BIO method where the\nmutex mattered.\n\n> We only care about\n> initializing once for the sake of libpq's threads, so wouldn't it be\n> better to move my_BIO_s_socket() in pgtls_init() where the\n> initialization of the BIO methods would be protected by\n> ssl_config_mutex? That's basically what Willi means in his first\n> message, no?\n\nI've looked at this idea, and finished by being unhappy with the error\nhandling that we are currently assuming in my_SSL_set_fd() in the\nevent of an error in the bio method setup, which would be most likely\nan OOM, so let's use ssl_config_mutex in my_BIO_s_socket(). Another\nthing is that I have minimized the manipulation of my_bio_methods in\nthe setup routine.\n\nI've also been testing the risk of collision, and it takes me quite a\nfew tries with a hundred threads to reproduce the failure even without\nany forced sleep, so that seems really hard to reach.\n\nPlease find attached a patch (still need to do more checks with older\nversions of OpenSSL). Any thoughts or comments?\n--\nMichael",
"msg_date": "Fri, 24 Nov 2023 16:48:58 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix race condition in libpq (related to ssl connections)"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 04:48:58PM +0900, Michael Paquier wrote:\n> I've looked at this idea, and finished by being unhappy with the error\n> handling that we are currently assuming in my_SSL_set_fd() in the\n> event of an error in the bio method setup, which would be most likely\n> an OOM, so let's use ssl_config_mutex in my_BIO_s_socket(). Another\n> thing is that I have minimized the manipulation of my_bio_methods in\n> the setup routine.\n\nI've spent more time on that today, and the patch I've posted on\nFriday had a small mistake in the non-HAVE_BIO_METH_NEW path when\nsaving the BIO_METHODs causing the SSL tests to fail with older\nOpenSSL versions. I've fixed that and the patch was straight-forward,\nso applied it down to v12. I didn't use Willi's patch at the end,\nstill credited him as author as his original patch is rather close to\nthe result committed and it feels that he has spent a good deal of\ntime on this issue.\n--\nMichael",
"msg_date": "Mon, 27 Nov 2023 12:03:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix race condition in libpq (related to ssl connections)"
}
] |
[
{
"msg_contents": "meson: docs: Add {html,man} targets, rename install-doc-*\n\nWe have toplevel html, man targets in the autoconf build as well. It'd be odd\nto have an 'html' target but have the install target be 'install-doc-html',\nthus rename the install targets to match.\n\nReviewed-by: Christoph Berg <[email protected]>\nReviewed-by: Peter Eisentraut <[email protected]>\nDiscussion: https://postgr.es/m/[email protected]\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/ddcab2a0329511e8872b62f2c77e5fa33547c277\n\nModified Files\n--------------\ndoc/src/sgml/meson.build | 6 ++++--\n1 file changed, 4 insertions(+), 2 deletions(-)",
"msg_date": "Tue, 21 Nov 2023 01:53:02 +0000",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "On 2023-11-20 Mo 20:53, Andres Freund wrote:\n> meson: docs: Add {html,man} targets, rename install-doc-*\n>\n> We have toplevel html, man targets in the autoconf build as well. It'd be odd\n> to have an 'html' target but have the install target be 'install-doc-html',\n> thus rename the install targets to match.\n\n\nThis commit or one of its nearby friends appears to have broken crake's \ndocs build:\n\nERROR: Can't invoke target `html`: ambiguous name.Add target type and/or path:\n- ./doc/src/sgml/html:custom\n- ./doc/src/sgml/html:alias\n\nSee <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2023-11-23%2012%3A52%3A04>\n\ncheers\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 23 Nov 2023 08:32:21 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "On 2023-11-23 Th 08:32, Andrew Dunstan wrote:\n>\n>\n> On 2023-11-20 Mo 20:53, Andres Freund wrote:\n>> meson: docs: Add {html,man} targets, rename install-doc-*\n>>\n>> We have toplevel html, man targets in the autoconf build as well. It'd be odd\n>> to have an 'html' target but have the install target be 'install-doc-html',\n>> thus rename the install targets to match.\n>\n>\n> This commit or one of its nearby friends appears to have broken \n> crake's docs build:\n>\n> ERROR: Can't invoke target `html`: ambiguous name.Add target type and/or path:\n> - ./doc/src/sgml/html:custom\n> - ./doc/src/sgml/html:alias\n>\n> See <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2023-11-23%2012%3A52%3A04>\n>\n>\n\nThis is still broken.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 27 Nov 2023 15:38:24 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-23 08:32:21 -0500, Andrew Dunstan wrote:\n> On 2023-11-20 Mo 20:53, Andres Freund wrote:\n> > meson: docs: Add {html,man} targets, rename install-doc-*\n> >\n> > We have toplevel html, man targets in the autoconf build as well. It'd be odd\n> > to have an 'html' target but have the install target be 'install-doc-html',\n> > thus rename the install targets to match.\n>\n>\n> This commit of one of its nearby friends appears to have broken crake's docs\n> build:\n>\n> ERROR: Can't invoke target `html`: ambiguous name.Add target type and/or path:\n> - ./doc/src/sgml/html:custom\n> - ./doc/src/sgml/html:alias\n>\n> See<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2023-11-23%2012%3A52%3A04>\n\nAh, I realize now that this is from meson compile html, not 'ninja html'. That\nexplains why I couldn't reproduce this initially and why CI didn't complain.\nI don't really understand why meson compile complains in this case. I assume\nyou don't want to disambiguate as suggested, by building html:alias instead?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 28 Nov 2023 18:28:43 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "\nOn 2023-11-28 Tu 21:28, Andres Freund wrote:\n> Hi,\n>\n> On 2023-11-23 08:32:21 -0500, Andrew Dunstan wrote:\n>> On 2023-11-20 Mo 20:53, Andres Freund wrote:\n>>> meson: docs: Add {html,man} targets, rename install-doc-*\n>>>\n>>> We have toplevel html, man targets in the autoconf build as well. It'd be odd\n>>> to have an 'html' target but have the install target be 'install-doc-html',\n>>> thus rename the install targets to match.\n>>\n>> This commit of one of its nearby friends appears to have broken crake's docs\n>> build:\n>>\n>> ERROR: Can't invoke target `html`: ambiguous name.Add target type and/or path:\n>> - ./doc/src/sgml/html:custom\n>> - ./doc/src/sgml/html:alias\n>>\n>> See<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2023-11-23%2012%3A52%3A04>\n> Ah, I realize now that this is from meson compile html, not 'ninja html'. That\n> explains why I couldn't reproduce this initially and why CI didn't complain.\n> I don't really understand why meson compile complains in this case. I assume\n> you don't want to disambiguate as suggested, by building html:alias instead?\n>\n\nI've done that as a temporary fix to get crake out of the hole, but it's \npretty ugly, and I don't want to do it in a release if at all possible.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 29 Nov 2023 07:20:59 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2023-11-28 Tu 21:28, Andres Freund wrote:\n>> I don't really understand why meson compile complains in this case. I assume\n>> you don't want to disambiguate as suggested, by building html:alias instead?\n\n> I've done that as a temporary fix to get crake out of the hole, but it's \n> pretty ugly, and I don't want to do it in a release if at all possible.\n\nOur documentation says specifically that \"ninja html\" will build the\nHTML format. I would expect that to work by analogy with the \"make\"\ntarget; having to spell it differently seems like clearly a bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Nov 2023 08:49:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "\nOn 2023-11-29 We 08:49, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> On 2023-11-28 Tu 21:28, Andres Freund wrote:\n>>> I don't really understand why meson compile complains in this case. I assume\n>>> you don't want to disambiguate as suggested, by building html:alias instead?\n>> I've done that as a temporary fix to get crake out of the hole, but it's\n>> pretty ugly, and I don't want to do it in a release if at all possible.\n> Our documentation says specifically that \"ninja html\" will build the\n> HTML format. I would expect that to work by analogy with the \"make\"\n> target; having to spell it differently seems like clearly a bug.\n>\n> \t\t\t\n\n\n\"ninja html\" does in fact work. What's not working is \"meson compile \nhtml\". And it looks like the reason I used that in the buildfarm code is \nthat ninja doesn't know about other targets like \"postgres-US.pdf\". Up \nto now \"meson compile postgres-US.pdf html\" has worked.\n\nFWIW, the buildfarm code doesn't use ninja explicitly anywhere else.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 29 Nov 2023 10:05:26 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-29 10:05:26 -0500, Andrew Dunstan wrote:\n> On 2023-11-29 We 08:49, Tom Lane wrote:\n> > Andrew Dunstan <[email protected]> writes:\n> > > On 2023-11-28 Tu 21:28, Andres Freund wrote:\n> > > > I don't really understand why meson compile complains in this case. I assume\n> > > > you don't want to disambiguate as suggested, by building html:alias instead?\n> > > I've done that as a temporary fix to get crake out of the hole, but it's\n> > > pretty ugly, and I don't want to do it in a release if at all possible.\n> > Our documentation says specifically that \"ninja html\" will build the\n> > HTML format. I would expect that to work by analogy with the \"make\"\n> > target; having to spell it differently seems like clearly a bug.\n> > \n> > \t\t\t\n> \n> \n> \"ninja html\" does in fact work. What's not working is \"meson compile html\".\n> And it looks like the reason I used that in the buildfarm code is that ninja\n> doesn't know about other targets like \"postgres-US.pdf\".\n\nIt does:\n\nninja help|grep pdf\n doc/src/sgml/postgres-A4.pdf Build documentation in PDF format, with A4 pages\n doc/src/sgml/postgres-US.pdf Build documentation in PDF format, with US letter pages\n\n\"ninja doc/src/sgml/postgres-US.pdf\" works and has worked since day one.\n\nFWIW, you can continue to use meson compile, you just need to disambiguate the\ntarget name:\n meson compile html:alias\n\nWhich isn't particularly pretty, but does work.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 29 Nov 2023 08:19:16 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "Hi,\n\nThis started at https://www.postgresql.org/message-id/746ba786-85bb-d1f7-b613-57bec35c642a%40dunslane.net\nbut seems worth discussing on -hackers.\n\nOn 2023-11-29 07:20:59 -0500, Andrew Dunstan wrote:\n> On 2023-11-28 Tu 21:28, Andres Freund wrote:\n> > On 2023-11-23 08:32:21 -0500, Andrew Dunstan wrote:\n> > > On 2023-11-20 Mo 20:53, Andres Freund wrote:\n> > > > meson: docs: Add {html,man} targets, rename install-doc-*\n> > > >\n> > > > We have toplevel html, man targets in the autoconf build as well. It'd be odd\n> > > > to have an 'html' target but have the install target be 'install-doc-html',\n> > > > thus rename the install targets to match.\n> > >\n> > > This commit or one of its nearby friends appears to have broken crake's docs\n> > > build:\n> > >\n> > > ERROR: Can't invoke target `html`: ambiguous name.Add target type and/or path:\n> > > - ./doc/src/sgml/html:custom\n> > > - ./doc/src/sgml/html:alias\n> > >\n> > > See <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2023-11-23%2012%3A52%3A04>\n> > Ah, I realize now that this is from meson compile html, not 'ninja html'. That\n> > explains why I couldn't reproduce this initially and why CI didn't complain.\n> > I don't really understand why meson compile complains in this case. I assume\n> > you don't want to disambiguate as suggested, by building html:alias instead?\n> >\n>\n> I've done that as a temporary fix to get crake out of the hole, but it's\n> pretty ugly, and I don't want to do it in a release if at all possible.\n\nIf we want to prevent these kinds of conflicts, which doesn't seem\nunreasonable, I think we need an automatic check that prevents reintroducing\nthem. I think most people will just use ninja and not see them. Meson stores\nthe relevant information in meson-info/intro-targets.json, so that's just a\nbit of munging of that file.\n\nI think the background for this issue existing is that meson supports a \"flat\"\nbuild directory layout (which is deprecated), so the directory name can't be\nused to deconflict with meson compile, which tries to work across all \"build\nexecution\" systems.\n\nPrototype of such a check, as well as a commit deconflicting the target names,\nattached.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 29 Nov 2023 10:36:19 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "\nOn 2023-11-29 We 07:20, Andrew Dunstan wrote:\n>\n> On 2023-11-28 Tu 21:28, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2023-11-23 08:32:21 -0500, Andrew Dunstan wrote:\n>>> On 2023-11-20 Mo 20:53, Andres Freund wrote:\n>>>> meson: docs: Add {html,man} targets, rename install-doc-*\n>>>>\n>>>> We have toplevel html, man targets in the autoconf build as well. \n>>>> It'd be odd\n>>>> to have an 'html' target but have the install target be \n>>>> 'install-doc-html',\n>>>> thus rename the install targets to match.\n>>>\n>>> This commit of one of its nearby friends appears to have broken \n>>> crake's docs\n>>> build:\n>>>\n>>> ERROR: Can't invoke target `html`: ambiguous name.Add target type \n>>> and/or path:\n>>> - ./doc/src/sgml/html:custom\n>>> - ./doc/src/sgml/html:alias\n>>>\n>>> See<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2023-11-23%2012%3A52%3A04> \n>>>\n>> Ah, I realize now that this is from meson compile html, not 'ninja \n>> html'. That\n>> explains why I couldn't reproduce this initially and why CI didn't \n>> complain.\n>> I don't really understand why meson compile complains in this case. \n>> I assume\n>> you don't want to disambiguate as suggested, by building html:alias \n>> instead?\n>>\n>\n> I've done that as a temporary fix to get crake out of the hole, but \n> it's pretty ugly, and I don't want to do it in a release if at all \n> possible.\n\n\nand doing this has broken the docs build for release 16.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 1 Dec 2023 09:04:19 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "Hi,\n\n\nOn 2023-12-01 09:04:19 -0500, Andrew Dunstan wrote:\n> On 2023-11-29 We 07:20, Andrew Dunstan wrote:\n> > On 2023-11-28 Tu 21:28, Andres Freund wrote:\n> > > On 2023-11-23 08:32:21 -0500, Andrew Dunstan wrote:\n> > > > On 2023-11-20 Mo 20:53, Andres Freund wrote:\n> > > > > meson: docs: Add {html,man} targets, rename install-doc-*\n> > > > > \n> > > > > We have toplevel html, man targets in the autoconf build as\n> > > > > well. It'd be odd\n> > > > > to have an 'html' target but have the install target be\n> > > > > 'install-doc-html',\n> > > > > thus rename the install targets to match.\n> > > > \n> > > > This commit or one of its nearby friends appears to have broken\n> > > > crake's docs\n> > > > build:\n> > > > \n> > > > ERROR: Can't invoke target `html`: ambiguous name.Add target\n> > > > type and/or path:\n> > > > - ./doc/src/sgml/html:custom\n> > > > - ./doc/src/sgml/html:alias\n> > > > \n> > > > See <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2023-11-23%2012%3A52%3A04>\n> > > > \n> > > Ah, I realize now that this is from meson compile html, not 'ninja\n> > > html'. That\n> > > explains why I couldn't reproduce this initially and why CI didn't\n> > > complain.\n> > > I don't really understand why meson compile complains in this case.\n> > > I assume\n> > > you don't want to disambiguate as suggested, by building html:alias\n> > > instead?\n> > > \n> > \n> > I've done that as a temporary fix to get crake out of the hole, but it's\n> > pretty ugly, and I don't want to do it in a release if at all possible.\n> \n> \n> and doing this has broken the docs build for release 16.\n\nIf I can get somebody to comment on\nhttps://postgr.es/m/20231129183619.3hrnwaexbrpygbxg%40awork3.anarazel.de\nwe can remove the need for the :$buildtype suffix.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 1 Dec 2023 09:12:19 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "On 2023-12-01 Fr 09:04, Andrew Dunstan wrote:\n>\n> On 2023-11-29 We 07:20, Andrew Dunstan wrote:\n>>\n>> On 2023-11-28 Tu 21:28, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2023-11-23 08:32:21 -0500, Andrew Dunstan wrote:\n>>>> On 2023-11-20 Mo 20:53, Andres Freund wrote:\n>>>>> meson: docs: Add {html,man} targets, rename install-doc-*\n>>>>>\n>>>>> We have toplevel html, man targets in the autoconf build as well. \n>>>>> It'd be odd\n>>>>> to have an 'html' target but have the install target be \n>>>>> 'install-doc-html',\n>>>>> thus rename the install targets to match.\n>>>>\n>>>> This commit or one of its nearby friends appears to have broken \n>>>> crake's docs\n>>>> build:\n>>>>\n>>>> ERROR: Can't invoke target `html`: ambiguous name.Add target type \n>>>> and/or path:\n>>>> - ./doc/src/sgml/html:custom\n>>>> - ./doc/src/sgml/html:alias\n>>>>\n>>>> See <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2023-11-23%2012%3A52%3A04> \n>>>>\n>>> Ah, I realize now that this is from meson compile html, not 'ninja \n>>> html'. That\n>>> explains why I couldn't reproduce this initially and why CI didn't \n>>> complain.\n>>> I don't really understand why meson compile complains in this case. \n>>> I assume\n>>> you don't want to disambiguate as suggested, by building html:alias \n>>> instead?\n>>>\n>>\n>> I've done that as a temporary fix to get crake out of the hole, but \n>> it's pretty ugly, and I don't want to do it in a release if at all \n>> possible.\n>\n>\n> and doing this has broken the docs build for release 16.\n\n\nOK, so this code is what I have now, and seems to work on both HEAD and \nREL_16_STABLE:\n\n    my $extra_targets = $PGBuild::conf{extra_doc_targets} || \"\";\n    my @targs = split(/\\s+/, $extra_targets);\n    s!^!doc/src/sgml/! foreach @targs;\n    $extra_targets = join(' ', @targs);\n    @makeout = run_log(\"cd $pgsql && ninja doc/src/sgml/html $extra_targets\");\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 1 Dec 2023 14:06:13 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
},
{
"msg_contents": "Commits look fine to me, but I hate the new target names... Luckily, \nI just use plain ninja, so I don't interact with that.\n\n> + for name, v in targets_info_byname.items():\n> + if len(targets_info_byname[name]) > 1:\n\nMy only comment is that you could reverse the logic and save yourself an \nindentation.\n\n- if len(targets_info_byname[name]) > 1:\n+ if len(targets_info_byname[name]) <= 1:\n+ continue\n\nBut whatever you want.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 01 Dec 2023 15:55:29 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename\n install-doc-*"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-01 15:55:29 -0600, Tristan Partin wrote:\n> Commits look fine to me, but I hate the new target names...\n\nYou shouldn't ever need to use them anywhere - that's what the alias is for...\n\nHappy to go another route if you have a suggestion.\n\n\n> > + for name, v in targets_info_byname.items():\n> > + if len(targets_info_byname[name]) > 1:\n> \n> My only comment is that you could reverse the logic and save yourself an\n> indentation.\n> \n> - if len(targets_info_byname[name]) > 1:\n> + if len(targets_info_byname[name]) <= 1:\n> + continue\n> \n> But whatever you want.\n\nMakes sense.\n\n\n",
"msg_date": "Fri, 1 Dec 2023 14:12:55 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: meson: docs: Add {html,man} targets, rename install-doc-*"
}
] |
[
{
"msg_contents": "Patch attached.\n\nThe caller could do something similar, so this option is not necessary,\nbut it seems like it could be generally useful. It speeds things up for\nthe search_path cache (and is an alternative to another patch I have\nthat implements the same thing in the caller).\n\nThoughts?\n\nRegards,\n\tJeff Davis",
"msg_date": "Mon, 20 Nov 2023 18:12:47 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "simplehash: SH_OPTIMIZE_REPEAT for optimizing repeated lookups of\n the same key"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 06:12:47PM -0800, Jeff Davis wrote:\n> The caller could do something similar, so this option is not necessary,\n> but it seems like it could be generally useful. It speeds things up for\n> the search_path cache (and is an alternative to another patch I have\n> that implements the same thing in the caller).\n\nI'm mostly thinking out loud here, but could we just always do this? I\nguess you might want to avoid it if your SH_EQUAL is particularly expensive\nand you know repeated lookups are rare, but maybe that's uncommon enough\nthat we don't really care.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 20 Nov 2023 22:50:15 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simplehash: SH_OPTIMIZE_REPEAT for optimizing repeated lookups\n of the same key"
},
{
"msg_contents": "On Mon, 2023-11-20 at 22:50 -0600, Nathan Bossart wrote:\n> I'm mostly thinking out loud here, but could we just always do this? \n> I\n> guess you might want to avoid it if your SH_EQUAL is particularly\n> expensive\n> and you know repeated lookups are rare, but maybe that's uncommon\n> enough\n> that we don't really care.\n\nI like that simplehash is simple, so I'm not inclined to introduce an\nalways-on feature.\n\nIt would be interesting to know how often it's a good idea to turn it\non, though. I could try turning it on for various other uses of\nsimplehash, and see where it tends to win.\n\nThe caller can also save the hash and pass it down, but that's not\nalways convenient to do.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 20 Nov 2023 22:37:47 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simplehash: SH_OPTIMIZE_REPEAT for optimizing repeated lookups\n of the same key"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 10:37:47PM -0800, Jeff Davis wrote:\n> It would be interesting to know how often it's a good idea to turn it\n> on, though. I could try turning it on for various other uses of\n> simplehash, and see where it tends to win.\n\nThat seems worthwhile to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 21 Nov 2023 09:53:54 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simplehash: SH_OPTIMIZE_REPEAT for optimizing repeated lookups\n of the same key"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-20 22:37:47 -0800, Jeff Davis wrote:\n> On Mon, 2023-11-20 at 22:50 -0600, Nathan Bossart wrote:\n> > I'm mostly thinking out loud here, but could we just always do this?�\n> > I\n> > guess you might want to avoid it if your SH_EQUAL is particularly\n> > expensive\n> > and you know repeated lookups are rare, but maybe that's uncommon\n> > enough\n> > that we don't really care.\n> \n> I like that simplehash is simple, so I'm not inclined to introduce an\n> always-on feature.\n\nI think it'd be a bad idea to make it always on - there's plenty cases where\nit just would make things slower because the hit rate is low. A equal\ncomparison is far from free.\n\nI am not quite sure this kind of cache best lives in simplehash - ISTM that\nquite often it'd be more beneficial to have a cache that you can test more\ncheaply higher up.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Nov 2023 08:51:38 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simplehash: SH_OPTIMIZE_REPEAT for optimizing repeated lookups\n of the same key"
},
{
"msg_contents": "On Tue, 2023-11-21 at 08:51 -0800, Andres Freund wrote:\n> I am not quite sure this kind of cache best lives in simplehash -\n> ISTM that\n> quite often it'd be more beneficial to have a cache that you can test\n> more\n> cheaply higher up.\n\nYeah. I suppose when a few more callers are likely to benefit we can\nreconsider.\n\nThough it makes it easy to test a few other callers, just to see what\nnumbers appear.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 21 Nov 2023 10:26:14 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simplehash: SH_OPTIMIZE_REPEAT for optimizing repeated lookups\n of the same key"
}
] |
[
{
"msg_contents": "Dear Hackers,\n\nI would like to clarify, what the correct way is to determine that a given relation is using local buffers. Local buffers, as far as I know, are used for temporary tables in backends. There are two functions/macros (bufmgr.c): SmgrIsTemp, RelationUsesLocalBuffers. The first function verifies that the current process is a regular session backend, while the other macro verifies the relation persistence characteristic. It seems, the use of each function independently is not correct. I think, these functions should be applied in pair to check for local buffers use, but, it seems, these functions are used independently. It works until temporary tables are allowed only in session backends.\n\nI'm concerned, how to determine the use of local buffers in some other theoretical cases? For example, if we decide to replicate temporary tables? Are there the other cases, when local buffers can be used with relations in the Vanilla? Do we allow the use of relations with RELPERSISTENCE_TEMP not only in session backends?\n\nThank you in advance for your help!\n\nWith best regards,\nVitaly Davydov\n\nDear Hackers,I would like to clarify, what the correct way is to determine that a given relation is using local buffers. Local buffers, as far as I know, are used for temporary tables in backends. There are two functions/macros (bufmgr.c): SmgrIsTemp, RelationUsesLocalBuffers. The first function verifies that the current process is a regular session backend, while the other macro verifies the relation persistence characteristic. It seems, the use of each function independently is not correct. I think, these functions should be applied in pair to check for local buffers use, but, it seems, these functions are used independently. It works until temporary tables are allowed only in session backends.I'm concerned, how to determine the use of local buffers in some other theoretical cases? For example, if we decide to replicate temporary tables? 
Are there the other cases, when local buffers can be used with relations in the Vanilla? Do we allow the use of relations with RELPERSISTENCE_TEMP not only in session backends?Thank you in advance for your help!With best regards,Vitaly Davydov",
"msg_date": "Tue, 21 Nov 2023 08:04:05 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to accurately determine when a relation should use local\n =?utf-8?q?buffers=3F?="
},
{
"msg_contents": "Hi,\n\n> I would like to clarify, what the correct way is to determine that a given relation is using local buffers. Local buffers, as far as I know, are used for temporary tables in backends. There are two functions/macros (bufmgr.c): SmgrIsTemp, RelationUsesLocalBuffers. The first function verifies that the current process is a regular session backend, while the other macro verifies the relation persistence characteristic. It seems, the use of each function independently is not correct. I think, these functions should be applied in pair to check for local buffers use, but, it seems, these functions are used independently. It works until temporary tables are allowed only in session backends.\n\nCould you please provide a specific example when the current code will\ndo something wrong/unintended?\n\n> I'm concerned, how to determine the use of local buffers in some other theoretical cases? For example, if we decide to replicate temporary tables? Are there the other cases, when local buffers can be used with relations in the Vanilla? Do we allow the use of relations with RELPERSISTENCE_TEMP not only in session backends?\n\nTemporary tables, by definition, are visible only within one session.\nI can't imagine how and why they would be replicated.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 21 Nov 2023 11:52:13 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to accurately determine when a relation should use local\n buffers?"
},
{
"msg_contents": "Hi Aleksander,\n\nThank you for the reply.\n\n> Could you please provide a specific example when the current code willdo\n> something wrong/unintended?\n\nI can't say that something is wrong in vanilla. But if you decide to\nreplicate DDL in some solutions like multimaster, you might want to\nreplicate CREATE TEMPORARY TABLE. Furthermore, there is some possible\ninconsistency in the code show below (REL_16_STABLE) in bufmgr.c file:\n\n - FlushRelationBuffers, PrefetchBuffer uses\n RelationUsesLocalBuffers(rel).\n - ExtendBufferedRel_common finally use\n BufferManagerRelation.relpersistence which is actually\n rd_rel->relpersistence, works like RelationUsesLocalBuffers.\n - ReadBuffer_common uses isLocalBuf = SmgrIsTemp(smgr), that checks\n rlocator.backend for InvalidBackendId.\n\nI would like to clarify, do we completely refuse the use of temporary\ntables in other contexts than in backends or there is some work-in-progress\nto allow some other usage contexts? If so, the check of\nrd_rel->relpersistence is enough. Not sure why we use SmgrIsTemp instead of\nRelationUsesLocalBuffers in ReadBuffer_common.\n\n\nWith best regards,\n\nVitaly Davydov\n\nвт, 21 нояб. 2023 г. в 11:52, Aleksander Alekseev <[email protected]\n>:\n\n> Hi,\n>\n> > I would like to clarify, what the correct way is to determine that a\n> given relation is using local buffers. Local buffers, as far as I know, are\n> used for temporary tables in backends. There are two functions/macros\n> (bufmgr.c): SmgrIsTemp, RelationUsesLocalBuffers. The first function\n> verifies that the current process is a regular session backend, while the\n> other macro verifies the relation persistence characteristic. It seems, the\n> use of each function independently is not correct. I think, these functions\n> should be applied in pair to check for local buffers use, but, it seems,\n> these functions are used independently. 
It works until temporary tables are\n> allowed only in session backends.\n>\n> Could you please provide a specific example when the current code will\n> do something wrong/unintended?\n>\n> > I'm concerned, how to determine the use of local buffers in some other\n> theoretical cases? For example, if we decide to replicate temporary tables?\n> Are there the other cases, when local buffers can be used with relations in\n> the Vanilla? Do we allow the use of relations with RELPERSISTENCE_TEMP not\n> only in session backends?\n>\n> Temporary tables, by definition, are visible only within one session.\n> I can't imagine how and why they would be replicated.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nС уважением,\nДавыдов Виталий\nhttp://www.vdavydov.ru",
"msg_date": "Tue, 21 Nov 2023 13:18:06 +0300",
"msg_from": "Vitaly Davydov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to accurately determine when a relation should use local\n buffers?"
},
{
"msg_contents": "Hi,\n\n> Furthermore, there is some possible inconsistency in the code show below (REL_16_STABLE) in bufmgr.c file:\n>\n> FlushRelationBuffers, PrefetchBuffer uses RelationUsesLocalBuffers(rel).\n> ExtendBufferedRel_common finally use BufferManagerRelation.relpersistence which is actually rd_rel->relpersistence, works like RelationUsesLocalBuffers.\n> ReadBuffer_common uses isLocalBuf = SmgrIsTemp(smgr), that checks rlocator.backend for InvalidBackendId.\n\nI didn't do a deep investigation of the code in this particular aspect\nbut that could be a fair point. Would you like to propose a\nrefactoring that unifies the way we check if the relation is\ntemporary?\n\n> I would like to clarify, do we completely refuse the use of temporary tables in other contexts than in backends or there is some work-in-progress to allow some other usage contexts? If so, the check of rd_rel->relpersistence is enough. Not sure why we use SmgrIsTemp instead of RelationUsesLocalBuffers in ReadBuffer_common.\n\nAccording to the comments in relfilelocator.h:\n\n```\n/*\n * Augmenting a relfilelocator with the backend ID provides all the information\n * we need to locate the physical storage. The backend ID is InvalidBackendId\n * for regular relations (those accessible to more than one backend), or the\n * owning backend's ID for backend-local relations. Backend-local relations\n * are always transient and removed in case of a database crash; they are\n * never WAL-logged or fsync'd.\n */\ntypedef struct RelFileLocatorBackend\n{\n RelFileLocator locator;\n BackendId backend;\n} RelFileLocatorBackend;\n\n#define RelFileLocatorBackendIsTemp(rlocator) \\\n ((rlocator).backend != InvalidBackendId)\n```\n\nAnd this is what ReadBuffer_common() and other callers of SmgrIsTemp()\nare using. 
So no, you can't have a temporary table without an assigned\nRelFileLocatorBackend.backend.\n\nIt is my understanding that SmgrIsTemp() and\nRelationUsesLocalBuffers() are equivalent except the fact that the\nfirst macro works with SMgrRelation objects and the second one - with\nRelation objects.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 21 Nov 2023 18:01:30 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to accurately determine when a relation should use local\n buffers?"
},
{
"msg_contents": "Hi Aleksander,\n\nThank you for your answers. It seems, local buffers are used for temporary relations unconditionally. In this case, we may check either relpersistence or backend id, or both of them.\nI didn't do a deep investigation of the code in this particular aspect but that could be a fair point. Would you like to propose a refactoring that unifies the way we check if the relation is temporary?I would propose not to associate temporary relations with local buffers. I would say, that we that we should choose local buffers only in a backend context. It is the primary condition. Thus, to choose local buffers, two checks should be succeeded:\n * relpersistence (RelationUsesLocalBuffers) * backend id (SmgrIsTemp)I know, it may be not as effective as to check relpersistence only, but it makes the internal architecture more flexible, I believe.\n\nWith best regards,\nVitaly Davydov\n\n\n\n \n\nHi Aleksander,Thank you for your answers. It seems, local buffers are used for temporary relations unconditionally. In this case, we may check either relpersistence or backend id, or both of them.I didn't do a deep investigation of the code in this particular aspect but that could be a fair point. Would you like to propose a refactoring that unifies the way we check if the relation is temporary?I would propose not to associate temporary relations with local buffers. I would say, that we that we should choose local buffers only in a backend context. It is the primary condition. Thus, to choose local buffers, two checks should be succeeded:relpersistence (RelationUsesLocalBuffers)backend id (SmgrIsTemp)I know, it may be not as effective as to check relpersistence only, but it makes the internal architecture more flexible, I believe.With best regards,Vitaly Davydov",
"msg_date": "Wed, 22 Nov 2023 13:29:30 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= How to accurately determine when a relation should\n use\n local =?utf-8?q?buffers=3F?="
},
{
"msg_contents": "Hi,\n\n> I would propose not to associate temporary relations with local buffers\n\nThe whole point of why local buffers exist is to place the buffers of\ntemp tables into MemoryContexts so that these tables will not fight\nfor the locks for shared buffers with the rest of the system. If we\nstart treating them as regular tables this will cause a severe\nperformance degradation. I doubt that such a patch will make it.\n\nI sort of suspect that you are working on a very specific extension\nand/or feature for PG fork. Any chance you could give us more details\nabout the case?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 22 Nov 2023 16:38:52 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to accurately determine when a relation should use local\n buffers?"
},
{
"msg_contents": "Hi Aleksander,\nI sort of suspect that you are working on a very specific extension\nand/or feature for PG fork. Any chance you could give us more details\nabout the case?I'm trying to adapt a multimaster solution to some changes in pg16. We replicate temp table DDL due to some reasons. Furthermore, such tables should be accessible from other processes than the replication receiver process on a replica, and they still should be temporary. I understand that DML replication for temporary tables will cause a severe performance degradation. But it is not our case.\n\nThere are some changes in ReadBuffer logic if to compare with pg15. To define which buffers to use, ReadBuffer used SmgrIsTemp function in pg15. The decision was based on backend id of the relation. In pg16 the decision is based on relpersistence attribute, that caused some problems on my side. My opinion, we should choose local buffers based on backend ids of relations, not on its persistence. Additional check for relpersistence prior to backend id may improve the performance in some cases, I think. The internal design may become more flexible as a result.\n\nWith best regards,\nVitaly Davydov\n \n\nHi Aleksander,I sort of suspect that you are working on a very specific extensionand/or feature for PG fork. Any chance you could give us more detailsabout the case?I'm trying to adapt a multimaster solution to some changes in pg16. We replicate temp table DDL due to some reasons. Furthermore, such tables should be accessible from other processes than the replication receiver process on a replica, and they still should be temporary. I understand that DML replication for temporary tables will cause a severe performance degradation. But it is not our case.There are some changes in ReadBuffer logic if to compare with pg15. To define which buffers to use, ReadBuffer used SmgrIsTemp function in pg15. The decision was based on backend id of the relation. 
In pg16 the decision is based on relpersistence attribute, that caused some problems on my side. My opinion, we should choose local buffers based on backend ids of relations, not on its persistence. Additional check for relpersistence prior to backend id may improve the performance in some cases, I think. The internal design may become more flexible as a result.With best regards,Vitaly Davydov",
"msg_date": "Fri, 24 Nov 2023 10:10:17 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= How to accurately determine when a relation should\n use\n local =?utf-8?q?buffers=3F?="
},
{
"msg_contents": "Hi,\n\n> There are some changes in ReadBuffer logic if to compare with pg15. To define which buffers to use, ReadBuffer used SmgrIsTemp function in pg15. The decision was based on backend id of the relation. In pg16 the decision is based on relpersistence attribute, that caused some problems on my side. My opinion, we should choose local buffers based on backend ids of relations, not on its persistence. Additional check for relpersistence prior to backend id may improve the performance in some cases, I think. The internal design may become more flexible as a result.\n\nWell even assuming this patch will make it to the upstream some day,\nwhich I seriously doubt, it will take somewhere between 2 and 5 years.\nPersonally I would recommend reconsidering this design.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 24 Nov 2023 15:51:59 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to accurately determine when a relation should use local\n buffers?"
},
{
"msg_contents": "Hi Aleksander,\nWell even assuming this patch will make it to the upstream some day,\nwhich I seriously doubt, it will take somewhere between 2 and 5 years.\nPersonally I would recommend reconsidering this design.\nI understand what you are saying. I have no plans to create a patch for this issue. I would like to believe that my case will be taken into consideration for next developments. Thank you very much for your help!\n\nWith best regards,\nVitaly\n\nHi Aleksander,Well even assuming this patch will make it to the upstream some day,which I seriously doubt, it will take somewhere between 2 and 5 years.Personally I would recommend reconsidering this design.I understand what you are saying. I have no plans to create a patch for this issue. I would like to believe that my case will be taken into consideration for next developments. Thank you very much for your help!With best regards,Vitaly",
"msg_date": "Mon, 27 Nov 2023 11:56:11 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= How to accurately determine when a relation should\n use\n local =?utf-8?q?buffers=3F?="
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile rebasing a patch from 2016 related to sequence AMs (more about\nthat later), I've bumped on a mistake from 8586bf7ed888 in\nopr_sanity.sql, as of:\n+SELECT p1.oid, p1.amname, p2.oid, p2.proname\n+FROM pg_am AS p1, pg_proc AS p2\n+WHERE p2.oid = p1.amhandler AND p1.amtype = 's' AND\n\nIt seems to me that this has been copy-pasted on HEAD from the\nsequence AM patch, but forgot to update amtype to 't'. While that's\nmaybe cosmetic, I think that this could lead to unexpected results, so\nperhaps there is a point in doing a backpatch?\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 21 Nov 2023 15:09:20 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Typo with amtype = 's' in opr_sanity.sql"
},
{
"msg_contents": "Hi,\n\n> While rebasing a patch from 2016 related to sequence AMs (more about\n> that later), I've bumped on a mistake from 8586bf7ed888 in\n> opr_sanity.sql, as of:\n> +SELECT p1.oid, p1.amname, p2.oid, p2.proname\n> +FROM pg_am AS p1, pg_proc AS p2\n> +WHERE p2.oid = p1.amhandler AND p1.amtype = 's' AND\n\nGood catch.\n\n> It seems to me that this has been copy-pasted on HEAD from the\n> sequence AM patch, but forgot to update amtype to 't'. While that's\n> maybe cosmetic, I think that this could lead to unexpected results, so\n> perhaps there is a point in doing a backpatch?\n\nI disagree that it's cosmetic. The test doesn't check what it's supposed to.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 21 Nov 2023 13:02:40 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Typo with amtype = 's' in opr_sanity.sql"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 01:02:40PM +0300, Aleksander Alekseev wrote:\n>> It seems to me that this has been copy-pasted on HEAD from the\n>> sequence AM patch, but forgot to update amtype to 't'. While that's\n>> maybe cosmetic, I think that this could lead to unexpected results, so\n>> perhaps there is a point in doing a backpatch?\n> \n> I disagree that it's cosmetic. The test doesn't check what it's supposed to.\n\nYes, I've backpatched that all the way down to 12 at the end.\n--\nMichael",
"msg_date": "Wed, 22 Nov 2023 09:34:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Typo with amtype = 's' in opr_sanity.sql"
}
] |
[
{
"msg_contents": "Hi,\n\nI decided to do some stress-testing of the built-in logical replication,\nas part of the sequence decoding work. And I soon ran into an undetected\ndeadlock related to ALTER SUBSCRIPTION ... REFRESH PUBLICATION :-(\n\nThe attached bash scripts triggers that in a couple seconds for me. The\nscript looks complicated, but most of the code is waiting for sync to\ncomplete, catchup, and that sort of thing.\n\nWhat the script does is pretty simple:\n\n1) initialize two clusters, set them as publisher/subscriber pair\n\n2) create some number of tables, add them to publication and wait for\n the sync to complete\n\n3) start two pgbench runs in the background, modifying the publication\n (one removes+adds all tables in a single transaction, one does that\n with transaction per table)\n\n4) run refresh.sh which does ALTER PUBLICATION ... REFRESH PUBLICATION\n in a loop (now that I think about it, could be another pgbench\n script, but well ...)\n\n5) some consistency checks, but the lockup happens earlier so this does\n not really matter\n\nAfter a small number of refresh cycles (for me it's usually a couple\ndozen), we end up with a couple stuck locks (I shortened the backend\ntype string a bit, for formatting reasons):\n\n test=# select a.pid, classid, objid, backend_type, query\n from pg_locks l join pg_stat_activity a on (a.pid = l.pid)\n where not granted;\n\n pid | classid | objid | backend_type | query\n ---------+---------+-------+------------------+----------------------\n 2691941 | 6100 | 16785 | client backend | ALTER SUBSCRIPTION s\n REFRESH PUBLICATION\n 2691837 | 6100 | 16785 | tablesync worker |\n 2691936 | 6100 | 16785 | tablesync worker |\n (3 rows)\n\nAll these backends wait for 6100/16785, which is the subscription row in\npg_subscription. 
The tablesync workers are requesting AccessShareLock,\nthe client backend however asks for AccessExclusiveLock.\n\nThe entry is currently locked by:\n\n test=# select a.pid, mode, backend_type from pg_locks l\n join pg_stat_activity a on (a.pid = l.pid)\n where classid=6100 and objid=16785 and granted;\n\n pid | mode | backend_type\n ---------+-----------------+----------------------------------\n 2690477 | AccessShareLock | logical replication apply worker\n (1 row)\n\nBut the apply worker is not waiting for any locks, so what's going on?\n\nWell, the problem is the apply worker is waiting for notification from\nthe tablesync workers the relation is synced, which happens through\nupdating the pg_subscription_rel row. And that wait happens in\nwait_for_relation_state_change, which simply checks the row in a loop,\nwith a sleep by WaitLatch().\n\nUnfortunately, the tablesync workers can't update the row because the\nclient backend executing ALTER SUBSCRIPTION ... REFRESH PUBLICATION\nsneaked in, and waits for an AccessExclusiveLock. So the tablesync\nworkers are stuck in the queue and can't proceed.\n\nThe client backend can't proceed, because it's waiting for a lock held\nby the apply worker.\n\nThe tablesync workers can't proceed because their lock request is stuck\nbehind the AccessExclusiveLock request.\n\nAnd the apply worker can't proceed, because it's waiting for status\nupdate from the tablesync workers.\n\nAnd the deadlock is undetected, because the apply worker is not waiting\non a lock, but sleeping on a latch :-(\n\n\nI don't know what's the right solution here. I wonder if the apply\nworker might release the lock before waiting for the update, that'd\nsolve this whole issue.\n\nAlternatively, ALTER PUBLICATION might wait for the lock only for a\nlimited amount of time, and try again (but then it'd be susceptible to\nstarving, of course).\n\nOr maybe there's a way to make this work in a way that would be visible\nto the deadlock detector? 
That'd mean we occasionally get processes\nkilled to resolve a deadlock, but that's still better than processes\nstuck indefinitely ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 21 Nov 2023 12:47:38 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 5:17 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> I decided to do some stress-testing of the built-in logical replication,\n> as part of the sequence decoding work. And I soon ran into an undetected\n> deadlock related to ALTER SUBSCRIPTION ... REFRESH PUBLICATION :-(\n>\n> The attached bash scripts triggers that in a couple seconds for me. The\n> script looks complicated, but most of the code is waiting for sync to\n> complete, catchup, and that sort of thing.\n>\n> What the script does is pretty simple:\n>\n> 1) initialize two clusters, set them as publisher/subscriber pair\n>\n> 2) create some number of tables, add them to publication and wait for\n> the sync to complete\n>\n> 3) start two pgbench runs in the background, modifying the publication\n> (one removes+adds all tables in a single transaction, one does that\n> with transaction per table)\n>\n> 4) run refresh.sh which does ALTER PUBLICATION ... REFRESH PUBLICATION\n> in a loop (now that I think about it, could be another pgbench\n> script, but well ...)\n>\n> 5) some consistency checks, but the lockup happens earlier so this does\n> not really matter\n>\n> After a small number of refresh cycles (for me it's usually a couple\n> dozen), we end up with a couple stuck locks (I shortened the backend\n> type string a bit, for formatting reasons):\n>\n> test=# select a.pid, classid, objid, backend_type, query\n> from pg_locks l join pg_stat_activity a on (a.pid = l.pid)\n> where not granted;\n>\n> pid | classid | objid | backend_type | query\n> ---------+---------+-------+------------------+----------------------\n> 2691941 | 6100 | 16785 | client backend | ALTER SUBSCRIPTION s\n> REFRESH PUBLICATION\n> 2691837 | 6100 | 16785 | tablesync worker |\n> 2691936 | 6100 | 16785 | tablesync worker |\n> (3 rows)\n>\n> All these backends wait for 6100/16785, which is the subscription row in\n> pg_subscription. 
The tablesync workers are requesting AccessShareLock,\n> the client backend however asks for AccessExclusiveLock.\n>\n> The entry is currently locked by:\n>\n> test=# select a.pid, mode, backend_type from pg_locks l\n> join pg_stat_activity a on (a.pid = l.pid)\n> where classid=6100 and objid=16785 and granted;\n>\n> pid | mode | backend_type\n> ---------+-----------------+----------------------------------\n> 2690477 | AccessShareLock | logical replication apply worker\n> (1 row)\n>\n> But the apply worker is not waiting for any locks, so what's going on?\n>\n> Well, the problem is the apply worker is waiting for notification from\n> the tablesync workers the relation is synced, which happens through\n> updating the pg_subscription_rel row. And that wait happens in\n> wait_for_relation_state_change, which simply checks the row in a loop,\n> with a sleep by WaitLatch().\n>\n> Unfortunately, the tablesync workers can't update the row because the\n> client backend executing ALTER SUBSCRIPTION ... REFRESH PUBLICATION\n> sneaked in, and waits for an AccessExclusiveLock. So the tablesync\n> workers are stuck in the queue and can't proceed.\n>\n> The client backend can't proceed, because it's waiting for a lock held\n> by the apply worker.\n>\n\nIt seems there is some inconsistency in what you have written for\nclient backends/tablesync worker vs. apply worker. The above text\nseems to be saying that the client backend and table sync worker are\nwaiting on a \"subscription row in pg_subscription\" and the apply\nworker is operating on \"pg_subscription_rel\". 
So, if that is true then\nthey shouldn't get stuck.\n\nI think here the client backend and tablesync worker seem to be blocked\nfor a lock on pg_subscription_rel.\n\n> The tablesync workers can't proceed because their lock request is stuck\n> behind the AccessExclusiveLock request.\n>\n> And the apply worker can't proceed, because it's waiting for status\n> update from the tablesync workers.\n>\n\nThis part is not clear to me because\nwait_for_relation_state_change()->GetSubscriptionRelState() seems to\nbe releasing the lock while closing the relation. Am I missing\nsomething?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Nov 2023 18:46:59 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
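The queue behavior described in the message above is the crux of the lockup: PostgreSQL grants lock requests on an object in arrival order, so the tablesync workers' AccessShareLock requests must wait behind the client backend's queued AccessExclusiveLock even though the only lock currently granted (the apply worker's AccessShareLock) is compatible with them. A minimal sketch of that queueing rule — illustrative Python, not PostgreSQL's actual lock-manager code; the function names and the reduced two-mode conflict table are assumptions of this sketch:

```python
# Illustrative model of first-come-first-served lock granting. Only the two
# lock modes involved in the reported deadlock are modeled; this is a sketch,
# not PostgreSQL's real lock manager.

CONFLICTS = {
    ("AccessShareLock", "AccessExclusiveLock"),
    ("AccessExclusiveLock", "AccessShareLock"),
    ("AccessExclusiveLock", "AccessExclusiveLock"),
}

def can_grant(mode, granted, queued_ahead):
    """A request is granted only if it conflicts neither with already-granted
    locks nor with any request queued ahead of it."""
    return all((mode, m) not in CONFLICTS for m in granted + queued_ahead)

granted = ["AccessShareLock"]    # held by the apply worker on pg_subscription
queue = ["AccessExclusiveLock"]  # ALTER SUBSCRIPTION ... REFRESH PUBLICATION

# Compatible with what is granted, so it would succeed on an empty queue...
print(can_grant("AccessShareLock", granted, []))     # True
# ...but it must wait behind the queued exclusive request:
print(can_grant("AccessShareLock", granted, queue))  # False
```

The cycle goes undetected because the apply worker's side of it is a WaitLatch() poll on pg_subscription_rel state, not a lock-manager wait, so the deadlock detector never sees a closed cycle of lock waits.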
{
"msg_contents": "\n\nOn 11/21/23 14:16, Amit Kapila wrote:\n> On Tue, Nov 21, 2023 at 5:17 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> I decided to do some stress-testing of the built-in logical replication,\n>> as part of the sequence decoding work. And I soon ran into an undetected\n>> deadlock related to ALTER SUBSCRIPTION ... REFRESH PUBLICATION :-(\n>>\n>> The attached bash scripts triggers that in a couple seconds for me. The\n>> script looks complicated, but most of the code is waiting for sync to\n>> complete, catchup, and that sort of thing.\n>>\n>> What the script does is pretty simple:\n>>\n>> 1) initialize two clusters, set them as publisher/subscriber pair\n>>\n>> 2) create some number of tables, add them to publication and wait for\n>> the sync to complete\n>>\n>> 3) start two pgbench runs in the background, modifying the publication\n>> (one removes+adds all tables in a single transaction, one does that\n>> with transaction per table)\n>>\n>> 4) run refresh.sh which does ALTER PUBLICATION ... 
REFRESH PUBLICATION\n>> in a loop (now that I think about it, could be another pgbench\n>> script, but well ...)\n>>\n>> 5) some consistency checks, but the lockup happens earlier so this does\n>> not really matter\n>>\n>> After a small number of refresh cycles (for me it's usually a couple\n>> dozen), we end up with a couple stuck locks (I shortened the backend\n>> type string a bit, for formatting reasons):\n>>\n>> test=# select a.pid, classid, objid, backend_type, query\n>> from pg_locks l join pg_stat_activity a on (a.pid = l.pid)\n>> where not granted;\n>>\n>> pid | classid | objid | backend_type | query\n>> ---------+---------+-------+------------------+----------------------\n>> 2691941 | 6100 | 16785 | client backend | ALTER SUBSCRIPTION s\n>> REFRESH PUBLICATION\n>> 2691837 | 6100 | 16785 | tablesync worker |\n>> 2691936 | 6100 | 16785 | tablesync worker |\n>> (3 rows)\n>>\n>> All these backends wait for 6100/16785, which is the subscription row in\n>> pg_subscription. The tablesync workers are requesting AccessShareLock,\n>> the client backend however asks for AccessExclusiveLock.\n>>\n>> The entry is currently locked by:\n>>\n>> test=# select a.pid, mode, backend_type from pg_locks l\n>> join pg_stat_activity a on (a.pid = l.pid)\n>> where classid=6100 and objid=16785 and granted;\n>>\n>> pid | mode | backend_type\n>> ---------+-----------------+----------------------------------\n>> 2690477 | AccessShareLock | logical replication apply worker\n>> (1 row)\n>>\n>> But the apply worker is not waiting for any locks, so what's going on?\n>>\n>> Well, the problem is the apply worker is waiting for notification from\n>> the tablesync workers the relation is synced, which happens through\n>> updating the pg_subscription_rel row. 
And that wait happens in\n>> wait_for_relation_state_change, which simply checks the row in a loop,\n>> with a sleep by WaitLatch().\n>>\n>> Unfortunately, the tablesync workers can't update the row because the\n>> client backend executing ALTER SUBSCRIPTION ... REFRESH PUBLICATION\n>> sneaked in, and waits for an AccessExclusiveLock. So the tablesync\n>> workers are stuck in the queue and can't proceed.\n>>\n>> The client backend can't proceed, because it's waiting for a lock held\n>> by the apply worker.\n>>\n> \n> It seems there is some inconsistency in what you have written for\n> client backends/tablesync worker vs. apply worker. The above text\n> seems to be saying that the client backend and table sync worker are\n> waiting on a \"subscription row in pg_subscription\" and the apply\n> worker is operating on \"pg_subscription_rel\". So, if that is true then\n> they shouldn't get stuck.\n> \n> I think here client backend and tablesync worker seems to be blocked\n> for a lock on pg_subscription_rel.\n> \n\nNot really, they are all locking the subscription. All the locks are on\nclassid=6100, which is pg_subscription:\n\n test=# select 6100::regclass;\n regclass\n -----------------\n pg_subscription\n (1 row)\n\nThe thing is, the tablesync workers call UpdateSubscriptionRelState,\nwhich locks the pg_subscription catalog at the very beginning:\n\n LockSharedObject(SubscriptionRelationId, ...);\n\nSo that's the issue. I haven't explored why it's done this way, and\nthere's no comment explaining locking the subscriptions is needed ...\n\n>> The tablesync workers can't proceed because their lock request is stuck\n>> behind the AccessExclusiveLock request.\n>>\n>> And the apply worker can't proceed, because it's waiting for status\n>> update from the tablesync workers.\n>>\n> \n> This part is not clear to me because\n> wait_for_relation_state_change()->GetSubscriptionRelState() seems to\n> be releasing the lock while closing the relation. 
Am, I missing\n> something?\n> \n\nI think you're missing the fact that GetSubscriptionRelState() acquires\nand releases the lock on pg_subscription_rel, but that's not the lock\ncausing the issue. The problem is the lock on the pg_subscription row.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 21 Nov 2023 14:26:08 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 6:56 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 11/21/23 14:16, Amit Kapila wrote:\n> > On Tue, Nov 21, 2023 at 5:17 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >\n> > It seems there is some inconsistency in what you have written for\n> > client backends/tablesync worker vs. apply worker. The above text\n> > seems to be saying that the client backend and table sync worker are\n> > waiting on a \"subscription row in pg_subscription\" and the apply\n> > worker is operating on \"pg_subscription_rel\". So, if that is true then\n> > they shouldn't get stuck.\n> >\n> > I think here client backend and tablesync worker seems to be blocked\n> > for a lock on pg_subscription_rel.\n> >\n>\n> Not really, they are all locking the subscription. All the locks are on\n> classid=6100, which is pg_subscription:\n>\n> test=# select 6100::regclass;\n> regclass\n> -----------------\n> pg_subscription\n> (1 row)\n>\n> The thing is, the tablesync workers call UpdateSubscriptionRelState,\n> which locks the pg_subscription catalog at the very beginning:\n>\n> LockSharedObject(SubscriptionRelationId, ...);\n>\n> So that's the issue. I haven't explored why it's done this way, and\n> there's no comment explaining locking the subscriptions is needed ...\n>\n\nI think it prevents concurrent drop of rel during the REFRESH operation.\n\n> >> The tablesync workers can't proceed because their lock request is stuck\n> >> behind the AccessExclusiveLock request.\n> >>\n> >> And the apply worker can't proceed, because it's waiting for status\n> >> update from the tablesync workers.\n> >>\n> >\n> > This part is not clear to me because\n> > wait_for_relation_state_change()->GetSubscriptionRelState() seems to\n> > be releasing the lock while closing the relation. 
Am, I missing\n> > something?\n> >\n>\n> I think you're missing the fact that GetSubscriptionRelState() acquires\n> and releases the lock on pg_subscription_rel, but that's not the lock\n> causing the issue. The problem is the lock on the pg_subscription row.\n>\n\nOkay. IIUC, what's going on here is that the apply worker acquires\nAccessShareLock on pg_subscription to update rel state for one of the\ntables say tbl-1, and then for another table say tbl-2, it started\nwaiting for a state change via wait_for_relation_state_change(). I\nthink here the fix is to commit the transaction before we go for a\nwait. I guess we need something along the lines of what is proposed in\n[1] though we have solved the problem in that thread in some other\nway..\n\n[1] - https://www.postgresql.org/message-id/1412708.1674417574%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Nov 2023 16:08:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
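The fix proposed above — commit the open transaction before going to sleep, so no lock on pg_subscription is held across the wait — can be modeled as follows. This is a simplified, runnable Python sketch; the class and helper names are invented for illustration, and the real change is to the C code of PostgreSQL's tablesync wait loop:

```python
# Simplified model of the proposed fix: never sleep while a transaction
# (and therefore the catalog lock it holds) is open. All names here are
# invented for this sketch.

class ApplyWorkerModel:
    def __init__(self):
        self.in_transaction = False
        self.slept_holding_lock = False  # what the buggy version would do

    def start_transaction(self):
        self.in_transaction = True       # stands in for taking catalog locks

    def commit_transaction(self):
        self.in_transaction = False      # commit releases all locks

    def wait_for_state(self, get_state, expected, max_polls=10):
        self.start_transaction()
        for _ in range(max_polls):
            if get_state() == expected:
                self.commit_transaction()
                return True
            # The fix: commit (releasing locks) before the latch sleep.
            self.commit_transaction()
            if self.in_transaction:
                self.slept_holding_lock = True
            # (WaitLatch() would go here.)
            self.start_transaction()
        self.commit_transaction()
        return False

states = iter(["datasync", "syncwait", "ready"])
w = ApplyWorkerModel()
print(w.wait_for_state(lambda: next(states), "ready"))  # True
print(w.slept_holding_lock)                             # False
```

With the transaction committed before each sleep, the queued ALTER SUBSCRIPTION ... REFRESH PUBLICATION can acquire its AccessExclusiveLock while the apply worker waits, the tablesync workers can then update their state, and the cycle never forms.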
{
"msg_contents": "On 11/22/23 11:38, Amit Kapila wrote:\n> On Tue, Nov 21, 2023 at 6:56 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 11/21/23 14:16, Amit Kapila wrote:\n>>> On Tue, Nov 21, 2023 at 5:17 PM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>>\n>>>\n>>> It seems there is some inconsistency in what you have written for\n>>> client backends/tablesync worker vs. apply worker. The above text\n>>> seems to be saying that the client backend and table sync worker are\n>>> waiting on a \"subscription row in pg_subscription\" and the apply\n>>> worker is operating on \"pg_subscription_rel\". So, if that is true then\n>>> they shouldn't get stuck.\n>>>\n>>> I think here client backend and tablesync worker seems to be blocked\n>>> for a lock on pg_subscription_rel.\n>>>\n>>\n>> Not really, they are all locking the subscription. All the locks are on\n>> classid=6100, which is pg_subscription:\n>>\n>> test=# select 6100::regclass;\n>> regclass\n>> -----------------\n>> pg_subscription\n>> (1 row)\n>>\n>> The thing is, the tablesync workers call UpdateSubscriptionRelState,\n>> which locks the pg_subscription catalog at the very beginning:\n>>\n>> LockSharedObject(SubscriptionRelationId, ...);\n>>\n>> So that's the issue. I haven't explored why it's done this way, and\n>> there's no comment explaining locking the subscriptions is needed ...\n>>\n> \n> I think it prevents concurrent drop of rel during the REFRESH operation.\n> \n\nYes. Or maybe some other concurrent DDL on the relations included in the\nsubscription.\n\n>>>> The tablesync workers can't proceed because their lock request is stuck\n>>>> behind the AccessExclusiveLock request.\n>>>>\n>>>> And the apply worker can't proceed, because it's waiting for status\n>>>> update from the tablesync workers.\n>>>>\n>>>\n>>> This part is not clear to me because\n>>> wait_for_relation_state_change()->GetSubscriptionRelState() seems to\n>>> be releasing the lock while closing the relation. 
Am, I missing\n>>> something?\n>>>\n>>\n>> I think you're missing the fact that GetSubscriptionRelState() acquires\n>> and releases the lock on pg_subscription_rel, but that's not the lock\n>> causing the issue. The problem is the lock on the pg_subscription row.\n>>\n> \n> Okay. IIUC, what's going on here is that the apply worker acquires\n> AccessShareLock on pg_subscription to update rel state for one of the\n> tables say tbl-1, and then for another table say tbl-2, it started\n> waiting for a state change via wait_for_relation_state_change(). I\n> think here the fix is to commit the transaction before we go for a\n> wait. I guess we need something along the lines of what is proposed in\n> [1] though we have solved the problem in that thread in some other\n> way..\n> \n\nPossibly. I haven't checked if the commit might have some unexpected\nconsequences, but I can confirm I can no longer reproduce the deadlock\nwith the patch applied.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 22 Nov 2023 12:21:25 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 4:51 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 11/22/23 11:38, Amit Kapila wrote:\n> >\n> > Okay. IIUC, what's going on here is that the apply worker acquires\n> > AccessShareLock on pg_subscription to update rel state for one of the\n> > tables say tbl-1, and then for another table say tbl-2, it started\n> > waiting for a state change via wait_for_relation_state_change(). I\n> > think here the fix is to commit the transaction before we go for a\n> > wait. I guess we need something along the lines of what is proposed in\n> > [1] though we have solved the problem in that thread in some other\n> > way..\n> >\n>\n> Possibly. I haven't checked if the commit might have some unexpected\n> consequences, but I can confirm I can no longer reproduce the deadlock\n> with the patch applied.\n>\n\nThanks for the verification. Offhand, I don't see any problem with\ndoing a commit at that place but will try to think some more about it.\nI think we may want to call pgstat_report_stat(false) after commit to\navoid a long delay in stats.\n\nI haven't verified but I think this will be a problem in back-branches as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 23 Nov 2023 14:54:03 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "\nOn 11/23/23 10:24, Amit Kapila wrote:\n> On Wed, Nov 22, 2023 at 4:51 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 11/22/23 11:38, Amit Kapila wrote:\n>>>\n>>> Okay. IIUC, what's going on here is that the apply worker acquires\n>>> AccessShareLock on pg_subscription to update rel state for one of the\n>>> tables say tbl-1, and then for another table say tbl-2, it started\n>>> waiting for a state change via wait_for_relation_state_change(). I\n>>> think here the fix is to commit the transaction before we go for a\n>>> wait. I guess we need something along the lines of what is proposed in\n>>> [1] though we have solved the problem in that thread in some other\n>>> way..\n>>>\n>>\n>> Possibly. I haven't checked if the commit might have some unexpected\n>> consequences, but I can confirm I can no longer reproduce the deadlock\n>> with the patch applied.\n>>\n> \n> Thanks for the verification. Offhand, I don't see any problem with\n> doing a commit at that place but will try to think some more about it.\n> I think we may want to call pgstat_report_stat(false) after commit to\n> avoid a long delay in stats.\n> \n\nMakes sense.\n\n> I haven't verified but I think this will be a problem in\n> back-branches as well.\n> \n\nYes. I haven't tried but I don't see why backbranches wouldn't have the\nsame issue.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Nov 2023 17:45:34 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "Hi,\n\nI tried to reproduce the issue and was able to reproduce it with\nscripts shared by Tomas.\nI tried testing it from PG17 to PG 11. This issue is reproducible for\neach version.\n\nNext I would try to test with the patch in the thread shared by Amit.\n\nThanks,\nShlok Kumar Kyal\n\n\n",
"msg_date": "Fri, 24 Nov 2023 09:45:45 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "Hi,\n\n> I tried to reproduce the issue and was able to reproduce it with\n> scripts shared by Tomas.\n> I tried testing it from PG17 to PG 11. This issue is reproducible for\n> each version.\n>\n> Next I would try to test with the patch in the thread shared by Amit.\n\nI have created the v1 patch to resolve the issue. Have tested the\npatch on HEAD to PG12.\nThe same patch applies to all the versions. The changes are similar to\nthe one posted in the thread\nhttps://www.postgresql.org/message-id/1412708.1674417574%40sss.pgh.pa.us\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Fri, 24 Nov 2023 17:05:00 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 5:05 PM Shlok Kyal <[email protected]> wrote:\n>\n> > I tried to reproduce the issue and was able to reproduce it with\n> > scripts shared by Tomas.\n> > I tried testing it from PG17 to PG 11. This issue is reproducible for\n> > each version.\n> >\n> > Next I would try to test with the patch in the thread shared by Amit.\n>\n> I have created the v1 patch to resolve the issue. Have tested the\n> patch on HEAD to PG12.\n> The same patch applies to all the versions. The changes are similar to\n> the one posted in the thread\n> https://www.postgresql.org/message-id/1412708.1674417574%40sss.pgh.pa.us\n>\n\n(it's quite likely we hold lock on\n+ * pg_replication_origin, which the sync worker will need\n+ * to update).\n\nThis part of the comment is stale and doesn't hold true. You need to\nupdate the reason based on the latest problem discovered in this\nthread. I think you can compare the timing of regression tests in\nsubscription, with and without the patch to show there is no\nregression. And probably some tests with a large number of tables for\nsync with very little data.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 25 Nov 2023 16:35:23 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "Hi,\n\n> thread. I think you can compare the timing of regression tests in\n> subscription, with and without the patch to show there is no\n> regression. And probably some tests with a large number of tables for\n> sync with very little data.\n\nI have tested the regression test timings for subscription with and\nwithout patch. I also did the timing test for sync of subscription\nwith the publisher for 100 and 1000 tables respectively.\nI have attached the test script and results of the timing test are as follows:\n\nTime taken for test to run in Linux VM\nSummary | Subscription Test (sec)\n| 100 tables in pub and Sub (sec) | 1000 tables in pub and Sub\n(sec)\nWithout patch Release | 95.564\n | 7.877 | 58.919\nWith patch Release | 96.513\n | 6.533 | 45.807\n\nTime Taken for test to run in another Linux VM\nSummary | Subscription Test (sec)\n| 100 tables in pub and Sub (sec) | 1000 tables in pub and Sub\n(sec)\nWithout patch Release | 109.8145\n| 6.4675 | 83.001\nWith patch Release | 113.162\n | 7.947 | 87.113\n\nTime Taken for test to run in Performance Machine Linux\nSummary | Subscription Test (sec)\n| 100 tables in pub and Sub (sec) | 1000 tables in pub and Sub\n(sec)\nWithout patch Release | 115.871\n | 6.656 | 81.157\nWith patch Release | 115.922\n | 6.7305 | 81.1525\n\nthoughts?\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Sat, 2 Dec 2023 21:52:02 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On Sat, Dec 2, 2023 at 9:52 PM Shlok Kyal <[email protected]> wrote:\n>\n> > thread. I think you can compare the timing of regression tests in\n> > subscription, with and without the patch to show there is no\n> > regression. And probably some tests with a large number of tables for\n> > sync with very little data.\n>\n> I have tested the regression test timings for subscription with and\n> without patch. I also did the timing test for sync of subscription\n> with the publisher for 100 and 1000 tables respectively.\n> I have attached the test script and results of the timing test are as follows:\n>\n> Time taken for test to run in Linux VM\n> Summary | Subscription Test (sec)\n> | 100 tables in pub and Sub (sec) | 1000 tables in pub and Sub\n> (sec)\n> Without patch Release | 95.564\n> | 7.877 | 58.919\n> With patch Release | 96.513\n> | 6.533 | 45.807\n>\n> Time Taken for test to run in another Linux VM\n> Summary | Subscription Test (sec)\n> | 100 tables in pub and Sub (sec) | 1000 tables in pub and Sub\n> (sec)\n> Without patch Release | 109.8145\n> | 6.4675 | 83.001\n> With patch Release | 113.162\n> | 7.947 | 87.113\n>\n\nSo, on some machines, it may increase the test timing although not too\nmuch. I think the reason is probably doing the work in multiple\ntransactions for multiple relations. I am wondering that instead of\ncommitting and starting a new transaction before\nwait_for_relation_state_change(), what if we do it inside that\nfunction just before we decide to wait? It is quite possible that in\nmany cases we don't need any wait at all.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Dec 2023 17:07:47 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "\n\nOn 12/4/23 12:37, Amit Kapila wrote:\n> On Sat, Dec 2, 2023 at 9:52 PM Shlok Kyal <[email protected]> wrote:\n>>\n>>> thread. I think you can compare the timing of regression tests in\n>>> subscription, with and without the patch to show there is no\n>>> regression. And probably some tests with a large number of tables for\n>>> sync with very little data.\n>>\n>> I have tested the regression test timings for subscription with and\n>> without patch. I also did the timing test for sync of subscription\n>> with the publisher for 100 and 1000 tables respectively.\n>> I have attached the test script and results of the timing test are as follows:\n>>\n>> Time taken for test to run in Linux VM\n>> Summary | Subscription Test (sec)\n>> | 100 tables in pub and Sub (sec) | 1000 tables in pub and Sub\n>> (sec)\n>> Without patch Release | 95.564\n>> | 7.877 | 58.919\n>> With patch Release | 96.513\n>> | 6.533 | 45.807\n>>\n>> Time Taken for test to run in another Linux VM\n>> Summary | Subscription Test (sec)\n>> | 100 tables in pub and Sub (sec) | 1000 tables in pub and Sub\n>> (sec)\n>> Without patch Release | 109.8145\n>> | 6.4675 | 83.001\n>> With patch Release | 113.162\n>> | 7.947 | 87.113\n>>\n> \n> So, on some machines, it may increase the test timing although not too\n> much. I think the reason is probably doing the work in multiple\n> transactions for multiple relations. I am wondering that instead of\n> committing and starting a new transaction before\n> wait_for_relation_state_change(), what if we do it inside that\n> function just before we decide to wait? It is quite possible that in\n> many cases we don't need any wait at all.\n> \n\nI'm not sure what you mean by \"do it\". What should the function do?\n\nAs for the test results, I very much doubt the differences are not\ncaused simply by random timing variations, or something like that. 
And I\ndon't understand what \"Performance Machine Linux\" is, considering those\ntimings are slower than the other two machines.\n\nAlso, even if it was a bit slower, does it really matter? I mean, the\ncurrent code is wrong, can lead to infinite duration if it happens to\nhit the deadlock. And it's a one-time action, I don't think it's a very\nsensitive in terms of performance.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 4 Dec 2023 13:00:27 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On Mon, Dec 4, 2023 at 5:30 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 12/4/23 12:37, Amit Kapila wrote:\n> > On Sat, Dec 2, 2023 at 9:52 PM Shlok Kyal <[email protected]> wrote:\n> >>\n> >>> thread. I think you can compare the timing of regression tests in\n> >>> subscription, with and without the patch to show there is no\n> >>> regression. And probably some tests with a large number of tables for\n> >>> sync with very little data.\n> >>\n> >> I have tested the regression test timings for subscription with and\n> >> without patch. I also did the timing test for sync of subscription\n> >> with the publisher for 100 and 1000 tables respectively.\n> >> I have attached the test script and results of the timing test are as follows:\n> >>\n> >> Time taken for test to run in Linux VM\n> >> Summary | Subscription Test (sec)\n> >> | 100 tables in pub and Sub (sec) | 1000 tables in pub and Sub\n> >> (sec)\n> >> Without patch Release | 95.564\n> >> | 7.877 | 58.919\n> >> With patch Release | 96.513\n> >> | 6.533 | 45.807\n> >>\n> >> Time Taken for test to run in another Linux VM\n> >> Summary | Subscription Test (sec)\n> >> | 100 tables in pub and Sub (sec) | 1000 tables in pub and Sub\n> >> (sec)\n> >> Without patch Release | 109.8145\n> >> | 6.4675 | 83.001\n> >> With patch Release | 113.162\n> >> | 7.947 | 87.113\n> >>\n> >\n> > So, on some machines, it may increase the test timing although not too\n> > much. I think the reason is probably doing the work in multiple\n> > transactions for multiple relations. I am wondering that instead of\n> > committing and starting a new transaction before\n> > wait_for_relation_state_change(), what if we do it inside that\n> > function just before we decide to wait? It is quite possible that in\n> > many cases we don't need any wait at all.\n> >\n>\n> I'm not sure what you mean by \"do it\". 
What should the function do?\n>\n\nI mean to commit the open transaction at the below place in\nwait_for_relation_state_change()\n\nwait_for_relation_state_change()\n{\n...\n-- commit the xact\nWaitLatch();\n...\n}\n\nThen start a new transaction after the wait is over. This is just to test whether it\nimproves the difference in regression test timing.\n\n> As for the test results, I very much doubt the differences are not\n> caused simply by random timing variations, or something like that. And I\n> don't understand what \"Performance Machine Linux\" is, considering those\n> timings are slower than the other two machines.\n>\n> Also, even if it was a bit slower, does it really matter? I mean, the\n> current code is wrong, can lead to infinite duration if it happens to\n> hit the deadlock. And it's a one-time action, I don't think it's a very\n> sensitive in terms of performance.\n>\n\nYeah, I see that point, but I am trying to evaluate whether we can avoid an\nincrease in regression test timing, at least for HEAD. The tests\nare done in release mode, so I suspect there could be a slightly\nbigger gap in debug mode (we can check that once though), which may hit\ndevelopers running regressions quite often in their development\nenvironments. Now, if there is no easy way to avoid the increase in\nregression test timing, we still have to fix the problem, so we have\nto take the hit of some increase in time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Dec 2023 17:41:34 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "Hi,\n\n> As for the test results, I very much doubt the differences are not\n> caused simply by random timing variations, or something like that. And I\n> don't understand what \"Performance Machine Linux\" is, considering those\n> timings are slower than the other two machines.\n\nThe machine has Total Memory of 755.536 GB, 120 CPUs and RHEL 7 Operating System\nAlso find the detailed info of the performance machine attached.\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Tue, 5 Dec 2023 12:44:23 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On 12/5/23 08:14, Shlok Kyal wrote:\n> Hi,\n> \n>> As for the test results, I very much doubt the differences are not\n>> caused simply by random timing variations, or something like that. And I\n>> don't understand what \"Performance Machine Linux\" is, considering those\n>> timings are slower than the other two machines.\n> \n> The machine has Total Memory of 755.536 GB, 120 CPUs and RHEL 7 Operating System\n> Also find the detailed info of the performance machine attached.\n> \n\nThanks for the info. I don't think the tests really benefit from this\nmuch resources, I would be rather surprised if it was faster beyond 8\ncores or so. The CPU frequency likely matters much more. Which probably\nexplains why this machine was the slowest.\n\nAlso, I wonder how much the results vary between the runs. I suppose you\nonly did s single run for each, right?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 5 Dec 2023 12:48:31 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On Tue, 5 Dec 2023 at 17:18, Tomas Vondra <[email protected]> wrote:\n>\n> On 12/5/23 08:14, Shlok Kyal wrote:\n> > Hi,\n> >\n> >> As for the test results, I very much doubt the differences are not\n> >> caused simply by random timing variations, or something like that. And I\n> >> don't understand what \"Performance Machine Linux\" is, considering those\n> >> timings are slower than the other two machines.\n> >\n> > The machine has Total Memory of 755.536 GB, 120 CPUs and RHEL 7 Operating System\n> > Also find the detailed info of the performance machine attached.\n> >\n>\n> Thanks for the info. I don't think the tests really benefit from this\n> much resources, I would be rather surprised if it was faster beyond 8\n> cores or so. The CPU frequency likely matters much more. Which probably\n> explains why this machine was the slowest.\n>\n> Also, I wonder how much the results vary between the runs. I suppose you\n> only did s single run for each, right?\n\nI did 10 runs for each of the cases and reported the median result in\nthe previous thread.\nI have documented the result of the runs and have attached the same here.\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Wed, 6 Dec 2023 10:55:51 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
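The reporting convention described above — taking the median over ten repeated timing runs — can be reproduced in a few lines (the run values below are invented for illustration; the real measurements are in the attached results):

```python
# Report the median of repeated timing runs; unlike the mean, the median is
# robust to an occasional slow outlier run. Values here are illustrative only.
from statistics import median

runs_sec = [96.1, 95.8, 97.3, 95.9, 112.4, 96.0, 95.7, 96.3, 96.2, 95.9]
print(f"median of {len(runs_sec)} runs: {median(runs_sec):.2f} s")
# The 112.4 s outlier barely moves the median, but inflates the mean:
print(f"mean: {sum(runs_sec) / len(runs_sec):.2f} s")
```

This is why reporting the median of many runs, as done here, gives a steadier picture than a single measurement when regression-test timings fluctuate between runs.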
{
"msg_contents": "Hi,\n\n> I mean to commit the open transaction at the below place in\n> wait_for_relation_state_change()\n>\n> wait_for_relation_state_change()\n> {\n> ...\n> -- commit the xact\n> WaitLatch();\n> ...\n> }\n>\n> Then start after the wait is over. This is just to test whether it\n> improves the difference in regression test timing.\n\nI tried the above approach and observed that the performance of this\napproach is nearly same as the previous approach.\n\nFor Linux VM:\nSummary | Subscription | 100 tables in pub | 1000 tables in pub\n | Test (sec) | and Sub (sec) | and Sub (sec)\n------------------------------------------------------------------------------\nold patch | 107.4545 | 6.911 | 77.918\nalternate | 108.3985 | 6.9835 | 78.111\napproach\n\nFor Performance Machine:\nSummary | Subscription | 100 tables in pub | 1000 tables in pub\n | Test (sec) | and Sub (sec) | and Sub (sec)\n------------------------------------------------------------------------------\nold patch | 115.922 | 6.7305 | 81.1525\nalternate | 115.8215 | 6.7685 | 81.2335\napproach\n\nI have attached the patch for this approach as 'alternate_approach.patch'.\n\nSince the performance is the same, I think that the previous approach\nis better. As in this approach we are using CommitTransactionCommand()\nand StartTransactionCommand() inside a 'for loop'.\n\nI also fixed the comment in previous approach and attached here as\n'v2-0001-Deadlock-when-apply-worker-tablesync-worker-and-c.patch'\n\nThanks and Regards\n\n\nShlok Kyal",
"msg_date": "Thu, 7 Dec 2023 11:21:29 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On Thu, Dec 7, 2023 at 11:21 AM Shlok Kyal <[email protected]> wrote:\n>\n> > I mean to commit the open transaction at the below place in\n> > wait_for_relation_state_change()\n> >\n> > wait_for_relation_state_change()\n> > {\n> > ...\n> > -- commit the xact\n> > WaitLatch();\n> > ...\n> > }\n> >\n> > Then start after the wait is over. This is just to test whether it\n> > improves the difference in regression test timing.\n>\n> I tried the above approach and observed that the performance of this\n> approach is nearly same as the previous approach.\n>\n\nThen let's go with the original patch only. BTW, it took almost the\nsame time (105 wallclock secs) in my environment (CentOs VM) to run\ntests in src/test/subscription both with and without the patch. I took\na median of five runs. I have slightly adjusted the comments and\ncommit message in the attached. If you are fine with this, we can\ncommit and backpatch this.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 8 Dec 2023 17:15:00 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "Hi,\n\n> Then let's go with the original patch only. BTW, it took almost the\n> same time (105 wallclock secs) in my environment (CentOs VM) to run\n> tests in src/test/subscription both with and without the patch. I took\n> a median of five runs. I have slightly adjusted the comments and\n> commit message in the attached. If you are fine with this, we can\n> commit and backpatch this.\n\nI have tested the patch for all the branches from PG 17 to PG 12.\nThe same patch applies cleanly on all branches. Also, the same patch\nresolves the issue on all the branches.\nI ran all the tests and all the tests passed on each branch.\n\nI also reviewed the patch and it looks good to me.\n\nThanks and Regards,\nShlok Kyal\n\n\n",
"msg_date": "Fri, 8 Dec 2023 19:16:47 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 7:16 PM Shlok Kyal <[email protected]> wrote:\n>\n> > Then let's go with the original patch only. BTW, it took almost the\n> > same time (105 wallclock secs) in my environment (CentOs VM) to run\n> > tests in src/test/subscription both with and without the patch. I took\n> > a median of five runs. I have slightly adjusted the comments and\n> > commit message in the attached. If you are fine with this, we can\n> > commit and backpatch this.\n>\n> I have tested the patch for all the branches from PG 17 to PG 12.\n> The same patch applies cleanly on all branches. Also, the same patch\n> resolves the issue on all the branches.\n> I ran all the tests and all the tests passed on each branch.\n>\n> I also reviewed the patch and it looks good to me.\n>\n\nThanks, I could also reproduce the issue on back branches (tried till\n12), and the fix works. I'll push this on Monday.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 9 Dec 2023 12:16:16 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On Sat, Dec 9, 2023 at 12:16 PM Amit Kapila <[email protected]> wrote:\n>\n> Thanks, I could also reproduce the issue on back branches (tried till\n> 12), and the fix works. I'll push this on Monday.\n>\n\nPeter sent one minor suggestion (to write the check differently for\neasier understanding) offlist which I addressed and pushed the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 11 Dec 2023 11:42:01 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
},
{
"msg_contents": "On 12/11/23 07:12, Amit Kapila wrote:\n> On Sat, Dec 9, 2023 at 12:16 PM Amit Kapila <[email protected]> wrote:\n>>\n>> Thanks, I could also reproduce the issue on back branches (tried till\n>> 12), and the fix works. I'll push this on Monday.\n>>\n> \n> Peter sent one minor suggestion (to write the check differently for\n> easier understanding) offlist which I addressed and pushed the patch.\n> \n\nThanks for taking care of fixing this!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 11 Dec 2023 16:36:08 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: undetected deadlock in ALTER SUBSCRIPTION ... REFRESH PUBLICATION"
}
] |
[
{
"msg_contents": "Hi all,\n\nWas doing a relation size estimation based on pg_class.relpages of the\nrelation and the related objects (index, toast) and noticed that it is not\nupdated for the toast index, for example:\n\nfabrizio=# CREATE TABLE t(c TEXT);\nINSERT INTO t VALUES (repeat('x', (8192^2)::int));\n\nVACUUM (ANALYZE) t;\nCREATE TABLE\nINSERT 0 1\nVACUUM\nfabrizio=# \\x on\nExpanded display is on.\nfabrizio=# SELECT\n c.oid,\n c.relname,\n c.relpages,\n t.relname,\n t.relpages AS toast_pages,\n ci.relname,\n ci.relpages AS toast_index_pages,\n (pg_stat_file(pg_relation_filepath(ci.oid))).size AS toast_index_size\nFROM\n pg_class c\n JOIN pg_class t ON t.oid = c.reltoastrelid\n JOIN pg_index i ON i.indrelid = t.oid\n JOIN pg_class ci ON ci.oid = i.indexrelid\nWHERE\n c.oid = 't'::regclass;\n-[ RECORD 1 ]-----+---------------------\noid | 17787\nrelname | t\nrelpages | 1\nrelname | pg_toast_17787\ntoast_pages | 97\nrelname | pg_toast_17787_index\ntoast_index_pages | 1\ntoast_index_size | 16384\n\nAre there any reasons for toast index relpages not to be updated? 
Or is it\na bug?\n\nRegards,\n\n-- \nFabrízio de Royes Mello",
"msg_date": "Tue, 21 Nov 2023 11:34:08 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_class.relpages not updated for toast index"
}
] |
[
{
"msg_contents": "In commit 97550c0, I added a \"MyProcPid == getpid()\" check in the SIGTERM\nhandler for the startup process. This ensures that processes forked by\nsystem(3) (i.e., for restore_command) that have yet to install their own\nsignal handlers do not call proc_exit() upon receiving SIGTERM. Without\nthis protection, both the startup process and the restore_command process\nmight try to remove themselves from the PGPROC shared array (among other\nthings), which can end badly.\n\nSince then, I've been exploring a more general approach that would offer\nprotection against similar issues in the future. We probably don't want\nsignal handlers in these grandchild processes to touch shared memory at\nall. The attached 0001 is an attempt at adding such protection for all\nhandlers installed via pqsignal(). In short, it stores the actual handler\nfunctions in a separate array, and sigaction() is given a wrapper function\nthat performs the \"MyProcPid == getpid()\" check. If that check fails, the\nwrapper function installs the default signal handler and calls it.\n\nBesides allowing us to revert commit 97550c0 (see attached 0003), this\nwrapper handler could also restore errno, as shown in 0002. Right now,\nindividual signal handlers must do this today as needed, but that seems\neasy to miss and prone to going unnoticed for a long time.\n\nI see two main downsides of this proposal:\n\n* Overhead: The wrapper handler calls a function pointer and getpid(),\n which AFAICT is a real system call on most platforms. That might not be\n a tremendous amount of overhead, but it's not zero, either. I'm\n particularly worried about signal-heavy code like synchronous\n replication. (Are there other areas that should be tested?) 
If this is\n a concern, perhaps we could allow certain processes to opt out of this\n wrapper handler, provided we believe it is unlikely to fork or that the\n handler code is safe to run in grandchild processes.\n\n* Race conditions: With these patches, pqsignal() becomes quite racy when\n used within signal handlers. Specifically, you might get a bogus return\n value. However, there are no in-tree callers of pqsignal() that look at\n the return value (and I don't see any reason they should), and it seems\n unlikely that pqsignal() will be used within signal handlers frequently,\n so this might not be a deal-breaker. I did consider trying to convert\n pqsignal() into a void function, but IIUC that would require an SONAME\n bump. For now, I've just documented the bogosity of the return values.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 21 Nov 2023 15:20:08 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "common signal handler protection"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 03:20:08PM -0600, Nathan Bossart wrote:\n> +#ifdef NSIG\n> +#define PG_NSIG (NSIG)\n> +#else\n> +#define PG_NSIG (64)\t\t\t/* XXX: wild guess */\n> +#endif\n\n> +\tAssert(signo < PG_NSIG);\n\ncfbot seems unhappy with this on Windows. IIUC we need to use\nPG_SIGNAL_COUNT there instead, but I'd like to find a way to have just one\nmacro for all platforms.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 21 Nov 2023 16:40:06 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 04:40:06PM -0600, Nathan Bossart wrote:\n> cfbot seems unhappy with this on Windows. IIUC we need to use\n> PG_SIGNAL_COUNT there instead, but I'd like to find a way to have just one\n> macro for all platforms.\n\nHere's an attempt at fixing the Windows build.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 22 Nov 2023 15:59:44 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-22 15:59:44 -0600, Nathan Bossart wrote:\n> Subject: [PATCH v2 1/3] Check that MyProcPid == getpid() in all signal\n> handlers.\n> \n> In commit 97550c0711, we added a similar check to the SIGTERM\n> handler for the startup process. This commit adds this check to\n> all signal handlers installed with pqsignal(). This is done by\n> using a wrapper function that performs the check before calling the\n> actual handler.\n> \n> The hope is that this will offer more general protection against\n> grandchildren processes inadvertently modifying shared memory due\n> to inherited signal handlers.\n\nIt's a bit unclear here what grandchildren refers to - it's presumably in\nreference to postmaster, but the preceding text doesn't even mention\npostmaster. I'd probably just say \"child processes of the current process.\n\n\n> +\n> +#ifdef PG_SIGNAL_COUNT\t\t\t/* Windows */\n> +#define PG_NSIG (PG_SIGNAL_COUNT)\n> +#elif defined(NSIG)\n> +#define PG_NSIG (NSIG)\n> +#else\n> +#define PG_NSIG (64)\t\t\t/* XXX: wild guess */\n> +#endif\n\nPerhaps worth adding a static assert for at least a few common types of\nsignals being below that value? That way we'd see a potential issue without\nneeding to reach the code path.\n\n\n> +/*\n> + * Except when called with SIG_IGN or SIG_DFL, pqsignal() sets up this function\n> + * as the handler for all signals. This wrapper handler function checks that\n> + * it is called within a process that the server knows about, and not a\n> + * grandchild process forked by system(3), etc.\n\nSimilar comment to earlier - the grandchildren bit seems like a dangling\nreference. 
And also too specific - I think we could encounter this in single\nuser mode as well?\n\nPerhaps it could be reframed to \"postgres processes, as determined by having\ncalled InitProcessGlobals()\"?\n\n\n>This check ensures that such\n> + * grandchildren processes do not modify shared memory, which could be\n> + * detrimental.\n\n\"could be\" seems a bit euphemistic :)\n\n\n> From b77da9c54590a71eb9921d6f16bf4ffb0543e87e Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <[email protected]>\n> Date: Fri, 17 Nov 2023 14:00:12 -0600\n> Subject: [PATCH v2 2/3] Centralize logic for restoring errno in signal\n> handlers.\n> \n> Presently, we rely on each individual signal handler to save the\n> initial value of errno and then restore it before returning if\n> needed. This is easily forgotten and, if missed, often goes\n> undetected for a long time.\n\nIt's also just verbose :)\n\n\n> From 5734e0cf5f00bbd74504b45934f68e1c2c1290bd Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <[email protected]>\n> Date: Fri, 17 Nov 2023 22:09:24 -0600\n> Subject: [PATCH v2 3/3] Revert \"Avoid calling proc_exit() in processes forked\n> by system().\"\n> \n> Thanks to commit XXXXXXXXXX, this check in the SIGTERM handler for\n> the startup process is now obsolete and can be removed. Instead of\n> leaving around the elog(PANIC, ...) calls that are now unlikely to\n> be triggered and the dead function write_stderr_signal_safe(), I've\n> opted to just remove them for now. 
Thanks to modern version\n> control software, it will be trivial to dig those up if they are\n> ever needed in the future.\n> \n> This reverts commit 97550c0711972a9856b5db751539bbaf2f88884c.\n> \n> Reviewed-by: ???\n> Discussion: ???\n> ---\n> src/backend/postmaster/startup.c | 17 +----------------\n> src/backend/storage/ipc/ipc.c | 4 ----\n> src/backend/storage/lmgr/proc.c | 8 --------\n> src/backend/utils/error/elog.c | 28 ----------------------------\n> src/include/utils/elog.h | 6 ------\n> 5 files changed, 1 insertion(+), 62 deletions(-)\n> \n> diff --git a/src/backend/postmaster/startup.c b/src/backend/postmaster/startup.c\n> index 83dbed86b9..f40acd20ff 100644\n> --- a/src/backend/postmaster/startup.c\n> +++ b/src/backend/postmaster/startup.c\n> @@ -19,8 +19,6 @@\n> */\n> #include \"postgres.h\"\n> \n> -#include <unistd.h>\n> -\n> #include \"access/xlog.h\"\n> #include \"access/xlogrecovery.h\"\n> #include \"access/xlogutils.h\"\n> @@ -113,20 +111,7 @@ static void\n> StartupProcShutdownHandler(SIGNAL_ARGS)\n> {\n> \tif (in_restore_command)\n> -\t{\n> -\t\t/*\n> -\t\t * If we are in a child process (e.g., forked by system() in\n> -\t\t * RestoreArchivedFile()), we don't want to call any exit callbacks.\n> -\t\t * The parent will take care of that.\n> -\t\t */\n> -\t\tif (MyProcPid == (int) getpid())\n> -\t\t\tproc_exit(1);\n> -\t\telse\n> -\t\t{\n> -\t\t\twrite_stderr_signal_safe(\"StartupProcShutdownHandler() called in child process\\n\");\n> -\t\t\t_exit(1);\n> -\t\t}\n> -\t}\n> +\t\tproc_exit(1);\n> \telse\n> \t\tshutdown_requested = true;\n> \tWakeupRecovery();\n\nHm. I wonder if this indicates an issue. Do the preceding changes perhaps\nmake it more likely that a child process of the startup process could hang\naround for longer, because the signal we're delivering doesn't terminate child\nprocesses, because we'd just reset the signal handler? 
I think it's fine for\nthe startup process, because we ask the startup process to shut down with\nSIGTERM, which defaults to exiting.\n\nBut we do have a few processes that we do ask to shut down with other\nsignals, wich do not trigger an exit by default, e.g. Checkpointer, archiver,\nwalsender are asked to shut down using SIGUSR2 IIRC. The only one where that\ncould be an issue is archiver, I guess?\n\n\n> diff --git a/src/backend/storage/ipc/ipc.c b/src/backend/storage/ipc/ipc.c\n> index 6591b5d6a8..1904d21795 100644\n> --- a/src/backend/storage/ipc/ipc.c\n> +++ b/src/backend/storage/ipc/ipc.c\n> @@ -103,10 +103,6 @@ static int\ton_proc_exit_index,\n> void\n> proc_exit(int code)\n> {\n> -\t/* not safe if forked by system(), etc. */\n> -\tif (MyProcPid != (int) getpid())\n> -\t\telog(PANIC, \"proc_exit() called in child process\");\n> -\n> \t/* Clean up everything that must be cleaned up */\n> \tproc_exit_prepare(code);\n\n> diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\n> index e9e445bb21..5b663a2997 100644\n> --- a/src/backend/storage/lmgr/proc.c\n> +++ b/src/backend/storage/lmgr/proc.c\n> @@ -806,10 +806,6 @@ ProcKill(int code, Datum arg)\n> \n> \tAssert(MyProc != NULL);\n> \n> -\t/* not safe if forked by system(), etc. */\n> -\tif (MyProc->pid != (int) getpid())\n> -\t\telog(PANIC, \"ProcKill() called in child process\");\n> -\n> \t/* Make sure we're out of the sync rep lists */\n> \tSyncRepCleanupAtProcExit();\n> \n> @@ -930,10 +926,6 @@ AuxiliaryProcKill(int code, Datum arg)\n> \n> \tAssert(proctype >= 0 && proctype < NUM_AUXILIARY_PROCS);\n> \n> -\t/* not safe if forked by system(), etc. */\n> -\tif (MyProc->pid != (int) getpid())\n> -\t\telog(PANIC, \"AuxiliaryProcKill() called in child process\");\n> -\n> \tauxproc = &AuxiliaryProcs[proctype];\n> \n> \tAssert(MyProc == auxproc);\n\nI think we should leave these checks. It's awful to debug situations where a\nproc gets reused and the cost of the checks is irrelevant. 
The checks don't\njust protect against child processes, they also protect e.g. against calling\nthose functions twice (we IIRC had cases of that).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:59:45 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "Thanks for taking a look.\n\nOn Wed, Nov 22, 2023 at 02:59:45PM -0800, Andres Freund wrote:\n> On 2023-11-22 15:59:44 -0600, Nathan Bossart wrote:\n>> +/*\n>> + * Except when called with SIG_IGN or SIG_DFL, pqsignal() sets up this function\n>> + * as the handler for all signals. This wrapper handler function checks that\n>> + * it is called within a process that the server knows about, and not a\n>> + * grandchild process forked by system(3), etc.\n> \n> Similar comment to earlier - the grandchildren bit seems like a dangling\n> reference. And also too specific - I think we could encounter this in single\n> user mode as well?\n> \n> Perhaps it could be reframed to \"postgres processes, as determined by having\n> called InitProcessGlobals()\"?\n\nEh, apparently my attempt at being clever didn't pan out. I like your idea\nof specifying InitProcessGlobals(), but I might also include \"e.g., client\nbackends\", too.\n\n> Hm. I wonder if this indicates an issue. Do the preceding changes perhaps\n> make it more likely that a child process of the startup process could hang\n> around for longer, because the signal we're delivering doesn't terminate child\n> processes, because we'd just reset the signal handler? I think it's fine for\n> the startup process, because we ask the startup process to shut down with\n> SIGTERM, which defaults to exiting.\n> \n> But we do have a few processes that we do ask to shut down with other\n> signals, wich do not trigger an exit by default, e.g. Checkpointer, archiver,\n> walsender are asked to shut down using SIGUSR2 IIRC. The only one where that\n> could be an issue is archiver, I guess?\n\nThis did cross my mind. AFAICT most default handlers already trigger an\nexit (including SIGUSR2), and for the ones that don't, we'd just end up in\nthe same situation as if the signal arrived a moment later after the child\nprocess has installed its own handlers. 
And postmaster doesn't send\ncertain signals (e.g., SIGHUP) to the whole process group, so we don't have\nthe opposite problem where things like reloading configuration files\nterminates these child processes.\n\nSo, I didn't notice any potential issues. Did you have anything else in\nmind?\n\n>> -\t/* not safe if forked by system(), etc. */\n>> -\tif (MyProc->pid != (int) getpid())\n>> -\t\telog(PANIC, \"AuxiliaryProcKill() called in child process\");\n>> -\n>> \tauxproc = &AuxiliaryProcs[proctype];\n>> \n>> \tAssert(MyProc == auxproc);\n> \n> I think we should leave these checks. It's awful to debug situations where a\n> proc gets reused and the cost of the checks is irrelevant. The checks don't\n> just protect against child processes, they also protect e.g. against calling\n> those functions twice (we IIRC had cases of that).\n\nSure, that's no problem.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Nov 2023 16:16:25 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "Here is a new patch set with feedback addressed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 28 Nov 2023 15:39:55 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-28 15:39:55 -0600, Nathan Bossart wrote:\n> From e4bea5353c2685457545b67396095e9b96156982 Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <[email protected]>\n> Date: Tue, 28 Nov 2023 14:58:20 -0600\n> Subject: [PATCH v3 1/3] Check that MyProcPid == getpid() in all signal\n> handlers.\n> \n> In commit 97550c0711, we added a similar check to the SIGTERM\n> handler for the startup process. This commit adds this check to\n> all signal handlers installed with pqsignal(). This is done by\n> using a wrapper function that performs the check before calling the\n> actual handler.\n> \n> The hope is that this will offer more general protection against\n> child processes of Postgres backends inadvertently modifying shared\n> memory due to inherited signal handlers. Another potential\n> follow-up improvement is to use this wrapper handler function to\n> restore errno instead of relying on each individual handler\n> function to do so.\n> \n> This commit makes the changes in commit 97550c0711 obsolete but\n> leaves reverting it for a follow-up commit.\n\nFor a moment I was, wrongly, worried this would break signal handlers we\nintentionally inherit from postmaster. It's fine though, because we block\nsignals in fork_process() until somewhere in InitPostmasterChild(), after\nwe've called InitProcessGlobals(). But perhaps that should be commented upon\nsomewhere?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 28 Nov 2023 18:37:50 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-27 16:16:25 -0600, Nathan Bossart wrote:\n> On Wed, Nov 22, 2023 at 02:59:45PM -0800, Andres Freund wrote:\n> > Hm. I wonder if this indicates an issue. Do the preceding changes perhaps\n> > make it more likely that a child process of the startup process could hang\n> > around for longer, because the signal we're delivering doesn't terminate child\n> > processes, because we'd just reset the signal handler? I think it's fine for\n> > the startup process, because we ask the startup process to shut down with\n> > SIGTERM, which defaults to exiting.\n> > \n> > But we do have a few processes that we do ask to shut down with other\n> > signals, wich do not trigger an exit by default, e.g. Checkpointer, archiver,\n> > walsender are asked to shut down using SIGUSR2 IIRC. The only one where that\n> > could be an issue is archiver, I guess?\n> \n> This did cross my mind. AFAICT most default handlers already trigger an\n> exit (including SIGUSR2), and for the ones that don't, we'd just end up in\n> the same situation as if the signal arrived a moment later after the child\n> process has installed its own handlers. And postmaster doesn't send\n> certain signals (e.g., SIGHUP) to the whole process group, so we don't have\n> the opposite problem where things like reloading configuration files\n> terminates these child processes.\n> \n> So, I didn't notice any potential issues. Did you have anything else in\n> mind?\n\nNo, I just was wondering about issues in this area. I couldn't immediately\nthink of a concrete scenario, so I thought I'd mention it here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 28 Nov 2023 18:38:56 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 06:37:50PM -0800, Andres Freund wrote:\n> For a moment I was, wrongly, worried this would break signal handlers we\n> intentionally inherit from postmaster. It's fine though, because we block\n> signals in fork_process() until somewhere in InitPostmasterChild(), after\n> we've called InitProcessGlobals(). But perhaps that should be commented upon\n> somewhere?\n\nGood call. I expanded on the MyProcPid assertion in wrapper_handler() a\nbit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 28 Nov 2023 21:16:52 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "rebased for cfbot\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Dec 2023 11:32:47 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 03:20:08PM -0600, Nathan Bossart wrote:\n> * Overhead: The wrapper handler calls a function pointer and getpid(),\n> which AFAICT is a real system call on most platforms. That might not be\n> a tremendous amount of overhead, but it's not zero, either. I'm\n> particularly worried about signal-heavy code like synchronous\n> replication. (Are there other areas that should be tested?) If this is\n> a concern, perhaps we could allow certain processes to opt out of this\n> wrapper handler, provided we believe it is unlikely to fork or that the\n> handler code is safe to run in grandchild processes.\n\nI finally spent some time trying to measure this overhead. Specifically, I\nsent many, many SIGUSR2 signals to postmaster, which just uses\ndummy_handler(), i.e., does nothing. I was just barely able to get\nwrapper_handler() to show up in the first page of 'perf top' in this\nextreme case, which leads me to think that the overhead might not be a\nproblem.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Feb 2024 20:39:41 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-06 20:39:41 -0600, Nathan Bossart wrote:\n> On Tue, Nov 21, 2023 at 03:20:08PM -0600, Nathan Bossart wrote:\n> > * Overhead: The wrapper handler calls a function pointer and getpid(),\n> > which AFAICT is a real system call on most platforms. That might not be\n> > a tremendous amount of overhead, but it's not zero, either. I'm\n> > particularly worried about signal-heavy code like synchronous\n> > replication. (Are there other areas that should be tested?) If this is\n> > a concern, perhaps we could allow certain processes to opt out of this\n> > wrapper handler, provided we believe it is unlikely to fork or that the\n> > handler code is safe to run in grandchild processes.\n> \n> I finally spent some time trying to measure this overhead. Specifically, I\n> sent many, many SIGUSR2 signals to postmaster, which just uses\n> dummy_handler(), i.e., does nothing. I was just barely able to get\n> wrapper_handler() to show up in the first page of 'perf top' in this\n> extreme case, which leads me to think that the overhead might not be a\n> problem.\n\nThat's what I'd expect. Signal delivery is fairly heavyweight, getpid() is one\nof the cheapest system calls (IIRC only beat by close() of an invalid FD on\nrecent-ish linux). If it were to become an issue, we'd much better spend our\ntime reducing the millions of signals/sec that'd have to involve.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Feb 2024 18:48:53 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "On Tue, Feb 06, 2024 at 06:48:53PM -0800, Andres Freund wrote:\n> On 2024-02-06 20:39:41 -0600, Nathan Bossart wrote:\n>> I finally spent some time trying to measure this overhead. Specifically, I\n>> sent many, many SIGUSR2 signals to postmaster, which just uses\n>> dummy_handler(), i.e., does nothing. I was just barely able to get\n>> wrapper_handler() to show up in the first page of 'perf top' in this\n>> extreme case, which leads me to think that the overhead might not be a\n>> problem.\n> \n> That's what I'd expect. Signal delivery is fairly heavyweight, getpid() is one\n> of the cheapest system calls (IIRC only beat by close() of an invalid FD on\n> recent-ish linux). If it were to become an issue, we'd much better spend our\n> time reducing the millions of signals/sec that'd have to involve.\n\nIndeed.\n\nI'd like to get this committed (to HEAD only) in the next few weeks. TBH\nI'm not wild about the weird caveats (e.g., race conditions when pqsignal()\nis called within a signal handler), but I also think it is unlikely that\nthey cause any issues in practice. Please do let me know if you have any\nconcerns about this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Feb 2024 11:06:50 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "Sorry for the noise.\n\nOn Wed, Feb 07, 2024 at 11:06:50AM -0600, Nathan Bossart wrote:\n> I'd like to get this committed (to HEAD only) in the next few weeks. TBH\n> I'm not wild about the weird caveats (e.g., race conditions when pqsignal()\n> is called within a signal handler), but I also think it is unlikely that\n> they cause any issues in practice. Please do let me know if you have any\n> concerns about this.\n\nPerhaps we should add a file global bool that is only set during\nwrapper_handler(). Then we could Assert() or elog(ERROR, ...) if\npqsignal() is called with it set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Feb 2024 11:15:54 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-07 11:15:54 -0600, Nathan Bossart wrote:\n> On Wed, Feb 07, 2024 at 11:06:50AM -0600, Nathan Bossart wrote:\n> > I'd like to get this committed (to HEAD only) in the next few weeks. TBH\n> > I'm not wild about the weird caveats (e.g., race conditions when pqsignal()\n> > is called within a signal handler), but I also think it is unlikely that\n> > they cause any issues in practice. Please do let me know if you have any\n> > concerns about this.\n\nI don't.\n\n\n> Perhaps we should add a file global bool that is only set during\n> wrapper_handler(). Then we could Assert() or elog(ERROR, ...) if\n> pqsignal() is called with it set.\n\nIn older branches that might have been harder (due to forking from a signal\nhandler and non-fatal errors thrown from signal handlers), but these days I\nthink that should work.\n\nFWIW, I don't think elog(ERROR) would be appropriate, that'd be jumping out of\na signal handler :)\n\n\nIf it were just for the purpose of avoiding the issue you brought up it might\nnot quite be worth it - but there are a lot of things we want to forbid in a\nsignal handler. Memory allocations, acquiring locks, throwing non-panic\nerrors, etc. That's one of the main reasons I like a common wrapper signal\nhandler.\n\n\nWhich reminded me of https://postgr.es/m/87msstvper.fsf%40163.com - the set of\nthings we want to forbid are similar. I'm not sure there's really room to\nharmonize things, but I thought I'd raise it.\n\nPerhaps we should make the state a bitmap and have a single\n AssertNotInState(HOLDING_SPINLOCK | IN_SIGNALHANDLER)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Feb 2024 10:40:50 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "On Wed, Feb 07, 2024 at 10:40:50AM -0800, Andres Freund wrote:\n> On 2024-02-07 11:15:54 -0600, Nathan Bossart wrote:\n>> Perhaps we should add a file global bool that is only set during\n>> wrapper_handler(). Then we could Assert() or elog(ERROR, ...) if\n>> pqsignal() is called with it set.\n> \n> In older branches that might have been harder (due to forking from a signal\n> handler and non-fatal errors thrown from signal handlers), but these days I\n> think that should work.\n> \n> FWIW, I don't think elog(ERROR) would be appropriate, that'd be jumping out of\n> a signal handler :)\n\n*facepalm* Yes.\n\n> If it were just for the purpose of avoiding the issue you brought up it might\n> not quite be worth it - but there are a lot of things we want to forbid in a\n> signal handler. Memory allocations, acquiring locks, throwing non-panic\n> errors, etc. That's one of the main reasons I like a common wrapper signal\n> handler.\n> \n> Which reminded me of https://postgr.es/m/87msstvper.fsf%40163.com - the set of\n> things we want to forbid are similar. I'm not sure there's really room to\n> harmonize things, but I thought I'd raise it.\n> \n> Perhaps we should make the state a bitmap and have a single\n> AssertNotInState(HOLDING_SPINLOCK | IN_SIGNALHANDLER)\n\nSeems worth a try. I'll go ahead and proceed with these patches and leave\nthis improvement for another thread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Feb 2024 14:12:08 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 11:32:47AM -0600, Nathan Bossart wrote:\n> rebased for cfbot\n\nI took a look over each of these. +1 for all. Thank you.\n\n\n",
"msg_date": "Wed, 14 Feb 2024 11:55:43 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: common signal handler protection"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 11:55:43AM -0800, Noah Misch wrote:\n> I took a look over each of these. +1 for all. Thank you.\n\nCommitted. Thanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Feb 2024 17:23:36 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: common signal handler protection"
}
] |
[
{
"msg_contents": "Hi!\n\n\nCurrently libpq sends B(ind), D(escribe), E(execute), S(ync) when \nexecuting a prepared statement.\n\nThe response for that D message is a RowDescription, which doesn't \nchange during prepared\n\nstatement lifetime (with the attributes format being an exception, as \nthey aren't known before execution).\n\n\nIn a presumably very common case of repeatedly executing the same \nstatement, this leads to\n\nboth client and server parsing/sending exactly the same RowDescription \ndata over and over again.\n\n\nInstead, a library user could acquire a statement result RowDescription \nonce (via PQdescribePrepared),\n\nand reuse it in subsequent calls to PQexecPrepared and/or its async \nfriends. Doing it this way saves\n\na measurable amount of CPU for both client and server and saves a lot of \nnetwork traffic. For example:\n\nwhen selecting a single row from a table with 30 columns, where each \ncolumn has a 10-symbol name, and\n\nevery value in a row is a 10-symbol TEXT, I'm seeing the amount of bytes \nsent to the client decreased\n\nby a factor of 2.8, and the CPU time the client spends in userland decreased \nby a factor of ~1.5.\n\n\nThe patch attached adds a family of functions\n\nPQsendQueryPreparedPredescribed, PQgetResultPredescribed, \nPQisBusyPredescribed,\n\nwhich allow a user to do just that.\n\nIf the idea seems reasonable I'd be happy to extend these to \nPQexecPrepared as well and cover it with tests.\n\n\nP.S. This is my first time ever sending a patch via email, so please \ndon't hesitate to point at mistakes\n\nI'm making in the process.",
"msg_date": "Wed, 22 Nov 2023 01:58:48 +0300",
"msg_from": "Ivan Trofimov <[email protected]>",
"msg_from_op": true,
"msg_subject": "WIP: libpq: add a possibility to not send D(escribe) when executing a\n prepared statement"
},
{
"msg_contents": "Hi Ivan,\n\nthank you for the patch.\n\n> On 22 Nov 2023, at 03:58, Ivan Trofimov <[email protected]> wrote:\n> \n> Currently libpq sends B(ind), D(escribe), E(execute), S(ync) when executing a prepared statement.\n> The response for that D message is a RowDescription, which doesn't change during prepared\n> statement lifetime (with the attributes format being an exception, as they aren't known before execution).\nFrom my POV the idea seems reasonable (though I’m not a real libpq expert).\nBTW some drivers also send Describe even before Bind. This creates some fuss for routing connection poolers.\n\n> In a presumably very common case of repeatedly executing the same statement, this leads to\n> both client and server parsing/sending exactly the same RowDescription data over and over again.\n> Instead, a library user could acquire a statement result RowDescription once (via PQdescribePrepared),\n> and reuse it in subsequent calls to PQexecPrepared and/or its async friends.\nBut what if the query result structure changes? Will we detect this error gracefully and return a correct error?\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 22 Nov 2023 11:46:27 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP: libpq: add a possibility to not send D(escribe) when\n executing a prepared statement"
},
{
"msg_contents": ">> In a presumably very common case of repeatedly executing the same statement, this leads to\n>> both client and server parsing/sending exactly the same RowDescription data over and over again.\n>> Instead, a library user could acquire a statement result RowDescription once (via PQdescribePrepared),\n>> and reuse it in subsequent calls to PQexecPrepared and/or its async friends.\n> But what if the query result structure changes? Will we detect this error gracefully and return a correct error?\n\nAfaik changing a prepared statement's result structure is prohibited by\nPostgres server-side, and should always lead to \"ERROR: cached plan\nmust not change result type\", see src/test/regress/sql/plancache.sql.\n\nSo yes, from the libpq point of view this is just a server error, which\nwould be given to the user; the patch shouldn't change any behavior here.\n\nThe claim about this always being a server-side error had better be\nconfirmed by someone from the Postgres team, of course.\n\n\n",
"msg_date": "Sun, 26 Nov 2023 19:38:49 +0300",
"msg_from": "Ivan Trofimov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP: libpq: add a possibility to not send D(escribe) when\n executing a prepared statement"
},
{
"msg_contents": "Ivan Trofimov <[email protected]> writes:\n> Afaik changing prepared statement result structure is prohibited by\n> Postgres server-side, and should always lead to \"ERROR: cached plan\n> must not change result type\", see src/test/regress/sql/plancache.sql.\n\nIndependently of whether we're willing to guarantee that that will\nnever change, I think this patch is basically a bad idea as presented.\nIt adds a whole new set of programmer-error possibilities, and I doubt\nit saves enough in typical cases to justify creating such a foot-gun.\n\nMoreover, it will force us to devote attention to the problem of\nkeeping libpq itself from misbehaving badly in the inevitable\nsituation that somebody passes the wrong tuple descriptor.\nThat is developer effort we could better spend elsewhere.\n\nI say this as somebody who deliberately designed the v3 protocol\nto allow clients to skip Describe steps if they were confident\nthey knew the query result type. I am not disavowing that choice;\nI just think that successful use of that option requires a client-\nside coding structure that allows tying a previously-obtained\ntuple descriptor to the current query with confidence. The proposed\nAPI fails badly at that, or at least leaves it up to the end-user\nprogrammer while providing no tools to help her get it right.\n\nInstead, I'm tempted to suggest having PQprepare/PQexecPrepared\nmaintain a cache that maps statement name to result tupdesc, so that\nthis is all handled internally to libpq. The main hole in that idea\nis that it's possible to issue PREPARE, DEALLOCATE, etc via PQexec, so\nthat a user could possibly redefine a prepared statement without libpq\nnoticing it. Maybe that's not a big issue. 
For a little more safety,\nwe could add some extra step that the library user has to take to\nenable caching of result tupdescs, whereupon it's definitely caller\nerror if she does that and then changes the statement behind our back.\n\nBTW, the submitted patch lacks both documentation and tests.\nFor a feature like this, there is much to be said for writing\nthe documentation *first*. Having to explain how to use something\noften helps you figure out weak spots in your initial design.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Nov 2023 12:13:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP: libpq: add a possibility to not send D(escribe) when\n executing a prepared statement"
},
{
"msg_contents": "Hi Tom! Thank you for considering this.\n\n> It adds a whole new set of programmer-error possibilities, and I doubt\n> it saves enough in typical cases to justify creating such a foot-gun.\nAlthough I agree it adds a considerable amount of complexity, I'd argue \nit doesn't bring the complexity to a new level, since matching queries \nagainst responses is a concept users of asynchronous processing are \nalready familiar with, especially so when pipelining is in play.\n\nIn the case of a single-row select this can easily save as much as half of \nthe network traffic, which is likely to be encrypted/decrypted through \nmultiple hops (a connection-pooler, for example), and has to be \nserialized/parsed on a server, a client, a pooler, etc.\nFor example, I have a service which bombards its Postgres database with \n~20kRPS of \"SELECT * FROM users WHERE id=$1\", with \"users\" being a table \nof just a bunch of textual ids, a couple of timestamps and some enums in \nit, and for that service alone this change would save\n~10 megabytes of server-originated traffic per second, and I have \nhundreds of such services at my workplace.\n\nI can provide more elaborate network/CPU measurements of different \nworkloads if needed.\n\n> Instead, I'm tempted to suggest having PQprepare/PQexecPrepared\n> maintain a cache that maps statement name to result tupdesc, so that\n> this is all handled internally to libpq\nFrom the perspective of someone who maintains a library built on top of \nlibpq and is familiar with other such libraries, I think this is much \neasier done on the level above libpq, simply because there is more \ncontrol over when and how invalidation/eviction is done, and the level \nabove also has a more straightforward way to access the cache across \ndifferent asynchronous processing points.\n\n> I just think that successful use of that option requires a client-\n> side coding structure that allows tying a previously-obtained\n> tuple descriptor to the current query with confidence. The proposed\n> API fails badly at that, or at least leaves it up to the end-user\n> programmer while providing no tools to help her get it right\nI understand your concerns about the usability/safety of what I propose, and I \nthink I have an idea of how to make this much less of a foot-gun: what \nif we add a new function\n\nPGresult *\nPQexecPreparedPredescribed(PGconn *conn,\n const char *stmtName,\n PGresult* description,\n ...);\nwhich requires both a prepared statement and its tuple descriptor (or \nthese two could even be tied together by a struct), and exposes its \nimplementation (basically what I've prototyped in the patch) in \nlibpq-int.h?\n\nThis way users of the synchronous API get a nice thing too, which is \narguably pretty hard to misuse:\nif the description isn't available upfront then there's no point in \nreaching for the added function since PQexecPrepared is strictly better \nperformance/usability-wise, and if the description is available it's \nmost likely cached alongside the statement.\nIf a user still manages to provide an erroneous description, well,\nthey either get a parsing error or the erroneous description back;\nI don't see how libpq could misbehave badly here.\n\nExposure of the implementation in the internal includes gives users a \npossibility to juggle the actual foot-gun, but implies they \nknow very well what they are doing, and are ready to be on their own.\n\nWhat do you think of such an approach?\n\n\n",
"msg_date": "Tue, 28 Nov 2023 13:18:57 +0300",
"msg_from": "Ivan Trofimov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP: libpq: add a possibility to not send D(escribe) when\n executing a prepared statement"
}
] |
[
{
"msg_contents": "=== Background\n\nSomething as simple as the following doesn't respond to cancellation. In\nv15+, any DROP DATABASE will hang as long as it's running:\n\n SELECT dblink_exec(\n $$dbname='$$||current_database()||$$' port=$$||current_setting('port'),\n 'SELECT pg_sleep(15)');\n\nhttps://postgr.es/m/[email protected] proposed a fix back in\n2010. Latches and the libpqsrv facility have changed the server programming\nenvironment since that patch. The problem statement also came up here:\n\nOn Thu, Dec 08, 2022 at 06:08:15PM -0800, Andres Freund wrote:\n> dblink.c uses a lot of other blocking libpq functions, which obviously also\n> isn't ok.\n\n\n=== Key decisions\n\nThis patch adds to the libpqsrv facility. It dutifully follows the existing\nnaming scheme. For greppability, I am favoring renaming new and old functions\nsuch that the libpq name is a substring of this facility's name. That is,\nrename libpqsrv_disconnect to srvPQfinish or maybe libpqsrv_PQfinish(). Now\nis better than later, while pgxn contains no references to libpqsrv. Does\nanyone else have a preference between naming schemes? If nobody does, I'll\nkeep today's libpqsrv_disconnect() style.\n\nI was tempted to add a timeout argument to each libpqsrv function, which would\nallow libpqsrv_get_result_last() to replace pgfdw_get_cleanup_result(). We\ncan always add a timeout-accepting function later and make this thread's\nfunction name a thin wrapper around it. Does anyone feel a mandatory timeout\nargument, accepting -1 for no timeout, would be the right thing?\n\n\n=== Minor topics\n\nIt would be nice to replace libpqrcv_PQgetResult() and friends with the new\nfunctions. I refrained since they use ProcessWalRcvInterrupts(), not\nCHECK_FOR_INTERRUPTS(). Since walreceiver already reaches\nCHECK_FOR_INTERRUPTS() via libpqsrv_connect_params(), things might just work.\n\nThis patch contains a PQexecParams() wrapper, called nowhere in\npostgresql.git. It's inessential, but twelve pgxn modules do mention\nPQexecParams. Just one mentions PQexecPrepared, and none mention PQdescribe*.\n\nThe patch makes postgres_fdw adopt its functions, as part of confirming the\nfunctions are general enough. postgres_fdw create_cursor() has been passing\nthe \"DECLARE CURSOR FOR inner_query\" string for some error messages and just\ninner_query for others. I almost standardized on the longer one, but the test\nsuite checks that. Hence, I standardized on just inner_query.\n\nI wrote this because pglogical needs these functions to cooperate with v15+\nDROP DATABASE (https://github.com/2ndQuadrant/pglogical/issues/418).\n\nThanks,\nnm",
"msg_date": "Tue, 21 Nov 2023 17:29:45 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": true,
"msg_subject": "dblink query interruptibility"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 10:29 AM Noah Misch <[email protected]> wrote:\n>\n> === Background\n>\n> Something as simple as the following doesn't respond to cancellation. In\n> v15+, any DROP DATABASE will hang as long as it's running:\n>\n> SELECT dblink_exec(\n> $$dbname='$$||current_database()||$$' port=$$||current_setting('port'),\n> 'SELECT pg_sleep(15)');\n>\n> https://postgr.es/m/[email protected] proposed a fix back in\n> 2010. Latches and the libpqsrv facility have changed the server programming\n> environment since that patch. The problem statement also came up here:\n>\n> On Thu, Dec 08, 2022 at 06:08:15PM -0800, Andres Freund wrote:\n> > dblink.c uses a lot of other blocking libpq functions, which obviously also\n> > isn't ok.\n>\n>\n> === Key decisions\n>\n> This patch adds to libpqsrv facility.\n\nI found that this patch was committed at d3c5f37dd5 and changed the\nerror message in postgres_fdw slightly. Here's an example:\n\n#1. Begin a new transaction.\n#2. Execute a query accessing to a foreign table, like SELECT * FROM\n<foreign table>\n#3. Terminate the *remote* session corresponding to the foreign table.\n#4. Commit the transaction, and then currently the following error\nmessage is output.\n\n ERROR: FATAL: terminating connection due to administrator command\n server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n invalid socket\n\nPreviously, before commit d3c5f37dd5, the error message at #4 did not\ninclude \"invalid socket.\" Now, after the commit, it does. Is this\nchange intentional?\n\n+ /* Consume whatever data is available from the socket */\n+ if (PQconsumeInput(conn) == 0)\n+ {\n+ /* trouble; expect PQgetResult() to return NULL */\n+ break;\n+ }\n+ }\n+\n+ /* Now we can collect and return the next PGresult */\n+ return PQgetResult(conn);\n\nThis code appears to cause the change. 
When the remote session ends,\nPQconsumeInput() returns 0 and marks conn->socket as invalid.\nSubsequent PQgetResult() calls pqWait(), detecting the invalid socket\nand appending \"invalid socket\" to the error message.\n\nI think the \"invalid socket\" message is unsuitable in this scenario,\nand PQgetResult() should not be called after PQconsumeInput() returns\n0. Thought?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 25 Jan 2024 04:23:39 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dblink query interruptibility"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 04:23:39AM +0900, Fujii Masao wrote:\n> I found that this patch was committed at d3c5f37dd5 and changed the\n> error message in postgres_fdw slightly. Here's an example:\n> \n> #1. Begin a new transaction.\n> #2. Execute a query accessing to a foreign table, like SELECT * FROM\n> <foreign table>\n> #3. Terminate the *remote* session corresponding to the foreign table.\n> #4. Commit the transaction, and then currently the following error\n> message is output.\n> \n> ERROR: FATAL: terminating connection due to administrator command\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> invalid socket\n> \n> Previously, before commit d3c5f37dd5, the error message at #4 did not\n> include \"invalid socket.\" Now, after the commit, it does. Is this\n> change intentional?\n\nNo. It's a consequence of an intentional change in libpq call sequence, but I\nwas unaware I was changing an error message.\n\n> + /* Consume whatever data is available from the socket */\n> + if (PQconsumeInput(conn) == 0)\n> + {\n> + /* trouble; expect PQgetResult() to return NULL */\n> + break;\n> + }\n> + }\n> +\n> + /* Now we can collect and return the next PGresult */\n> + return PQgetResult(conn);\n> \n> This code appears to cause the change. When the remote session ends,\n> PQconsumeInput() returns 0 and marks conn->socket as invalid.\n> Subsequent PQgetResult() calls pqWait(), detecting the invalid socket\n> and appending \"invalid socket\" to the error message.\n> \n> I think the \"invalid socket\" message is unsuitable in this scenario,\n> and PQgetResult() should not be called after PQconsumeInput() returns\n> 0. 
Thought?\n\nThe documentation is absolute about the necessity of PQgetResult():\n\n PQsendQuery cannot be called again (on the same connection) until\n PQgetResult has returned a null pointer, indicating that the command is\n done.\n\n PQgetResult must be called repeatedly until it returns a null pointer,\n indicating that the command is done. (If called when no command is active,\n PQgetResult will just return a null pointer at once.)\n\nSimilar statements also appear in libpq-pipeline-results,\nlibpq-pipeline-errors, and libpq-copy.\n\n\nSo, unless the documentation or my reading of it is wrong there, I think the\nanswer is something other than skipping PQgetResult(). Perhaps PQgetResult()\nshould not append \"invalid socket\" in this case? The extra line is a net\nnegative, though it's not wrong and not awful.\n\nThanks for reporting the change.\n\n\n",
"msg_date": "Wed, 24 Jan 2024 12:45:32 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dblink query interruptibility"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 5:45 AM Noah Misch <[email protected]> wrote:\n>\n> On Thu, Jan 25, 2024 at 04:23:39AM +0900, Fujii Masao wrote:\n> > I found that this patch was committed at d3c5f37dd5 and changed the\n> > error message in postgres_fdw slightly. Here's an example:\n> >\n> > #1. Begin a new transaction.\n> > #2. Execute a query accessing to a foreign table, like SELECT * FROM\n> > <foreign table>\n> > #3. Terminate the *remote* session corresponding to the foreign table.\n> > #4. Commit the transaction, and then currently the following error\n> > message is output.\n> >\n> > ERROR: FATAL: terminating connection due to administrator command\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > invalid socket\n> >\n> > Previously, before commit d3c5f37dd5, the error message at #4 did not\n> > include \"invalid socket.\" Now, after the commit, it does. Is this\n> > change intentional?\n>\n> No. It's a consequence of an intentional change in libpq call sequence, but I\n> was unaware I was changing an error message.\n>\n> > + /* Consume whatever data is available from the socket */\n> > + if (PQconsumeInput(conn) == 0)\n> > + {\n> > + /* trouble; expect PQgetResult() to return NULL */\n> > + break;\n> > + }\n> > + }\n> > +\n> > + /* Now we can collect and return the next PGresult */\n> > + return PQgetResult(conn);\n> >\n> > This code appears to cause the change. When the remote session ends,\n> > PQconsumeInput() returns 0 and marks conn->socket as invalid.\n> > Subsequent PQgetResult() calls pqWait(), detecting the invalid socket\n> > and appending \"invalid socket\" to the error message.\n> >\n> > I think the \"invalid socket\" message is unsuitable in this scenario,\n> > and PQgetResult() should not be called after PQconsumeInput() returns\n> > 0. 
Thought?\n>\n> The documentation is absolute about the necessity of PQgetResult():\n\nThe documentation looks unclear to me regarding what should be done\nwhen PQconsumeInput() returns 0. So I'm not sure if PQgetResult()\nmust be called even in that case.\n\nAs far as I read some functions like libpqrcv_PQgetResult() that use\nPQconsumeInput(), it appears that they basically report the error message\nusing PQerrorMessage(), without calling PQgetResult(),\nwhen PQconsumeInput() returns 0.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 25 Jan 2024 12:28:43 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dblink query interruptibility"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 12:28:43PM +0900, Fujii Masao wrote:\n> On Thu, Jan 25, 2024 at 5:45 AM Noah Misch <[email protected]> wrote:\n> > On Thu, Jan 25, 2024 at 04:23:39AM +0900, Fujii Masao wrote:\n> > > I found that this patch was committed at d3c5f37dd5 and changed the\n> > > error message in postgres_fdw slightly. Here's an example:\n> > >\n> > > #1. Begin a new transaction.\n> > > #2. Execute a query accessing to a foreign table, like SELECT * FROM\n> > > <foreign table>\n> > > #3. Terminate the *remote* session corresponding to the foreign table.\n> > > #4. Commit the transaction, and then currently the following error\n> > > message is output.\n> > >\n> > > ERROR: FATAL: terminating connection due to administrator command\n> > > server closed the connection unexpectedly\n> > > This probably means the server terminated abnormally\n> > > before or while processing the request.\n> > > invalid socket\n> > >\n> > > Previously, before commit d3c5f37dd5, the error message at #4 did not\n> > > include \"invalid socket.\" Now, after the commit, it does.\n\nOther clients have witnessed the extra \"invalid socket\" message:\nhttps://dba.stackexchange.com/questions/335081/how-to-investigate-intermittent-postgres-connection-errors\nhttps://stackoverflow.com/questions/77781358/pgbackrest-backup-error-with-exit-code-57\nhttps://github.com/timescale/timescaledb/issues/102\n\n> > > + /* Consume whatever data is available from the socket */\n> > > + if (PQconsumeInput(conn) == 0)\n> > > + {\n> > > + /* trouble; expect PQgetResult() to return NULL */\n> > > + break;\n> > > + }\n> > > + }\n> > > +\n> > > + /* Now we can collect and return the next PGresult */\n> > > + return PQgetResult(conn);\n> > >\n> > > This code appears to cause the change. 
When the remote session ends,\n> > > PQconsumeInput() returns 0 and marks conn->socket as invalid.\n> > > Subsequent PQgetResult() calls pqWait(), detecting the invalid socket\n> > > and appending \"invalid socket\" to the error message.\n\nWhat do you think of making PQconsumeInput() set PGASYNC_READY and\nCONNECTION_BAD in this case? Since libpq appended \"server closed the\nconnection unexpectedly\", it knows those indicators are correct. That way,\nPQgetResult() won't issue a pointless pqWait() call.\n\n> > > I think the \"invalid socket\" message is unsuitable in this scenario,\n> > > and PQgetResult() should not be called after PQconsumeInput() returns\n> > > 0. Thought?\n> >\n> > The documentation is absolute about the necessity of PQgetResult():\n> \n> The documentation looks unclear to me regarding what should be done\n> when PQconsumeInput() returns 0. So I'm not sure if PQgetResult()\n> must be called even in that case.\n\nI agree PQconsumeInput() docs don't specify how to interpret it returning 0.\n\n> As far as I read some functions like libpqrcv_PQgetResult() that use\n> PQconsumeInput(), it appears that they basically report the error message\n> using PQerrorMessage(), without calling PQgetResult(),\n> when PQconsumeInput() returns 0.\n\nlibpqrcv_PQgetResult() is part of walreceiver, where any ERROR becomes FATAL.\nHence, it can't hurt anything by eagerly skipping to ERROR. I designed\nlibpqsrv_exec() to mimic PQexec() as closely as possible, so it would be a\ndrop-in replacement for arbitrary callers. Ideally, accepting interrupts\nwould be the only caller-visible difference.\n\nI know of three ways PQconsumeInput() can return 0, along with my untested\nestimates of how they work:\n\na. Protocol violation. handleSyncLoss() sets PGASYNC_READY and\n CONNECTION_BAD. PQgetResult() is optional.\n\nb. Connection broken. PQgetResult() is optional.\n\nc. ENOMEM. PGASYNC_ and CONNECTION_ status don't change. 
Applications choose\n among (c1) free memory and retry, (c2) close the connection, or (c3) call\n PQgetResult() to break protocol sync and set PGASYNC_IDLE:\n\nComparing PQconsumeInput() with the PQgetResult() block under \"while\n(conn->asyncStatus == PGASYNC_BUSY)\", there's a key difference that\nPQgetResult() sets PGASYNC_IDLE on most errors, including ENOMEM. That\nprevents PQexec() subroutine PQexecFinish() from busy looping on ENOMEM, but I\nsuspect that also silently breaks protocol sync. While we could change it\nfrom (c3) to (c2) by dropping the connection via handleSyncLoss() or\nequivalent, I'm not confident about that being better.\n\nlibpqsrv_exec() implements (c3) by way of calling PQgetResult() after\nPQconsumeInput() fails. If PQisBusy(), the same ENOMEM typically will repeat,\nyielding (c3). If memory became available in that brief time, PQgetResult()\nmay instead block. That blocking is unwanted but unimportant.\n\n\n",
"msg_date": "Fri, 26 Jan 2024 18:35:39 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dblink query interruptibility"
}
] |
[
{
"msg_contents": "Commit 3e51b278db leaves lc_* conf lines as commented-out when\ntheir value is \"C\". This leads to the following behavior.\n\n$ echo $LANG\nja_JP.UTF8\n$ initdb --no-locale hoge\n$ grep lc_ hoge/postgresql.conf\n#lc_messages = 'C' # locale for system error message\n#lc_monetary = 'C' # locale for monetary formatting\n#lc_numeric = 'C' # locale for number formatting\n#lc_time = 'C' # locale for time formatting\n\nIn this scenario, the postmaster ends up emitting log messages in\nJapanese, which contradicts the documentation.\n\nhttps://www.postgresql.org/docs/devel/app-initdb.html\n\n> --locale=locale \n> Sets the default locale for the database cluster. If this option is\n> not specified, the locale is inherited from the environment that\n> initdb runs in. Locale support is described in Section 24.1.\n> \n..\n> --lc-messages=locale\n> Like --locale, but only sets the locale in the specified category.\n\nHere's a somewhat amusing case:\n\n$ echo $LANG\nja_JP.UTF8\n$ initdb --lc-messages=C hoge\n$ grep lc_ hoge/postgresql.conf \n#lc_messages = 'C' # locale for system error message\nlc_monetary = 'ja_JP.UTF8' # locale for monetary formatting\nlc_numeric = 'ja_JP.UTF8' # locale for number formatting\nlc_time = 'ja_JP.UTF8' # locale for time formatting\n\nHmm, it seems that initdb replaces the values of all categories\n*except the specified one*. This behavior seems incorrect to\nme. initdb should replace the value when it is explicitly specified on the\ncommand line. If you use -c lc_messages=C, it does perform the\nexpected behavior to some extent, but I believe this is a separate\nmatter.\n\nI have doubts about not replacing these lines for purely cosmetic\nreasons. In this mail, I've attached three possible solutions for the\noriginal issue: the first one enforces replacement only when specified\non the command line, the second one simply always performs\nreplacement, and the last one addresses the concern about the absence\nof quotes around \"C\" by allowing explicit specification. (FWIW, I\nprefer the last one.)\n\nWhat do you think about these?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 22 Nov 2023 16:27:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "initdb --no-locale=C doesn't work as specified when the\n environment is not C"
},
{
"msg_contents": "Kyotaro Horiguchi <[email protected]> writes:\n> Commit 3e51b278db leaves lc_* conf lines as commented-out when\n> their value is \"C\". This leads to the following behavior.\n\nHmm ... I see a contributing factor here: this bit in\npostgresql.conf.sample is a lie:\n\n#lc_messages = 'C'\t\t\t# locale for system error message\n\t\t\t\t\t# strings\n\nA look in guc_tables.c shows that the actual default is '' (empty\nstring), which means \"use the environment\", and that matches how the\nvariable is documented in config.sgml. Somebody --- quite possibly me\n--- was misled by the contents of postgresql.conf.sample into thinking\nthat the lc_xxx GUCs all default to C, when that's only true for the\nothers.\n\nI think that a more correct fix for this would treat lc_messages\ndifferently from the other lc_xxx GUCs. Maybe just eliminate the\nhack about not substituting \"C\" for that one?\n\nIn any case, we need to fix this mistake in postgresql.conf.sample.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Nov 2023 11:04:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: initdb --no-locale=C doesn't work as specified when the\n environment is not C"
},
{
"msg_contents": "At Wed, 22 Nov 2023 11:04:01 -0500, Tom Lane <[email protected]> wrote in \n> Kyotaro Horiguchi <[email protected]> writes:\n> > Commit 3e51b278db leaves lc_* conf lines as commented-out when\n> > their value is \"C\". This leads to the following behavior.\n> \n> Hmm ... I see a contributing factor here: this bit in\n> postgresql.conf.sample is a lie:\n> \n> #lc_messages = 'C'\t\t\t# locale for system error message\n> \t\t\t\t\t# strings\n> \n> A look in guc_tables.c shows that the actual default is '' (empty\n> string), which means \"use the environment\", and that matches how the\n> variable is documented in config.sgml. Somebody --- quite possibly me\n> --- was misled by the contents of postgresql.conf.sample into thinking\n> that the lc_xxx GUCs all default to C, when that's only true for the\n> others.\n\nIt seems somewhat intentional that only lc_messages references the\nenvironment at boot time. On the other hand, previously, in the\nabsence of a specified locale, initdb would embed the environmental\nvalue in the configuration file, as it seems to be documented. Given\nthat initdb is always used for cluster creation, it's unlikey that\nsystems depend on this boot-time default for their operation.\n\n> I think that a more correct fix for this would treat lc_messages\n> differently from the other lc_xxx GUCs. Maybe just eliminate the\n> hack about not substituting \"C\" for that one?\n\nFor example, the --no-locale option for initdb is supposed to set all\ncategories to 'C'. That approach would lead to the postgres\nreferencing the runtime environment for all categories except\nlc_messages, which I believe contradicts the documentation. 
In my\nview, if lc_messages is exempted from that hack, then all other\ncategories should be similarly excluded as in the second approach\namong the attached in the previous mail.\n\n> In any case, we need to fix this mistake in postgresql.conf.sample.\n\nIf you are not particularly concerned about the presence of quotation\nmarks, I think it would be fine to go with the second approach and\nmake the necessary modification to the configuration file accordingly.\n\nWith the attached patch, initdb --no-locale generates the following\nlines in the configuration file.\n\n> lc_messages = C\t\t\t\t# locale for system error message\n> \t\t\t\t\t# strings\n> lc_monetary = C\t\t\t\t# locale for monetary formatting\n> lc_numeric = C\t\t\t\t# locale for number formatting\n> lc_time = C\t\t\t\t# locale for time formatting\n\nBy the way, the lines around lc_* in the sample file seem to have\nsomewhat inconsistent indentations. Wouldn't it be preferable to fix\nthis? (The attached doesn't do that.)\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 27 Nov 2023 12:00:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: initdb --no-locale=C doesn't work as specified when the\n environment is not C"
},
{
"msg_contents": "Kyotaro Horiguchi <[email protected]> writes:\n> It seems somewhat intentional that only lc_messages references the\n> environment at boot time. On the other hand, previously, in the\n> absence of a specified locale, initdb would embed the environmental\n> value in the configuration file, as it seems to be documented. Given\n> that initdb is always used for cluster creation, it's unlikey that\n> systems depend on this boot-time default for their operation.\n\nYeah, after further reflection there doesn't seem to be a lot of value\nin leaving these entries commented-out, even in the cases where that's\ntechnically correct. Let's just go back to the old behavior of always\nuncommenting them; that stood for years without complaints. So I\ncommitted your latest patch as-is.\n\n> By the way, the lines around lc_* in the sample file seem to have\n> somewhat inconsistent indentations. Wouldnt' it be preferable to fix\n> this? (The attached doesn't that.)\n\nThey look all right if you assume the tab width is 8, which seems to\nbe what is used elsewhere in the file. I think there's been some\nprior discussion about whether to ban use of tabs at all in these\nsample files, so as to reduce confusion about how wide the tabs are.\nBut I'm not touching that question today.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jan 2024 18:16:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: initdb --no-locale=C doesn't work as specified when the\n environment is not C"
},
{
"msg_contents": "At Wed, 10 Jan 2024 18:16:03 -0500, Tom Lane <[email protected]> wrote in \n> Kyotaro Horiguchi <[email protected]> writes:\n> > It seems somewhat intentional that only lc_messages references the\n> > environment at boot time. On the other hand, previously, in the\n> > absence of a specified locale, initdb would embed the environmental\n> > value in the configuration file, as it seems to be documented. Given\n> > that initdb is always used for cluster creation, it's unlikey that\n> > systems depend on this boot-time default for their operation.\n> \n> Yeah, after further reflection there doesn't seem to be a lot of value\n> in leaving these entries commented-out, even in the cases where that's\n> technically correct. Let's just go back to the old behavior of always\n> uncommenting them; that stood for years without complaints. So I\n> committed your latest patch as-is.\n\nI'm glad you understand. Thank you for commiting.\n\n> > By the way, the lines around lc_* in the sample file seem to have\n> > somewhat inconsistent indentations. Wouldnt' it be preferable to fix\n> > this? (The attached doesn't that.)\n> \n> They look all right if you assume the tab width is 8, which seems to\n> be what is used elsewhere in the file. I think there's been some\n> prior discussion about whether to ban use of tabs at all in these\n> sample files, so as to reduce confusion about how wide the tabs are.\n> But I'm not touching that question today.\n\nAh, I see, I understood.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 12 Jan 2024 11:56:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: initdb --no-locale=C doesn't work as specified when the\n environment is not C"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile looking at the use of session_replication_state, I noticed that\nReplicationOriginLock is acquired in ReplicationOriginExitCleanup()\neven if session_replication_state is reset to NULL by\nreplorigin_session_reset(). Why can't there be a lockless exit path\nsomething like [1] similar to\nreplorigin_session_reset() which checks session_replication_state ==\nNULL without a lock?\n\n[1]\ndiff --git a/src/backend/replication/logical/origin.c\nb/src/backend/replication/logical/origin.c\nindex 460e3dcc38..99bbe90f6c 100644\n--- a/src/backend/replication/logical/origin.c\n+++ b/src/backend/replication/logical/origin.c\n@@ -1056,6 +1056,9 @@ ReplicationOriginExitCleanup(int code, Datum arg)\n {\n ConditionVariable *cv = NULL;\n\n+ if (session_replication_state == NULL)\n+ return;\n+\n LWLockAcquire(ReplicationOriginLock, LW_EXCLUSIVE);\n\n if (session_replication_state != NULL &&\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:12:06 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lockless exit path for ReplicationOriginExitCleanup"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 2:12 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> While looking at the use of session_replication_state, I noticed that\n> ReplicationOriginLock is acquired in ReplicationOriginExitCleanup()\n> even if session_replication_state is reset to NULL by\n> replorigin_session_reset(). Why can't there be a lockless exit path\n> something like [1] similar to\n> replorigin_session_reset() which checks session_replication_state ==\n> NULL without a lock?\n>\n\nI don't see any problem with such a check but not sure of the benefit\nof doing so either.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:27:58 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lockless exit path for ReplicationOriginExitCleanup"
},
{
"msg_contents": "Hello,\n\nOn 2023-Nov-22, Bharath Rupireddy wrote:\n\n> While looking at the use of session_replication_state, I noticed that\n> ReplicationOriginLock is acquired in ReplicationOriginExitCleanup()\n> even if session_replication_state is reset to NULL by\n> replorigin_session_reset(). Why can't there be a lockless exit path\n> something like [1] similar to\n> replorigin_session_reset() which checks session_replication_state ==\n> NULL without a lock?\n\nI suppose we can do this on consistency grounds -- I'm pretty sure you'd\nhave a really hard time proving that this makes a performance difference --\nbut this patch is incomplete: just two lines below, we're still testing\nsession_replication_state for nullness, which would now be dead code.\nPlease repair.\n\n\nThe comment on session_replication_state is confusing also:\n\n/*\n * Backend-local, cached element from ReplicationState for use in a backend\n * replaying remote commits, so we don't have to search ReplicationState for\n * the backends current RepOriginId.\n */\n\nMy problem with it is that this is not a \"cached element\", but instead a\n\"cached pointer to [shared memory]\". This is what makes testing that\npointer for null-ness doable, but access to each member therein\nrequiring lwlock acquisition.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Once again, thank you and all of the developers for your hard work on\nPostgreSQL. This is by far the most pleasant management experience of\nany database I've worked on.\" (Dan Harris)\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00247.php\n\n\n",
"msg_date": "Wed, 22 Nov 2023 10:36:53 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lockless exit path for ReplicationOriginExitCleanup"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 2:28 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Nov 22, 2023 at 2:12 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > While looking at the use of session_replication_state, I noticed that\n> > ReplicationOriginLock is acquired in ReplicationOriginExitCleanup()\n> > even if session_replication_state is reset to NULL by\n> > replorigin_session_reset(). Why can't there be a lockless exit path\n> > something like [1] similar to\n> > replorigin_session_reset() which checks session_replication_state ==\n> > NULL without a lock?\n> >\n>\n> I don't see any problem with such a check but not sure of the benefit\n> of doing so either.\n\nIt avoids an unnecessary lock acquisition and release when\nsession_replication_state is already reset by\nreplorigin_session_reset() before reaching\nReplicationOriginExitCleanup(). A patch something like [1] and a run\nof subscription tests shows that 153 times the lock acquisition and\nrelease can be avoided.\n\nubuntu:~/postgres/src/test/subscription$ grep -ir \"with\nsession_replication_state NULL\" . | wc -l\n153\n\nubuntu:~/postgres/src/test/subscription$ grep -ir \"with\nsession_replication_state not NULL\" . 
| wc -l\n174\n\n[1]\ndiff --git a/src/backend/replication/logical/origin.c\nb/src/backend/replication/logical/origin.c\nindex 460e3dcc38..dd3824bd27 100644\n--- a/src/backend/replication/logical/origin.c\n+++ b/src/backend/replication/logical/origin.c\n@@ -1056,6 +1056,11 @@ ReplicationOriginExitCleanup(int code, Datum arg)\n {\n ConditionVariable *cv = NULL;\n\n+ if (session_replication_state == NULL)\n+ elog(LOG, \"In ReplicationOriginExitCleanup() with\nsession_replication_state NULL\");\n+ else\n+ elog(LOG, \"In ReplicationOriginExitCleanup() with\nsession_replication_state not NULL\");\n+\n LWLockAcquire(ReplicationOriginLock, LW_EXCLUSIVE);\n\n if (session_replication_state != NULL &&\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 23 Nov 2023 09:39:43 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Lockless exit path for ReplicationOriginExitCleanup"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 3:06 PM Alvaro Herrera <[email protected]> wrote:\n>\n> Hello,\n>\n> On 2023-Nov-22, Bharath Rupireddy wrote:\n>\n> > While looking at the use of session_replication_state, I noticed that\n> > ReplicationOriginLock is acquired in ReplicationOriginExitCleanup()\n> > even if session_replication_state is reset to NULL by\n> > replorigin_session_reset(). Why can't there be a lockless exit path\n> > something like [1] similar to\n> > replorigin_session_reset() which checks session_replication_state ==\n> > NULL without a lock?\n>\n> I suppose we can do this on consistency grounds -- I'm pretty sure you'd\n> have a really hard time proving that this makes a performance difference --\n\nYes, can't measure the perf gain, however, as said upthread\nhttps://www.postgresql.org/message-id/CALj2ACVVhPn7BVQZLGPVvBrLoDZNRaV0LS9rBpt5y%2Bj%3DxRebWw%40mail.gmail.com,\nit avoids unnecessary lock acquisition and release.\n\n> but this patch is incomplete: just two lines below, we're still testing\n> session_replication_state for nullness, which would now be dead code.\n> Please repair.\n\nDone.\n\n> The comment on session_replication_state is confusing also:\n>\n> /*\n> * Backend-local, cached element from ReplicationState for use in a backend\n> * replaying remote commits, so we don't have to search ReplicationState for\n> * the backends current RepOriginId.\n> */\n>\n> My problem with it is that this is not a \"cached element\", but instead a\n> \"cached pointer to [shared memory]\". This is what makes testing that\n> pointer for null-ness doable, but access to each member therein\n> requiring lwlock acquisition.\n\nRight. I've reworded the comment a bit.\n\nPSA v1 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 23 Nov 2023 10:10:46 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Lockless exit path for ReplicationOriginExitCleanup"
},
{
"msg_contents": "Thanks, pushed. I reworded the comment again, though.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 15 Jan 2024 13:03:51 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lockless exit path for ReplicationOriginExitCleanup"
}
] |
[
{
"msg_contents": "We're about 2/3 of the way through.\n\nAt start:\nNeeds review: 210. Waiting on Author: 42. Ready for Committer: 29.\nCommitted: 55. Withdrawn: 10. Returned with Feedback: 1. Total: 347.\n\nToday:\nNeeds review: 183. Waiting on Author: 45. Ready for Committer: 25.\nCommitted: 76. Returned with Feedback: 4. Withdrawn: 13. Rejected: 1.\nTotal: 347.\n\nThe pace seems to have picked up a bit, based on number of commits.\n\n--\nJohn Naylor\n\n\n",
"msg_date": "Wed, 22 Nov 2023 16:26:23 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest 2023-11 update 2"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nWhen the scram_iterations value is set too large, the backend would hang for\na long time. And we can't use Ctrl+C to cancel this query, because the loop doesn't\nprocess signal interrupts.\n\nAdding CHECK_FOR_INTERRUPTS within the loop of scram_SaltedPassword\nto handle any signals received during this period may be a good choice.\n\nI wrote a patch to solve this problem. What are your suggestions?\n\nDears\nBowen Shi",
"msg_date": "Wed, 22 Nov 2023 19:47:04 +0800",
"msg_from": "Bowen Shi <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop."
},
{
"msg_contents": "Hi,\n\n> When the scram_iterations value is set too large, the backend would hang for\n> a long time. And we can't use Ctrl+C to cancel this query, cause the loop don't\n> process signal interrupts.\n>\n> Add CHECK_FOR_INTERRUPTS within the loop of scram_SaltedPassword\n> to handle any signals received during this period may be a good choice.\n>\n> I wrote a patch to solve this problem. What's your suggestions?\n\nThanks for the patch.\n\nIt sort of makes sense. I wonder though if we should limit the maximum\nnumber of iterations instead. If somebody specified 1_000_000+\niteration this could also indicate a user error.\n\nIf we want to add CHECK_FOR_INTERRUPTS inside the loop I think a brief\ncomment would be appropriate.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 22 Nov 2023 16:30:35 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop."
},
{
"msg_contents": "> On 22 Nov 2023, at 14:30, Aleksander Alekseev <[email protected]> wrote:\n> \n> Hi,\n> \n>> When the scram_iterations value is set too large, the backend would hang for\n>> a long time. And we can't use Ctrl+C to cancel this query, cause the loop don't\n>> process signal interrupts.\n>> \n>> Add CHECK_FOR_INTERRUPTS within the loop of scram_SaltedPassword\n>> to handle any signals received during this period may be a good choice.\n>> \n>> I wrote a patch to solve this problem. What's your suggestions?\n> \n> Thanks for the patch.\n> \n> It sort of makes sense. I wonder though if we should limit the maximum\n> number of iterations instead. If somebody specified 1_000_000+\n> iteration this could also indicate a user error.\n\nI don't think it would be useful to limit this at an arbitrary point, iteration\ncount can be set per password and if someone wants a specific password to be\nsuper-hard to brute force then why should we limit that?\n\n> If we want to add CHECK_FOR_INTERRUPTS inside the loop I think a brief\n> comment would be appropriate.\n\nAgreed, it would be helpful.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:59:07 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop."
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n>> On 22 Nov 2023, at 14:30, Aleksander Alekseev <[email protected]> wrote:\n>> It sort of makes sense. I wonder though if we should limit the maximum\n>> number of iterations instead. If somebody specified 1_000_000+\n>> iteration this could also indicate a user error.\n\n> I don't think it would be useful to limit this at an arbitrary point, iteration\n> count can be set per password and if someone want a specific password to be\n> super-hard to brute force then why should we limit that?\n\nMaybe because it could be used to construct a DOS scenario? In\nparticular, since CHECK_FOR_INTERRUPTS doesn't work on the frontend\nside, a situation like this wouldn't be interruptible there.\n\nI agree with Aleksander that such cases are much more likely to\nindicate user error than anything else.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Nov 2023 10:04:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop."
},
{
"msg_contents": "> I don't think it would be useful to limit this at an arbitrary point,\niteration\n> count can be set per password and if someone wants a specific password to\nbe\n> super-hard to brute force then why should we limit that?\nI agree with that. Maybe some users do want a super-hard password.\nRFC 7677 and RFC 5802 don't specify the maximum number of iterations.\n\n> If we want to add CHECK_FOR_INTERRUPTS inside the loop I think a brief\n> comment would be appropriate.\n\nThis has been completed in patch v2 and it's ready for review.\n\nRegards\nBowen Shi",
"msg_date": "Thu, 23 Nov 2023 12:05:34 +0800",
"msg_from": "Bowen Shi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop."
},
{
"msg_contents": "Hi,\n\n> > If we want to add CHECK_FOR_INTERRUPTS inside the loop I think a brief\n> > comment would be appropriate.\n>\n> This has been completed in patch v2 and it's ready for review.\n\nThanks!\n\n> > I don't think it would be useful to limit this at an arbitrary point, iteration\n> > count can be set per password and if someone wants a specific password to be\n> > super-hard to brute force then why should we limit that?\n> I agree with that. Maybe some users do want a super-hard password.\n> RFC 7677 and RFC 5802 don't specify the maximum number of iterations.\n\nThat's a fairly good point. However we are not obligated not to\nimplement everything that is missing in RFC. Also in fact we already\nlimit the number of iterations to INT_MAX.\n\nIf we decide to limit this number even further the actual problem is\nto figure out what the new practical limit would be. Regardless of the\nchosen number there is a possibility of breaking backward\ncompatibility for certain users.\n\nFor this reason I believe merging the proposed patch would be the\nright move at this point. It doesn't make anything worse for the\nexisting users and solves a potential problem for some of them.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 23 Nov 2023 11:19:51 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop."
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 11:19:51AM +0300, Aleksander Alekseev wrote:\n>>> I don't think it would be useful to limit this at an arbitrary point, iteration\n>>> count can be set per password and if someone wants a specific password to be\n>>> super-hard to brute force then why should we limit that?\n>>\n>> I agree with that. Maybe some users do want a super-hard password.\n>> RFC 7677 and RFC 5802 don't specify the maximum number of iterations.\n> \n> That's a fairly good point. However we are not obligated not to\n> implement everything that is missing in RFC. Also in fact we already\n> limit the number of iterations to INT_MAX.\n\nINT_MAX, as in the limit that we have for integer GUCs and the\nroutines building the hashed entry, so the Postgres internals are what\ndefines the limit here. I doubt that we'll see cases where somebody\nwould want more than that, but who knows in 10/20 years.\n\n> If we decide to limit this number even further the actual problem is\n> to figure out what the new practical limit would be. Regardless of the\n> chosen number there is a possibility of breaking backward\n> compatibility for certain users.\n\nNo idea what the limit should be if it were to be lowered down, but\nI suspect that even a new lower limit could be an issue for hosts in\nthe low-end specs when it comes to DOS. It's not like there are no\nways to eat CPU when you are already logged in.\n\n> For this reason I believe merging the proposed patch would be the\n> right move at this point. It doesn't make anything worse for the\n> existing users and solves a potential problem for some of them.\n\nYeah, agreed. 
Being stuck in a potentially large tight loop is\nsomething we tend to avoid in the backend, so I agree that this is a\nthing to keep in the backend, especially because we have\nscram_iterations and that it is user-settable.\n\nI think that we should backpatch that down to v16 at least where the\nGUC has been introduced as it's more like a nuisance if one sets the\nGUC to an incorrect value, and I'd like to apply the patch this way.\nAny objections or comments regarding that?\n--\nMichael",
"msg_date": "Sat, 25 Nov 2023 10:20:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop."
},
{
"msg_contents": "> I think that we should backpatch that down to v16 at least where the\n> GUC has been introduced as it's more like a nuisance if one sets the\n> GUC to an incorrect value, and I'd like to apply the patch this way.\n\nAgreed.\n\nThe patch has been submitted in https://commitfest.postgresql.org/46/4671/\n\nRegards\nBowen Shi\n\n> I think that we should backpatch that down to v16 at least where the> GUC has been introduced as it's more like a nuisance if one sets the> GUC to an incorrect value, and I'd like to apply the patch this way.Agreed. The patch has been submitted in https://commitfest.postgresql.org/46/4671/RegardsBowen Shi",
"msg_date": "Mon, 27 Nov 2023 11:56:31 +0800",
"msg_from": "Bowen Shi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop."
},
{
"msg_contents": "> On 25 Nov 2023, at 02:20, Michael Paquier <[email protected]> wrote:\n> \n> On Thu, Nov 23, 2023 at 11:19:51AM +0300, Aleksander Alekseev wrote:\n>>>> I don't think it would be useful to limit this at an arbitrary point, iteration\n>>>> count can be set per password and if someone wants a specific password to be\n>>>> super-hard to brute force then why should we limit that?\n>>> \n>>> I agree with that. Maybe some users do want a super-hard password.\n>>> RFC 7677 and RFC 5802 don't specify the maximum number of iterations.\n>> \n>> That's a fairly good point. However we are not obligated not to\n>> implement everything that is missing in RFC. Also in fact we already\n>> limit the number of iterations to INT_MAX.\n> \n> INT_MAX, as in the limit that we have for integer GUCs and the\n> routines building the hashed entry, so the Postgres internals are what\n> defines the limit here. I doubt that we'll see cases where somebody\n> would want more than that, but who knows in 10/20 years.\n> \n>> If we decide to limit this number even further the actual problem is\n>> to figure out what the new practical limit would be. Regardless of the\n>> chosen number there is a possibility of breaking backward\n>> compatibility for certain users.\n> \n> No idea what the limit should be if it were to be lowered down, but\n> I suspect that even a new lower limit could be an issue for hosts in\n> the low-end specs when it comes to DOS. 
It's not like there are no\n> ways to eat CPU when you are already logged in.\n\nThe whole point of this GUC (and the iteration count construct in the spec) is\nto allow hardened setups to make brute forcing passwords as hard as they choose\nthem to be, setting an upper limit (apart from the INT_MAX implementation\ndetail) where one isn't even mentioned in the RFC makes little sense when the\nloop can be canceled.\n\nOn the flip side, setups which have low end clients can choose to reduce it\nfrom the default to make scram an option at all where it previously was too\nexpensive and less secure schemes had to be used.\n\n>> For this reason I believe merging the proposed patch would be the\n>> right move at this point. It doesn't make anything worse for the\n>> existing users and solves a potential problem for some of them.\n> \n> Yeah, agreed. Being stuck on a potential large tight loops is\n> something we tend to avoid in the backend, so I agree that this is a\n> thing to keep in the backend especially because we have\n> scram_iterations and that it is user-settable.\n> \n> I think that we should backpatch that down to v16 at least where the\n> GUC has been introduced as it's more like a nuisance if one sets the\n> GUC to an incorrect value, and I'd like to apply the patch this way.\n> Any objections or comments regarding that?\n\nI don't see any reason to backpatch further down than 16 given how low the\nhardcoded value is set there, scanning the archives I see no complaints about\nit either. As a reference, CREATE ROLE using 4096 iterations takes 14ms on my\n10 year old laptop (1M iterations, 244x the default, takes less than a second).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 27 Nov 2023 10:05:49 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop."
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 10:05:49AM +0100, Daniel Gustafsson wrote:\n> I don't see any reason to backpatch further down than 16 given how low the\n> hardcoded value is set there, scanning the archives I see no complaints about\n> it either. As a reference, CREATE ROLE using 4096 iterations takes 14ms on my\n> 10 year old laptop (1M iterations, 244x the default, takes less than a second).\n\nAgreed, so done it this way. \\password has the same problem, where we\ncould perhaps do something with a callback or something like that, or\nperhaps that's just not worth bothering.\n--\nMichael",
"msg_date": "Tue, 28 Nov 2023 08:39:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop."
}
] |
[
{
"msg_contents": "\nI've been trying to get meson working with Cygwin. On a fresh install \n(Cygwin 3.4.9, gcc 11.4.0, meson 1.0.2, ninja 1.11.1) I get a bunch of \nerrors like this:\n\nERROR: incompatible library \n\"/home/andrew/bf/buildroot/HEAD/pgsql.build/tmp_install/home/andrew/bf/buildroot/HEAD/inst/lib/postgresql/plperl.dll\": \nmissing magic block\n\nSimilar things happen if I try to build with python.\n\nI'm not getting the same on a configure/make build. Not sure what would \nbe different.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 22 Nov 2023 07:02:10 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "meson vs Cygwin"
},
{
"msg_contents": "On 22/11/2023 13:02, Andrew Dunstan wrote:\n> \n> I've been trying to get meson working with Cygwin. On a fresh install \n> (Cygwin 3.4.9, gcc 11.4.0, meson 1.0.2, ninja 1.11.1) I get a bunch of \n> errors like this:\n> \n> ERROR: incompatible library \n> \"/home/andrew/bf/buildroot/HEAD/pgsql.build/tmp_install/home/andrew/bf/buildroot/HEAD/inst/lib/postgresql/plperl.dll\": missing magic block\n> \n> Similar things happen if I try to build with python.\n> \n> I'm not getting the same on a configure/make build. Not sure what would \n> be different.\n> \n> \n> cheers\n> \n> \n> andrew\n> \n\nHi Andrew,\nsorry for jumping on this request so late\n\nhow are you configuring the build ?\n\nMarco Atzeri\nPostgresql package manager for Cygwin\n\n\n\n\n\n",
"msg_date": "Tue, 13 Feb 2024 13:00:20 +0100",
"msg_from": "Marco Atzeri <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson vs Cygwin"
},
{
"msg_contents": "\nOn 2024-02-13 Tu 7:00 AM, Marco Atzeri wrote:\n> On 22/11/2023 13:02, Andrew Dunstan wrote:\n>>\n>> I've been trying to get meson working with Cygwin. On a fresh install \n>> (Cygwin 3.4.9, gcc 11.4.0, meson 1.0.2, ninja 1.11.1) I get a bunch \n>> of errors like this:\n>>\n>> ERROR: incompatible library \n>> \"/home/andrew/bf/buildroot/HEAD/pgsql.build/tmp_install/home/andrew/bf/buildroot/HEAD/inst/lib/postgresql/plperl.dll\": \n>> missing magic block\n>>\n>> Similar things happen if I try to build with python.\n>>\n>> I'm not getting the same on a configure/make build. Not sure what \n>> would be different.\n>>\n>>\n>> cheers\n>>\n>>\n>> andrew\n>>\n>\n> Hi Andrew,\n> sorry for jumping on this request so late\n>\n> how are you configuring the build ?\n>\n>\n\nSorry for not replying in turn :-(\n\nI just got this error again. All I did was:\n\n meson setup build .\n\n meson compile -C build\n\n meson test -C build\n\n\nI don't get the error if I build using\n\n ./configure --with-perl --with-python\n\n make world-bin\n\n make check-world\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 19 Jun 2024 14:13:54 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson vs Cygwin"
}
] |
[
{
"msg_contents": "Hi!\n\nRecently, one of our customers had reported a not working autovacuum.\nAfter a minor investigation, I've found that\nautovacuum launcher did, actually, run vacuum as expected, but with no\nresults. At the same time, no warnings or\nother anomies were present in the logs.\n\nAt first, I've thought may be statistics is broken, thus vacuum is not\nworking as expected. But in fact, something\nmore interesting is had happened.\n\nThe pg_class.relfrozenxid was set to some rubbish value from the future,\nthus broken in template1 DB, so any new\ndatabase will have it's broken too. Then, we create \"blocker\" DB and then\nin vac_update_datfrozenxid() we get \"bogus\" (from the future) value\nof relfrozenxid and *silently* return. Any other new created DB will not\nbe autovacuumed.\n\nFunny, but from the perspective of DBA, this looks like autovacuum is not\nworking any more for no reasons, although\nall the criterion for its launch is clearly observed.\n\nAFAICS, there are several solutions for this state:\n - run vacuumdb for all DB's\n - manually update broken pg_class.relfrozenxid\n - lowering of autovacuum_freeze_max_age to trigger prevent of transaction\nID wraparound\n\nI do understand, this behaviour hardly can be described as a bug of some\nsort, but could we make, at least, a useful\nmessage to help to clarify what is going on here?\n\n=== REPRODUCE ===\n$ cat <<EOF >> pgsql/data/postgresql.conf\nautovacuum_naptime = 1s\nautovacuum_freeze_max_age = 100000\nEOF\n$ ./pgsql/bin/pg_ctl -D pgsql/data -l pgsql/logfile start\nwaiting for server to start.... 
done\nserver started\n$ ./pgsql/bin/psql postgres\npsql (17devel)\nType \"help\" for help.\n\npostgres=# \\c template1\nYou are now connected to database \"template1\" as user \"orlov\".\ntemplate1=# update pg_class set relfrozenxid='200000' where oid = 1262;\nUPDATE 1\ntemplate1=# do $$\n\n begin\n\n while 120000 - txid_current()::text::int8 > 0 loop\n\n commit;\n\n end loop;\n\n end $$;\nDO\ntemplate1=# create database blocker;\nCREATE DATABASE\ntemplate1=# create database foo;\nCREATE DATABASE\ntemplate1=# \\c foo\nYou are now connected to database \"foo\" as user \"orlov\".\nfoo=# create table bar(baz int);\nCREATE TABLE\nfoo=# insert into bar select bar from generate_series(1, 8192) bar;\nINSERT 0 8192\nfoo=# update bar set baz=baz;\nUPDATE 8192\nfoo=# select relname, n_tup_ins, n_tup_upd, n_tup_del, n_live_tup,\nn_dead_tup, last_vacuum, last_autovacuum, autovacuum_count\n from\npg_stat_user_tables where relname = 'bar';\n relname | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup |\nlast_vacuum | last_autovacuum | autovacuum_count\n---------+-----------+-----------+-----------+------------+------------+-------------+-----------------+------------------\n bar | 8192 | 8192 | 0 | 8192 | 8192 |\n | | 0\n(1 row)\n\nfoo=# update bar set baz=baz;\nUPDATE 8192\nfoo=# select relname, n_tup_ins, n_tup_upd, n_tup_del, n_live_tup,\nn_dead_tup, last_vacuum, last_autovacuum, autovacuum_count\n from\npg_stat_user_tables where relname = 'bar';\n relname | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup |\nlast_vacuum | last_autovacuum | autovacuum_count\n---------+-----------+-----------+-----------+------------+------------+-------------+-----------------+------------------\n bar | 8192 | 16384 | 0 | 8192 | 16384 |\n | | 0\n(1 row)\n\n... and so on\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Wed, 22 Nov 2023 19:18:32 +0300",
"msg_from": "Maxim Orlov <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to stop autovacuum silently"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 8:18 AM Maxim Orlov <[email protected]> wrote:\n> Recently, one of our customers had reported a not working autovacuum. After a minor investigation, I've found that\n> autovacuum launcher did, actually, run vacuum as expected, but with no results. At the same time, no warnings or\n> other anomies were present in the logs.\n\nAre you aware of commit e83ebfe6d7, which added a similar WARNING at\nthe point when VACUUM overwrites a relfrozenxid/relminmxid \"from the\nfuture\"? It's a recent one.\n\n> At first, I've thought may be statistics is broken, thus vacuum is not working as expected. But in fact, something\n> more interesting is had happened.\n\nWas pg_upgrade even run against this database? My guess is that the\nunderlying problem was caused by the bug fixed by commit 74cf7d46.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Nov 2023 10:12:56 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to stop autovacuum silently"
},
{
"msg_contents": "On Wed, 22 Nov 2023 at 21:13, Peter Geoghegan <[email protected]> wrote:\n\n> Are you aware of commit e83ebfe6d7, which added a similar WARNING at\n> the point when VACUUM overwrites a relfrozenxid/relminmxid \"from the\n> future\"? It's a recent one.\n>\nThank you for reply! I hadn't noticed it. But in described above case, it\ndoesn't\nproduce any warnings. My concern here is that with a couple of updates, we\ncan\nstop autovacuum implicitly without any warnings.\n\n\n> Was pg_upgrade even run against this database? My guess is that the\n> underlying problem was caused by the bug fixed by commit 74cf7d46.\n>\nI'm pretty much sure it was, but, unfortunately, there are no way to 100%\nconfirm\nthis. All I know, they're using PG13 now.\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Wed, 22 Nov 2023 at 21:13, Peter Geoghegan <[email protected]> wrote:\nAre you aware of commit e83ebfe6d7, which added a similar WARNING at\nthe point when VACUUM overwrites a relfrozenxid/relminmxid \"from the\nfuture\"? It's a recent one.Thank you for reply! I hadn't noticed it. But in described above case, it doesn't produce any warnings. My concern here is that with a couple of updates, we can stop autovacuum implicitly without any warnings. \nWas pg_upgrade even run against this database? My guess is that the\nunderlying problem was caused by the bug fixed by commit 74cf7d46.I'm pretty much sure it was, but, unfortunately, there are no way to 100% confirm this. All I know, they're using PG13 now.-- Best regards,Maxim Orlov.",
"msg_date": "Thu, 23 Nov 2023 11:43:41 +0300",
"msg_from": "Maxim Orlov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to stop autovacuum silently"
}
] |
[
{
"msg_contents": "hi,\n\n\ni found a discuss about parallel dml, it wrote as follow,\nPostgreSQL: Re: Write operations in parallel mode\n\n\nMake updates and deletes parallel-restricted rather than\nparallel-unsafe - i.e. allow updates and deletes but only in the\nleader. This similarly would allow Update -> Gather -> whatever and\nDelete -> Gather -> whatever. For this, you'd need a shared combo CID\nhash so that workers can learn about new combo CIDs created by the\nleader.\n\n\ni have some questions about this,\n\nwhen do update => gather => whatever, all update jobs done by leader , thus it know itself combo cid mapping,\nand only other workers can not learn about that, so why those workers must know leader's combo cids? why those worker\nneed see leader's updated tuples, could you give me some example cases or Unusual scenes for for this parallel update?\n\n\n\n\n\n\n| |\njiye\n|\n|\[email protected]\n|\n\n\n\nhi,\n i found a discuss about parallel dml, it wrote as follow,PostgreSQL: Re: Write operations in parallel modeMake updates and deletes parallel-restricted rather thanparallel-unsafe - i.e. allow updates and deletes but only in theleader. This similarly would allow Update -> Gather -> whatever andDelete -> Gather -> whatever. For this, you'd need a shared combo CIDhash so that workers can learn about new combo CIDs created by theleader.i have some questions about this,when do update => gather => whatever, all update jobs done by leader , thus it know itself combo cid mapping,and only other workers can not learn about that, so why those workers must know leader's combo cids? why those workerneed see leader's updated tuples, could you give me some example cases or Unusual scenes for for this parallel update?\n\n\n\n\n\n\[email protected]",
"msg_date": "Thu, 23 Nov 2023 12:55:43 +0800 (CST)",
"msg_from": "jiye <[email protected]>",
"msg_from_op": true,
"msg_subject": "confusion about Re: Write operations in parallel mode's update\n part."
}
] |
[
{
"msg_contents": "This patch set applies the explicit catalog representation of not-null \nconstraints introduced by b0e96f3119 for table constraints also to \ndomain not-null constraints.\n\nSince there is no inheritance or primary keys etc., this is much simpler \nand just applies the existing infrastructure to domains as well. As a \nresult, domain not-null constraints now appear in the information schema \ncorrectly. Another effect is that you can now use the ALTER DOMAIN ... \nADD/DROP CONSTRAINT syntax for not-null constraints as well. This makes \neverything consistent overall.\n\nFor the most part, I structured the code so that there are now separate \nsibling subroutines for domain check constraints and domain not-null \nconstraints. This seemed to work out best, but one could also consider \nother ways to refactor this.",
"msg_date": "Thu, 23 Nov 2023 07:56:48 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Catalog domain not-null constraints"
},
{
"msg_contents": "Hi,\n\n> This patch set applies the explicit catalog representation of not-null\n> constraints introduced by b0e96f3119 for table constraints also to\n> domain not-null constraints.\n\nInterestingly enough according to the documentation this syntax is\nalready supported [1][2], but the actual query will fail on `master`:\n\n```\n=# create domain connotnull integer;\nCREATE DOMAIN\n=# alter domain connotnull add not null value;\nERROR: unrecognized constraint subtype: 1\n```\n\nI wonder if we should reflect this limitation in the documentation\nand/or show better error messages. This could be quite surprising to\nthe user. However if we change the documentation on the `master`\nbranch this patch will have to change it back.\n\nI was curious about the semantic difference between `SET NOT NULL` and\n`ADD NOT NULL value`. When I wanted to figure this out I discovered\nsomething that seems to be a bug:\n\n```\n=# create domain connotnull1 integer;\n=# create domain connotnull2 integer;\n=# alter domain connotnull1 add not null value;\n=# alter domain connotnull2 set not null;\n=# \\dD\nERROR: unexpected null value in cached tuple for catalog\npg_constraint column conkey\n```\n\nAlso it turned out that I can do both: `SET NOT NULL` and `ADD NOT\nNULL value` for the same domain. Is it an intended behavior? We should\neither forbid it or cover this case with a test.\n\nNOT VALID is not supported:\n\n```\n=# alter domain connotnull add not null value not valid;\nERROR: NOT NULL constraints cannot be marked NOT VALID\n```\n\n... and this is correct: \"NOT VALID is only accepted for CHECK\nconstraints\" [1]. This code path however doesn't seem to be\ntest-covered even on `master`. While on it, I suggest fixing this.\n\n[1]: https://www.postgresql.org/docs/current/sql-alterdomain.html\n[2]: https://www.postgresql.org/docs/current/sql-createdomain.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 23 Nov 2023 16:13:30 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 2023-Nov-23, Aleksander Alekseev wrote:\n\n> Interestingly enough according to the documentation this syntax is\n> already supported [1][2], but the actual query will fail on `master`:\n> \n> ```\n> =# create domain connotnull integer;\n> CREATE DOMAIN\n> =# alter domain connotnull add not null value;\n> ERROR: unrecognized constraint subtype: 1\n> ```\n\nHah, nice ... this only fails in this way on master, though, as a\nside-effect of previous NOT NULL work during this cycle. So if we take\nPeter's patch, we don't need to worry about it. In 16 it behaves\nproperly, with a normal syntax error.\n\n> ```\n> =# create domain connotnull1 integer;\n> =# create domain connotnull2 integer;\n> =# alter domain connotnull1 add not null value;\n> =# alter domain connotnull2 set not null;\n> =# \\dD\n> ERROR: unexpected null value in cached tuple for catalog\n> pg_constraint column conkey\n> ```\n\nThis is also a master-only problem, as \"add not null\" is rejected in 16\nwith a syntax error (and obviously \\dD doesn't fail).\n\n> NOT VALID is not supported:\n> \n> ```\n> =# alter domain connotnull add not null value not valid;\n> ERROR: NOT NULL constraints cannot be marked NOT VALID\n> ```\n\nYeah, it'll take more work to let NOT NULL constraints be marked NOT\nVALID, both on domains and on tables. It'll be a good feature for sure.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La victoria es para quien se atreve a estar solo\"\n\n\n",
"msg_date": "Thu, 23 Nov 2023 17:35:25 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 2023-Nov-23, Peter Eisentraut wrote:\n\n> This patch set applies the explicit catalog representation of not-null\n> constraints introduced by b0e96f3119 for table constraints also to domain\n> not-null constraints.\n\nI like the idea of having domain not-null constraints appear in\npg_constraint.\n\n> Since there is no inheritance or primary keys etc., this is much simpler and\n> just applies the existing infrastructure to domains as well.\n\nIf you create a table with column of domain that has a NOT NULL\nconstraint, what happens? I mean, is the table column marked\nattnotnull, and how does it behave? Is there a separate pg_constraint\nrow for the constraint in the table? What happens if you do\nALTER TABLE ... DROP NOT NULL for that column?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Por suerte hoy explotó el califont porque si no me habría muerto\n de aburrido\" (Papelucho)\n\n\n",
"msg_date": "Thu, 23 Nov 2023 17:38:30 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 23.11.23 17:38, Alvaro Herrera wrote:\n> If you create a table with column of domain that has a NOT NULL\n> constraint, what happens? I mean, is the table column marked\n> attnotnull, and how does it behave?\n\nNo, the domain does not affect the catalog entry for the column. This \nis the same way it behaves now.\n\n> Is there a separate pg_constraint\n> row for the constraint in the table? What happens if you do\n> ALTER TABLE ... DROP NOT NULL for that column?\n\nThose are separate. After dropping the NOT NULL for a column, null \nvalues for the column could still be rejected by a domain. (This is the \nsame way CHECK constraints work.)\n\n\n\n",
"msg_date": "Mon, 27 Nov 2023 08:08:08 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 23.11.23 14:13, Aleksander Alekseev wrote:\n> =# create domain connotnull1 integer;\n> =# create domain connotnull2 integer;\n> =# alter domain connotnull1 add not null value;\n> =# alter domain connotnull2 set not null;\n> =# \\dD\n> ERROR: unexpected null value in cached tuple for catalog\n> pg_constraint column conkey\n\nYeah, for domain not-null constraints pg_constraint.conkey is indeed \nnull. Should we put something in there?\n\nAttached is an updated patch that avoids the error by taking a separate \ncode path for domain constraints in ruleutils.c. But maybe there is \nanother way to arrange this.",
"msg_date": "Tue, 28 Nov 2023 20:43:54 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On Wed, 29 Nov 2023 at 01:14, Peter Eisentraut <[email protected]> wrote:\n>\n> On 23.11.23 14:13, Aleksander Alekseev wrote:\n> > =# create domain connotnull1 integer;\n> > =# create domain connotnull2 integer;\n> > =# alter domain connotnull1 add not null value;\n> > =# alter domain connotnull2 set not null;\n> > =# \\dD\n> > ERROR: unexpected null value in cached tuple for catalog\n> > pg_constraint column conkey\n>\n> Yeah, for domain not-null constraints pg_constraint.conkey is indeed\n> null. Should we put something in there?\n>\n> Attached is an updated patch that avoids the error by taking a separate\n> code path for domain constraints in ruleutils.c. But maybe there is\n> another way to arrange this.\n\nOne of the test has failed in CFBot at [1] with:\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/domain.out\n/tmp/cirrus-ci-build/src/test/regress/results/domain.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/domain.out\n2024-01-14 15:40:01.793434601 +0000\n+++ /tmp/cirrus-ci-build/src/test/regress/results/domain.out\n2024-01-14 15:42:23.013332625 +0000\n@@ -1271,11 +1271,4 @@\n FROM information_schema.domain_constraints\n WHERE domain_name IN ('con', 'dom', 'pos_int', 'things'))\n ORDER BY constraint_name;\n- constraint_catalog | constraint_schema | constraint_name | check_clause\n---------------------+-------------------+------------------+-------------------\n- regression | public | con_check | (VALUE > 0)\n- regression | public | meow | (VALUE < 11)\n- regression | public | pos_int_check | (VALUE > 0)\n- regression | public | pos_int_not_null | VALUE IS NOT NULL\n-(4 rows)\n-\n+ERROR: could not open relation with OID 36379\n\n[1] - https://cirrus-ci.com/task/4536440638406656\n[2] - https://api.cirrus-ci.com/v1/artifact/task/4536440638406656/log/src/test/regress/regression.diffs\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 17 Jan 2024 17:45:29 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 17.01.24 13:15, vignesh C wrote:\n> One of the test has failed in CFBot at [1] with:\n> diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/domain.out\n> /tmp/cirrus-ci-build/src/test/regress/results/domain.out\n> --- /tmp/cirrus-ci-build/src/test/regress/expected/domain.out\n> 2024-01-14 15:40:01.793434601 +0000\n> +++ /tmp/cirrus-ci-build/src/test/regress/results/domain.out\n> 2024-01-14 15:42:23.013332625 +0000\n> @@ -1271,11 +1271,4 @@\n> FROM information_schema.domain_constraints\n> WHERE domain_name IN ('con', 'dom', 'pos_int', 'things'))\n> ORDER BY constraint_name;\n> - constraint_catalog | constraint_schema | constraint_name | check_clause\n> ---------------------+-------------------+------------------+-------------------\n> - regression | public | con_check | (VALUE > 0)\n> - regression | public | meow | (VALUE < 11)\n> - regression | public | pos_int_check | (VALUE > 0)\n> - regression | public | pos_int_not_null | VALUE IS NOT NULL\n> -(4 rows)\n> -\n> +ERROR: could not open relation with OID 36379\n> \n> [1] - https://cirrus-ci.com/task/4536440638406656\n> [2] - https://api.cirrus-ci.com/v1/artifact/task/4536440638406656/log/src/test/regress/regression.diffs\n\nInteresting. I couldn't reproduce this locally, even across different \noperating systems. The cfbot failures appear to be sporadic, but also \nhappening across multiple systems, so it's clearly not just a local \nenvironment failure. Can anyone else perhaps reproduce this locally?\n\n\n\n",
"msg_date": "Thu, 18 Jan 2024 07:53:57 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 18.01.24 07:53, Peter Eisentraut wrote:\n> On 17.01.24 13:15, vignesh C wrote:\n>> One of the test has failed in CFBot at [1] with:\n>> diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/domain.out\n>> /tmp/cirrus-ci-build/src/test/regress/results/domain.out\n>> --- /tmp/cirrus-ci-build/src/test/regress/expected/domain.out\n>> 2024-01-14 15:40:01.793434601 +0000\n>> +++ /tmp/cirrus-ci-build/src/test/regress/results/domain.out\n>> 2024-01-14 15:42:23.013332625 +0000\n>> @@ -1271,11 +1271,4 @@\n>> FROM information_schema.domain_constraints\n>> WHERE domain_name IN ('con', 'dom', 'pos_int', 'things'))\n>> ORDER BY constraint_name;\n>> - constraint_catalog | constraint_schema | constraint_name | \n>> check_clause\n>> ---------------------+-------------------+------------------+-------------------\n>> - regression | public | con_check | (VALUE > 0)\n>> - regression | public | meow | (VALUE < \n>> 11)\n>> - regression | public | pos_int_check | (VALUE > 0)\n>> - regression | public | pos_int_not_null | VALUE IS \n>> NOT NULL\n>> -(4 rows)\n>> -\n>> +ERROR: could not open relation with OID 36379\n>>\n>> [1] - https://cirrus-ci.com/task/4536440638406656\n>> [2] - \n>> https://api.cirrus-ci.com/v1/artifact/task/4536440638406656/log/src/test/regress/regression.diffs\n> \n> Interesting. I couldn't reproduce this locally, even across different \n> operating systems. The cfbot failures appear to be sporadic, but also \n> happening across multiple systems, so it's clearly not just a local \n> environment failure. Can anyone else perhaps reproduce this locally?\n\nThis patch set needed a rebase, so here it is.\n\nAbout the sporadic test failure above, I think that is an existing issue \nthat is just now exposed through some test timing changes. The \npg_get_expr() function used in information_schema.check_constraints has \nno locking against concurrent drops of tables. 
I think in this \nparticular case, the tests \"domain\" and \"alter_table\" are prone to this \nconflict. If I move \"domain\" to a separate test group, the issue goes \naway. I'll start a separate discussion about this issue.",
"msg_date": "Wed, 7 Feb 2024 09:10:52 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On Wed, Feb 7, 2024 at 4:11 PM Peter Eisentraut <[email protected]> wrote:\n>\n> >\n> > Interesting. I couldn't reproduce this locally, even across different\n> > operating systems. The cfbot failures appear to be sporadic, but also\n> > happening across multiple systems, so it's clearly not just a local\n> > environment failure. Can anyone else perhaps reproduce this locally?\n>\n> This patch set needed a rebase, so here it is.\n>\ndo you think\nadd following\nALTER DOMAIN <replaceable class=\"parameter\">name</replaceable> ADD NOT\nNULL VALUE\n\nto doc/src/sgml/ref/alter_domain.sgml synopsis makes sense?\notherwise it would be hard to find out this command, i think.\n\n\nI think I found a bug.\nconnotnull already set to not null.\nevery execution of `alter domain connotnull add not null value ;`\nwould concatenate 'NOT NULL VALUE' for the \"Check\" column,\nThat means changes in the function pg_get_constraintdef_worker are not\n100% correct.\nsee below demo:\n\n\nsrc8=# \\dD+\n List of domains\n Schema | Name | Type | Collation | Nullable | Default |\nCheck | Access privileges | Description\n--------+------------+---------+-----------+----------+---------+----------------+-------------------+-------------\n public | connotnull | integer | | | | NOT\nNULL VALUE | |\n public | nnint | integer | | not null | | NOT\nNULL VALUE | |\n(2 rows)\n\nsrc8=# alter domain connotnull add not null value ;\nALTER DOMAIN\nsrc8=# \\dD+\n List of domains\n Schema | Name | Type | Collation | Nullable | Default |\n Check | Access privileges | Descript\nion\n--------+------------+---------+-----------+----------+---------+-------------------------------+-------------------+---------\n----\n public | connotnull | integer | | not null | | NOT\nNULL VALUE NOT NULL VALUE | |\n public | nnint | integer | | not null | | NOT\nNULL VALUE | |\n(2 rows)\n\nsrc8=# alter domain connotnull add not null value ;\nALTER DOMAIN\nsrc8=# \\dD+\n List of domains\n Schema | Name | Type | 
Collation | Nullable | Default |\n Check | Access privil\neges | Description\n--------+------------+---------+-----------+----------+---------+----------------------------------------------+--------------\n-----+-------------\n public | connotnull | integer | | not null | | NOT\nNULL VALUE NOT NULL VALUE NOT NULL VALUE |\n |\n public | nnint | integer | | not null | | NOT\nNULL VALUE |\n |\n(2 rows)\n\n\n",
"msg_date": "Thu, 8 Feb 2024 20:17:09 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 08.02.24 13:17, jian he wrote:\n> I think I found a bug.\n> connotnull already set to not null.\n> every execution of `alter domain connotnull add not null value ;`\n> would concatenate 'NOT NULL VALUE' for the \"Check\" column,\n\nI would have expected that. Each invocation adds a new constraint.\n\nBut I see that table constraints do not work that way. A command like \nALTER TABLE t1 ADD NOT NULL c1 does nothing if the column already has a \nNOT NULL constraint. I'm not sure this is correct. At least it's not \ndocumented. We should probably make the domains feature work the same \nway, but I would like to understand why it works that way first.\n\n\n\n\n",
"msg_date": "Sun, 11 Feb 2024 22:10:10 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> But I see that table constraints do not work that way. A command like \n> ALTER TABLE t1 ADD NOT NULL c1 does nothing if the column already has a \n> NOT NULL constraint. I'm not sure this is correct. At least it's not \n> documented. We should probably make the domains feature work the same \n> way, but I would like to understand why it works that way first.\n\nThat's probably a hangover from when the underlying state was just\na boolean (attnotnull). Still, I'm a little hesitant to change the\nbehavior. I do agree that named constraints need to \"stack\", so\nthat you'd have to remove each one before not-nullness stops being\nenforced. Less sure about unnamed properties.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Feb 2024 16:42:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 2024-Feb-11, Peter Eisentraut wrote:\n\n> But I see that table constraints do not work that way. A command like ALTER\n> TABLE t1 ADD NOT NULL c1 does nothing if the column already has a NOT NULL\n> constraint. I'm not sure this is correct. At least it's not documented.\n> We should probably make the domains feature work the same way, but I would\n> like to understand why it works that way first.\n\nIt's an intentional design decision actually; I had it creating multiple\nconstraints at first, but it had some ugly problems, so I made it behave\nthis way (which was no small amount of changes). I think the first time\nI posted an implementation that worked this way was here\nhttps://postgr.es/m/[email protected]\n\nand then we debated it again later, starting at the bottom of\nhttps://www.postgresql.org/message-id/flat/CAEZATCUA_iPo5kqUun4myghoZtgqbY3jx62%3DGwcYKRMmxFUq_g%40mail.gmail.com#482db1d21bcf8a4c3ef4fbee609810f4\nA few messages later, I quoted the SQL standard for DROP NOT NULL, which\nis pretty clear that if you run that command, then the column becomes\npossibly nullable, which means that we'd have to drop all matching\nconstraints, or something.\n\nThe main source of nastiness, when we allow multiple constraints, is\nconstraint inheritance. If we allow just one constraint per column,\nthen it's always easy to know what to do on inheritance attach and\ndetach: just coninhcount+1 or coninhcount-1 of the one relevant\nconstraint (which can be matched by column name). If we have multiple\nones, we have to know which one(s) to match and how (by constraint\nname?); if the parent has two and the child has one, we need to create\nanother in the child, with its own coninhcount adjustments; if the\nparent has one named parent_col_not_null and the child also has\nchild_col_not_null, then at ADD INHERIT do we match these ignoring the\ndiffering name, or do we rename the one on child so that we now have\ntwo? 
Also, the clutter in psql/pg_dump becomes worse.\n\nI would suggest that domain not-null constraints should also allow just\none per column.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"People get annoyed when you try to debug them.\" (Larry Wall)\n\n\n",
"msg_date": "Mon, 12 Feb 2024 11:24:13 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "wandering around the function AlterDomainNotNull,\nthe following code can fix the previous undesired behavior.\nseems pretty simple, am I missing something?\nbased on v3-0001-Add-tests-for-domain-related-information-schema-v.patch\nand v3-0002-Catalog-domain-not-null-constraints.patch\n\ndiff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c\nindex 2f94e375..9069465a 100644\n--- a/src/backend/commands/typecmds.c\n+++ b/src/backend/commands/typecmds.c\n@@ -2904,7 +2904,7 @@ AlterDomainAddConstraint(List *names, Node *newConstraint,\n Form_pg_type typTup;\n Constraint *constr;\n char *ccbin;\n- ObjectAddress address;\n+ ObjectAddress address = InvalidObjectAddress;\n\n /* Make a TypeName so we can use standard type lookup machinery */\n typename = makeTypeNameFromNameList(names);\n@@ -3003,6 +3003,12 @@ AlterDomainAddConstraint(List *names, Node\n*newConstraint,\n }\n else if (constr->contype == CONSTR_NOTNULL)\n {\n+ /* Is the domain already set NOT NULL */\n+ if (typTup->typnotnull)\n+ {\n+ table_close(typrel, RowExclusiveLock);\n+ return address;\n+ }\n domainAddNotNullConstraint(domainoid, typTup->typnamespace,\ntypTup->typbasetype, typTup->typtypmod,\nconstr, NameStr(typTup->typname), constrAddr);\n\n\n",
"msg_date": "Wed, 21 Feb 2024 16:01:16 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 12.02.24 11:24, Alvaro Herrera wrote:\n> On 2024-Feb-11, Peter Eisentraut wrote:\n>> But I see that table constraints do not work that way. A command like ALTER\n>> TABLE t1 ADD NOT NULL c1 does nothing if the column already has a NOT NULL\n>> constraint. I'm not sure this is correct. At least it's not documented.\n>> We should probably make the domains feature work the same way, but I would\n>> like to understand why it works that way first.\n\n> The main source of nastiness, when we allow multiple constraints, is\n> constraint inheritance. If we allow just one constraint per column,\n> then it's always easy to know what to do on inheritance attach and\n> detach: just coninhcount+1 or coninhcount-1 of the one relevant\n> constraint (which can be matched by column name). If we have multiple\n> ones, we have to know which one(s) to match and how (by constraint\n> name?); if the parent has two and the child has one, we need to create\n> another in the child, with its own coninhcount adjustments; if the\n> parent has one named parent_col_not_null and the child also has\n> child_col_not_null, then at ADD INHERIT do we match these ignoring the\n> differing name, or do we rename the one on child so that we now have\n> two? Also, the clutter in psql/pg_dump becomes worse.\n> \n> I would suggest that domain not-null constraints should also allow just\n> one per column.\n\nPerhaps it would make sense if we change the ALTER TABLE command to be like\n\n ALTER TABLE t1 ADD IF NOT EXISTS NOT NULL c1\n\nThen the behavior is like one would expect.\n\nFor ALTER TABLE, we would reject this command if IF NOT EXISTS is not \nspecified. (Since this is mainly for pg_dump, it doesn't really matter \nfor usability.) For ALTER DOMAIN, we could accept both variants.\n\n\n\n",
"msg_date": "Thu, 14 Mar 2024 13:55:11 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 2024-Mar-14, Peter Eisentraut wrote:\n\n> Perhaps it would make sense if we change the ALTER TABLE command to be like\n> \n> ALTER TABLE t1 ADD IF NOT EXISTS NOT NULL c1\n> \n> Then the behavior is like one would expect.\n> \n> For ALTER TABLE, we would reject this command if IF NOT EXISTS is not\n> specified. (Since this is mainly for pg_dump, it doesn't really matter for\n> usability.) For ALTER DOMAIN, we could accept both variants.\n\nI don't understand why you want to change this behavior, though.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La victoria es para quien se atreve a estar solo\"\n\n\n",
"msg_date": "Thu, 14 Mar 2024 15:03:07 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 14.03.24 15:03, Alvaro Herrera wrote:\n> On 2024-Mar-14, Peter Eisentraut wrote:\n> \n>> Perhaps it would make sense if we change the ALTER TABLE command to be like\n>>\n>> ALTER TABLE t1 ADD IF NOT EXISTS NOT NULL c1\n>>\n>> Then the behavior is like one would expect.\n>>\n>> For ALTER TABLE, we would reject this command if IF NOT EXISTS is not\n>> specified. (Since this is mainly for pg_dump, it doesn't really matter for\n>> usability.) For ALTER DOMAIN, we could accept both variants.\n> \n> I don't understand why you want to change this behavior, though.\n\nBecause in the abstract, the behavior of\n\n ALTER TABLE t1 ADD <constraint specification>\n\nshould be to add a constraint.\n\nIn the current implementation, the behavior is different for different \nconstraint types.\n\n\n\n",
"msg_date": "Thu, 14 Mar 2024 15:35:52 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "Anyway, in order to move this forward, here is an updated patch where \nthe ADD CONSTRAINT ... NOT NULL behavior for domains matches the \nidempotent behavior of tables. This uses the patch that Jian He posted.",
"msg_date": "Mon, 18 Mar 2024 08:46:28 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "Hi,\n\n> Anyway, in order to move this forward, here is an updated patch where\n> the ADD CONSTRAINT ... NOT NULL behavior for domains matches the\n> idempotent behavior of tables. This uses the patch that Jian He posted.\n\nI tested the patch on Raspberry Pi 5 and Intel MacBook and also\nexperimented with it. Everything seems to work properly.\n\nPersonally I believe new functions such as\nvalidateDomainNotNullConstraint() and findDomainNotNullConstraint()\ncould use a few lines of comments (accepts..., returns..., etc). Also\nI think that the commit message should explicitly say that supporting\nNOT VALID constraints is out of scope of this patch.\n\nExcept for named nitpicks v4 LGTM.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 18 Mar 2024 13:02:17 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "create domain connotnull integer;\ncreate table domconnotnulltest\n( col1 connotnull\n, col2 connotnull\n);\nalter domain connotnull add not null value;\n---------------------------\nthe above query does not work in pg16.\nERROR: syntax error at or near \"not\".\n\nafter applying the patch, now this works.\nthis new syntax need to be added into the alter_domain.sgml's synopsis and also\nneed an explanation varlistentry?\n\n+ /*\n+ * Store the constraint in pg_constraint\n+ */\n+ ccoid =\n+ CreateConstraintEntry(constr->conname, /* Constraint Name */\n+ domainNamespace, /* namespace */\n+ CONSTRAINT_NOTNULL, /* Constraint Type */\n+ false, /* Is Deferrable */\n+ false, /* Is Deferred */\n+ !constr->skip_validation, /* Is Validated */\n+ InvalidOid, /* no parent constraint */\n+ InvalidOid, /* not a relation constraint */\n+ NULL,\n+ 0,\n+ 0,\n+ domainOid, /* domain constraint */\n+ InvalidOid, /* no associated index */\n+ InvalidOid, /* Foreign key fields */\n+ NULL,\n+ NULL,\n+ NULL,\n+ NULL,\n+ 0,\n+ ' ',\n+ ' ',\n+ NULL,\n+ 0,\n+ ' ',\n+ NULL, /* not an exclusion constraint */\n+ NULL,\n+ NULL,\n+ true, /* is local */\n+ 0, /* inhcount */\n+ false, /* connoinherit */\n+ false, /* conwithoutoverlaps */\n+ false); /* is_internal */\n\n/* conwithoutoverlaps */\nshould be\n/* conperiod */\n\n\n",
"msg_date": "Tue, 19 Mar 2024 17:57:57 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 18.03.24 11:02, Aleksander Alekseev wrote:\n> Hi,\n> \n>> Anyway, in order to move this forward, here is an updated patch where\n>> the ADD CONSTRAINT ... NOT NULL behavior for domains matches the\n>> idempotent behavior of tables. This uses the patch that Jian He posted.\n> \n> I tested the patch on Raspberry Pi 5 and Intel MacBook and also\n> experimented with it. Everything seems to work properly.\n> \n> Personally I believe new functions such as\n> validateDomainNotNullConstraint() and findDomainNotNullConstraint()\n> could use a few lines of comments (accepts..., returns..., etc).\n\nDone.\n\n> Also\n> I think that the commit message should explicitly say that supporting\n> NOT VALID constraints is out of scope of this patch.\n\nNot done. I don't know what NOT VALID has to do with this. This patch \njust changes the internal catalog representation, it doesn't claim to \nadd or change any features. The documentation already accurately states \nthat NOT VALID is not supported for NOT NULL constraints.\n\n> Except for named nitpicks v4 LGTM.\n\nCommitted, thanks.\n\n\n\n",
"msg_date": "Wed, 20 Mar 2024 10:41:33 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 19.03.24 10:57, jian he wrote:\n> this new syntax need to be added into the alter_domain.sgml's synopsis and also\n> need an explanation varlistentry?\n\nThe ALTER DOMAIN reference page refers to CREATE DOMAIN about the \ndetails of the constraint syntax. I believe this is still accurate. We \ncould add more detail locally on the ALTER DOMAIN page, but that is not \nthis patch's job. For example, the details of CHECK constraints are \nalso not shown on the ALTER DOMAIN page right now.\n\n> + false, /* connoinherit */\n> + false, /* conwithoutoverlaps */\n> + false); /* is_internal */\n> \n> /* conwithoutoverlaps */\n> should be\n> /* conperiod */\n\nGood catch, thanks.\n\n\n",
"msg_date": "Wed, 20 Mar 2024 10:43:47 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On Wed, 20 Mar 2024 at 09:43, Peter Eisentraut <[email protected]> wrote:\n>\n> On 19.03.24 10:57, jian he wrote:\n> > this new syntax need to be added into the alter_domain.sgml's synopsis and also\n> > need an explanation varlistentry?\n>\n> The ALTER DOMAIN reference page refers to CREATE DOMAIN about the\n> details of the constraint syntax. I believe this is still accurate. We\n> could add more detail locally on the ALTER DOMAIN page, but that is not\n> this patch's job. For example, the details of CHECK constraints are\n> also not shown on the ALTER DOMAIN page right now.\n>\n\nHmm, for CHECK constraints, the ALTER DOMAIN syntax for adding a\nconstraint is the same as for CREATE DOMAIN, but that's not the case\nfor NOT NULL constraints. So, for example, these both work:\n\nCREATE DOMAIN d AS int CONSTRAINT c1 CHECK (value > 0);\n\nALTER DOMAIN d ADD CONSTRAINT c2 CHECK (value < 10);\n\nHowever, for NOT NULL constraints, the ALTER DOMAIN syntax differs\nfrom the CREATE DOMAIN syntax, because it expects \"NOT NULL\" to be\nfollowed by a column name. So the following CREATE DOMAIN syntax\nworks:\n\nCREATE DOMAIN d AS int CONSTRAINT nn NOT NULL;\n\nbut the equivalent ALTER DOMAIN syntax doesn't work:\n\nALTER DOMAIN d ADD CONSTRAINT nn NOT NULL;\n\nERROR: syntax error at or near \";\"\nLINE 1: ALTER DOMAIN d ADD CONSTRAINT nn NOT NULL;\n ^\n\nAll the examples in the tests append \"value\" to this, presumably by\nanalogy with CHECK constraints, but it looks as though anything works,\nand is simply ignored:\n\nALTER DOMAIN d ADD CONSTRAINT nn NOT NULL xxx; -- works\n\nThat doesn't seem particularly satisfactory. I think it should not\nrequire (and reject) a column name after \"NOT NULL\".\n\nLooking in the SQL spec, it seems to only mention adding CHECK\nconstraints to domains, so the option to add NOT NULL constraints\nshould probably be listed in the \"Compatibility\" section.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 20 Mar 2024 11:22:58 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 20.03.24 12:22, Dean Rasheed wrote:\n> Hmm, for CHECK constraints, the ALTER DOMAIN syntax for adding a\n> constraint is the same as for CREATE DOMAIN, but that's not the case\n> for NOT NULL constraints. So, for example, these both work:\n> \n> CREATE DOMAIN d AS int CONSTRAINT c1 CHECK (value > 0);\n> \n> ALTER DOMAIN d ADD CONSTRAINT c2 CHECK (value < 10);\n> \n> However, for NOT NULL constraints, the ALTER DOMAIN syntax differs\n> from the CREATE DOMAIN syntax, because it expects \"NOT NULL\" to be\n> followed by a column name. So the following CREATE DOMAIN syntax\n> works:\n> \n> CREATE DOMAIN d AS int CONSTRAINT nn NOT NULL;\n> \n> but the equivalent ALTER DOMAIN syntax doesn't work:\n> \n> ALTER DOMAIN d ADD CONSTRAINT nn NOT NULL;\n> \n> ERROR: syntax error at or near \";\"\n> LINE 1: ALTER DOMAIN d ADD CONSTRAINT nn NOT NULL;\n> ^\n> \n> All the examples in the tests append \"value\" to this, presumably by\n> analogy with CHECK constraints, but it looks as though anything works,\n> and is simply ignored:\n> \n> ALTER DOMAIN d ADD CONSTRAINT nn NOT NULL xxx; -- works\n> \n> That doesn't seem particularly satisfactory. I think it should not\n> require (and reject) a column name after \"NOT NULL\".\n\nHmm. CREATE DOMAIN uses column constraint syntax, but ALTER DOMAIN uses \ntable constraint syntax. As long as you are only dealing with CHECK \nconstraints, there is no difference, but it shows up when using NOT NULL \nconstraint syntax. I agree that this is unsatisfactory. Attached is a \npatch to try to sort this out.\n\n> Looking in the SQL spec, it seems to only mention adding CHECK\n> constraints to domains, so the option to add NOT NULL constraints\n> should probably be listed in the \"Compatibility\" section.\n\n<canofworms>\n\nA quick reading of the SQL standard suggests to me that the way we are \ndoing null handling in domain constraints is all wrong. 
The standard \nsays that domain constraints are only checked on values that are not \nnull. So both the handling of constraints using the CHECK syntax is \nnonstandard and the existence of explicit NOT NULL constraints is an \nextension. The CREATE DOMAIN reference page already explains why all of \nthis is a bad idea. Do we want to document all of that further, or \nmaybe we just want to rip out domain not-null constraints, or at least \nnot add further syntax for it?\n\n</canofworms>",
"msg_date": "Thu, 21 Mar 2024 12:23:34 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> <canofworms>\n> A quick reading of the SQL standard suggests to me that the way we are \n> doing null handling in domain constraints is all wrong. The standard \n> says that domain constraints are only checked on values that are not \n> null. So both the handling of constraints using the CHECK syntax is \n> nonstandard and the existence of explicit NOT NULL constraints is an \n> extension. The CREATE DOMAIN reference page already explains why all of \n> this is a bad idea. Do we want to document all of that further, or \n> maybe we just want to rip out domain not-null constraints, or at least \n> not add further syntax for it?\n> </canofworms>\n\nYeah. The real problem with domain not null is: how can a column\nthat's propagated up through the nullable side of an outer join\nstill be considered to belong to such a domain?\n\nThe SQL spec's answer to that conundrum appears to be \"NULL is\na valid value of every domain, and if you don't like it, tough\".\nI'm too lazy to search the archives, but we have had at least one\nprevious discussion about how we should adopt the spec's semantics.\nIt'd be an absolutely trivial fix in CoerceToDomain (succeed\nimmediately if input is NULL), but the question is what to do\nwith existing \"DOMAIN NOT NULL\" DDL.\n\nAnyway, now that I recall all that, e5da0fe3c is throwing good work\nafter bad, and I wonder if we shouldn't revert it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Mar 2024 10:30:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On Thu, 21 Mar 2024 at 10:30, Tom Lane <[email protected]> wrote:\n\n\n> The SQL spec's answer to that conundrum appears to be \"NULL is\n> a valid value of every domain, and if you don't like it, tough\".\n>\n\nTo be fair, NULL is a valid value of every type. Even VOID has NULL.\n\nIn this context, it’s a bit weird to be able to decree up front when\ndefining a type that no table column of that type, anywhere, may ever\ncontain a NULL. It would be nice if there was a way to reverse the default\nso that if you (almost or) never want NULLs anywhere that’s what you get\nwithout saying \"NOT NULL\" all over the place, and instead just specify\n\"NULLABLE\" (or something) where you want. But that effectively means\noptionally changing the behaviour of CREATE TABLE and ALTER TABLE.\n\nOn Thu, 21 Mar 2024 at 10:30, Tom Lane <[email protected]> wrote: \nThe SQL spec's answer to that conundrum appears to be \"NULL is\na valid value of every domain, and if you don't like it, tough\".\nTo be fair, NULL is a valid value of every type. Even VOID has NULL.In this context, it’s a bit weird to be able to decree up front when defining a type that no table column of that type, anywhere, may ever contain a NULL. It would be nice if there was a way to reverse the default so that if you (almost or) never want NULLs anywhere that’s what you get without saying \"NOT NULL\" all over the place, and instead just specify \"NULLABLE\" (or something) where you want. But that effectively means optionally changing the behaviour of CREATE TABLE and ALTER TABLE.",
"msg_date": "Thu, 21 Mar 2024 13:36:25 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 3/21/24 15:30, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> <canofworms>\n>> A quick reading of the SQL standard suggests to me that the way we are\n>> doing null handling in domain constraints is all wrong. The standard\n>> says that domain constraints are only checked on values that are not\n>> null. So both the handling of constraints using the CHECK syntax is\n>> nonstandard and the existence of explicit NOT NULL constraints is an\n>> extension. The CREATE DOMAIN reference page already explains why all of\n>> this is a bad idea. Do we want to document all of that further, or\n>> maybe we just want to rip out domain not-null constraints, or at least\n>> not add further syntax for it?\n>> </canofworms>\n> \n> Yeah. The real problem with domain not null is: how can a column\n> that's propagated up through the nullable side of an outer join\n> still be considered to belong to such a domain?\n\n\nPer spec, it is not considered to be so. The domain only applies to \ntable storage and CASTs and gets \"forgotten\" in a query.\n\n\n> The SQL spec's answer to that conundrum appears to be \"NULL is\n> a valid value of every domain, and if you don't like it, tough\".\n\n\nI don't see how you can infer this from the standard at all.\n\n\n> I'm too lazy to search the archives, but we have had at least one\n> previous discussion about how we should adopt the spec's semantics.\n> It'd be an absolutely trivial fix in CoerceToDomain (succeed\n> immediately if input is NULL), but the question is what to do\n> with existing \"DOMAIN NOT NULL\" DDL.\n\n\nHere is a semi-random link into a conversation you and I have recently \nhad about this: \nhttps://www.postgresql.org/message-id/a13db59c-c68f-4a30-87a5-177fe135665e%40postgresfriends.org\n\nAs also said somewhere in that thread, I think that <cast specification> \nshort-cutting a NULL input value without considering the constraints of \na domain is a bug that needs to be fixed in the 
standard.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Thu, 21 Mar 2024 23:55:13 +0100",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "Vik Fearing <[email protected]> writes:\n> On 3/21/24 15:30, Tom Lane wrote:\n>> The SQL spec's answer to that conundrum appears to be \"NULL is\n>> a valid value of every domain, and if you don't like it, tough\".\n\n> I don't see how you can infer this from the standard at all.\n\nI believe where we got that from is 6.13 <cast specification>,\nwhich quoth (general rule 2):\n\n c) If SV is the null value, then the result of CS is the null\n value and no further General Rules of this Subclause are applied.\n\nIn particular, that short-circuits application of the domain\nconstraints (GR 23), implying that CAST(NULL AS some_domain) is\nalways successful. Now you could argue that there's some other\ncontext that would reject nulls, but being inconsistent with\nCAST would seem more like a bug than a feature.\n\n> As also said somewhere in that thread, I think that <cast specification> \n> short-cutting a NULL input value without considering the constraints of \n> a domain is a bug that needs to be fixed in the standard.\n\nI think it's probably intentional. It certainly fits with the lack of\nsyntax for DOMAIN NOT NULL. Also, it's been like that since SQL99;\ndo you think nobody's noticed it for 25 years?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Mar 2024 19:17:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 3/22/24 00:17, Tom Lane wrote:\n> Vik Fearing <[email protected]> writes:\n>> On 3/21/24 15:30, Tom Lane wrote:\n>>> The SQL spec's answer to that conundrum appears to be \"NULL is\n>>> a valid value of every domain, and if you don't like it, tough\".\n> \n>> I don't see how you can infer this from the standard at all.\n> \n> I believe where we got that from is 6.13 <cast specification>,\n> which quoth (general rule 2):\n> \n> c) If SV is the null value, then the result of CS is the null\n> value and no further General Rules of this Subclause are applied.\n> \n> In particular, that short-circuits application of the domain\n> constraints (GR 23), implying that CAST(NULL AS some_domain) is\n> always successful. Now you could argue that there's some other\n> context that would reject nulls, but being inconsistent with\n> CAST would seem more like a bug than a feature.\n\n\nI think the main bug is in what you quoted from <cast specification>.\n\nI believe that the POLA for casting to a domain is for all constraints \nof the domain to be verified for ALL values including the null value.\n\n\n>> As also said somewhere in that thread, I think that <cast specification>\n>> short-cutting a NULL input value without considering the constraints of\n>> a domain is a bug that needs to be fixed in the standard.\n> \n> I think it's probably intentional. It certainly fits with the lack of\n> syntax for DOMAIN NOT NULL. Also, it's been like that since SQL99;\n> do you think nobody's noticed it for 25 years?\n\n\nHaven't we (postgres) had bug reports of similar age?\n\nThere is also the possibility that no one has noticed because major \nplayers have not implemented domains. For example, Oracle only just got \nthem last year: \nhttps://blogs.oracle.com/coretec/post/less-coding-with-sql-domains-in-23c\n\nAnyway, I will bring this up with the committee and report back. 
My \nproposed solution will be for CAST to check domain constraints even if \nthe input is NULL.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 22 Mar 2024 00:38:09 +0100",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "Vik Fearing <[email protected]> writes:\n> On 3/22/24 00:17, Tom Lane wrote:\n>> Vik Fearing <[email protected]> writes:\n>>> As also said somewhere in that thread, I think that <cast specification>\n>>> short-cutting a NULL input value without considering the constraints of\n>>> a domain is a bug that needs to be fixed in the standard.\n\n>> I think it's probably intentional. It certainly fits with the lack of\n>> syntax for DOMAIN NOT NULL. Also, it's been like that since SQL99;\n>> do you think nobody's noticed it for 25 years?\n\n> Haven't we (postgres) had bug reports of similar age?\n\nWell, they've looked it at it since then. SQL99 has\n\n c) If SV is the null value, then the result is the null value.\n\nSQL:2008 and later have the text I quoted:\n\n c) If SV is the null value, then the result of CS is the null\n value and no further General Rules of this Sub-clause are\n applied.\n\nI find it *extremely* hard to believe that they would have added\nthat explicit text without noticing exactly which operations they\nwere saying to skip.\n\n> Anyway, I will bring this up with the committee and report back. My \n> proposed solution will be for CAST to check domain constraints even if \n> the input is NULL.\n\nPlease do not claim that that is the position of the Postgres project.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Mar 2024 20:46:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 3/22/24 01:46, Tom Lane wrote:\n> Vik Fearing <[email protected]> writes:\n>> Anyway, I will bring this up with the committee and report back. My\n>> proposed solution will be for CAST to check domain constraints even if\n>> the input is NULL.\n> \n> Please do not claim that that is the position of the Postgres project.\n\n\nEverything that I do on the SQL Committee is in my own name.\n\nI do not speak for either PostgreSQL or for EDB (my employer), even \nthough my opinions are of course often influenced by some combination of \nboth.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 22 Mar 2024 02:12:48 +0100",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On Thu, Mar 21, 2024 at 7:23 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 20.03.24 12:22, Dean Rasheed wrote:\n> > Hmm, for CHECK constraints, the ALTER DOMAIN syntax for adding a\n> > constraint is the same as for CREATE DOMAIN, but that's not the case\n> > for NOT NULL constraints. So, for example, these both work:\n> >\n> > CREATE DOMAIN d AS int CONSTRAINT c1 CHECK (value > 0);\n> >\n> > ALTER DOMAIN d ADD CONSTRAINT c2 CHECK (value < 10);\n> >\n> > However, for NOT NULL constraints, the ALTER DOMAIN syntax differs\n> > from the CREATE DOMAIN syntax, because it expects \"NOT NULL\" to be\n> > followed by a column name. So the following CREATE DOMAIN syntax\n> > works:\n> >\n> > CREATE DOMAIN d AS int CONSTRAINT nn NOT NULL;\n> >\n> > but the equivalent ALTER DOMAIN syntax doesn't work:\n> >\n> > ALTER DOMAIN d ADD CONSTRAINT nn NOT NULL;\n> >\n> > ERROR: syntax error at or near \";\"\n> > LINE 1: ALTER DOMAIN d ADD CONSTRAINT nn NOT NULL;\n> > ^\n> >\n> > All the examples in the tests append \"value\" to this, presumably by\n> > analogy with CHECK constraints, but it looks as though anything works,\n> > and is simply ignored:\n> >\n> > ALTER DOMAIN d ADD CONSTRAINT nn NOT NULL xxx; -- works\n> >\n> > That doesn't seem particularly satisfactory. I think it should not\n> > require (and reject) a column name after \"NOT NULL\".\n>\n> Hmm. CREATE DOMAIN uses column constraint syntax, but ALTER DOMAIN uses\n> table constraint syntax. As long as you are only dealing with CHECK\n> constraints, there is no difference, but it shows up when using NOT NULL\n> constraint syntax. I agree that this is unsatisfactory. 
Attached is a\n> patch to try to sort this out.\n>\n\n\n+ | NOT NULL_P ConstraintAttributeSpec\n+ {\n+ Constraint *n = makeNode(Constraint);\n+\n+ n->contype = CONSTR_NOTNULL;\n+ n->location = @1;\n+ n->keys = list_make1(makeString(\"value\"));\n+ /* no NOT VALID support yet */\n+ processCASbits($3, @3, \"NOT NULL\",\n+ NULL, NULL, NULL,\n+ &n->is_no_inherit, yyscanner);\n+ n->initially_valid = true;\n+ $$ = (Node *) n;\n+ }\n\ni don't understand this part.\n+ n->keys = list_make1(makeString(\"value\"));\n\nalso you should also change src/backend/utils/adt/ruleutils.c?\n\nsrc6=# create domain domain_test integer;\nalter domain domain_test add constraint pos1 check (value > 0);\nalter domain domain_test add constraint constr1 not null ;\nCREATE DOMAIN\nALTER DOMAIN\nALTER DOMAIN\nsrc6=# \\dD\n List of domains\n Schema | Name | Type | Collation | Nullable | Default |\n Check\n--------+-------------+---------+-----------+----------+---------+----------------------------------\n public | domain_test | integer | | not null | |\nCHECK (VALUE > 0) NOT NULL VALUE\n(1 row)\n\nprobably change to CHECK (VALUE IS NOT NULL)\n\n- appendStringInfoString(&buf,\n\"NOT NULL VALUE\");\n+ appendStringInfoString(&buf,\n\"CHECK (VALUE IS NOT NULL)\");\nseems works.\n\n\n",
"msg_date": "Fri, 22 Mar 2024 16:28:39 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On Fri, 22 Mar 2024 at 08:28, jian he <[email protected]> wrote:\n>\n> On Thu, Mar 21, 2024 at 7:23 PM Peter Eisentraut <[email protected]> wrote:\n> >\n> > Hmm. CREATE DOMAIN uses column constraint syntax, but ALTER DOMAIN uses\n> > table constraint syntax. Attached is a patch to try to sort this out.\n>\n> also you should also change src/backend/utils/adt/ruleutils.c?\n>\n> src6=# \\dD\n> List of domains\n> Schema | Name | Type | Collation | Nullable | Default |\n> Check\n> --------+-------------+---------+-----------+----------+---------+----------------------------------\n> public | domain_test | integer | | not null | |\n> CHECK (VALUE > 0) NOT NULL VALUE\n> (1 row)\n>\n> probably change to CHECK (VALUE IS NOT NULL)\n\nI'd say it should just output \"NOT NULL\", since that's the input\nsyntax that created the constraint. But then again, why display NOT\nNULL constraints in that column at all, when there's a separate\n\"Nullable\" column?\n\nAlso (not this patch's fault), psql doesn't seem to offer a way to\ndisplay domain constraint names -- something you need to know to drop\nor alter them. Perhaps \\dD+ could be made to do that?\n\n+ The syntax <literal>NOT NULL</literal> in this command is a\n+ <productname>PostgreSQL</productname> extension. (A standard-conforming\n+ way to write the same would be <literal>CHECK (VALUE IS NOT\n+ NULL)</literal>. However, per <xref linkend=\"sql-createdomain-notes\"/>,\n+ such constraints a best avoided in practice anyway.) 
The\n+ <literal>NULL</literal> <quote>constraint</quote> is a\n+ <productname>PostgreSQL</productname> extension (see also <xref\n+ linkend=\"sql-createtable-compatibility\"/>).\n\nI didn't verify this, but I thought that according to the SQL\nstandard, only non-NULL values should be passed to CHECK constraints,\nso there is no standard-conforming way to write a NOT NULL domain\nconstraint.\n\nFWIW, I think NOT NULL domain constraints are a useful feature to\nhave, and I suspect that there are more people out there who use them\nand like them, than who care what the SQL standard says. If so, I'm in\nfavour of allowing them to be named and managed in the same way as NOT\nNULL table constraints.\n\n+ processCASbits($5, @5, \"CHECK\",\n+ NULL, NULL, &n->skip_validation,\n+ &n->is_no_inherit, yyscanner);\n+ n->initially_valid = !n->skip_validation;\n\n+ /* no NOT VALID support yet */\n+ processCASbits($3, @3, \"NOT NULL\",\n+ NULL, NULL, NULL,\n+ &n->is_no_inherit, yyscanner);\n+ n->initially_valid = true;\n\nNO INHERIT is allowed for domain constraints? What does that even mean?\n\nThere's something very wonky about this:\n\nCREATE DOMAIN d1 AS int CHECK (value > 0) NO INHERIT; -- Rejected\nERROR: check constraints for domains cannot be marked NO INHERIT\n\nCREATE DOMAIN d1 AS int;\nALTER DOMAIN d1 ADD CHECK (value > 0) NO INHERIT; -- Allowed\n\nCREATE DOMAIN d2 AS int NOT NULL NO INHERIT; -- Now allowed (used to\nsyntax error)\n\nCREATE DOMAIN d3 AS int;\nALTER DOMAIN d3 ADD NOT NULL NO INHERIT; -- Allowed\n\nPresumably all of those should be rejected in the grammar.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 25 Mar 2024 18:28:28 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 2024-Mar-25, Dean Rasheed wrote:\n\n> Also (not this patch's fault), psql doesn't seem to offer a way to\n> display domain constraint names -- something you need to know to drop\n> or alter them. Perhaps \\dD+ could be made to do that?\n\nOoh, I remember we had offered a patch for \\d++ to display these\nconstraint names for tables, but didn't get around to gather consensus\nfor it. We did gather consensus on *not* wanting \\d+ to display them,\nbut we need *something*. I suppose we should do something symmetrical\nfor tables and domains. How about \\dD++ and \\dt++?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n",
"msg_date": "Tue, 26 Mar 2024 08:30:45 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On Tue, 26 Mar 2024 at 07:30, Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Mar-25, Dean Rasheed wrote:\n>\n> > Also (not this patch's fault), psql doesn't seem to offer a way to\n> > display domain constraint names -- something you need to know to drop\n> > or alter them. Perhaps \\dD+ could be made to do that?\n>\n> Ooh, I remember we had offered a patch for \\d++ to display these\n> constraint names for tables, but didn't get around to gather consensus\n> for it. We did gather consensus on *not* wanting \\d+ to display them,\n> but we need *something*. I suppose we should do something symmetrical\n> for tables and domains. How about \\dD++ and \\dt++?\n>\n\nPersonally, I quite like the fact that \\d+ displays NOT NULL\nconstraints, because it puts them on an equal footing with CHECK\nconstraints. However, I can appreciate that it will significantly\nincrease the length of the output in some cases.\n\nWith \\dD it's not so nice because of the way it puts all the details\non one line. The obvious output might look something like this:\n\n\\dD\n List of domains\n Schema | Name | Type | Collation | Nullable | Default | Check\n--------+------+---------+-----------+----------+---------+-------------------\n public | d1 | integer | | NOT NULL | | CHECK (VALUE > 0)\n\n\\dD+\n List of domains\n Schema | Name | Type | Collation | Nullable\n| Default | Check | Access privileges\n| Description\n--------+------+---------+-----------+---------------------------------+---------+---------------------------------------+-------------------+-------------\n public | d1 | integer | | CONSTRAINT d1_not_null NOT NULL\n| | CONSTRAINT d1_check CHECK (VALUE > 0) |\n|\n\nSo you'd need quite a wide window to easily view it (or use \\x). I\nsuppose the width could be reduced by dropping the word \"CONSTRAINT\"\nin the \\dD+ case, but it's probably still going to be wider than the\naverage window.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 26 Mar 2024 09:04:20 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 2:28 AM Dean Rasheed <[email protected]> wrote:\n>\n> On Fri, 22 Mar 2024 at 08:28, jian he <[email protected]> wrote:\n> >\n> > On Thu, Mar 21, 2024 at 7:23 PM Peter Eisentraut <[email protected]> wrote:\n> > >\n> > > Hmm. CREATE DOMAIN uses column constraint syntax, but ALTER DOMAIN uses\n> > > table constraint syntax. Attached is a patch to try to sort this out.\n> >\n> > also you should also change src/backend/utils/adt/ruleutils.c?\n> >\n> > src6=# \\dD\n> > List of domains\n> > Schema | Name | Type | Collation | Nullable | Default |\n> > Check\n> > --------+-------------+---------+-----------+----------+---------+----------------------------------\n> > public | domain_test | integer | | not null | |\n> > CHECK (VALUE > 0) NOT NULL VALUE\n> > (1 row)\n> >\n> > probably change to CHECK (VALUE IS NOT NULL)\n>\n> I'd say it should just output \"NOT NULL\", since that's the input\n> syntax that created the constraint. But then again, why display NOT\n> NULL constraints in that column at all, when there's a separate\n> \"Nullable\" column?\n>\ncreate table sss(a int not null);\nSELECT pg_get_constraintdef(oid) FROM pg_constraint WHERE conname =\n'sss_a_not_null';\nreturns\n\" NOT NULL a\"\n\nI think just outputting \"NOT NULL\" is ok for the domain, given the\ntable constraint is \"NOT NULL\" + table column, per above example.\nyech, we already have a \"Nullable\" column, so we don't need to display\n NOT NULL constraints.\n\n\n",
"msg_date": "Mon, 1 Apr 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 21.03.24 12:23, Peter Eisentraut wrote:\n>> All the examples in the tests append \"value\" to this, presumably by\n>> analogy with CHECK constraints, but it looks as though anything works,\n>> and is simply ignored:\n>>\n>> ALTER DOMAIN d ADD CONSTRAINT nn NOT NULL xxx; -- works\n>>\n>> That doesn't seem particularly satisfactory. I think it should not\n>> require (and reject) a column name after \"NOT NULL\".\n> \n> Hmm. CREATE DOMAIN uses column constraint syntax, but ALTER DOMAIN uses \n> table constraint syntax. As long as you are only dealing with CHECK \n> constraints, there is no difference, but it shows up when using NOT NULL \n> constraint syntax. I agree that this is unsatisfactory. Attached is a \n> patch to try to sort this out.\n\nAfter studying this a bit more, I think moving forward in this direction \nis the best way. Attached is a new patch version, mainly with a more \nelaborate commit message. This patch makes the not-null constraint \nsyntax consistent between CREATE DOMAIN and ALTER DOMAIN, and also makes \nthe respective documentation correct.\n\n(Note that, as I show in the commit message, commit e5da0fe3c22 had in \npassing fixed a couple of bugs in CREATE and ALTER DOMAIN, so just \nreverting that commit wouldn't be a complete solution.)",
"msg_date": "Mon, 8 Apr 2024 11:53:40 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 5:53 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 21.03.24 12:23, Peter Eisentraut wrote:\n> >> All the examples in the tests append \"value\" to this, presumably by\n> >> analogy with CHECK constraints, but it looks as though anything works,\n> >> and is simply ignored:\n> >>\n> >> ALTER DOMAIN d ADD CONSTRAINT nn NOT NULL xxx; -- works\n> >>\n> >> That doesn't seem particularly satisfactory. I think it should not\n> >> require (and reject) a column name after \"NOT NULL\".\n> >\n> > Hmm. CREATE DOMAIN uses column constraint syntax, but ALTER DOMAIN uses\n> > table constraint syntax. As long as you are only dealing with CHECK\n> > constraints, there is no difference, but it shows up when using NOT NULL\n> > constraint syntax. I agree that this is unsatisfactory. Attached is a\n> > patch to try to sort this out.\n>\n> After studying this a bit more, I think moving forward in this direction\n> is the best way. Attached is a new patch version, mainly with a more\n> elaborate commit message. 
This patch makes the not-null constraint\n> syntax consistent between CREATE DOMAIN and ALTER DOMAIN, and also makes\n> the respective documentation correct.\n>\n> (Note that, as I show in the commit message, commit e5da0fe3c22 had in\n> passing fixed a couple of bugs in CREATE and ALTER DOMAIN, so just\n> reverting that commit wouldn't be a complete solution.)\n\n\nin ruleutils.c\n/* conkey is null for domain not-null constraints */\nappendStringInfoString(&buf, \"NOT NULL VALUE\");\n\nshould be\n\n/* conkey is null for domain not-null constraints */\nappendStringInfoString(&buf, \"NOT NULL \");\n?\n\n\ncurrently\nsrc6=# \\dD connotnull\n/******** QUERY *********/\nSELECT n.nspname as \"Schema\",\n t.typname as \"Name\",\n pg_catalog.format_type(t.typbasetype, t.typtypmod) as \"Type\",\n (SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type bt\n WHERE c.oid = t.typcollation AND bt.oid = t.typbasetype AND\nt.typcollation <> bt.typcollation) as \"Collation\",\n CASE WHEN t.typnotnull THEN 'not null' END as \"Nullable\",\n t.typdefault as \"Default\",\n pg_catalog.array_to_string(ARRAY(\n SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM\npg_catalog.pg_constraint r WHERE t.oid = r.contypid\n ), ' ') as \"Check\"\nFROM pg_catalog.pg_type t\n LEFT JOIN pg_catalog.pg_namespace n ON n.oid = t.typnamespace\nWHERE t.typtype = 'd'\n AND t.typname OPERATOR(pg_catalog.~) '^(connotnull)$' COLLATE\npg_catalog.default\n AND pg_catalog.pg_type_is_visible(t.oid)\nORDER BY 1, 2;\n/************************/\n\n---\nSince the last column is already named as \"Check\", maybe we need to\nchange the query to\n pg_catalog.array_to_string(ARRAY(\n SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM\npg_catalog.pg_constraint r WHERE t.oid = r.contypid\n and r.contype = 'c'\n ), ' ') as \"Check\"\n\nThat means domain can be associated with check constraint and not-null\nconstraint.\n\n\n\nthe url link destination is fine, but the url rendered name is \"per\nthe 
section called “Notes”\" which seems strange,\nplease see attached.",
"msg_date": "Tue, 9 Apr 2024 16:44:18 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Catalog domain not-null constraints"
},
{
"msg_contents": "On 09.04.24 10:44, jian he wrote:\n>> After studying this a bit more, I think moving forward in this direction\n>> is the best way. Attached is a new patch version, mainly with a more\n>> elaborate commit message. This patch makes the not-null constraint\n>> syntax consistent between CREATE DOMAIN and ALTER DOMAIN, and also makes\n>> the respective documentation correct.\n>>\n>> (Note that, as I show in the commit message, commit e5da0fe3c22 had in\n>> passing fixed a couple of bugs in CREATE and ALTER DOMAIN, so just\n>> reverting that commit wouldn't be a complete solution.)\n\n> in ruleutils.c\n> /* conkey is null for domain not-null constraints */\n> appendStringInfoString(&buf, \"NOT NULL VALUE\");\n> \n> should be\n> \n> /* conkey is null for domain not-null constraints */\n> appendStringInfoString(&buf, \"NOT NULL \");\n\nGood catch, fixed.\n\n> currently\n> src6=# \\dD connotnull\n> /******** QUERY *********/\n> SELECT n.nspname as \"Schema\",\n> t.typname as \"Name\",\n> pg_catalog.format_type(t.typbasetype, t.typtypmod) as \"Type\",\n> (SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type bt\n> WHERE c.oid = t.typcollation AND bt.oid = t.typbasetype AND\n> t.typcollation <> bt.typcollation) as \"Collation\",\n> CASE WHEN t.typnotnull THEN 'not null' END as \"Nullable\",\n> t.typdefault as \"Default\",\n> pg_catalog.array_to_string(ARRAY(\n> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM\n> pg_catalog.pg_constraint r WHERE t.oid = r.contypid\n> ), ' ') as \"Check\"\n> FROM pg_catalog.pg_type t\n> LEFT JOIN pg_catalog.pg_namespace n ON n.oid = t.typnamespace\n> WHERE t.typtype = 'd'\n> AND t.typname OPERATOR(pg_catalog.~) '^(connotnull)$' COLLATE\n> pg_catalog.default\n> AND pg_catalog.pg_type_is_visible(t.oid)\n> ORDER BY 1, 2;\n> /************************/\n> \n> ---\n> Since the last column is already named as \"Check\", maybe we need to\n> change the query to\n> pg_catalog.array_to_string(ARRAY(\n> SELECT 
pg_catalog.pg_get_constraintdef(r.oid, true) FROM\n> pg_catalog.pg_constraint r WHERE t.oid = r.contypid\n> and r.contype = 'c'\n> ), ' ') as \"Check\"\n> \n> That means domain can be associated with check constraint and not-null\n> constraint.\n\nYes, I changed it like you wrote.\n\n> the url link destination is fine, but the url rendered name is \"per\n> the section called “Notes”\" which seems strange,\n> please see attached.\n\nHmm, changing that would be an independent project.\n\nI have committed the patch with the two amendments you provided.\n\nI had also added a test of \\dD and that caused some test output \ninstability, so I added an ORDER BY r.conname to the \\dD query.\n\n\n\n",
"msg_date": "Mon, 15 Apr 2024 09:43:08 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Catalog domain not-null constraints"
}
] |
[
{
"msg_contents": "While working on Materialize's streaming logical replication from Postgres [0],\nmy colleagues Sean Loiselle and Petros Angelatos (CC'd) discovered today what\nappears to be a correctness bug in pgoutput, introduced in v15.\n\nThe problem goes like this. A table with REPLICA IDENTITY FULL and some\ndata in it...\n\n CREATE TABLE t (a int);\n ALTER TABLE t REPLICA IDENTITY FULL;\n INSERT INTO t VALUES (1), (2), (3), ...;\n\n...undergoes a schema change to add a new column with a default:\n\n ALTER TABLE t ADD COLUMN b bool DEFAULT false NOT NULL;\n\nPostgreSQL is smart and does not rewrite the entire table during the schema\nchange. Instead it updates the tuple description to indicate to future readers\nof the table that if `b` is missing, it should be filled in with the default\nvalue, `false`.\n\nUnfortunately, since v15, pgoutput mishandles missing attributes. If a\ndownstream server is subscribed to changes from t via the pgoutput plugin, when\na row with a missing attribute is updated, e.g.:\n\n UPDATE t SET a = 2 WHERE a = 1\n\npgoutput will incorrectly report b's value as NULL in the old tuple, rather than\nfalse. Using the same example:\n\n old: a=1, b=NULL\n new: a=2, b=true\n\nThe subscriber will ignore the update (as it has no row with values\na=1, b=NULL), and thus the subscriber's copy of `t` will become out of sync with\nthe publisher's.\n\nI bisected the problem to 52e4f0cd4 [1], which introduced row filtering for\npublications. The problem appears to be the use of CreateTupleDescCopy where\nCreateTupleDescCopyConstr is required, as the former drops the constraints\nin the tuple description (specifically, the default value constraint) on the\nfloor.\n\nI've attached a patch which both fixes the issue and includes a test. 
I've\nverified that the test fails against the current master and passes against\nthe patched version.\n\nI'm relatively unfamiliar with the project norms here, but assuming the patch is\nacceptable, this strikes me as important enough to warrant a backport to both\nv15 and v16.\n\n[0]: https://materialize.com/docs/sql/create-source/postgres\n[1]: https://github.com/postgres/postgres/commit/52e4f0cd472d39d07732b99559989ea3b615be78",
"msg_date": "Thu, 23 Nov 2023 02:40:27 -0500",
"msg_from": "Nikhil Benesch <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgoutput incorrectly replaces missing values with NULL since\n PostgreSQL 15"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 1:10 PM Nikhil Benesch <[email protected]> wrote:\n>\n> While working on Materialize's streaming logical replication from Postgres [0],\n> my colleagues Sean Loiselle and Petros Angelatos (CC'd) discovered today what\n> appears to be a correctness bug in pgoutput, introduced in v15.\n>\n> The problem goes like this. A table with REPLICA IDENTITY FULL and some\n> data in it...\n>\n> CREATE TABLE t (a int);\n> ALTER TABLE t REPLICA IDENTITY FULL;\n> INSERT INTO t VALUES (1), (2), (3), ...;\n>\n> ...undergoes a schema change to add a new column with a default:\n>\n> ALTER TABLE t ADD COLUMN b bool DEFAULT false NOT NULL;\n>\n> PostgreSQL is smart and does not rewrite the entire table during the schema\n> change. Instead it updates the tuple description to indicate to future readers\n> of the table that if `b` is missing, it should be filled in with the default\n> value, `false`.\n>\n> Unfortunately, since v15, pgoutput mishandles missing attributes. If a\n> downstream server is subscribed to changes from t via the pgoutput plugin, when\n> a row with a missing attribute is updated, e.g.:\n>\n> UPDATE t SET a = 2 WHERE a = 1\n>\n> pgoutput will incorrectly report b's value as NULL in the old tuple, rather than\n> false.\n>\n\nThanks, I could reproduce this behavior. I'll look into your patch.\n\n> Using the same example:\n>\n> old: a=1, b=NULL\n> new: a=2, b=true\n>\n> The subscriber will ignore the update (as it has no row with values\n> a=1, b=NULL), and thus the subscriber's copy of `t` will become out of sync with\n> the publisher's.\n>\n> I bisected the problem to 52e4f0cd4 [1], which introduced row filtering for\n> publications. 
The problem appears to be the use of CreateTupleDescCopy where\n> CreateTupleDescCopyConstr is required, as the former drops the constraints\n> in the tuple description (specifically, the default value constraint) on the\n> floor.\n>\n> I've attached a patch which both fixes the issue and includes a test. I've\n> verified that the test fails against the current master and passes against\n> the patched version.\n>\n> I'm relatively unfamiliar with the project norms here, but assuming the patch is\n> acceptable, this strikes me as important enough to warrant a backport to both\n> v15 and v16.\n>\n\nRight.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 23 Nov 2023 14:33:14 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgoutput incorrectly replaces missing values with NULL since\n PostgreSQL 15"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 2:33 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Nov 23, 2023 at 1:10 PM Nikhil Benesch <[email protected]> wrote:\n> >\n> > While working on Materialize's streaming logical replication from Postgres [0],\n> > my colleagues Sean Loiselle and Petros Angelatos (CC'd) discovered today what\n> > appears to be a correctness bug in pgoutput, introduced in v15.\n> >\n> > The problem goes like this. A table with REPLICA IDENTITY FULL and some\n> > data in it...\n> >\n> > CREATE TABLE t (a int);\n> > ALTER TABLE t REPLICA IDENTITY FULL;\n> > INSERT INTO t VALUES (1), (2), (3), ...;\n> >\n> > ...undergoes a schema change to add a new column with a default:\n> >\n> > ALTER TABLE t ADD COLUMN b bool DEFAULT false NOT NULL;\n> >\n> > PostgreSQL is smart and does not rewrite the entire table during the schema\n> > change. Instead it updates the tuple description to indicate to future readers\n> > of the table that if `b` is missing, it should be filled in with the default\n> > value, `false`.\n> >\n> > Unfortunately, since v15, pgoutput mishandles missing attributes. If a\n> > downstream server is subscribed to changes from t via the pgoutput plugin, when\n> > a row with a missing attribute is updated, e.g.:\n> >\n> > UPDATE t SET a = 2 WHERE a = 1\n> >\n> > pgoutput will incorrectly report b's value as NULL in the old tuple, rather than\n> > false.\n> >\n>\n> Thanks, I could reproduce this behavior. I'll look into your patch.\n>\n\nI verified your fix is good and made minor modifications in the\ncomment. Note, that the test doesn't work for PG15, needs minor\nmodifications.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 24 Nov 2023 17:17:05 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgoutput incorrectly replaces missing values with NULL since\n PostgreSQL 15"
},
{
"msg_contents": "On Friday, November 24, 2023 7:47 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Thu, Nov 23, 2023 at 2:33 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Thu, Nov 23, 2023 at 1:10 PM Nikhil Benesch <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > While working on Materialize's streaming logical replication from\r\n> > > Postgres [0], my colleagues Sean Loiselle and Petros Angelatos\r\n> > > (CC'd) discovered today what appears to be a correctness bug in pgoutput,\r\n> introduced in v15.\r\n> > >\r\n> > > The problem goes like this. A table with REPLICA IDENTITY FULL and\r\n> > > some data in it...\r\n> > >\r\n> > > CREATE TABLE t (a int);\r\n> > > ALTER TABLE t REPLICA IDENTITY FULL;\r\n> > > INSERT INTO t VALUES (1), (2), (3), ...;\r\n> > >\r\n> > > ...undergoes a schema change to add a new column with a default:\r\n> > >\r\n> > > ALTER TABLE t ADD COLUMN b bool DEFAULT false NOT NULL;\r\n> > >\r\n> > > PostgreSQL is smart and does not rewrite the entire table during the\r\n> > > schema change. Instead it updates the tuple description to indicate\r\n> > > to future readers of the table that if `b` is missing, it should be\r\n> > > filled in with the default value, `false`.\r\n> > >\r\n> > > Unfortunately, since v15, pgoutput mishandles missing attributes. If\r\n> > > a downstream server is subscribed to changes from t via the pgoutput\r\n> > > plugin, when a row with a missing attribute is updated, e.g.:\r\n> > >\r\n> > > UPDATE t SET a = 2 WHERE a = 1\r\n> > >\r\n> > > pgoutput will incorrectly report b's value as NULL in the old tuple,\r\n> > > rather than false.\r\n> > >\r\n> >\r\n> > Thanks, I could reproduce this behavior. I'll look into your patch.\r\n> >\r\n> \r\n> I verified your fix is good and made minor modifications in the comment. Note,\r\n> that the test doesn't work for PG15, needs minor modifications.\r\n\r\nThank you for fixing and reviewing the fix!\r\n\r\nThe fix also looks good to me. 
I verified that it can fix the problem in\r\nHEAD ~ PG15 and the added tap test can detect the problem without the fix. I\r\ntried to rebase the patch on PG15, and combined some queries into one safe_sql\r\nblock to simplify the code. Here are the patches for all branches.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Fri, 24 Nov 2023 12:21:46 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pgoutput incorrectly replaces missing values with NULL since\n PostgreSQL 15"
},
{
"msg_contents": "Thank you both for reviewing. The updated patch set LGTM.\n\n\nNikhil\n\nOn Fri, Nov 24, 2023 at 7:21 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Friday, November 24, 2023 7:47 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Nov 23, 2023 at 2:33 PM Amit Kapila <[email protected]>\n> > wrote:\n> > >\n> > > On Thu, Nov 23, 2023 at 1:10 PM Nikhil Benesch <[email protected]>\n> > wrote:\n> > > >\n> > > > While working on Materialize's streaming logical replication from\n> > > > Postgres [0], my colleagues Sean Loiselle and Petros Angelatos\n> > > > (CC'd) discovered today what appears to be a correctness bug in pgoutput,\n> > introduced in v15.\n> > > >\n> > > > The problem goes like this. A table with REPLICA IDENTITY FULL and\n> > > > some data in it...\n> > > >\n> > > > CREATE TABLE t (a int);\n> > > > ALTER TABLE t REPLICA IDENTITY FULL;\n> > > > INSERT INTO t VALUES (1), (2), (3), ...;\n> > > >\n> > > > ...undergoes a schema change to add a new column with a default:\n> > > >\n> > > > ALTER TABLE t ADD COLUMN b bool DEFAULT false NOT NULL;\n> > > >\n> > > > PostgreSQL is smart and does not rewrite the entire table during the\n> > > > schema change. Instead it updates the tuple description to indicate\n> > > > to future readers of the table that if `b` is missing, it should be\n> > > > filled in with the default value, `false`.\n> > > >\n> > > > Unfortunately, since v15, pgoutput mishandles missing attributes. If\n> > > > a downstream server is subscribed to changes from t via the pgoutput\n> > > > plugin, when a row with a missing attribute is updated, e.g.:\n> > > >\n> > > > UPDATE t SET a = 2 WHERE a = 1\n> > > >\n> > > > pgoutput will incorrectly report b's value as NULL in the old tuple,\n> > > > rather than false.\n> > > >\n> > >\n> > > Thanks, I could reproduce this behavior. I'll look into your patch.\n> > >\n> >\n> > I verified your fix is good and made minor modifications in the comment. 
Note,\n> > that the test doesn't work for PG15, needs minor modifications.\n>\n> Thank you for fixing and reviewing the fix!\n>\n> The fix also looks good to me. I verified that it can fix the problem in\n> HEAD ~ PG15 and the added tap test can detect the problem without the fix. I\n> tried to rebase the patch on PG15, and combined some queries into one safe_sql\n> block to simplify the code. Here are the patches for all branches.\n>\n> Best Regards,\n> Hou zj\n>\n\n\n",
"msg_date": "Fri, 24 Nov 2023 08:53:07 -0500",
"msg_from": "Nikhil Benesch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgoutput incorrectly replaces missing values with NULL since\n PostgreSQL 15"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 7:23 PM Nikhil Benesch <[email protected]> wrote:\n>\n> Thank you both for reviewing. The updated patch set LGTM.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Nov 2023 12:09:53 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgoutput incorrectly replaces missing values with NULL since\n PostgreSQL 15"
},
{
"msg_contents": "Thank you for turning this around so quickly!\n\n\n",
"msg_date": "Mon, 27 Nov 2023 09:33:45 -0500",
"msg_from": "Nikhil Benesch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgoutput incorrectly replaces missing values with NULL since\n PostgreSQL 15"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI want to discuss a new feature for assigning a snowflake ID[1], which can serve as\ncluster-wide unique numbers. Also, a Snowflake ID can be allocated standalone.\n\n# Use case\n\nA typical use case is a multi-master system constructed by logical replication.\nThis feature allows a multi-node system to use GENERATED values. IIUC, this is\ndesired in another thread [2].\n\nWhen postgres is standalone, it is quite common for a sequence to be used as the\ndefault value of the primary key. However, this cannot be done on a multi-master\nsystem as-is because the value on nodeA might already be used on nodeB.\nLogical decoding of sequences partially solves the issue, but is not sufficient -\nwhat about the case of asynchronous replication? Managing chunks of values is worse.\n\n# What is the format of a Snowflake ID?\n\nA Snowflake ID has the form below:\n\n[1bit - unused] + [41bit millisecond timestamp] + [10bit machine ID] + [12bit local sequence number]\n\nTrivially, the millisecond timestamp represents the time when the number is allocated.\nI.e., the time nextval() is called. Using UNIX time seems the easiest way.\n\nThe machine ID can be an arbitrary number, but it is recommended to be unique in the system.\nA duplicated machine ID might trigger a conflict.\n\n## Characteristics of snowflake ID\n\nA Snowflake ID can be generated as a unique number standalone. According to the old discussion,\nallocating value spaces to each node was considered [3], but that requires communicating\nwith other nodes, which brings extra difficulties. (e.g., Which protocol would be used?) \n\nAlso, Snowflake IDs are roughly time ordered. As Andres pointed out in the old\ndiscussions [4], large indexes over random values perform worse.\nSnowflake IDs can avoid that situation.\n\nMoreover, Snowflake IDs are 64-bit integers, shorter than UUIDs (128-bit).\n\n# Implementation\n\nThere are several approaches for implementing a snowflake ID. For example:\n\n* Implement as a contrib module. 
Features needed for each component of snowflake ID\n have already been implemented in core, so basically it can be done.\n* Implement as a variant of sequence access method. I found that a sequence AM was\n proposed many years ago [5], but it is not active now. It might be a\n more fundamental way but needs a huge amount of work.\n\nThe attached patch adds a minimal contrib module which can be used for testing my proposal.\nBelow shows an example usage.\n\n```\n-- Create an extension\npostgres=# CREATE EXTENSION snowflake_sequence ;\nCREATE EXTENSION\n-- Create a sequence which generates snowflake IDs\npostgres=# SELECT snowflake_sequence.create_sequence('test_sequence');\n create_sequence \n-----------------\n \n(1 row)\n-- Get next snowflake ID\npostgres=# SELECT snowflake_sequence.nextval('test_sequence');\n nextval \n---------------------\n 3162329056562487297\n(1 row)\n```\n\nWhat do you think?\n\n[1]: https://github.com/twitter-archive/snowflake/tree/b3f6a3c6ca8e1b6847baa6ff42bf72201e2c2231\n[2]: https://www.postgresql.org/message-id/1b25328f-5f4d-9b75-b3f2-f9d9931d1b9d%40postgresql.org\n[3]: https://www.postgresql.org/message-id/CA%2BU5nMLSh4fttA4BhAknpCE-iAWgK%2BBG-_wuJS%3DEAcx7hTYn-Q%40mail.gmail.com\n[4]: https://www.postgresql.org/message-id/201210161515.54895.andres%402ndquadrant.com\n[5]: https://www.postgresql.org/message-id/flat/CA%2BU5nMLV3ccdzbqCvcedd-HfrE4dUmoFmTBPL_uJ9YjsQbR7iQ%40mail.gmail.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Thu, 23 Nov 2023 10:18:59 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Proposal] global sequence implemented by snowflake ID"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 10:18:59AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> * Implement as a variant of sequence access method. I found that sequence AM was\n> proposed many years ago [5], but it has not been active now. It might be a\n> fundamental way but needs a huge works.\n\nWell, that's what I can call a timely proposal. I've been working\nthis week on a design for sequence AMs, while considering the cases\nthat the original thread wanted to handle (spoiler: there are a lot of\npieces in the original patch that are not necessary, other parts are\nincorrect like dump/restore), what you are trying to do here, and more\ncomplex scenarios in terms of globally-distributed sequences. My plan\nwas to send that next week or the week after, in time for January's\nCF.\n--\nMichael",
"msg_date": "Thu, 23 Nov 2023 19:45:07 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] global sequence implemented by snowflake ID"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 4:15 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Nov 23, 2023 at 10:18:59AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> > * Implement as a variant of sequence access method. I found that sequence AM was\n> > proposed many years ago [5], but it has not been active now. It might be a\n> > fundamental way but needs a huge works.\n>\n> Well, that's what I can call a timely proposal. I've been working\n> this week on a design for sequence AMs, while considering the cases\n> that the original thread wanted to handle (spoiler: there are a lot of\n> pieces in the original patch that are not necessary, other parts are\n> incorrect like dump/restore), what you are trying to do here, and more\n> complex scenarios in terms of globally-distributed sequences.\n>\n\nIt is interesting to see you want to work towards globally distributed\nsequences. I think it would be important to discuss how and what we\nwant to achieve with sequences w.r.t logical replication and or\nactive-active configuration. There is a patch [1] for logical\nreplication of sequences which will primarily achieve the failover\ncase, i.e. if the publisher goes down and the subscriber takes over\nthe role, one can re-direct connections to it. Now, if we have global\nsequences, one can imagine that even after failover the clients can\nstill get unique values of sequences. It will be a bit more flexible\nto use global sequences, for example, we can use the sequence on both\nnodes at the same time which won't be possible with the replication of\nsequences as they will become inconsistent. Now, it is also possible\nthat both serve different use cases and we need both functionalities\nbut it would be better to have some discussion on the same.\n\nThoughts?\n\n[1] - https://commitfest.postgresql.org/45/3823/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 Nov 2023 14:23:44 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] global sequence implemented by snowflake ID"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 02:23:44PM +0530, Amit Kapila wrote:\n> It is interesting to see you want to work towards globally distributed\n> sequences. I think it would be important to discuss how and what we\n> want to achieve with sequences w.r.t logical replication and or\n> active-active configuration. There is a patch [1] for logical\n> replication of sequences which will primarily achieve the failover\n> case, i.e. if the publisher goes down and the subscriber takes over\n> the role, one can re-direct connections to it. Now, if we have global\n> sequences, one can imagine that even after failover the clients can\n> still get unique values of sequences. It will be a bit more flexible\n> to use global sequences, for example, we can use the sequence on both\n> nodes at the same time which won't be possible with the replication of\n> sequences as they will become inconsistent. Now, it is also possible\n> that both serve different use cases and we need both functionalities\n> but it would be better to have some discussion on the same.\n> \n> Thoughts?\n> \n> [1] - https://commitfest.postgresql.org/45/3823/\n\nThanks for pointing this out. I've read through the patch proposed by\nTomas and both are independent things IMO. The logical decoding patch\nrelies on the SEQ_LOG records to find out which last_value/is_called\nto transfer, which is something directly depending on the in-core\nsequence implementation. Sequence AMs are concepts that cover much\nmore ground, leaving it up to the implementor to do what they want\nwhile hiding the activity with a RELKIND_SEQUENCE (generated columns\nincluded).\n\nTo put it short, I have the impression that one and the other don't\nreally conflict, but just cover different ground. 
However, I agree\nthat depending on the sequence AM implementation used in a cluster\n(snowflake IDs guarantee unicity with their machine ID), replication\nmay not be necessary because the sequence implementation may be able\nto ensure that no replication is required from the start.\n--\nMichael",
"msg_date": "Thu, 30 Nov 2023 10:18:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] global sequence implemented by snowflake ID"
},
{
"msg_contents": "Hi!\n\nI have reviewed the patch in this topic and have a question mentioning the\nmachine ID -\nINSERT INTO snowflake_sequence.machine_id\n SELECT round((random() * (0 - 511))::numeric, 0) + 511;\n\nThis kind of ID generation does not seem to guarantee from not having the\nsame ID in a pool\nof instances, does it?\n\nOn Thu, Nov 30, 2023 at 4:18 AM Michael Paquier <[email protected]> wrote:\n\n> On Tue, Nov 28, 2023 at 02:23:44PM +0530, Amit Kapila wrote:\n> > It is interesting to see you want to work towards globally distributed\n> > sequences. I think it would be important to discuss how and what we\n> > want to achieve with sequences w.r.t logical replication and or\n> > active-active configuration. There is a patch [1] for logical\n> > replication of sequences which will primarily achieve the failover\n> > case, i.e. if the publisher goes down and the subscriber takes over\n> > the role, one can re-direct connections to it. Now, if we have global\n> > sequences, one can imagine that even after failover the clients can\n> > still get unique values of sequences. It will be a bit more flexible\n> > to use global sequences, for example, we can use the sequence on both\n> > nodes at the same time which won't be possible with the replication of\n> > sequences as they will become inconsistent. Now, it is also possible\n> > that both serve different use cases and we need both functionalities\n> > but it would be better to have some discussion on the same.\n> >\n> > Thoughts?\n> >\n> > [1] - https://commitfest.postgresql.org/45/3823/\n>\n> Thanks for pointing this out. I've read through the patch proposed by\n> Tomas and both are independent things IMO. The logical decoding patch\n> relies on the SEQ_LOG records to find out which last_value/is_called\n> to transfer, which is something directly depending on the in-core\n> sequence implementation. 
Sequence AMs are concepts that cover much\n> more ground, leaving it up to the implementor to do what they want\n> while hiding the activity with a RELKIND_SEQUENCE (generated columns\n> included).\n>\n> To put it short, I have the impression that one and the other don't\n> really conflict, but just cover different ground. However, I agree\n> that depending on the sequence AM implementation used in a cluster\n> (snowflake IDs guarantee unicity with their machine ID), replication\n> may not be necessary because the sequence implementation may be able\n> to ensure that no replication is required from the start.\n> --\n> Michael\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/",
"msg_date": "Thu, 30 Nov 2023 11:15:13 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] global sequence implemented by snowflake ID"
},
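Nikita's concern can be quantified with the birthday bound: with machine IDs drawn uniformly from 512 values (the 0-511 range in the quoted INSERT), collisions become likely for surprisingly small pools. A rough sketch, not part of the patch:

```python
# Birthday-problem estimate: probability that at least two of n instances
# draw the same machine ID when each picks uniformly from `space` values.
# space=512 matches the 0..511 range in the quoted INSERT.
def collision_prob(n, space=512):
    p_unique = 1.0
    for i in range(n):
        p_unique *= (space - i) / space
    return 1.0 - p_unique

if __name__ == "__main__":
    for n in (2, 10, 30):
        print(n, round(collision_prob(n), 4))
```

Already at 30 instances the collision probability exceeds one half, which supports the suggestion in the follow-up to assign the machine ID manually or derive it deterministically.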
{
"msg_contents": "Dear Nikita,\r\n\r\nThanks for reading my patch!\r\n\r\n>\r\nI have reviewed the patch in this topic and have a question concerning the machine ID -\r\nINSERT INTO snowflake_sequence.machine_id\r\n SELECT round((random() * (0 - 511))::numeric, 0) + 511;\r\n\r\nThis kind of ID generation does not seem to guarantee that no two instances in a pool\r\nend up with the same ID, does it?\r\n>\r\n\r\nYou are right. For now this part is randomly assigned, so the same ID might appear on another instance.\r\nMaybe we should provide a way to set it manually. Or, we may be able to use another way\r\nof determining the machine ID.\r\n(system_identifier is too long to use here...)\r\n\r\nNote that the contrib module was provided just for reference. We are now\r\ndiscussing high-level items, like needs, use cases and approaches. Could you\r\nplease share your opinion on these points if you have one?\r\n\r\nThe implementation may be completely changed, so I have not changed it yet. Of course,\r\nyour comment is quite helpful, so it will be handled eventually.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 30 Nov 2023 08:42:27 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] global sequence implemented by snowflake ID"
},
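For context on what the contrib module computes: a snowflake ID packs a millisecond timestamp, a machine ID, and a per-machine counter into one 64-bit integer. The sketch below uses the classic Twitter field widths (41/10/12 bits); the module under discussion may differ (its 0-511 machine ID range suggests 9 bits), so treat the widths as illustrative only.

```python
# Classic snowflake layout: 41-bit millisecond timestamp, 10-bit machine
# ID, 12-bit per-machine sequence counter, packed into 63 usable bits.
TIMESTAMP_BITS, MACHINE_BITS, SEQ_BITS = 41, 10, 12

def make_snowflake(ms, machine_id, seq):
    assert ms < (1 << TIMESTAMP_BITS)
    assert machine_id < (1 << MACHINE_BITS)
    assert seq < (1 << SEQ_BITS)
    return (ms << (MACHINE_BITS + SEQ_BITS)) | (machine_id << SEQ_BITS) | seq

def split_snowflake(sid):
    seq = sid & ((1 << SEQ_BITS) - 1)
    machine_id = (sid >> SEQ_BITS) & ((1 << MACHINE_BITS) - 1)
    ms = sid >> (MACHINE_BITS + SEQ_BITS)
    return ms, machine_id, seq
```

Because the timestamp occupies the high bits, IDs stay roughly time-ordered across machines, which is why no replication of counter state is needed as long as machine IDs are unique.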
{
"msg_contents": "On Thu, Nov 30, 2023 at 6:48 AM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Nov 28, 2023 at 02:23:44PM +0530, Amit Kapila wrote:\n> > It is interesting to see you want to work towards globally distributed\n> > sequences. I think it would be important to discuss how and what we\n> > want to achieve with sequences w.r.t logical replication and or\n> > active-active configuration. There is a patch [1] for logical\n> > replication of sequences which will primarily achieve the failover\n> > case, i.e. if the publisher goes down and the subscriber takes over\n> > the role, one can re-direct connections to it. Now, if we have global\n> > sequences, one can imagine that even after failover the clients can\n> > still get unique values of sequences. It will be a bit more flexible\n> > to use global sequences, for example, we can use the sequence on both\n> > nodes at the same time which won't be possible with the replication of\n> > sequences as they will become inconsistent. Now, it is also possible\n> > that both serve different use cases and we need both functionalities\n> > but it would be better to have some discussion on the same.\n> >\n> > Thoughts?\n> >\n> > [1] - https://commitfest.postgresql.org/45/3823/\n>\n> Thanks for pointing this out. I've read through the patch proposed by\n> Tomas and both are independent things IMO. The logical decoding patch\n> relies on the SEQ_LOG records to find out which last_value/is_called\n> to transfer, which is something directly depending on the in-core\n> sequence implementation. Sequence AMs are concepts that cover much\n> more ground, leaving it up to the implementor to do what they want\n> while hiding the activity with a RELKIND_SEQUENCE (generated columns\n> included).\n>\n\nRight, I understand that implementation-wise and/or concept-wise they\nare different. 
It is more about the use case, see below.\n\n> To put it short, I have the impression that one and the other don't\n> really conflict, but just cover different ground. However, I agree\n> that depending on the sequence AM implementation used in a cluster\n> (snowflake IDs guarantee unicity with their machine ID), replication\n> may not be necessary because the sequence implementation may be able\n> to ensure that no replication is required from the start.\n>\n\nThis was the key point that I wanted to discuss or hear opinions\nabout. So, if we wish to have some sort of global sequences then it is\nnot clear to me what benefits we will get by having replication of\nnon-global sequences. One thing that comes to mind is that replication\ncovers a subset of use cases (like helping in case of failover or\nswitchover to the subscriber) and, until we have some\nimplementation of global sequences, it can help users.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Nov 2023 16:26:08 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] global sequence implemented by snowflake ID"
},
{
"msg_contents": "On 11/30/23 11:56, Amit Kapila wrote:\n> On Thu, Nov 30, 2023 at 6:48 AM Michael Paquier <[email protected]> wrote:\n>>\n>> On Tue, Nov 28, 2023 at 02:23:44PM +0530, Amit Kapila wrote:\n>>> It is interesting to see you want to work towards globally distributed\n>>> sequences. I think it would be important to discuss how and what we\n>>> want to achieve with sequences w.r.t logical replication and or\n>>> active-active configuration. There is a patch [1] for logical\n>>> replication of sequences which will primarily achieve the failover\n>>> case, i.e. if the publisher goes down and the subscriber takes over\n>>> the role, one can re-direct connections to it. Now, if we have global\n>>> sequences, one can imagine that even after failover the clients can\n>>> still get unique values of sequences. It will be a bit more flexible\n>>> to use global sequences, for example, we can use the sequence on both\n>>> nodes at the same time which won't be possible with the replication of\n>>> sequences as they will become inconsistent. Now, it is also possible\n>>> that both serve different use cases and we need both functionalities\n>>> but it would be better to have some discussion on the same.\n>>>\n>>> Thoughts?\n>>>\n>>> [1] - https://commitfest.postgresql.org/45/3823/\n>>\n>> Thanks for pointing this out. I've read through the patch proposed by\n>> Tomas and both are independent things IMO. The logical decoding patch\n>> relies on the SEQ_LOG records to find out which last_value/is_called\n>> to transfer, which is something directly depending on the in-core\n>> sequence implementation. Sequence AMs are concepts that cover much\n>> more ground, leaving it up to the implementor to do what they want\n>> while hiding the activity with a RELKIND_SEQUENCE (generated columns\n>> included).\n>>\n> \n> Right, I understand that implementation-wise and or concept-wise they\n> are different. 
It is more about the use case, see below.\n> \n>> To put it short, I have the impression that one and the other don't\n>> really conflict, but just cover different ground. However, I agree\n>> that depending on the sequence AM implementation used in a cluster\n>> (snowflake IDs guarantee unicity with their machine ID), replication\n>> may not be necessary because the sequence implementation may be able\n>> to ensure that no replication is required from the start.\n>>\n\nI certainly do agree solutions like UUID or SnowflakeID may be a better\nchoice for distributed systems (especially in active-active case),\nbecause there's no internal state to replicate. That's what I'd use for\nsuch systems, I think.\n\nAs for implementation/replication, I haven't checked the code, but I'd\nimagine the AM should be able to decide whether something needs to be\nreplicated (and how) or not. So the traditional sequences would\nreplicate, and the alternative sequences would not replicate anything.\n\n> \n> This was the key point that I wanted to discuss or hear opinions\n> about. So, if we wish to have some sort of global sequences then it is\n> not clear to me what benefits will we get by having replication of\n> non-global sequences. One thing that comes to mind is replication\n> covers a subset of use cases (like help in case of failover or\n> switchover to subscriber) and till the time we have some\n> implementation of global sequences, it can help users.\n> \n\nWhat are you going to do about use cases like using logical replication\nfor upgrade to the next major version? Or applications that prefer (or\nhave to) use traditional sequences?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 30 Nov 2023 12:51:38 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] global sequence implemented by snowflake ID"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 12:51:38PM +0100, Tomas Vondra wrote:\n> As for implementation/replication, I haven't checked the code, but I'd\n> imagine the AM should be able to decide whether something needs to be\n> replicated (and how) or not. So the traditional sequences would\n> replicate, and the alternative sequences would not replicate anything.\n\nYep, exactly. Keeping compatibility for the in-core sequence\ncomputation is very important (including the fact that this stuff uses\npseudo-heap tables for its metadata with the values computed).\n\n>> This was the key point that I wanted to discuss or hear opinions\n>> about. So, if we wish to have some sort of global sequences then it is\n>> not clear to me what benefits will we get by having replication of\n>> non-global sequences. One thing that comes to mind is replication\n>> covers a subset of use cases (like help in case of failover or\n>> switchover to subscriber) and till the time we have some\n>> implementation of global sequences, it can help users.\n> \n> What are you going to do about use cases like using logical replication\n> for upgrade to the next major version? Or applications that prefer (or\n> have to) use traditional sequences?\n\nYeah, and that's why the logical replication of sequences has value.\nGiving the possibility for users or application developers to use a\ncustom computation method may be useful for some applications, but not\nothers. The use cases are too different, so IMO both are useful,\nwhen applied to each user's requirements.\n--\nMichael",
"msg_date": "Fri, 1 Dec 2023 13:02:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] global sequence implemented by snowflake ID"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 5:21 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 11/30/23 11:56, Amit Kapila wrote:\n>\n> >\n> > This was the key point that I wanted to discuss or hear opinions\n> > about. So, if we wish to have some sort of global sequences then it is\n> > not clear to me what benefits will we get by having replication of\n> > non-global sequences. One thing that comes to mind is replication\n> > covers a subset of use cases (like help in case of failover or\n> > switchover to subscriber) and till the time we have some\n> > implementation of global sequences, it can help users.\n> >\n>\n> What are you going to do about use cases like using logical replication\n> for upgrade to the next major version?\n\n\nAs per my understanding, they should work as-is when using a global\nsequence. Just for the sake of example, if we have a\nsame-name global sequence on both pub and sub, it should work\nduring and after major version upgrades.\n\n>\n> Or applications that prefer (or\n> have to) use traditional sequences?\n>\n\nI think we have to suggest that they use a global sequence for the use\ncases where they want those to work with logical replication. Now, if users still want their existing sequences to work then\nwe can probably see if there is a way to provide an option via Alter\nSequence to change them to global sequences.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Dec 2023 11:45:20 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] global sequence implemented by snowflake ID"
},
{
"msg_contents": "On 12/1/23 07:15, Amit Kapila wrote:\n> On Thu, Nov 30, 2023 at 5:21 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 11/30/23 11:56, Amit Kapila wrote:\n>>\n>>>\n>>> This was the key point that I wanted to discuss or hear opinions\n>>> about. So, if we wish to have some sort of global sequences then it is\n>>> not clear to me what benefits will we get by having replication of\n>>> non-global sequences. One thing that comes to mind is replication\n>>> covers a subset of use cases (like help in case of failover or\n>>> switchover to subscriber) and till the time we have some\n>>> implementation of global sequences, it can help users.\n>>>\n>>\n>> What are you going to do about use cases like using logical replication\n>> for upgrade to the next major version?\n> \n> \n> As per my understanding, they should work as it is when using a global\n> sequence. Just for the sake of example, considering we have a\n> same-name global sequence on both pub and sub now it should work\n> during and after major version upgrades.\n> \n\nSequential IDs have significant benefits too; it's simply not the case that these\nglobal sequences are universally superior. For example, with sequential\nsequences you often get locality, because recent data have about the\nsame sequence values. With global sequences that's not really the case,\nbecause they are often based on randomness, which massively limits the\naccess locality. (Yes, some variants may maintain the ordering, others\ndon't.)\n\n>>\n>> Or applications that prefer (or\n>> have to) use traditional sequences?\n>>\n>\n> I think we have to suggest them to use global sequence for the use\n> cases where they want those to work with logical replication use\n> cases. Now, if still users want their existing sequences to work then\n> we can probably see if there is a way to provide an option via Alter\n> Sequence to change it to a global sequence.\n> \n\nI really don't know how that would work e.g. 
for existing applications\nthat have already designed their schema a long time ago. Or for systems that\nuse 32-bit sequences - I'm not aware of global sequences that narrow.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 2 Dec 2023 01:00:50 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] global sequence implemented by snowflake ID"
}
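Tomas's locality argument is easy to demonstrate with a toy model: count how many distinct fixed-size "pages" of the keyspace a batch of ordered inserts touches (the page size and ID width below are arbitrary illustration values, not PostgreSQL internals):

```python
import random

PAGE = 256  # toy "page" capacity: how many adjacent IDs share one page

def pages_touched(ids, page=PAGE):
    # Number of distinct pages written when a batch of rows is inserted
    # in ID order (a stand-in for heap/index locality).
    return len({i // page for i in ids})

seq_ids = list(range(10_000))            # traditional sequence: 0, 1, 2, ...
rng = random.Random(42)
rand_ids = [rng.getrandbits(63) for _ in range(10_000)]  # randomness-based IDs

print("sequential:", pages_touched(seq_ids))   # touches few pages
print("random:    ", pages_touched(rand_ids))  # roughly one page per row
```

Sequential inserts concentrate writes on a handful of pages, while randomness-based IDs scatter nearly every row onto its own page, which is the access-locality cost being described.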
] |
[
{
"msg_contents": "Dear hackers,\n\nAfter discussing the issue on irc, it looks like it could be possible \nfor the planner to use a partial index matching an expression exactly to \nestimate its selectivity.\n\nHere is a simplified version (thanks ysch) of the issue I am facing:\n\nhttps://dbfiddle.uk/flPq8-pj\n\nI have tried using CREATE STATISTICS as well but haven't found a way to \nimprove the planner estimation for that query.\n\nI have worked around the problem for my specific use case but that \nbehavior is counter-intuitive, is there any interest in improving it?\n\nThank you!\n\n\n\n",
"msg_date": "Thu, 23 Nov 2023 12:55:58 +0100",
"msg_from": "Bono Stebler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use index to estimate expression selectivity"
},
{
"msg_contents": "Bono Stebler <[email protected]> writes:\n> After discussing the issue on irc, it looks like it could be possible \n> for the planner to use a partial index matching an expression exactly to \n> estimate its selectivity.\n\nI think going forward we're going to be more interested in extending\nCREATE STATISTICS than in adding special behaviors around indexes.\nAn index is a pretty expensive thing to maintain if you really only\nwant some statistics. Contrariwise, if you need the index for\nfunctional reasons (perhaps to enforce some strange uniqueness\nconstraint) but you don't like some decision the planner takes because\nof the existence of that index, you're kind of stuck. So decoupling\nthis stuff makes more sense from where I sit.\n\nHaving said that ...\n\n> Here is a simplified version (thanks ysch) of the issue I am facing:\n> https://dbfiddle.uk/flPq8-pj\n> I have tried using CREATE STATISTICS as well but haven't found a way to \n> improve the planner estimation for that query.\n\nI assume what you did was try to make stats on \"synchronized_at IS\nDISTINCT FROM updated_at\"? Yeah, it does not surprise me that we fail\nto match that to this query. The trouble with expression statistics\n(and expression indexes) is that it's impractical to match every\nsubexpression of the query to every subexpression that might be\npresented by CREATE STATISTICS: you soon get into exponential\nbehavior. So there's a limited set of contexts where we look for\na match.\n\nI experimented a bit and found that if you do have statistics on that,\nthen \"WHERE (synchronized_at IS DISTINCT FROM updated_at) IS TRUE\"\nwill consult the stats. Might do as a hacky workaround.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Nov 2023 12:30:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use index to estimate expression selectivity"
},
{
"msg_contents": "\n\nOn 11/23/23 18:30, Tom Lane wrote:\n> Bono Stebler <[email protected]> writes:\n>> After discussing the issue on irc, it looks like it could be possible \n>> for the planner to use a partial index matching an expression exactly to \n>> estimate its selectivity.\n> \n> I think going forward we're going to be more interested in extending\n> CREATE STATISTICS than in adding special behaviors around indexes.\n> An index is a pretty expensive thing to maintain if you really only\n> want some statistics. Contrariwise, if you need the index for\n> functional reasons (perhaps to enforce some strange uniqueness\n> constraint) but you don't like some decision the planner takes because\n> of the existence of that index, you're kind of stuck. So decoupling\n> this stuff makes more sense from where I sit.\n> \n\nI agree that adding indexes if you only really want the statistics part would\nbe rather expensive, but I do think using indexes created for functional\nreasons as a source of statistics is worth consideration.\n\nActually, I've been experimenting with using btree indexes to estimate\ncertain conditions (e.g. the simplest var=const), and the estimates\ncalculated from internal pages are quite useful. Maybe not as the primary\nestimate, but to set a \"safe\" range for non-MCV items. For example if the\ntraditional estimate says 1 row matches, but we see there are ~100 leaf\npages for that key, maybe we should bump up the estimate ...\n\nBut yeah, it may affect the query planning in undesirable ways. Perhaps\nwe could have a \"use for statistics\" reloption or something ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Nov 2023 19:00:35 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use index to estimate expression selectivity"
}
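The var=const idea can be illustrated with a sorted array standing in for btree leaf order: the width of the matching key range gives a selectivity that could cap a badly wrong non-MCV estimate. A toy sketch, not the actual planner code:

```python
import bisect

def index_selectivity(sorted_keys, value):
    # Width of the matching key range in leaf order, as a fraction of the
    # table -- a stand-in for descending a btree to both ends of the range.
    lo = bisect.bisect_left(sorted_keys, value)
    hi = bisect.bisect_right(sorted_keys, value)
    return (hi - lo) / len(sorted_keys)

# Skewed data: a non-MCV estimate might claim "1 row" for key 7, while the
# index shows a much wider matching range.
keys = sorted([7] * 1_000 + list(range(9_000)))
print(index_selectivity(keys, 7))
```

A real implementation would only count pages at some internal level rather than individual leaf entries, so it would yield a bound rather than an exact fraction, which matches the "safe range" framing above.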
] |
[
{
"msg_contents": "Hello PostgreSQL Hackers,\n\nI am pleased to submit a series of patches related to the Table Access\nMethod (AM) interface, which I initially announced during my talk at\nPGCon 2023 [1]. These patches are primarily designed to support the\nOrioleDB engine, but I believe they could be beneficial for other\ntable AM implementations as well.\n\nThe focus of these patches is to introduce more flexibility and\ncapabilities into the Table AM interface. This is particularly\nrelevant for advanced use cases like index-organized tables,\nalternative MVCC implementations, etc.\n\nHere's a brief overview of the patches included in this set:\n\n0001-Allow-locking-updated-tuples-in-tuple_update-and--v1.patch\n\nOptimizes the process of locking concurrently updated tuples during\nupdate and delete operations. Helpful for table AMs where refinding\nexisting tuples is expensive.\n\n0002-Add-EvalPlanQual-delete-returning-isolation-test-v1.patch\n\nThe new isolation test is related to the previous patch. These two\npatches were previously discussed in [2].\n\n0003-Allow-table-AM-to-store-complex-data-structures-i-v1.patch\n\nAllows table AM to store complex data structure in rd_amcache rather\nthan a single chunk of memory.\n\n0004-Add-table-AM-tuple_is_current-method-v1.patch\n\nThis allows us to abstract how/whether table AM uses transaction identifiers.\n\n0005-Generalize-relation-analyze-in-table-AM-interface-v1.patch\n\nProvides a more flexible API for sampling tuples, beneficial for\nnon-standard table types like index-organized tables.\n\n0006-Generalize-table-AM-API-for-INSERT-.-ON-CONFLICT-v1.patch\n\nProvides a new table AM API method to encapsulate the whole INSERT ...\nON CONFLICT ... 
algorithm rather than just implementation of\nspeculative tokens.\n\n0007-Allow-table-AM-tuple_insert-method-to-return-the--v1.patch\n\nThis allows table AM to return a native tuple slot, which is aware of\ntable AM-specific system attributes.\n\n0008-Let-table-AM-insertion-methods-control-index-inse-v1.patch\n\nAllows table AM to skip index insertions in the executor and handle\nthose insertions itself.\n\n0009-Custom-reloptions-for-table-AM-v1.patch\n\nEnables table AMs to define and override reloptions for tables and indexes.\n\n0010-Notify-table-AM-about-index-creation-v1.patch\n\nAllows table AMs to prepare or update specific meta-information during\nindex creation.\n\n0011-Introduce-RowRefType-which-describes-the-table-ro-v1.patch\n\nSeparates the row identifier type from the lock mode in RowMarkType,\nproviding clearer semantics and more flexibility.\n\n0012-Introduce-RowID-bytea-tuple-identifier-v1.patch\n\nThis patch introduces 'RowID', a new bytea tuple identifier, to\novercome the limitations of the current 32-bit block number and 16-bit\noffset-based tuple identifier. This is particularly useful for\nindex-organized tables and other advanced use cases.\n\nEach commit message contains a detailed explanation of the changes and\ntheir rationale. I believe these enhancements will significantly\nimprove the flexibility and capabilities of the PostgreSQL Table AM\ninterface.\n\nI am looking forward to your feedback and suggestions on these patches.\n\nLinks\n\n1. https://www.pgcon.org/events/pgcon_2023/schedule/session/470-future-of-table-access-methods/\n2. https://www.postgresql.org/message-id/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 23 Nov 2023 14:42:49 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn Thu, 23 Nov 2023 at 13:43, Alexander Korotkov <[email protected]> wrote:\n>\n> Hello PostgreSQL Hackers,\n>\n> I am pleased to submit a series of patches related to the Table Access\n> Method (AM) interface, which I initially announced during my talk at\n> PGCon 2023 [1]. These patches are primarily designed to support the\n> OrioleDB engine, but I believe they could be beneficial for other\n> table AM implementations as well.\n>\n> The focus of these patches is to introduce more flexibility and\n> capabilities into the Table AM interface. This is particularly\n> relevant for advanced use cases like index-organized tables,\n> alternative MVCC implementations, etc.\n>\n> Here's a brief overview of the patches included in this set:\n\nNote: no significant review of the patches, just a first response on\nthe cover letters and oddities I noticed:\n\nOverall, this patchset adds significant API area to TableAmRoutine,\nwithout adding the relevant documentation on how it's expected to be\nused. With the overall size of the patchset also being very\nsignificant, I don't think this patch is reviewable as is; the goal\nisn't clear enough, the APIs aren't well explained, and the\ninteractions with the index API are left up in the air.\n\n> 0001-Allow-locking-updated-tuples-in-tuple_update-and--v1.patch\n>\n> Optimizes the process of locking concurrently updated tuples during\n> update and delete operations. Helpful for table AMs where refinding\n> existing tuples is expensive.\n\nIs this essentially an optimized implementation of the \"DELETE FROM\n... RETURNING *\" per-tuple primitive?\n\n> 0003-Allow-table-AM-to-store-complex-data-structures-i-v1.patch\n>\n> Allows table AM to store complex data structure in rd_amcache rather\n> than a single chunk of memory.\n\nI don't think we should allow arbitrarily large and arbitrarily many\nchunks of data in the relcache or table caches. 
Why isn't one chunk\nenough?\n\n> 0004-Add-table-AM-tuple_is_current-method-v1.patch\n>\n> This allows us to abstract how/whether table AM uses transaction identifiers.\n\nI'm not a fan of the indirection here. Also, assuming that table slots\ndon't outlive transactions, wouldn't this be a more appropriate fit\nwith the table tuple slot API?\n\n> 0005-Generalize-relation-analyze-in-table-AM-interface-v1.patch\n>\n> Provides a more flexible API for sampling tuples, beneficial for\n> non-standard table types like index-organized tables.\n>\n> 0006-Generalize-table-AM-API-for-INSERT-.-ON-CONFLICT-v1.patch\n>\n> Provides a new table AM API method to encapsulate the whole INSERT ...\n> ON CONFLICT ... algorithm rather than just implementation of\n> speculative tokens.\n\nDoes this not still require speculative inserts, with speculative\ntokens, for secondary indexes? Why make AMs implement that all over\nagain?\n\n> 0007-Allow-table-AM-tuple_insert-method-to-return-the--v1.patch\n>\n> This allows table AM to return a native tuple slot, which is aware of\n> table AM-specific system attributes.\n\nThis seems reasonable.\n\n> 0008-Let-table-AM-insertion-methods-control-index-inse-v1.patch\n>\n> Allows table AM to skip index insertions in the executor and handle\n> those insertions itself.\n\nWho handles index tuple removal then? 
I don't see a patch that\ndescribes index AM changes for this...\n\n> 0009-Custom-reloptions-for-table-AM-v1.patch\n>\n> Enables table AMs to define and override reloptions for tables and indexes.\n>\n> 0010-Notify-table-AM-about-index-creation-v1.patch\n>\n> Allows table AMs to prepare or update specific meta-information during\n> index creation.\n\nI don't think the described use case of this API is OK - a table AM\ncannot know about the internals of index AMs, and is definitely not\nallowed to overwrite the information of that index.\nIf I ask for an index that uses the \"btree\" index, then that needs to\nbe the AM actually used, or an error needs to be raised if it is\nsomehow incompatible with the table AM used. It can't be that we\nsilently update information and create an index that is explicitly not\nwhat the user asked to create.\n\nI also don't see updates in documentation, which I think is quite a\nshame as I have trouble understanding some parts.\n\n> 0012-Introduce-RowID-bytea-tuple-identifier-v1.patch\n>\n> `This patch introduces 'RowID', a new bytea tuple identifier, to\n> overcome the limitations of the current 32-bit block number and 16-bit\n> offset-based tuple identifier. This is particularly useful for\n> index-organized tables and other advanced use cases.\n\nWe don't have any index methods that can handle anything but\nblock+offset TIDs, and I don't see any changes to the IndexAM APIs to\nsupport these RowID tuples, so what's the plan here? I don't see any\nof that in the commit message, nor in the rest of this patchset.\n\n> Each commit message contains a detailed explanation of the changes and\n> their rationale. 
I believe these enhancements will significantly\n> improve the flexibility and capabilities of the PostgreSQL Table AM\n> interface.\n\nI've noticed there is not a lot of rationale for several of the\nchanges as to why PostgreSQL needs these changes implemented like\nthis, amongst which the index-related tableAM changes.\n\nI understand that index-organized tables can be quite useful, but I\ndon't see design solutions to the more complex questions that would\nstill be required before we could host table AMs like OrioleDB's\nas first-party citizens: For index-organized tables, you also need\nindex AM APIs that support TIDs with more than 48 bits of data\n(assuming we actually want primary keys with >48 bits of usable\nspace), and for undo-based logging you would probably need index APIs\nfor retail index tuple deletion. Neither is supplied here, nor is it\ndescribed why these APIs were omitted.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 24 Nov 2023 00:07:12 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "\n\n> On Nov 23, 2023, at 4:42 AM, Alexander Korotkov <[email protected]> wrote:\n\n\n> 0006-Generalize-table-AM-API-for-INSERT-.-ON-CONFLICT-v1.patch\n> \n> Provides a new table AM API method to encapsulate the whole INSERT ...\n> ON CONFLICT ... algorithm rather than just implementation of\n> speculative tokens.\n\nI *think* I understand that you are taking the part of INSERT..ON CONFLICT that lives outside the table AM and pulling it inside so that table AM authors are free to come up with whatever implementation is more suited for them. The most straightforward way of doing so results in an EState parameter in the interface definition. That seems not so good, as the EState is a fairly complicated structure, and future changes in the executor might want to rearrange what EState tracks, which would change which values tuple_insert_with_arbiter() can depend on. Should the patch at least document which parts of the EState are expected to be in which states, and which parts should be viewed as undefined? If the implementors of table AMs rely on any/all aspects of EState, doesn't that prevent future changes to how that structure is used?\n\n> 0008-Let-table-AM-insertion-methods-control-index-inse-v1.patch\n> \n> Allows table AM to skip index insertions in the executor and handle\n> those insertions itself.\n\nThe new parameter could use more documentation.\n\n> 0009-Custom-reloptions-for-table-AM-v1.patch\n> \n> Enables table AMs to define and override reloptions for tables and indexes.\n\nThis could use some regression tests to exercise the custom reloptions.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 24 Nov 2023 07:18:36 -0800",
"msg_from": "Mark Dilger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 5:18 PM Mark Dilger\n<[email protected]> wrote:\n> > On Nov 23, 2023, at 4:42 AM, Alexander Korotkov <[email protected]> wrote:\n>\n>\n> > 0006-Generalize-table-AM-API-for-INSERT-.-ON-CONFLICT-v1.patch\n> >\n> > Provides a new table AM API method to encapsulate the whole INSERT ...\n> > ON CONFLICT ... algorithm rather than just implementation of\n> > speculative tokens.\n>\n> I *think* I understand that you are taking the part of INSERT..ON CONFLICT that lives outside the table AM and pulling it inside so that table AM authors are free to come up with whatever implementation is more suited for them. The most straightforward way of doing so results in an EState parameter in the interface definition. That seems not so good, as the EState is a fairly complicated structure, and future changes in the executor might want to rearrange what EState tracks, which would change which values tuple_insert_with_arbiter() can depend on.\n\nI think this is the correct understanding.\n\n> Should the patch at least document which parts of the EState are expected to be in which states, and which parts should be viewed as undefined? If the implementors of table AMs rely on any/all aspects of EState, doesn't that prevent future changes to how that structure is used?\n\nNew tuple tuple_insert_with_arbiter() table AM API method needs EState\nargument to call executor functions: ExecCheckIndexConstraints(),\nExecUpdateLockMode(), and ExecInsertIndexTuples(). I think we\nprobably need to invent some opaque way to call this executor function\nwithout revealing EState to table AM. 
Do you think this could work?\n\n> > 0008-Let-table-AM-insertion-methods-control-index-inse-v1.patch\n> >\n> > Allows table AM to skip index insertions in the executor and handle\n> > those insertions itself.\n>\n> The new parameter could use more documentation.\n>\n> > 0009-Custom-reloptions-for-table-AM-v1.patch\n> >\n> > Enables table AMs to define and override reloptions for tables and indexes.\n>\n> This could use some regression tests to exercise the custom reloptions.\n\nThank you for these notes. I'll take this into account for the next\npatchset version.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 25 Nov 2023 19:47:57 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "\n\n> On Nov 25, 2023, at 9:47 AM, Alexander Korotkov <[email protected]> wrote:\n> \n>> Should the patch at least document which parts of the EState are expected to be in which states, and which parts should be viewed as undefined? If the implementors of table AMs rely on any/all aspects of EState, doesn't that prevent future changes to how that structure is used?\n> \n> New tuple tuple_insert_with_arbiter() table AM API method needs EState\n> argument to call executor functions: ExecCheckIndexConstraints(),\n> ExecUpdateLockMode(), and ExecInsertIndexTuples(). I think we\n> probably need to invent some opaque way to call this executor function\n> without revealing EState to table AM. Do you think this could work?\n\nWe're clearly not accessing all of the EState, just some specific fields, such as es_per_tuple_exprcontext. I think you could at least refactor to pass the minimum amount of state information through the table AM API.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 27 Nov 2023 12:19:09 -0800",
"msg_from": "Mark Dilger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\nI think table AM extensibility is a very good idea generally, not only in\nthe scope of APIs that are needed in OrioleDB. Thanks for your proposals!\n\nFor patches\n\n> 0001-Allow-locking-updated-tuples-in-tuple_update-and--v1.patch\n\n0002-Add-EvalPlanQual-delete-returning-isolation-test-v1.patch\n\n\nThe new isolation test is related to the previous patch. These two\npatches were previously discussed in [2].\n\n\nAs discussion in [2] seems close to the patches being committed and the\nonly thing it is not in v16 yet is that it was too close to feature freeze,\nI've copied these most recent versions of patches 0001 and 0002 from this\nthread in [2] to finish and commit them there.\n\nI'm planning to review some of the other patches from the current patchset\nsoon.\n\n[2].\nhttps://www.postgresql.org/message-id/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com\n\nKind regards,\nPavel Borisov",
"msg_date": "Tue, 28 Nov 2023 14:33:56 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\n> I'm planning to review some of the other patches from the current patchset\n> soon.\n>\n\nI've looked into the patch 0003.\nThe patch looks in good shape and is uncontroversial to me. Making memory\nstructures to be dynamically allocated is simple enough and it allows to\nstore complex data like lists etc. I consider places like this that expect\nmemory structures to be flat and allocated at once are there because there\nwas no need for more complex ones previously. If there is a need for them,\nI think they could be added without much doubt, provided the simplicity of\nthe change.\n\nFor the code:\n+static inline void\n+table_free_rd_amcache(Relation rel)\n+{\n+ if (rel->rd_tableam && rel->rd_tableam->free_rd_amcache)\n+ {\n+ rel->rd_tableam->free_rd_amcache(rel);\n+ }\n+ else\n+ {\n+ if (rel->rd_amcache)\n+ pfree(rel->rd_amcache);\n+ rel->rd_amcache = NULL;\n+ }\n\nhere I suggest adding Assert(rel->rd_amcache == NULL) (or maybe better an\nerror report) after calling free_rd_amcache to be sure the custom\nimplementation has done what it should do.\n\nAlso, I think some brief documentation about writing this custom method is\nquite relevant maybe based on already existing comments in the code.\n\nKind regards,\nPavel",
"msg_date": "Wed, 29 Nov 2023 17:55:38 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nPavel, as far as I understand Alexander's idea assertion and especially\nereport\nhere does not make any sense - this method is not considered to report\nerror, it\nsilently calls if there is underlying [free] function and simply falls\nthrough otherwise,\nalso, take into account that it could be located in the uninterruptible\npart of the code.\n\nOn the whole topic I have to\n\nOn Wed, Nov 29, 2023 at 4:56 PM Pavel Borisov <[email protected]>\nwrote:\n\n> Hi, Alexander!\n>\n>> I'm planning to review some of the other patches from the current\n>> patchset soon.\n>>\n>\n> I've looked into the patch 0003.\n> The patch looks in good shape and is uncontroversial to me. Making memory\n> structures to be dynamically allocated is simple enough and it allows to\n> store complex data like lists etc. I consider places like this that expect\n> memory structures to be flat and allocated at once are because the was no\n> need in more complex ones previously. If there is a need for them, I think\n> they could be added without much doubt, provided the simplicity of the\n> change.\n>\n> For the code:\n> +static inline void\n> +table_free_rd_amcache(Relation rel)\n> +{\n> + if (rel->rd_tableam && rel->rd_tableam->free_rd_amcache)\n> + {\n> + rel->rd_tableam->free_rd_amcache(rel);\n> + }\n> + else\n> + {\n> + if (rel->rd_amcache)\n> + pfree(rel->rd_amcache);\n> + rel->rd_amcache = NULL;\n> + }\n>\n> here I suggest adding Assert(rel->rd_amcache == NULL) (or maybe better an\n> error report) after calling free_rd_amcache to be sure the custom\n> implementation has done what it should do.\n>\n> Also, I think some brief documentation about writing this custom method is\n> quite relevant maybe based on already existing comments in the code.\n>\n> Kind regards,\n> Pavel\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/",
"msg_date": "Wed, 29 Nov 2023 17:27:20 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Nikita!\n\nOn Wed, 29 Nov 2023 at 18:27, Nikita Malakhov <[email protected]> wrote:\n\n> Hi,\n>\n> Pavel, as far as I understand Alexander's idea assertion and especially\n> ereport\n> here does not make any sense - this method is not considered to report\n> error, it\n> silently calls if there is underlying [free] function and simply falls\n> through otherwise,\n> also, take into account that it could be located in the uninterruptible\n> part of the code.\n>\n> On the whole topic I have to\n>\n> On Wed, Nov 29, 2023 at 4:56 PM Pavel Borisov <[email protected]>\n> wrote:\n>\n>> Hi, Alexander!\n>>\n>>> I'm planning to review some of the other patches from the current\n>>> patchset soon.\n>>>\n>>\n>> I've looked into the patch 0003.\n>> The patch looks in good shape and is uncontroversial to me. Making memory\n>> structures to be dynamically allocated is simple enough and it allows to\n>> store complex data like lists etc. I consider places like this that expect\n>> memory structures to be flat and allocated at once are because the was no\n>> need in more complex ones previously. If there is a need for them, I think\n>> they could be added without much doubt, provided the simplicity of the\n>> change.\n>>\n>> For the code:\n>> +static inline void\n>> +table_free_rd_amcache(Relation rel)\n>> +{\n>> + if (rel->rd_tableam && rel->rd_tableam->free_rd_amcache)\n>> + {\n>> + rel->rd_tableam->free_rd_amcache(rel);\n>> + }\n>> + else\n>> + {\n>> + if (rel->rd_amcache)\n>> + pfree(rel->rd_amcache);\n>> + rel->rd_amcache = NULL;\n>> + }\n>>\n>> here I suggest adding Assert(rel->rd_amcache == NULL) (or maybe better an\n>> error report) after calling free_rd_amcache to be sure the custom\n>> implementation has done what it should do.\n>>\n>> Also, I think some brief documentation about writing this custom method\n>> is quite relevant maybe based on already existing comments in the code.\n>>\n>> Kind regards,\n>> Pavel\n>>\n>\n\nWhen we do default single chunk routine we invalidate rd_amcache pointer,\n+ if (rel->rd_amcache)\n+ pfree(rel->rd_amcache);\n+ rel->rd_amcache = NULL;\n\nIf we delegate this to a method, my idea is to check that the method\nimplementation doesn't leave this pointer valid.\nIf it's not needed, I'm ok with it, but to me it seems that the check I\nproposed makes sense.\n\nRegards,\nPavel",
"msg_date": "Wed, 29 Nov 2023 18:35:28 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\nI've reviewed patch 0004. It's clear enough and I think it does what it's\nsupposed to.\nOne thing, in function signature\n+bool (*tuple_is_current) (Relation rel, TupleTableSlot *slot);\nthere is a Relation argument, which is unused in both existing heapam\nmethods. Also it's unused in the OrioleDB implementation of tuple_is_current.\nFor what goal is it needed in the interface?\n\nNo other objections around this patch.\n\nI've also looked at 0005-0007. Although it is not a thorough review, they\nseem to depend on previous patch 0004.\nAdditionally changes in 0007 look dependent on 0005. Does replacement of\nslot inside ExecInsert, that is already used in the code below the call of\n\n>/* insert the tuple normally */\n>- table_tuple_insert(resultRelationDesc, slot,\n>- estate->es_output_cid,\n>- 0, NULL);\n\ncould be done without side effects?\n\nKind regards,\nPavel.",
"msg_date": "Wed, 20 Dec 2023 16:51:23 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": ">\n>\n> Additionally changes in 0007 look dependent on 0005. Does replacement\n> of slot inside ExecInsert, that is already used in the code below the call\n> of\n>\n> >/* insert the tuple normally */\n> >- table_tuple_insert(resultRelationDesc, slot,\n> >- estate->es_output_cid,\n> >- 0, NULL);\n>\n> could be done without side effects?\n>\n\nI'm sorry that I inserted not all relevant code in the previous message:\n\n /* insert the tuple normally */\n- table_tuple_insert(resultRelationDesc, slot,\n- estate->es_output_cid,\n- 0, NULL);\n+ slot = table_tuple_insert(resultRelationDesc, slot,\n+ estate->es_output_cid,\n+\n(Previously the slot variable that exists in ExecInsert() and could be used\nlater was not modified at the quoted code block)\n\nPavel.",
"msg_date": "Wed, 20 Dec 2023 16:56:45 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Matthias!\n\nOn Fri, Nov 24, 2023 at 1:07 AM Matthias van de Meent\n<[email protected]> wrote:\n> On Thu, 23 Nov 2023 at 13:43, Alexander Korotkov <[email protected]> wrote:\n> >\n> > Hello PostgreSQL Hackers,\n> >\n> > I am pleased to submit a series of patches related to the Table Access\n> > Method (AM) interface, which I initially announced during my talk at\n> > PGCon 2023 [1]. These patches are primarily designed to support the\n> > OrioleDB engine, but I believe they could be beneficial for other\n> > table AM implementations as well.\n> >\n> > The focus of these patches is to introduce more flexibility and\n> > capabilities into the Table AM interface. This is particularly\n> > relevant for advanced use cases like index-organized tables,\n> > alternative MVCC implementations, etc.\n> >\n> > Here's a brief overview of the patches included in this set:\n>\n> Note: no significant review of the patches, just a first response on\n> the cover letters and oddities I noticed:\n>\n> Overall, this patchset adds significant API area to TableAmRoutine,\n> without adding the relevant documentation on how it's expected to be\n> used.\n\nI have to note that, unlike documentation for index access methods,\nour documentation for table access methods doesn't have an explanation\nof API functions. Instead, it just refers to tableam.h for details.\nThe patches touching tableam.h also revise relevant comments. These\ncomments are for sure a target for improvements.\n\n> With the overall size of the patchset also being very\n> significant\n\nI wouldn't say that volume is very significant. It's just 2K lines,\nnot the great size of a patchset. But it for sure represents changes\nof great importance.\n\n> I don't think this patch is reviewable as is; the goal\n> isn't clear enough,\n\nThe goal is to revise table AM API so that new full-featured\nimplementations could exist. 
AFAICS, the current API was designed\nkeeping zheap in mind, but even zheap was always shipped with the core\npatch. All other implementations of table AM, which I've seen, are\nvery limited. Basically, there is still no real alternative and\nfunctional OLTP table AM. I believe API limitation is one of the\nreasons for that.\n\n> the APIs aren't well explained, and\n\nAs I mentioned before, the table AM API is documented by the comments\nin tableam.h. The comments in the patchset aren't perfect for sure,\nbut a subject for the incremental improvements.\n\n> the interactions with the index API are left up in the air.\n\nRight. These patches bring more control on interactions with indexes\nto table AMs without touching the index API. In my PGCon 2016 talk I\nproposed that table AM could have its own implementation of index AM.\n\nAs you mentioned before, this patchset isn't very small already.\nConsidering it all together with a patchset for index AM redesign\nwould make it a mess. I propose we can consider here the patches,\nwhich are usable by themselves even without index AM changes. And the\npatches tightly coupled with index AM API changes could be considered\nlater together with those changes.\n\n> > 0001-Allow-locking-updated-tuples-in-tuple_update-and--v1.patch\n> >\n> > Optimizes the process of locking concurrently updated tuples during\n> > update and delete operations. Helpful for table AMs where refinding\n> > existing tuples is expensive.\n>\n> Is this essentially an optimized implementation of the \"DELETE FROM\n> ... RETURNING *\" per-tuple primitive?\n\nNot really. The test for \"DELETE FROM ... RETURNING *\" was used just\nto reproduce one of the bugs spotted in [2].
The general idea is to\navoid repeated calls for tuple lock.\n\n> > 0003-Allow-table-AM-to-store-complex-data-structures-i-v1.patch\n> >\n> > Allows table AM to store complex data structure in rd_amcache rather\n> > than a single chunk of memory.\n>\n> I don't think we should allow arbitrarily large and arbitrarily many\n> chunks of data in the relcache or table caches.\n\nHmm.. It seems to be far out of control of API what and how large\nPostgreSQL extensions could actually cache.\n\n> Why isn't one chunk\n> enough?\n\nIt's generally possible to fit everything into one chunk, but that's\nextremely unhandy when your cache contains something at least as\ncomplex as tuple slots and descriptors. I think the reason that we\nstill have one chunk restriction is that we don't have a full-featured\nimplementation fitting API yet. If we had it, I can't imagine there\nwould be one chunk for a cache.\n\n> > 0004-Add-table-AM-tuple_is_current-method-v1.patch\n> >\n> > This allows us to abstract how/whether table AM uses transaction identifiers.\n>\n> I'm not a fan of the indirection here. Also, assuming that table slots\n> don't outlive transactions, wouldn't this be a more appropriate fit\n> with the table tuple slot API?\n\nThis is a good idea. I will update the patch accordingly.\n\n> > 0005-Generalize-relation-analyze-in-table-AM-interface-v1.patch\n> >\n> > Provides a more flexible API for sampling tuples, beneficial for\n> > non-standard table types like index-organized tables.\n> >\n> > 0006-Generalize-table-AM-API-for-INSERT-.-ON-CONFLICT-v1.patch\n> >\n> > Provides a new table AM API method to encapsulate the whole INSERT ...\n> > ON CONFLICT ... algorithm rather than just implementation of\n> > speculative tokens.\n>\n> Does this not still require speculative inserts, with speculative\n> tokens, for secondary indexes? 
Why make AMs implement that all over\n> again?\n\nThe idea here is to generalize upsert and leave speculative tokens as\ndetails of one particular implementation. Imagine an index-organized\ntable and upsert on primary key. For that you need to just locate the\nrelevant page in a tree and do insert or update. Speculative tokens\nwould rather be an unreasonable complication for this case.\n\n> > 0007-Allow-table-AM-tuple_insert-method-to-return-the--v1.patch\n> >\n> > This allows table AM to return a native tuple slot, which is aware of\n> > table AM-specific system attributes.\n>\n> This seems reasonable.\n>\n> > 0008-Let-table-AM-insertion-methods-control-index-inse-v1.patch\n> >\n> > Allows table AM to skip index insertions in the executor and handle\n> > those insertions itself.\n>\n> Who handles index tuple removal then?\n\nTable AM implementation decides what actions to perform on tuple\nupdate/delete. The reason why it can't really care about updating\nindexes is that the executor already does it.\nThe situation is different with deletes, because the executor doesn't\ndo something immediately about the corresponding index tuples. They\nare deleted later by vacuum, which is also controlled by table AM\nimplementation.\n\n> I don't see a patch that describes index AM changes for this...\n\nYes, index AM should be revised for that. 
See my comment about that earlier.\n\n> > 0009-Custom-reloptions-for-table-AM-v1.patch\n> >\n> > Enables table AMs to define and override reloptions for tables and indexes.\n> >\n> > 0010-Notify-table-AM-about-index-creation-v1.patch\n> >\n> > Allows table AMs to prepare or update specific meta-information during\n> > index creation.\n>\n> I don't think the described use case of this API is OK - a table AM\n> cannot know about the internals of index AMs, and is definitely not\n> allowed to overwrite the information of that index.\n> If I ask for an index that uses the \"btree\" index, then that needs to\n> be the AM actually used, or an error needs to be raised if it is\n> somehow incompatible with the table AM used. It can't be that we\n> silently update information and create an index that is explicitly not\n> what the user asked to create.\n\nI agree that this currently looks more like workarounds rather than\nproper API changes. I propose these two should be considered later\ntogether with relevant index API changes.\n\n> I also don't see updates in documentation, which I think is quite a\n> shame as I have trouble understanding some parts.\n\nSorry for this. I hope I gave some answers in this message and I'll\nupdate the patchset comments and commit messages accordingly. And I'm\nopen to answer any further questions.\n\n> > 0012-Introduce-RowID-bytea-tuple-identifier-v1.patch\n> >\n> > `This patch introduces 'RowID', a new bytea tuple identifier, to\n> > overcome the limitations of the current 32-bit block number and 16-bit\n> > offset-based tuple identifier. This is particularly useful for\n> > index-organized tables and other advanced use cases.\n>\n> We don't have any index methods that can handle anything but\n> block+offset TIDs, and I don't see any changes to the IndexAM APIs to\n> support these RowID tuples, so what's the plan here? 
I don't see any\n> of that in the commit message, nor in the rest of this patchset.\n>\n> > Each commit message contains a detailed explanation of the changes and\n> > their rationale. I believe these enhancements will significantly\n> > improve the flexibility and capabilities of the PostgreSQL Table AM\n> > interface.\n>\n> I've noticed there is not a lot of rationale for several of the\n> changes as to why PostgreSQL needs these changes implemented like\n> this, amongst which the index-related tableAM changes.\n>\n> I understand that index-organized tables can be quite useful, but I\n> don't see design solutions to the more complex questions that would\n> still be required before we could host such table AMs like OreoleDB's\n> as a first-party citizen: For index-organized tables, you also need\n> index AM APIs that support TIDS with more than 48 bits of data\n> (assuming we actually want primary keys with >48 bits of usable\n> space), and for undo-based logging you would probably need index APIs\n> for retail index tuple deletion. Neither is supplied here, nor is\n> described why these APIs were omitted.\n\nAs I mentioned before, I agree that index AM changes haven't been\npresented yet. And yes, for bytea rowID there is currently no way to\nuse the current index API. However, I think this exact patch could be\nuseful even without index AM support. This allows table AMs to\nidentify rows by custom bytea, even though these tables couldn't be\nindexed yet. So, if we allow a custom table AM to implement an\nindex-organized table, that would have use cases even if secondary\nindexes are not supported yet.\n\nLinks\n1. https://pgconf.ru/media/2016/02/19/06_Korotkov%20Extendability.pdf\n2. https://www.postgresql.org/message-id/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 3 Mar 2024 13:48:07 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 10:18 PM Mark Dilger\n<[email protected]> wrote:\n>\n> > On Nov 25, 2023, at 9:47 AM, Alexander Korotkov <[email protected]> wrote:\n> >\n> >> Should the patch at least document which parts of the EState are expected to be in which states, and which parts should be viewed as undefined? If the implementors of table AMs rely on any/all aspects of EState, doesn't that prevent future changes to how that structure is used?\n> >\n> > New tuple tuple_insert_with_arbiter() table AM API method needs EState\n> > argument to call executor functions: ExecCheckIndexConstraints(),\n> > ExecUpdateLockMode(), and ExecInsertIndexTuples(). I think we\n> > probably need to invent some opaque way to call this executor function\n> > without revealing EState to table AM. Do you think this could work?\n>\n> We're clearly not accessing all of the EState, just some specific fields, such as es_per_tuple_exprcontext. I think you could at least refactor to pass the minimum amount of state information through the table AM API.\n\nYes, the table AM doesn't need the full EState, just the ability to do\nspecific manipulation with tuples. I'll refactor the patch to make a\nbetter isolation for this.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 3 Mar 2024 13:50:38 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Sun, Mar 3, 2024 at 1:50 PM Alexander Korotkov <[email protected]> wrote:\n> On Mon, Nov 27, 2023 at 10:18 PM Mark Dilger\n> <[email protected]> wrote:\n> >\n> > > On Nov 25, 2023, at 9:47 AM, Alexander Korotkov <[email protected]> wrote:\n> > >\n> > >> Should the patch at least document which parts of the EState are expected to be in which states, and which parts should be viewed as undefined? If the implementors of table AMs rely on any/all aspects of EState, doesn't that prevent future changes to how that structure is used?\n> > >\n> > > New tuple tuple_insert_with_arbiter() table AM API method needs EState\n> > > argument to call executor functions: ExecCheckIndexConstraints(),\n> > > ExecUpdateLockMode(), and ExecInsertIndexTuples(). I think we\n> > > probably need to invent some opaque way to call this executor function\n> > > without revealing EState to table AM. Do you think this could work?\n> >\n> > We're clearly not accessing all of the EState, just some specific fields, such as es_per_tuple_exprcontext. I think you could at least refactor to pass the minimum amount of state information through the table AM API.\n>\n> Yes, the table AM doesn't need the full EState, just the ability to do\n> specific manipulation with tuples. I'll refactor the patch to make a\n> better isolation for this.\n\nPlease find the revised patchset attached. The changes are following:\n1. Patchset is rebase. to the current master.\n2. Patchset is reordered. I tried to put less debatable patches to the top.\n3. tuple_is_current() method is moved from the Table AM API to the\nslot as proposed by Matthias van de Meent.\n4. Assert added to the table_free_rd_amcache() as proposed by Pavel Borisov.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 19 Mar 2024 01:34:13 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Tue, 19 Mar 2024 at 03:34, Alexander Korotkov <[email protected]>\nwrote:\n\n> On Sun, Mar 3, 2024 at 1:50 PM Alexander Korotkov <[email protected]>\n> wrote:\n> > On Mon, Nov 27, 2023 at 10:18 PM Mark Dilger\n> > <[email protected]> wrote:\n> > >\n> > > > On Nov 25, 2023, at 9:47 AM, Alexander Korotkov <\n> [email protected]> wrote:\n> > > >\n> > > >> Should the patch at least document which parts of the EState are\n> expected to be in which states, and which parts should be viewed as\n> undefined? If the implementors of table AMs rely on any/all aspects of\n> EState, doesn't that prevent future changes to how that structure is used?\n> > > >\n> > > > New tuple tuple_insert_with_arbiter() table AM API method needs\n> EState\n> > > > argument to call executor functions: ExecCheckIndexConstraints(),\n> > > > ExecUpdateLockMode(), and ExecInsertIndexTuples(). I think we\n> > > > probably need to invent some opaque way to call this executor\n> function\n> > > > without revealing EState to table AM. Do you think this could work?\n> > >\n> > > We're clearly not accessing all of the EState, just some specific\n> fields, such as es_per_tuple_exprcontext. I think you could at least\n> refactor to pass the minimum amount of state information through the table\n> AM API.\n> >\n> > Yes, the table AM doesn't need the full EState, just the ability to do\n> > specific manipulation with tuples. I'll refactor the patch to make a\n> > better isolation for this.\n>\n> Please find the revised patchset attached. The changes are following:\n> 1. Patchset is rebase. to the current master.\n> 2. Patchset is reordered. I tried to put less debatable patches to the\n> top.\n> 3. tuple_is_current() method is moved from the Table AM API to the\n> slot as proposed by Matthias van de Meent.\n> 4. 
Assert added to the table_free_rd_amcache() as proposed by Pavel\n> Borisov.\n>\n\nPatches 0001-0002 are unchanged compared to the last version in thread [1].\nIn my opinion, it's still ready to be committed, which was not done for\ntime were too close to feature freeze one year ago.\n\n0003 - Assert added from previous version. I still have a strong opinion\nthat allowing multi-chunked data structures instead of single chunks is\ncompletely safe and makes natural process of Postgres improvement that is\nself-justified. The patch is simple enough and ready to be pushed.\n\n0004 (previously 0007) - Have not changed, and there is consensus that\nthis is reasonable. I've re-checked the current code. Looks safe\nconsidering returning a different slot, which I doubted before. So consider\nthis patch also ready.\n\n0005 (previously 0004) - Unused argument in the is_current_xact_tuple()\nsignature is removed. Also comparing to v1 the code shifted from tableam\nmethods to TTS's level.\n\nI'd propose to remove Assert(!TTS_EMPTY(slot))\nfor tts_minimal_is_current_xact_tuple()\nand tts_virtual_is_current_xact_tuple() as these are only error reporting\nfunctions that don't use slot actually.\n\nComment similar to:\n+/*\n+ * VirtualTupleTableSlots never have a storage tuples. We generally\n+ * shouldn't get here, but provide a user-friendly message if we do.\n+ */\nalso applies to tts_minimal_is_current_xact_tuple()\n\nI'd propose changes for clarity of this comment:\n%s/a storage tuples/storage tuples/g\n%s/generally//g\n\nOtherwise patch 0005 also looks good to me.\n\nI'm planning to review the remaining patches. 
Meanwhile think pushing what\nis now ready and uncontroversial is a good intention.\nThank you for the work done on this patchset!\n\nRegards,\nPavel Borisov,\nSupabase.\n\n[1].\nhttps://www.postgresql.org/message-id/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com\n",
"msg_date": "Tue, 19 Mar 2024 13:34:43 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Pavel!\n\nOn Tue, Mar 19, 2024 at 11:34 AM Pavel Borisov <[email protected]> wrote:\n> On Tue, 19 Mar 2024 at 03:34, Alexander Korotkov <[email protected]> wrote:\n>>\n>> On Sun, Mar 3, 2024 at 1:50 PM Alexander Korotkov <[email protected]> wrote:\n>> > On Mon, Nov 27, 2023 at 10:18 PM Mark Dilger\n>> > <[email protected]> wrote:\n>> > >\n>> > > > On Nov 25, 2023, at 9:47 AM, Alexander Korotkov <[email protected]> wrote:\n>> > > >\n>> > > >> Should the patch at least document which parts of the EState are expected to be in which states, and which parts should be viewed as undefined? If the implementors of table AMs rely on any/all aspects of EState, doesn't that prevent future changes to how that structure is used?\n>> > > >\n>> > > > New tuple tuple_insert_with_arbiter() table AM API method needs EState\n>> > > > argument to call executor functions: ExecCheckIndexConstraints(),\n>> > > > ExecUpdateLockMode(), and ExecInsertIndexTuples(). I think we\n>> > > > probably need to invent some opaque way to call this executor function\n>> > > > without revealing EState to table AM. Do you think this could work?\n>> > >\n>> > > We're clearly not accessing all of the EState, just some specific fields, such as es_per_tuple_exprcontext. I think you could at least refactor to pass the minimum amount of state information through the table AM API.\n>> >\n>> > Yes, the table AM doesn't need the full EState, just the ability to do\n>> > specific manipulation with tuples. I'll refactor the patch to make a\n>> > better isolation for this.\n>>\n>> Please find the revised patchset attached. The changes are following:\n>> 1. Patchset is rebase. to the current master.\n>> 2. Patchset is reordered. I tried to put less debatable patches to the top.\n>> 3. tuple_is_current() method is moved from the Table AM API to the\n>> slot as proposed by Matthias van de Meent.\n>> 4. 
Assert added to the table_free_rd_amcache() as proposed by Pavel Borisov.\n>\n>\n> Patches 0001-0002 are unchanged compared to the last version in thread [1]. In my opinion, it's still ready to be committed, which was not done for time were too close to feature freeze one year ago.\n>\n> 0003 - Assert added from previous version. I still have a strong opinion that allowing multi-chunked data structures instead of single chunks is completely safe and makes natural process of Postgres improvement that is self-justified. The patch is simple enough and ready to be pushed.\n>\n> 0004 (previously 0007) - Have not changed, and there is consensus that this is reasonable. I've re-checked the current code. Looks safe considering returning a different slot, which I doubted before. So consider this patch also ready.\n>\n> 0005 (previously 0004) - Unused argument in the is_current_xact_tuple() signature is removed. Also comparing to v1 the code shifted from tableam methods to TTS's level.\n>\n> I'd propose to remove Assert(!TTS_EMPTY(slot)) for tts_minimal_is_current_xact_tuple() and tts_virtual_is_current_xact_tuple() as these are only error reporting functions that don't use slot actually.\n>\n> Comment similar to:\n> +/*\n> + * VirtualTupleTableSlots never have a storage tuples. We generally\n> + * shouldn't get here, but provide a user-friendly message if we do.\n> + */\n> also applies to tts_minimal_is_current_xact_tuple()\n>\n> I'd propose changes for clarity of this comment:\n> %s/a storage tuples/storage tuples/g\n> %s/generally//g\n>\n> Otherwise patch 0005 also looks good to me.\n>\n> I'm planning to review the remaining patches. Meanwhile think pushing what is now ready and uncontroversial is a good intention.\n> Thank you for the work done on this patchset!\n\nThank you, Pavel!\n\nRegarding 0005, I did apply \"a storage tuples\" grammar fix. 
Regarding\nthe rest of the things, I'd like to keep methods\ntts_*_is_current_xact_tuple() to be similar to nearby\ntts_*_getsysattr(). This is why I'm keeping the rest unchanged. I\nthink we could refactor that later, but together with\ntts_*_getsysattr() methods.\n\nI'm going to push 0003, 0004 and 0005 if there are no objections.\n\nAnd I'll update 0001 and 0002 in their dedicated thread.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 19 Mar 2024 15:05:22 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "\nOn Tue, 19 Mar 2024 at 21:05, Alexander Korotkov <[email protected]> wrote:\n> Hi, Pavel!\n>\n> On Tue, Mar 19, 2024 at 11:34 AM Pavel Borisov <[email protected]> wrote:\n>> On Tue, 19 Mar 2024 at 03:34, Alexander Korotkov <[email protected]> wrote:\n>>>\n>>> On Sun, Mar 3, 2024 at 1:50 PM Alexander Korotkov <[email protected]> wrote:\n>>> > On Mon, Nov 27, 2023 at 10:18 PM Mark Dilger\n>>> > <[email protected]> wrote:\n>>> > >\n>>> > > > On Nov 25, 2023, at 9:47 AM, Alexander Korotkov <[email protected]> wrote:\n>>> > > >\n>>> > > >> Should the patch at least document which parts of the EState are expected to be in which states, and which parts should be viewed as undefined? If the implementors of table AMs rely on any/all aspects of EState, doesn't that prevent future changes to how that structure is used?\n>>> > > >\n>>> > > > New tuple tuple_insert_with_arbiter() table AM API method needs EState\n>>> > > > argument to call executor functions: ExecCheckIndexConstraints(),\n>>> > > > ExecUpdateLockMode(), and ExecInsertIndexTuples(). I think we\n>>> > > > probably need to invent some opaque way to call this executor function\n>>> > > > without revealing EState to table AM. Do you think this could work?\n>>> > >\n>>> > > We're clearly not accessing all of the EState, just some specific fields, such as es_per_tuple_exprcontext. I think you could at least refactor to pass the minimum amount of state information through the table AM API.\n>>> >\n>>> > Yes, the table AM doesn't need the full EState, just the ability to do\n>>> > specific manipulation with tuples. I'll refactor the patch to make a\n>>> > better isolation for this.\n>>>\n>>> Please find the revised patchset attached. The changes are following:\n>>> 1. Patchset is rebase. to the current master.\n>>> 2. Patchset is reordered. I tried to put less debatable patches to the top.\n>>> 3. 
tuple_is_current() method is moved from the Table AM API to the\n>>> slot as proposed by Matthias van de Meent.\n>>> 4. Assert added to the table_free_rd_amcache() as proposed by Pavel Borisov.\n>>\n>>\n>> Patches 0001-0002 are unchanged compared to the last version in thread [1]. In my opinion, it's still ready to be committed, which was not done for time were too close to feature freeze one year ago.\n>>\n>> 0003 - Assert added from previous version. I still have a strong opinion that allowing multi-chunked data structures instead of single chunks is completely safe and makes natural process of Postgres improvement that is self-justified. The patch is simple enough and ready to be pushed.\n>>\n>> 0004 (previously 0007) - Have not changed, and there is consensus that this is reasonable. I've re-checked the current code. Looks safe considering returning a different slot, which I doubted before. So consider this patch also ready.\n>>\n>> 0005 (previously 0004) - Unused argument in the is_current_xact_tuple() signature is removed. Also comparing to v1 the code shifted from tableam methods to TTS's level.\n>>\n>> I'd propose to remove Assert(!TTS_EMPTY(slot)) for tts_minimal_is_current_xact_tuple() and tts_virtual_is_current_xact_tuple() as these are only error reporting functions that don't use slot actually.\n>>\n>> Comment similar to:\n>> +/*\n>> + * VirtualTupleTableSlots never have a storage tuples. We generally\n>> + * shouldn't get here, but provide a user-friendly message if we do.\n>> + */\n>> also applies to tts_minimal_is_current_xact_tuple()\n>>\n>> I'd propose changes for clarity of this comment:\n>> %s/a storage tuples/storage tuples/g\n>> %s/generally//g\n>>\n>> Otherwise patch 0005 also looks good to me.\n>>\n>> I'm planning to review the remaining patches. 
Meanwhile think pushing what is now ready and uncontroversial is a good intention.\n>> Thank you for the work done on this patchset!\n>\n> Thank you, Pavel!\n>\n> Regarding 0005, I did apply \"a storage tuples\" grammar fix. Regarding\n> the rest of the things, I'd like to keep methods\n> tts_*_is_current_xact_tuple() to be similar to nearby\n> tts_*_getsysattr(). This is why I'm keeping the rest unchanged. I\n> think we could refactor that later, but together with\n> tts_*_getsysattr() methods.\n>\n> I'm going to push 0003, 0004 and 0005 if there are no objections.\n>\n> And I'll update 0001 and 0002 in their dedicated thread.\n>\n\nWhen I try to test the patch on Ubuntu 22.04 with GCC 11.4.0. There are some\nwarnings as following:\n\n/home/japin/Codes/postgres/build/../src/backend/access/heap/heapam_handler.c: In function ‘heapam_acquire_sample_rows’:\n/home/japin/Codes/postgres/build/../src/backend/access/heap/heapam_handler.c:1603:28: warning: implicit declaration of function ‘get_tablespace_maintenance_io_concurrency’ [-Wimplicit-function-declaration]\n 1603 | prefetch_maximum = get_tablespace_maintenance_io_concurrency(onerel->rd_rel->reltablespace);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/home/japin/Codes/postgres/build/../src/backend/access/heap/heapam_handler.c:1757:30: warning: implicit declaration of function ‘floor’ [-Wimplicit-function-declaration]\n 1757 | *totalrows = floor((liverows / bs.m) * totalblocks + 0.5);\n | ^~~~~\n/home/japin/Codes/postgres/build/../src/backend/access/heap/heapam_handler.c:49:1: note: include ‘<math.h>’ or provide a declaration of ‘floor’\n 48 | #include \"utils/sampling.h\"\n +++ |+#include <math.h>\n 49 |\n/home/japin/Codes/postgres/build/../src/backend/access/heap/heapam_handler.c:1757:30: warning: incompatible implicit declaration of built-in function ‘floor’ [-Wbuiltin-declaration-mismatch]\n 1757 | *totalrows = floor((liverows / bs.m) * totalblocks + 0.5);\n | 
^~~~~\n/home/japin/Codes/postgres/build/../src/backend/access/heap/heapam_handler.c:1757:30: note: include ‘<math.h>’ or provide a declaration of ‘floor’\n/home/japin/Codes/postgres/build/../src/backend/access/heap/heapam_handler.c:1603:21: warning: implicit declaration of function 'get_tablespace_maintenance_io_concurrency' is invalid in C99 [-Wimplicit-function-declaration]\n prefetch_maximum = get_tablespace_maintenance_io_concurrency(onerel->rd_rel->reltablespace);\n ^\n\nIt seems you forgot to include math.h and utils/spccache.h header files\nin heapam_handler.c.\n\ndiff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c\nindex ac24691bd2..04365394f1 100644\n--- a/src/backend/access/heap/heapam_handler.c\n+++ b/src/backend/access/heap/heapam_handler.c\n@@ -19,6 +19,8 @@\n */\n #include \"postgres.h\"\n\n+#include <math.h>\n+\n #include \"access/genam.h\"\n #include \"access/heapam.h\"\n #include \"access/heaptoast.h\"\n@@ -46,6 +48,7 @@\n #include \"utils/builtins.h\"\n #include \"utils/rel.h\"\n #include \"utils/sampling.h\"\n+#include \"utils/spccache.h\"\n\n static TM_Result heapam_tuple_lock(Relation relation, Datum tupleid,\n \t\t\t\t\t\t\t\t Snapshot snapshot, TupleTableSlot *slot,\n\n\n",
"msg_date": "Tue, 19 Mar 2024 22:26:06 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 4:26 PM Japin Li <[email protected]> wrote:\n> On Tue, 19 Mar 2024 at 21:05, Alexander Korotkov <[email protected]> wrote:\n> > Regarding 0005, I did apply \"a storage tuples\" grammar fix. Regarding\n> > the rest of the things, I'd like to keep methods\n> > tts_*_is_current_xact_tuple() to be similar to nearby\n> > tts_*_getsysattr(). This is why I'm keeping the rest unchanged. I\n> > think we could refactor that later, but together with\n> > tts_*_getsysattr() methods.\n> >\n> > I'm going to push 0003, 0004 and 0005 if there are no objections.\n> >\n> > And I'll update 0001 and 0002 in their dedicated thread.\n> >\n>\n> When I try to test the patch on Ubuntu 22.04 with GCC 11.4.0. There are some\n> warnings as following:\n\nThank you for catching this!\nPlease, find the revised patchset attached.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 19 Mar 2024 17:28:41 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\nFor 0007:\n\nCode inside\n\n+heapam_reloptions(char relkind, Datum reloptions, bool validate)\n+{\n+ if (relkind == RELKIND_RELATION ||\n+ relkind == RELKIND_TOASTVALUE ||\n+ relkind == RELKIND_MATVIEW)\n+ return heap_reloptions(relkind, reloptions, validate);\n+\n+ return NULL;\n\nlooks redundant to what is done inside heap_reloptions(). Was this on\npurpose? Is it possible to leave only \"return heap_reloptions()\" ?\n\nThis looks like a duplicate:\nsrc/include/access/reloptions.h:extern bytea\n*index_reloptions(amoptions_function amoptions, Datum reloptions,\nsrc/include/access/tableam.h:extern bytea\n*index_reloptions(amoptions_function amoptions, Datum reloptions,\n\nOtherwise the patch looks good and doing what it's proposed to do.\n\nRegards,\nPavel Borisov.",
"msg_date": "Wed, 20 Mar 2024 09:22:47 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\n\nOn Wed, 20 Mar 2024 at 09:22, Pavel Borisov <[email protected]> wrote:\n\n> Hi, Alexander!\n>\n> For 0007:\n>\n> Code inside\n>\n> +heapam_reloptions(char relkind, Datum reloptions, bool validate)\n> +{\n> + if (relkind == RELKIND_RELATION ||\n> + relkind == RELKIND_TOASTVALUE ||\n> + relkind == RELKIND_MATVIEW)\n> + return heap_reloptions(relkind, reloptions, validate);\n> +\n> + return NULL;\n>\n> looks redundant to what is done inside heap_reloptions(). Was this on\n> purpose? Is it possible to leave only \"return heap_reloptions()\" ?\n>\n> This looks like a duplicate:\n> src/include/access/reloptions.h:extern bytea\n> *index_reloptions(amoptions_function amoptions, Datum reloptions,\n> src/include/access/tableam.h:extern bytea\n> *index_reloptions(amoptions_function amoptions, Datum reloptions,\n>\n> Otherwise the patch looks good and doing what it's proposed to do.\n>\n\nFor patch 0006:\n\nThe change for analyze is in the same style as for previous table am\nextensibility patches.\n\ntable_scan_analyze_next_tuple/table_scan_analyze_next_block existing\nextensibility is dropped in favour of more general method\ntable_relation_analyze. I haven't found existing extensions on a GitHub\nthat use these table am's, so probably it's quite ok to remove the\nextensibility that didn't get any traction for many years.\n\nThe patch contains a big block of code copy-paste. I've checked that the\ncode is the same with only function name replacement in favor to using\ntable am instead of heap am. I'd propose restoring the static functions\ndeclaration in the head of the file, which was removed in the patch and\nplace heapam_acquire_sample_rows() above compare_rows() to make functions\ncopied as the whole code block. 
This is for better patch look only, not a\nprincipal change.\n\n-static int acquire_sample_rows(Relation onerel, int elevel,\n- HeapTuple *rows, int targrows,\n- double *totalrows, double *totaldeadrows);\n-static int compare_rows(const void *a, const void *b, void *arg)\n\nMay it also be a better place than vacuum.h for\ntypedef int (*AcquireSampleRowsFunc) ? Maybe sampling.h ?\n\n\nThe other patch that I'd like to review is 0012:\n\nFor a\ntypedef enum RowRefType\n I think some comments would be useful to avoid confusion about the changes\nlike\n- newrc->allMarkTypes = (1 << newrc->markType);\n+ newrc->allRefTypes = (1 << refType);\n\nAlso I think the semantical difference between ROW_REF_COPY\nand ROW_MARK_COPY is better to be mentioned in the comments and/or commit\nmessage. This may include a description of assigning different reftypes in\nparse_relation.c\n\nIn a comment there is a small confusion between markType and refType:\n\n * The parent's allRefTypes field gets the OR of (1<<refType) across all\n * its children (this definition allows children to use different\nmarkTypes).\n\nBoth patches look good to me and are ready, though they may need minimal\ncomments/cosmetic work.\n\nRegards,\nPavel Borisov",
"msg_date": "Thu, 21 Mar 2024 09:36:14 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\nThank you for working on this patchset and pushing some of these patches!\n\nI tried to write comments for tts_minimal_is_current_xact_tuple()\nand tts_minimal_getsomeattrs() for them to be the same as for the same\nfunctions for heap and virtual tuple slots, as I proposed above in the\nthread. (tts_minimal_getsysattr is not introduced by the current patchset,\nbut anyway)\n\nMeanwhile I found that (never appearing) error message\nfor tts_minimal_is_current_xact_tuple needs to be corrected. Please see the\npatch in the attachment.\n\nRegards,\nPavel Borisov",
"msg_date": "Fri, 22 Mar 2024 08:51:31 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Fri, 22 Mar 2024 at 08:51, Pavel Borisov <[email protected]> wrote:\n\n> Hi, Alexander!\n>\n> Thank you for working on this patchset and pushing some of these patches!\n>\n> I tried to write comments for tts_minimal_is_current_xact_tuple()\n> and tts_minimal_getsomeattrs() for them to be the same as for the same\n> functions for heap and virtual tuple slots, as I proposed above in the\n> thread. (tts_minimal_getsysattr is not introduced by the current patchset,\n> but anyway)\n>\n> Meanwhile I found that (never appearing) error message\n> for tts_minimal_is_current_xact_tuple needs to be corrected. Please see the\n> patch in the attachment.\n>\nI need to correct myself: it's for tts_minimal_getsysattr() not\ntts_minimal_getsomeattrs()\n\nPavel.",
"msg_date": "Fri, 22 Mar 2024 08:56:33 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 6:56 AM Pavel Borisov <[email protected]> wrote:\n>\n> On Fri, 22 Mar 2024 at 08:51, Pavel Borisov <[email protected]> wrote:\n>>\n>> Hi, Alexander!\n>>\n>> Thank you for working on this patchset and pushing some of these patches!\n>>\n>> I tried to write comments for tts_minimal_is_current_xact_tuple() and tts_minimal_getsomeattrs() for them to be the same as for the same functions for heap and virtual tuple slots, as I proposed above in the thread. (tts_minimal_getsysattr is not introduced by the current patchset, but anyway)\n>>\n>> Meanwhile I found that (never appearing) error message for tts_minimal_is_current_xact_tuple needs to be corrected. Please see the patch in the attachment.\n>>\n> I need to correct myself: it's for tts_minimal_getsysattr() not tts_minimal_getsomeattrs()\n\nPushed.\n\nThe revised rest of the patchset is attached.\n0001 (was 0006) – I prefer the definition of AcquireSampleRowsFunc to\nstay in vacuum.h. If we move it to sampling.h then we would have to\nadd there includes to define Relation, HeapTuple etc. I'd like to\navoid this kind of change. Also, I've deleted\ntable_beginscan_analyze(), because it's only called from\ntableam-specific AcquireSampleRowsFunc. Also I put some comments to\nheapam_scan_analyze_next_block() and heapam_scan_analyze_next_tuple()\ngiven that there are now no relevant comments for them in tableam.h.\nI've removed some redundancies from acquire_sample_rows(). And added\ncomments to AcquireSampleRowsFunc based on what we have in FDW docs\nfor this function. Did some small edits as well. As you suggested,\nturned back declarations for acquire_sample_rows() and compare_rows().\n\n0002 (was 0007) – I've turned the redundant \"if\", which you've pointed\nout, into an assert. Also, added some comments, most notably comment\nfor TableAmRoutine.reloptions based on the indexam docs.\n\n0007 (was 0012) – This patch doesn't make much sense if not removing\nROW_MARK_COPY. 
What an oversight by me! I managed to remove\nROW_MARK_COPY so that tests passed. Added a lot of comments and made\nother improvements. But the problem is that I didn't manage to\nresearch all the consequences of this patch to FDW. And I think there\nare open design questions. In particular how should ROW_REF_COPY work\nwith row marks other than ROW_MARK_REFERENCE and should it work at\nall? This would require some consensus, and it doesn't seem feasible\nto achieve before FF. So, I think this is not a subject for v17.\n\nOther patches are without changes.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 27 Mar 2024 01:22:35 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\nThe revised rest of the patchset is attached.\n> 0001 (was 0006) – I prefer the definition of AcquireSampleRowsFunc to\n> stay in vacuum.h. If we move it to sampling.h then we would have to\n> add there includes to define Relation, HeapTuple etc. I'd like to\n> avoid this kind of change. Also, I've deleted\n> table_beginscan_analyze(), because it's only called from\n> tableam-specific AcquireSampleRowsFunc. Also I put some comments to\n> heapam_scan_analyze_next_block() and heapam_scan_analyze_next_tuple()\n> given that there are now no relevant comments for them in tableam.h.\n> I've removed some redundancies from acquire_sample_rows(). And added\n> comments to AcquireSampleRowsFunc based on what we have in FDW docs\n> for this function. Did some small edits as well. As you suggested,\n> turned back declarations for acquire_sample_rows() and compare_rows().\n>\n\nIn my comment in the thread I was not thinking about returning the old name\nacquire_sample_rows(), it was only about the declarations and the order of\nfunctions to be one code block. To me heapam_acquire_sample_rows() looks\nbetter for a name of heap implementation of *AcquireSampleRowsFunc(). I\nsuggest returning the name heapam_acquire_sample_rows() from v4. Sorry for\nthe confusion in this.\n\nThe changed type of static function that always returned true for heap\nlooks good to me:\nstatic void heapam_scan_analyze_next_block\n\nThe same is for removing the comparison of always true \"block_accepted\" in\n(heapam_)acquire_sample_rows()\n\nRemoving table_beginscan_analyze and call scan_begin() is not in the same\nstyle as other table_beginscan_* functions. Though this is not a change in\nfunctionality, I'd leave this part as it was in v4. 
Also, a comment about\nit was introduced in v5:\n\nsrc/backend/access/heap/heapam_handler.c: * with table_beginscan_analyze()\n\nFor comments I'd propose:\n%s/In addition, store estimates/In addition, a function should store\nestimates/g\n%s/zerp/zero/g\n\n\n> 0002 (was 0007) – I've turned the redundant \"if\", which you've pointed\n> out, into an assert. Also, added some comments, most notably comment\n> for TableAmRoutine.reloptions based on the indexam docs.\n>\n\n%s/validate sthe/validates the/g\n\nThis seems not needed, it's already inited to InvalidOid before.\n+else\n+accessMethod = default_table_access_method;\n\n+ accessMethodId = InvalidOid;\n\nThis code came from 374c7a22904. I don't insist on this simplification in a\npatch 0002.\n\nOverall both patches look good to me.\n\nRegards,\nPavel Borisov.",
"msg_date": "Wed, 27 Mar 2024 16:51:54 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": ">\n> This seems not needed, it's already inited to InvalidOid before.\n> +else\n> +accessMethod = default_table_access_method;\n>\n> + accessMethodId = InvalidOid;\n>\n> This code came from 374c7a22904. I don't insist on this simplification in\n> a patch 0002.\n>\n\nA correction of the code quote for the previous message:\n\n+else\n+ accessMethodId = InvalidOid;",
"msg_date": "Wed, 27 Mar 2024 16:54:51 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Pavel!\n\nThank you for your feedback. The revised patch set is attached.\n\nI found that vacuum.c has a lot of heap-specific code. Thus, it\nshould be OK for analyze.c to keep heap-specific code. Therefore, I\nrolled back the movement of functions between files. That leads to a\nsmaller patch, easier to review.\n\nOn Wed, Mar 27, 2024 at 2:52 PM Pavel Borisov <[email protected]> wrote:\n>> The revised rest of the patchset is attached.\n>> 0001 (was 0006) – I prefer the definition of AcquireSampleRowsFunc to\n>> stay in vacuum.h. If we move it to sampling.h then we would have to\n>> add there includes to define Relation, HeapTuple etc. I'd like to\n>> avoid this kind of change. Also, I've deleted\n>> table_beginscan_analyze(), because it's only called from\n>> tableam-specific AcquireSampleRowsFunc. Also I put some comments to\n>> heapam_scan_analyze_next_block() and heapam_scan_analyze_next_tuple()\n>> given that there are now no relevant comments for them in tableam.h.\n>> I've removed some redundancies from acquire_sample_rows(). And added\n>> comments to AcquireSampleRowsFunc based on what we have in FDW docs\n>> for this function. Did some small edits as well. As you suggested,\n>> turned back declarations for acquire_sample_rows() and compare_rows().\n>\n>\n> In my comment in the thread I was not thinking about returning the old name acquire_sample_rows(), it was only about the declarations and the order of functions to be one code block. To me heapam_acquire_sample_rows() looks better for a name of heap implementation of *AcquireSampleRowsFunc(). I suggest returning the name heapam_acquire_sample_rows() from v4. Sorry for the confusion in this.\n\nI found that the function name acquire_sample_rows is referenced in\nquite several places in the source code. 
So, I would prefer to save\nthe old name to keep the changes minimal.\n\n> The changed type of static function that always returned true for heap looks good to me:\n> static void heapam_scan_analyze_next_block\n>\n> The same is for removing the comparison of always true \"block_accepted\" in (heapam_)acquire_sample_rows()\n\nOk!\n\n> Removing table_beginscan_analyze and call scan_begin() is not in the same style as other table_beginscan_* functions. Though this is not a change in functionality, I'd leave this part as it was in v4.\n\nWith the patch, this method doesn't have usages outside of table am.\nI don't think we should keep a method, which doesn't have clear\nexternal usage patterns. But I agree that starting a scan with\nheap_beginscan() and ending with table_endscan() is not correct. Now\nending this scan with heap_endscan().\n\n> Also, a comment about it was introduced in v5:\n>\n> src/backend/access/heap/heapam_handler.c: * with table_beginscan_analyze()\n\nCorrected.\n\n> For comments I'd propose:\n> %s/In addition, store estimates/In addition, a function should store estimates/g\n> %s/zerp/zero/g\n\nFixed.\n\n>> 0002 (was 0007) – I've turned the redundant \"if\", which you've pointed\n>> out, into an assert. Also, added some comments, most notably comment\n>> for TableAmRoutine.reloptions based on the indexam docs.\n>\n> %s/validate sthe/validates the/g\n\nFixed.\n\n> This seems not needed, it's already inited to InvalidOid before.\n> +else\n> +accessMethod = default_table_access_method;\n> + accessMethodId = InvalidOid;\n>\n> This code came from 374c7a22904. I don't insist on this simplification in a patch 0002.\n\nThis is minor redundancy. I prefer to keep it. This makes it obvious\nthat patch just moved this code.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 28 Mar 2024 00:14:32 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\nThank you for working on these patches.\nOn Thu, 28 Mar 2024 at 02:14, Alexander Korotkov <[email protected]>\nwrote:\n\n> Hi, Pavel!\n>\n> Thank you for your feedback. The revised patch set is attached.\n>\n> I found that vacuum.c has a lot of heap-specific code. Thus, it\n> should be OK for analyze.c to keep heap-specific code. Therefore, I\n> rolled back the movement of functions between files. That leads to a\n> smaller patch, easier to review.\n>\nI agree with you.\nAnd with the changes remaining in heapam_handler.c I suppose we can also\nremove the includes introduced:\n\n#include <math.h>\n#include \"utils/sampling.h\"\n#include \"utils/spccache.h\"\n\nOn Wed, Mar 27, 2024 at 2:52 PM Pavel Borisov <[email protected]>\n> wrote:\n> >> The revised rest of the patchset is attached.\n> >> 0001 (was 0006) – I prefer the definition of AcquireSampleRowsFunc to\n> >> stay in vacuum.h. If we move it to sampling.h then we would have to\n> >> add there includes to define Relation, HeapTuple etc. I'd like to\n> >> avoid this kind of change. Also, I've deleted\n> >> table_beginscan_analyze(), because it's only called from\n> >> tableam-specific AcquireSampleRowsFunc. Also I put some comments to\n> >> heapam_scan_analyze_next_block() and heapam_scan_analyze_next_tuple()\n> >> given that there are now no relevant comments for them in tableam.h.\n> >> I've removed some redundancies from acquire_sample_rows(). And added\n> >> comments to AcquireSampleRowsFunc based on what we have in FDW docs\n> >> for this function. Did some small edits as well. As you suggested,\n> >> turned back declarations for acquire_sample_rows() and compare_rows().\n> >\n> >\n> > In my comment in the thread I was not thinking about returning the old\n> name acquire_sample_rows(), it was only about the declarations and the\n> order of functions to be one code block. 
To me heapam_acquire_sample_rows()\n> looks better for a name of heap implementation of *AcquireSampleRowsFunc().\n> I suggest returning the name heapam_acquire_sample_rows() from v4. Sorry\n> for the confusion in this.\n>\n> I found that the function name acquire_sample_rows is referenced in\n> quite several places in the source code. So, I would prefer to save\n> the old name to keep the changes minimal.\n>\nThe full list shows only a couple of changes in analyze.c and several\ncomments elsewhere.\n\ncontrib/postgres_fdw/postgres_fdw.c: * of the relation. Same\nalgorithm as in acquire_sample_rows in\nsrc/backend/access/heap/vacuumlazy.c: * match what analyze.c's\nacquire_sample_rows() does, otherwise VACUUM\nsrc/backend/access/heap/vacuumlazy.c: * The logic here is a bit\nsimpler than acquire_sample_rows(), as\nsrc/backend/access/heap/vacuumlazy.c: * what\nacquire_sample_rows() does.\nsrc/backend/access/heap/vacuumlazy.c: *\nacquire_sample_rows() does, so be consistent.\nsrc/backend/access/heap/vacuumlazy.c: * acquire_sample_rows()\nwill recognize the same LP_DEAD items as dead\nsrc/backend/commands/analyze.c:static int\nacquire_sample_rows(Relation onerel, int elevel,\nsrc/backend/commands/analyze.c: acquirefunc = acquire_sample_rows;\nsrc/backend/commands/analyze.c: * acquire_sample_rows -- acquire a random\nsample of rows from the table\nsrc/backend/commands/analyze.c:acquire_sample_rows(Relation onerel, int\nelevel,\nsrc/backend/commands/analyze.c: * This has the same API as\nacquire_sample_rows, except that rows are\nsrc/backend/commands/analyze.c: acquirefunc =\nacquire_sample_rows;\n\nMy point for renaming is to make clear that it's a heap implementation of\nacquire_sample_rows which could be useful for possible reworking heap\nimplementations of table am methods into a separate place later. 
But I'm\nalso ok with the existing naming.\n\n\n> > The changed type of static function that always returned true for heap\n> looks good to me:\n> > static void heapam_scan_analyze_next_block\n> >\n> > The same is for removing the comparison of always true \"block_accepted\"\n> in (heapam_)acquire_sample_rows()\n>\n> Ok!\n>\n> > Removing table_beginscan_analyze and call scan_begin() is not in the\n> same style as other table_beginscan_* functions. Though this is not a\n> change in functionality, I'd leave this part as it was in v4.\n>\n> With the patch, this method doesn't have usages outside of table am.\n> I don't think we should keep a method, which doesn't have clear\n> external usage patterns. But I agree that starting a scan with\n> heap_beginscan() and ending with table_endscan() is not correct. Now\n> ending this scan with heap_endscan().\n>\nGood!\n\n\n> > Also, a comment about it was introduced in v5:\n> >\n> > src/backend/access/heap/heapam_handler.c: * with\n> table_beginscan_analyze()\n>\n> Corrected.\n\n> For comments I'd propose:\n> > %s/In addition, store estimates/In addition, a function should store\n> estimates/g\n> > %s/zerp/zero/g\n>\n> Fixed.\n>\n> >> 0002 (was 0007) – I've turned the redundant \"if\", which you've pointed\n> >> out, into an assert. Also, added some comments, most notably comment\n> >> for TableAmRoutine.reloptions based on the indexam docs.\n> >\n> > %s/validate sthe/validates the/g\n>\n> Fixed.\n>\n> > This seems not needed, it's already inited to InvalidOid before.\n> > +else\n> > +accessMethod = default_table_access_method;\n> > + accessMethodId = InvalidOid;\n> >\n> > This code came from 374c7a22904. I don't insist on this simplification\n> in a patch 0002.\n>\n> This is minor redundancy. I prefer to keep it. 
This makes it obvious\n> that patch just moved this code.\n>\nI agree with the remaining.\n\nRegards,\nPavel Borisov",
"msg_date": "Thu, 28 Mar 2024 13:46:30 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\nThe other extensibility that seems quite clear and uncontroversial to me is\n0006.\n\nIt simply shifts the decision on whether tuple inserts should invoke\ninserts to the related indices to the table am level. It doesn't change the\ncurrent heap insert behavior so it's safe for the existing heap access\nmethod. But new table access methods could redefine this (only for tables\ncreated with these am's) and make index inserts independently\nof ExecInsertIndexTuples inside their own implementations of\ntuple_insert/tuple_multi_insert methods.\n\nI'd propose changing the comment:\n\n1405 * This function sets `*insert_indexes` to true if expects caller to\nreturn\n1406 * the relevant index tuples. If `*insert_indexes` is set to false,\nthen\n1407 * this function cares about indexes itself.\n\nin the following way\n\nTableam implementation of tuple_insert should set `*insert_indexes` to true\nif it expects the caller to insert the relevant index tuples (as in heap\n implementation). It should set `*insert_indexes` to false if it cares\nabout index inserts itself and doesn't want the caller to do index inserts.\n\nMaybe, a commit message is also better to reformulate to describe better\nwho should do what.\n\nI think, with rebase and correction in the comments/commit message patch\n0006 is ready to be committed.\n\nRegards,\nPavel Borisov.",
"msg_date": "Thu, 28 Mar 2024 17:12:20 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi Pavel!\n\nRevised patchset is attached.\n\nOn Thu, Mar 28, 2024 at 3:12 PM Pavel Borisov <[email protected]> wrote:\n> The other extensibility that seems quite clear and uncontroversial to me is 0006.\n>\n> It simply shifts the decision on whether tuple inserts should invoke inserts to the related indices to the table am level. It doesn't change the current heap insert behavior so it's safe for the existing heap access method. But new table access methods could redefine this (only for tables created with these am's) and make index inserts independently of ExecInsertIndexTuples inside their own implementations of tuple_insert/tuple_multi_insert methods.\n>\n> I'd propose changing the comment:\n>\n> 1405 * This function sets `*insert_indexes` to true if expects caller to return\n> 1406 * the relevant index tuples. If `*insert_indexes` is set to false, then\n> 1407 * this function cares about indexes itself.\n>\n> in the following way\n>\n> Tableam implementation of tuple_insert should set `*insert_indexes` to true\n> if it expects the caller to insert the relevant index tuples (as in heap\n> implementation). It should set `*insert_indexes` to false if it cares\n> about index inserts itself and doesn't want the caller to do index inserts.\n\nChanged as you proposed.\n\n> Maybe, a commit message is also better to reformulate to describe better who should do what.\n\nDone.\n\nAlso, I removed extra includes in 0001 as you proposed and edited the\ncommit message in 0002.\n\n> I think, with rebase and correction in the comments/commit message patch 0006 is ready to be committed.\n\nI'm going to push 0001, 0002 and 0006 if no objections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 28 Mar 2024 15:26:07 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "\nOn Thu, 28 Mar 2024 at 21:26, Alexander Korotkov <[email protected]> wrote:\n> Hi Pavel!\n>\n> Revised patchset is attached.\n>\n> On Thu, Mar 28, 2024 at 3:12 PM Pavel Borisov <[email protected]> wrote:\n>> The other extensibility that seems quite clear and uncontroversial to me is 0006.\n>>\n>> It simply shifts the decision on whether tuple inserts should invoke inserts to the related indices to the table am level. It doesn't change the current heap insert behavior so it's safe for the existing heap access method. But new table access methods could redefine this (only for tables created with these am's) and make index inserts independently of ExecInsertIndexTuples inside their own implementations of tuple_insert/tuple_multi_insert methods.\n>>\n>> I'd propose changing the comment:\n>>\n>> 1405 * This function sets `*insert_indexes` to true if expects caller to return\n>> 1406 * the relevant index tuples. If `*insert_indexes` is set to false, then\n>> 1407 * this function cares about indexes itself.\n>>\n>> in the following way\n>>\n>> Tableam implementation of tuple_insert should set `*insert_indexes` to true\n>> if it expects the caller to insert the relevant index tuples (as in heap\n>> implementation). It should set `*insert_indexes` to false if it cares\n>> about index inserts itself and doesn't want the caller to do index inserts.\n>\n> Changed as you proposed.\n>\n>> Maybe, a commit message is also better to reformulate to describe better who should do what.\n>\n> Done.\n>\n> Also, I removed extra includes in 0001 as you proposed and edited the\n> commit message in 0002.\n>\n>> I think, with rebase and correction in the comments/commit message patch 0006 is ready to be committed.\n>\n> I'm going to push 0001, 0002 and 0006 if no objections.\n\nThanks for updating the patches. 
Here are some minor sugesstion.\n\n0003\n\n+static inline TupleTableSlot *\n+heapam_tuple_insert_with_arbiter(ResultRelInfo *resultRelInfo,\n\nI'm not entirely certain whether the \"inline\" keyword has any effect.\n\n0004\n\n+static bytea *\n+heapam_indexoptions(amoptions_function amoptions, char relkind,\n+ Datum reloptions, bool validate)\n+{\n+ return index_reloptions(amoptions, reloptions, validate);\n+}\n\nCould you please explain why we are not verifying the relkind like\nheapam_reloptions()?\n\n\n- case RELKIND_VIEW:\n case RELKIND_MATVIEW:\n+ case RELKIND_VIEW:\n case RELKIND_PARTITIONED_TABLE:\n\nI think this change is unnecessary.\n\n+ {\n+ Form_pg_class classForm;\n+ HeapTuple classTup;\n+\n+ /* fetch the relation's relcache entry */\n+ if (relation->rd_index->indrelid >= FirstNormalObjectId)\n+ {\n+ classTup = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(relation->rd_index->indrelid));\n+ classForm = (Form_pg_class) GETSTRUCT(classTup);\n+ if (classForm->relam >= FirstNormalObjectId)\n+ tableam = GetTableAmRoutineByAmOid(classForm->relam);\n+ else\n+ tableam = GetHeapamTableAmRoutine();\n+ heap_freetuple(classTup);\n+ }\n+ else\n+ {\n+ tableam = GetHeapamTableAmRoutine();\n+ }\n+ amoptsfn = relation->rd_indam->amoptions;\n+ }\n\n- We can reduce the indentation by moving the classFrom and classTup into\n the if branch.\n- Perhaps we could remove the brace of else branch to maintain consistency\n in the code style.\n\n--\nRegards,\nJapin Li\n\n\n",
"msg_date": "Thu, 28 Mar 2024 23:23:05 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "I found that after yesterday's e2395cdbe83a 0002 doesn't apply.\nRebased the whole patchset.\n\nPavel",
"msg_date": "Fri, 29 Mar 2024 13:33:30 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "I've looked at patch 0003.\n\nGenerally, it does a similar thing as 0001 - it exposes a more generalized\nmethod tuple_insert_with_arbiter that encapsulates\ntuple_insert_speculative/tuple_complete_speculative and at the same time\nallows extensibility of this i.e. different implementation for custom table\nother than heap by the extensions. Though the code rearrangement is little\nbit more complicated, the patch is clear. It doesn't change the behavior\nfor heap tables.\n\ntuple_insert_speculative/tuple_complete_speculative are removed from table\nAM methods. I think it would not be hard for existing users of this\nto adapt to the changes.\n\nCode block ExecCheckTupleVisible -- ExecCheckTIDVisible moved\nto heapam_handler.c I've checked, the code block unchanged except\nthat ExecCheckTIDVisible now gets Relation from the caller instead of\nconstructing it from ResultRelInfo.\n\nAlso two big code blocks are moved from ExecOnConflictUpdate and ExecInsert\nto a new method heapam_tuple_insert_with_arbiter. They correspond the old\ncode with several minor modifications.\n\nFor ExecOnConflictUpdate comment need to be revised. 
This one is for\nshifted code:\n> * Try to lock tuple for update as part of speculative insertion.\nProbably it is worth to be moved to a comment for\nheapam_tuple_insert_with_arbiter.\n\nFor heapam_tuple_insert_with_arbiter both \"return NULL\" could be shifted\nlevel up into the end of the block:\n>if (!ExecCheckIndexConstraints(resultRelInfo, slot, estate, &conflictTid,\n>+\n arbiterIndexes))\n\nAlso I'd add comment for heapam_tuple_insert_with_arbiter:\n/* See comments for table_tuple_insert_with_arbiter() */\n\nA comment to be corrected:\nsrc/backend/access/heap/heapam.c: * implement\ntable_tuple_insert_speculative()\n\nAs Japin said, I'd also propose removing \"inline\"\nfrom heapam_tuple_insert_with_arbiter.\n\nMore corrections for comments:\n%s/If tuple doesn't violates/If tuple doesn't violate/g\n%s/which comprises the list of/list, which comprises/g\n%s/conflicting tuple gets locked/conflicting tuple should be locked/g\n\nI think for better code look this could be removed:\n>vlock:\n > CHECK_FOR_INTERRUPTS();\ntogether with CHECK_FOR_INTERRUPTS(); in heapam_tuple_insert_with_arbiter\nplaced in the beginning of while loop.\n\nOverall the patch looks good enough to me.\n\nRegards,\nPavel",
"msg_date": "Fri, 29 Mar 2024 18:07:44 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": ">\n> I think for better code look this could be removed:\n> >vlock:\n> > CHECK_FOR_INTERRUPTS();\n> together with CHECK_FOR_INTERRUPTS(); in heapam_tuple_insert_with_arbiter\n> placed in the beginning of while loop.\n>\nTo clarify things, this I wrote only about CHECK_FOR_INTERRUPTS();\nrearrangement.\n\nRegards,\nPavel",
"msg_date": "Fri, 29 Mar 2024 19:23:00 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Pavel!\n\nI've pushed 0001, 0002 and 0006.\n\nOn Fri, Mar 29, 2024 at 5:23 PM Pavel Borisov <[email protected]> wrote:\n>>\n>> I think for better code look this could be removed:\n>> >vlock:\n>> > CHECK_FOR_INTERRUPTS();\n>> together with CHECK_FOR_INTERRUPTS(); in heapam_tuple_insert_with_arbiter placed in the beginning of while loop.\n>\n> To clarify things, this I wrote only about CHECK_FOR_INTERRUPTS(); rearrangement.\n\nThank you for your review of this patch. But I still think there is a\nproblem that this patch moves part of the executor to table AM which\ndirectly uses executor data structures and functions. This works, but\nis not committable since it breaks the encapsulation.\n\nI think the way forward might be to introduce the new API, which would\nisolate executor details from table AM. We may introduce a new data\nstructure InsertWithArbiterContext which would contain EState and a\nset of callbacks which would keep table AM from calling the executor\ndirectly. That should be our goal for pg18. Now, this is too close\nto FF to design a new API.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 30 Mar 2024 23:33:04 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On 31/3/2024 00:33, Alexander Korotkov wrote:\n> I think the way forward might be to introduce the new API, which would\n> isolate executor details from table AM. We may introduce a new data\n> structure InsertWithArbiterContext which would contain EState and a\n> set of callbacks which would avoid table AM from calling the executor\n> directly. That should be our goal for pg18. Now, this is too close\n> to FF to design a new API.\nI'm a bit late, but have you ever considered adding some sort of index \nprobing routine to the AM interface for estimation purposes?\nI am working out the problem when we have dubious estimations. For \nexample, we don't have MCV or do not fit MCV statistics for equality of \nmultiple clauses, or we detected that the constant value is out of the \nhistogram range. In such cases (especially for [parameterized] JOINs), \nthe optimizer could have a chance to probe the index and avoid huge \nunderestimation. This makes sense, especially for multicolumn \nfilters/clauses.\nHaving a probing AM method, we may invent something for this challenge.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 1 Apr 2024 14:00:42 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Coverity complained about what you did in RelationParseRelOptions\nin c95c25f9a:\n\n*** CID 1595992: Null pointer dereferences (FORWARD_NULL)\n/srv/coverity/git/pgsql-git/postgresql/src/backend/utils/cache/relcache.c: 499 in RelationParseRelOptions()\n493 \n494 \t/*\n495 \t * Fetch reloptions from tuple; have to use a hardwired descriptor because\n496 \t * we might not have any other for pg_class yet (consider executing this\n497 \t * code for pg_class itself)\n498 \t */\n>>> CID 1595992: Null pointer dereferences (FORWARD_NULL)\n>>> Passing null pointer \"tableam\" to \"extractRelOptions\", which dereferences it.\n499 \toptions = extractRelOptions(tuple, GetPgClassDescriptor(),\n500 \t\t\t\t\t\t\t\ttableam, amoptsfn);\n501 \n\nI see that extractRelOptions only uses the tableam argument for some\nrelkinds, and RelationParseRelOptions does set it up for those\nrelkinds --- but Coverity's complaint isn't without merit, because\nthose two switch statements are looking at *different copies of the\nrelkind*, which in theory could be different. This all seems quite\nmessy and poorly factored. Can't we do better? Why do we need to\ninvolve two copies of allegedly the same pg_class tuple, anyhow?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2024 12:36:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 1, 2024 at 7:36 PM Tom Lane <[email protected]> wrote:\n>\n> Coverity complained about what you did in RelationParseRelOptions\n> in c95c25f9a:\n>\n> *** CID 1595992: Null pointer dereferences (FORWARD_NULL)\n> /srv/coverity/git/pgsql-git/postgresql/src/backend/utils/cache/relcache.c: 499 in RelationParseRelOptions()\n> 493\n> 494 /*\n> 495 * Fetch reloptions from tuple; have to use a hardwired descriptor because\n> 496 * we might not have any other for pg_class yet (consider executing this\n> 497 * code for pg_class itself)\n> 498 */\n> >>> CID 1595992: Null pointer dereferences (FORWARD_NULL)\n> >>> Passing null pointer \"tableam\" to \"extractRelOptions\", which dereferences it.\n> 499 options = extractRelOptions(tuple, GetPgClassDescriptor(),\n> 500 tableam, amoptsfn);\n> 501\n>\n> I see that extractRelOptions only uses the tableam argument for some\n> relkinds, and RelationParseRelOptions does set it up for those\n> relkinds --- but Coverity's complaint isn't without merit, because\n> those two switch statements are looking at *different copies of the\n> relkind*, which in theory could be different. This all seems quite\n> messy and poorly factored. Can't we do better? Why do we need to\n> involve two copies of allegedly the same pg_class tuple, anyhow?\n\nThank you for reporting this, Tom.\nI'm planning to investigate this later today.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 1 Apr 2024 19:48:08 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 1, 2024 at 7:36 PM Tom Lane <[email protected]> wrote:\n> Coverity complained about what you did in RelationParseRelOptions\n> in c95c25f9a:\n>\n> *** CID 1595992: Null pointer dereferences (FORWARD_NULL)\n> /srv/coverity/git/pgsql-git/postgresql/src/backend/utils/cache/relcache.c: 499 in RelationParseRelOptions()\n> 493\n> 494 /*\n> 495 * Fetch reloptions from tuple; have to use a hardwired descriptor because\n> 496 * we might not have any other for pg_class yet (consider executing this\n> 497 * code for pg_class itself)\n> 498 */\n> >>> CID 1595992: Null pointer dereferences (FORWARD_NULL)\n> >>> Passing null pointer \"tableam\" to \"extractRelOptions\", which dereferences it.\n> 499 options = extractRelOptions(tuple, GetPgClassDescriptor(),\n> 500 tableam, amoptsfn);\n> 501\n>\n> I see that extractRelOptions only uses the tableam argument for some\n> relkinds, and RelationParseRelOptions does set it up for those\n> relkinds --- but Coverity's complaint isn't without merit, because\n> those two switch statements are looking at *different copies of the\n> relkind*, which in theory could be different. This all seems quite\n> messy and poorly factored. Can't we do better? Why do we need to\n> involve two copies of allegedly the same pg_class tuple, anyhow?\n\nI wasn't registered at Coverity yet. Now I've registered and am\nwaiting for approval to access the PostgreSQL analysis data.\n\nI wonder why Coverity complains about tableam, but not amoptsfn.\nTheir usage patterns are very similar.\n\nIt appears that relation->rd_rel isn't the full copy of pg_class tuple\n(see AllocateRelationDesc). RelationParseRelOptions() is going to\nupdate relation->rd_options, and thus needs a full pg_class tuple to\nfetch options out of it. However, it is really unnecessary to access\nboth tuples at the same time. We can use a full tuple, not\nrelation->rd_rel, in both cases. See the attached patch.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 2 Apr 2024 02:59:11 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "\nOn Tue, 02 Apr 2024 at 07:59, Alexander Korotkov <[email protected]> wrote:\n> On Mon, Apr 1, 2024 at 7:36 PM Tom Lane <[email protected]> wrote:\n>> Coverity complained about what you did in RelationParseRelOptions\n>> in c95c25f9a:\n>>\n>> *** CID 1595992: Null pointer dereferences (FORWARD_NULL)\n>> /srv/coverity/git/pgsql-git/postgresql/src/backend/utils/cache/relcache.c: 499 in RelationParseRelOptions()\n>> 493\n>> 494 /*\n>> 495 * Fetch reloptions from tuple; have to use a hardwired descriptor because\n>> 496 * we might not have any other for pg_class yet (consider executing this\n>> 497 * code for pg_class itself)\n>> 498 */\n>> >>> CID 1595992: Null pointer dereferences (FORWARD_NULL)\n>> >>> Passing null pointer \"tableam\" to \"extractRelOptions\", which dereferences it.\n>> 499 options = extractRelOptions(tuple, GetPgClassDescriptor(),\n>> 500 tableam, amoptsfn);\n>> 501\n>>\n>> I see that extractRelOptions only uses the tableam argument for some\n>> relkinds, and RelationParseRelOptions does set it up for those\n>> relkinds --- but Coverity's complaint isn't without merit, because\n>> those two switch statements are looking at *different copies of the\n>> relkind*, which in theory could be different. This all seems quite\n>> messy and poorly factored. Can't we do better? Why do we need to\n>> involve two copies of allegedly the same pg_class tuple, anyhow?\n>\n> I wasn't registered at Coverity yet. Now I've registered and am\n> waiting for approval to access the PostgreSQL analysis data.\n>\n> I wonder why Coverity complains about tableam, but not amoptsfn.\n> Their usage patterns are very similar.\n>\n> It appears that relation->rd_rel isn't the full copy of pg_class tuple\n> (see AllocateRelationDesc). RelationParseRelOptions() is going to\n> update relation->rd_options, and thus needs a full pg_class tuple to\n> fetch options out of it. However, it is really unnecessary to access\n> both tuples at the same time. 
We can use a full tuple, not\n> relation->rd_rel, in both cases. See the attached patch.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n\n\n+ Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);\n+;\n\nThere is an additional semicolon in the code.\n\n--\nRegards,\nJapin Li\n\n\n",
"msg_date": "Tue, 02 Apr 2024 08:57:47 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Sat, 2024-03-30 at 23:33 +0200, Alexander Korotkov wrote:\n> I've pushed 0001, 0002 and 0006.\n\nSorry to jump into this discussion so late. I had worked on something\nlike the custom reloptions (0002) in the past, and there were some\ncomplications that don't seem to be addressed in commit c95c25f9af.\n\n* At minimum I think it needs some direction (comments, docs, tests)\nthat show how it's supposed to be used.\n\n* The bytea returned by the reloptions() method is not in a trivial\nformat. It's a StdRdOptions struct with string values stored after the\nend of the struct. To build the bytea internally, there's some\ninfrastructure like allocateRelOptStruct() and fillRelOptions(), and\nit's not very easy to extend those to support a few custom options.\n\n* If we ever decide to add a string option to StdRdOptions, I think the\ndesign breaks, because the code that looks for those string values\nwouldn't know how to skip over the custom options. Perhaps we can just\npromise to never do that, but we should make it explicit somehow.\n\n* Most existing heap reloptions (other than fillfactor) are used by\nother parts of the system (like autovacuum) so should be considered\nvalid for any AM. Most AMs will just want to add a handful of their own\noptions on top, so it would be good to demonstrate how this should be\ndone.\n\n* There are still places that are inappropriately calling\nheap_reloptions directly. For instance, in ProcessUtilitySlow(), it\nseems to assume that a toast table is a heap?\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Mon, 01 Apr 2024 22:19:11 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Jeff!\n\nOn Tue, Apr 2, 2024 at 8:19 AM Jeff Davis <[email protected]> wrote:\n> On Sat, 2024-03-30 at 23:33 +0200, Alexander Korotkov wrote:\n> > I've pushed 0001, 0002 and 0006.\n>\n> Sorry to jump in to this discussion so late. I had worked on something\n> like the custom reloptions (0002) in the past, and there were some\n> complications that don't seem to be addressed in commit c95c25f9af.\n>\n> * At minimum I think it needs some direction (comments, docs, tests)\n> that show how it's supposed to be used.\n>\n> * The bytea returned by the reloptions() method is not in a trivial\n> format. It's a StdRelOptions struct with string values stored after the\n> end of the struct. To build the bytea internally, there's some\n> infrastructure like allocateRelOptStruct() and fillRelOptions(), and\n> it's not very easy to extend those to support a few custom options.\n>\n> * If we ever decide to add a string option to StdRdOptions, I think the\n> design breaks, because the code that looks for those string values\n> wouldn't know how to skip over the custom options. Perhaps we can just\n> promise to never do that, but we should make it explicit somehow.\n>\n> * Most existing heap reloptions (other than fillfactor) are used by\n> other parts of the system (like autovacuum) so should be considered\n> valid for any AM. Most AMs will just want to add a handful of their own\n> options on top, so it would be good to demonstrate how this should be\n> done.\n>\n> * There are still places that are inappropriately calling\n> heap_reloptions directly. For instance, in ProcessUtilitySlow(), it\n> seems to assume that a toast table is a heap?\n\nThank you for the detailed explanation. This piece definitely needs\nmore work. I've just reverted the c95c25f9af.\n\nI don't like the idea that every custom table AM reltoptions should\nbegin with StdRdOptions. 
I would rather introduce the new data\nstructure with table options, which need to be accessed outside of\ntable AM. Then reloptions will be a black box only directly used in\ntable AM, while the table AM has freedom on what to store in reloptions\nand how to calculate externally-visible options. What do you think?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 2 Apr 2024 11:49:31 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Tue, 2024-04-02 at 11:49 +0300, Alexander Korotkov wrote:\n> I don't like the idea that every custom table AM reltoptions should\n> begin with StdRdOptions. I would rather introduce the new data\n> structure with table options, which need to be accessed outside of\n> table AM. Then reloptions will be a backbox only directly used in\n> table AM, while table AM has a freedom on what to store in reloptions\n> and how to calculate externally-visible options. What do you think?\n\nHi Alexander!\n\nI agree with all of that. It will take some refactoring to get there,\nthough.\n\nOne idea is to store StdRdOptions like normal, but if an unrecognized\noption is found, ask the table AM if it understands the option. In that\ncase I think we'd just use a different field in pg_class so that it can\nuse whatever format it wants to represent its options.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 02 Apr 2024 08:17:10 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, hackers!\n\nOn Tue, 2 Apr 2024 at 19:17, Jeff Davis <[email protected]> wrote:\n\n> On Tue, 2024-04-02 at 11:49 +0300, Alexander Korotkov wrote:\n> > I don't like the idea that every custom table AM reltoptions should\n> > begin with StdRdOptions. I would rather introduce the new data\n> > structure with table options, which need to be accessed outside of\n> > table AM. Then reloptions will be a backbox only directly used in\n> > table AM, while table AM has a freedom on what to store in reloptions\n> > and how to calculate externally-visible options. What do you think?\n>\n> Hi Alexander!\n>\n> I agree with all of that. It will take some refactoring to get there,\n> though.\n>\n> One idea is to store StdRdOptions like normal, but if an unrecognized\n> option is found, ask the table AM if it understands the option. In that\n> case I think we'd just use a different field in pg_class so that it can\n> use whatever format it wants to represent its options.\n>\n> Regards,\n> Jeff Davis\n>\nI tried to rework a patch regarding table am according to the input from\nAlexander and Jeff.\n\nIt splits table reloptions into two categories:\n- common for all tables (stored in a fixed size structure and could be\naccessed from outside)\n- table-am specific (variable size, parsed and accessed by access method\nonly)\n\nPlease find a patch attached.",
"msg_date": "Fri, 5 Apr 2024 19:58:23 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Pavel!\n\nOn Fri, Apr 5, 2024 at 6:58 PM Pavel Borisov <[email protected]> wrote:\n> On Tue, 2 Apr 2024 at 19:17, Jeff Davis <[email protected]> wrote:\n>>\n>> On Tue, 2024-04-02 at 11:49 +0300, Alexander Korotkov wrote:\n>> > I don't like the idea that every custom table AM reltoptions should\n>> > begin with StdRdOptions. I would rather introduce the new data\n>> > structure with table options, which need to be accessed outside of\n>> > table AM. Then reloptions will be a backbox only directly used in\n>> > table AM, while table AM has a freedom on what to store in reloptions\n>> > and how to calculate externally-visible options. What do you think?\n>>\n>> Hi Alexander!\n>>\n>> I agree with all of that. It will take some refactoring to get there,\n>> though.\n>>\n>> One idea is to store StdRdOptions like normal, but if an unrecognized\n>> option is found, ask the table AM if it understands the option. In that\n>> case I think we'd just use a different field in pg_class so that it can\n>> use whatever format it wants to represent its options.\n>>\n>> Regards,\n>> Jeff Davis\n>\n> I tried to rework a patch regarding table am according to the input from Alexander and Jeff.\n>\n> It splits table reloptions into two categories:\n> - common for all tables (stored in a fixed size structure and could be accessed from outside)\n> - table-am specific (variable size, parsed and accessed by access method only)\n\nThank you for your work. Please, check the revised patch.\n\nIt makes CommonRdOptions a separate data structure, not directly\ninvolved in parsing the reloption. Instead table AM can fill it on\nthe base of its reloptions or calculate the other way. Patch comes\nwith a test module, which comes with heap-based table AM. This table\nAM has \"enable_parallel\" reloption, which is used as the base to set\nthe value of CommonRdOptions.parallel_workers.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sun, 7 Apr 2024 06:33:46 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Sun, 7 Apr 2024 at 07:33, Alexander Korotkov <[email protected]>\nwrote:\n\n> Hi, Pavel!\n>\n> On Fri, Apr 5, 2024 at 6:58 PM Pavel Borisov <[email protected]>\n> wrote:\n> > On Tue, 2 Apr 2024 at 19:17, Jeff Davis <[email protected]> wrote:\n> >>\n> >> On Tue, 2024-04-02 at 11:49 +0300, Alexander Korotkov wrote:\n> >> > I don't like the idea that every custom table AM reltoptions should\n> >> > begin with StdRdOptions. I would rather introduce the new data\n> >> > structure with table options, which need to be accessed outside of\n> >> > table AM. Then reloptions will be a backbox only directly used in\n> >> > table AM, while table AM has a freedom on what to store in reloptions\n> >> > and how to calculate externally-visible options. What do you think?\n> >>\n> >> Hi Alexander!\n> >>\n> >> I agree with all of that. It will take some refactoring to get there,\n> >> though.\n> >>\n> >> One idea is to store StdRdOptions like normal, but if an unrecognized\n> >> option is found, ask the table AM if it understands the option. In that\n> >> case I think we'd just use a different field in pg_class so that it can\n> >> use whatever format it wants to represent its options.\n> >>\n> >> Regards,\n> >> Jeff Davis\n> >\n> > I tried to rework a patch regarding table am according to the input from\n> Alexander and Jeff.\n> >\n> > It splits table reloptions into two categories:\n> > - common for all tables (stored in a fixed size structure and could be\n> accessed from outside)\n> > - table-am specific (variable size, parsed and accessed by access method\n> only)\n>\n> Thank you for your work. Please, check the revised patch.\n>\n> It makes CommonRdOptions a separate data structure, not directly\n> involved in parsing the reloption. Instead table AM can fill it on\n> the base of its reloptions or calculate the other way. Patch comes\n> with a test module, which comes with heap-based table AM. 
This table\n> AM has \"enable_parallel\" reloption, which is used as the base to set\n> the value of CommonRdOptions.parallel_workers.\n>\nTo me, a patch v10 looks good.\n\nI think the comment for RelationData now applies only to rd_options, not\nto rd_common_options.\n>NULLs means \"use defaults\".\n\nRegards,\nPavel",
"msg_date": "Sun, 7 Apr 2024 12:34:42 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Sun, 7 Apr 2024 at 12:34, Pavel Borisov <[email protected]> wrote:\n\n> Hi, Alexander!\n>\n> On Sun, 7 Apr 2024 at 07:33, Alexander Korotkov <[email protected]>\n> wrote:\n>\n>> Hi, Pavel!\n>>\n>> On Fri, Apr 5, 2024 at 6:58 PM Pavel Borisov <[email protected]>\n>> wrote:\n>> > On Tue, 2 Apr 2024 at 19:17, Jeff Davis <[email protected]> wrote:\n>> >>\n>> >> On Tue, 2024-04-02 at 11:49 +0300, Alexander Korotkov wrote:\n>> >> > I don't like the idea that every custom table AM reltoptions should\n>> >> > begin with StdRdOptions. I would rather introduce the new data\n>> >> > structure with table options, which need to be accessed outside of\n>> >> > table AM. Then reloptions will be a backbox only directly used in\n>> >> > table AM, while table AM has a freedom on what to store in reloptions\n>> >> > and how to calculate externally-visible options. What do you think?\n>> >>\n>> >> Hi Alexander!\n>> >>\n>> >> I agree with all of that. It will take some refactoring to get there,\n>> >> though.\n>> >>\n>> >> One idea is to store StdRdOptions like normal, but if an unrecognized\n>> >> option is found, ask the table AM if it understands the option. In that\n>> >> case I think we'd just use a different field in pg_class so that it can\n>> >> use whatever format it wants to represent its options.\n>> >>\n>> >> Regards,\n>> >> Jeff Davis\n>> >\n>> > I tried to rework a patch regarding table am according to the input\n>> from Alexander and Jeff.\n>> >\n>> > It splits table reloptions into two categories:\n>> > - common for all tables (stored in a fixed size structure and could be\n>> accessed from outside)\n>> > - table-am specific (variable size, parsed and accessed by access\n>> method only)\n>>\n>> Thank you for your work. Please, check the revised patch.\n>>\n>> It makes CommonRdOptions a separate data structure, not directly\n>> involved in parsing the reloption. 
Instead table AM can fill it on\n>> the base of its reloptions or calculate the other way. Patch comes\n>> with a test module, which comes with heap-based table AM. This table\n>> AM has \"enable_parallel\" reloption, which is used as the base to set\n>> the value of CommonRdOptions.parallel_workers.\n>>\n> To me, a patch v10 looks good.\n>\n> I think the comment for RelationData now applies only to rd_options, not\n> to rd_common_options.\n> >NULLs means \"use defaults\".\n>\n> Regards,\n> Pavel\n>\n\nI made minor changes to the patch. Please find v11 attached.\n\nRegards,\nPavel.",
"msg_date": "Sun, 7 Apr 2024 23:15:00 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-03-30 23:33:04 +0200, Alexander Korotkov wrote:\n> I've pushed 0001, 0002 and 0006.\n\nI briefly looked at 27bc1772fc81 and I don't think the state post this commit\nmakes sense. Before this commit another block based AM could implement analyze\nwithout much code duplication. Now a large portion of analyze.c has to be\ncopied, because they can't stop acquire_sample_rows() from calling\nheapam_scan_analyze_next_block().\n\nI'm quite certain this will break a few out-of-core AMs in a way that can't\neasily be fixed.\n\n\nAnd even for non-block based AMs, the new interface basically requires\nreimplementing all of analyze.c.\n\nWhat am I missing here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2024 14:40:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn Mon, Apr 8, 2024 at 12:40 AM Andres Freund <[email protected]> wrote:\n> On 2024-03-30 23:33:04 +0200, Alexander Korotkov wrote:\n> > I've pushed 0001, 0002 and 0006.\n>\n> I briefly looked at 27bc1772fc81 and I don't think the state post this commit\n> makes sense. Before this commit another block based AM could implement analyze\n> without much code duplication. Now a large portion of analyze.c has to be\n> copied, because they can't stop acquire_sample_rows() from calling\n> heapam_scan_analyze_next_block().\n>\n> I'm quite certain this will break a few out-of-core AMs in a way that can't\n> easily be fixed.\n\nI was under the impression there are not so many out-of-core table\nAMs, which have non-dummy analysis implementations. And even if there\nare some, duplicating acquire_sample_rows() isn't a big deal.\n\nBut given your feedback, I'd like to propose to keep both options\nopen. Turn back the block-level API for analyze, but let table-AM\nimplement its own analyze function. Then existing out-of-core AMs\nwouldn't need to do anything (or probably just set the new API method\nto NULL).\n\n> And even for non-block based AMs, the new interface basically requires\n> reimplementing all of analyze.c.\n\nNon-block based AM needs to just provide an alternative implementation\nfor what acquire_sample_rows() does. This seems like reasonable\neffort for me, and surely not reimplementing all of analyze.c.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 8 Apr 2024 02:25:17 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 2:25 AM Alexander Korotkov <[email protected]> wrote:\n> On Mon, Apr 8, 2024 at 12:40 AM Andres Freund <[email protected]> wrote:\n> > On 2024-03-30 23:33:04 +0200, Alexander Korotkov wrote:\n> > > I've pushed 0001, 0002 and 0006.\n> >\n> > I briefly looked at 27bc1772fc81 and I don't think the state post this commit\n> > makes sense. Before this commit another block based AM could implement analyze\n> > without much code duplication. Now a large portion of analyze.c has to be\n> > copied, because they can't stop acquire_sample_rows() from calling\n> > heapam_scan_analyze_next_block().\n> >\n> > I'm quite certain this will break a few out-of-core AMs in a way that can't\n> > easily be fixed.\n>\n> I was under the impression there are not so many out-of-core table\n> AMs, which have non-dummy analysis implementations. And even if there\n> are some, duplicating acquire_sample_rows() isn't a big deal.\n>\n> But given your feedback, I'd like to propose to keep both options\n> open. Turn back the block-level API for analyze, but let table-AM\n> implement its own analyze function. Then existing out-of-core AMs\n> wouldn't need to do anything (or probably just set the new API method\n> to NULL).\n\nThe attached patch was to illustrate the approach. It surely needs\nsome polishing.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 8 Apr 2024 02:31:36 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-08 02:25:17 +0300, Alexander Korotkov wrote:\n> On Mon, Apr 8, 2024 at 12:40 AM Andres Freund <[email protected]> wrote:\n> > On 2024-03-30 23:33:04 +0200, Alexander Korotkov wrote:\n> > > I've pushed 0001, 0002 and 0006.\n> >\n> > I briefly looked at 27bc1772fc81 and I don't think the state post this commit\n> > makes sense. Before this commit another block based AM could implement analyze\n> > without much code duplication. Now a large portion of analyze.c has to be\n> > copied, because they can't stop acquire_sample_rows() from calling\n> > heapam_scan_analyze_next_block().\n> >\n> > I'm quite certain this will break a few out-of-core AMs in a way that can't\n> > easily be fixed.\n>\n> I was under the impression there are not so many out-of-core table\n> AMs, which have non-dummy analysis implementations.\n\nI know of at least 4 that have some production usage.\n\n\n> And even if there are some, duplicating acquire_sample_rows() isn't a big\n> deal.\n\nI don't agree. The code has evolved a bunch over time; duplicating it into\nvarious AMs is a bad idea.\n\n\n> But given your feedback, I'd like to propose to keep both options\n> open. Turn back the block-level API for analyze, but let table-AM\n> implement its own analyze function. Then existing out-of-core AMs\n> wouldn't need to do anything (or probably just set the new API method\n> to NULL).\n\nI think this patch simply hasn't been reviewed even close to carefully enough\nand should be reverted. It's IMO too late for a redesign. Sorry for not\nlooking earlier, I was mostly out sick for the last few months.\n\nI think a dedicated tableam callback for sample acquisition probably makes\nsense, but if we want that, we need to provide an easy way for AMs that are\nsufficiently block-like to reuse the code, not have two different ways to\nimplement analyze.\n\nISTM that ->relation_analyze is quite misleading as a name. For one, it\njust sets some callbacks, no? 
But more importantly, it sounds like it'd\nactually allow to wrap the whole analyze process, rather than just the\nacquisition of samples.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2024 16:49:31 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander and Andres!\n\nOn Mon, 8 Apr 2024 at 03:25, Alexander Korotkov <[email protected]>\nwrote:\n\n> Hi,\n>\n> On Mon, Apr 8, 2024 at 12:40 AM Andres Freund <[email protected]> wrote:\n> > On 2024-03-30 23:33:04 +0200, Alexander Korotkov wrote:\n> > > I've pushed 0001, 0002 and 0006.\n> >\n> > I briefly looked at 27bc1772fc81 and I don't think the state post this\n> commit\n> > makes sense. Before this commit another block based AM could implement\n> analyze\n> > without much code duplication. Now a large portion of analyze.c has to be\n> > copied, because they can't stop acquire_sample_rows() from calling\n> > heapam_scan_analyze_next_block().\n> >\n> > I'm quite certain this will break a few out-of-core AMs in a way that\n> can't\n> > easily be fixed.\n>\n> I was under the impression there are not so many out-of-core table\n> AMs, which have non-dummy analysis implementations. And even if there\n> are some, duplicating acquire_sample_rows() isn't a big deal.\n>\n> But given your feedback, I'd like to propose to keep both options\n> open. Turn back the block-level API for analyze, but let table-AM\n> implement its own analyze function. Then existing out-of-core AMs\n> wouldn't need to do anything (or probably just set the new API method\n> to NULL).\n>\nI think that providing both new and old interface functions for block-based\nand non-block based custom am is an excellent compromise.\n\nThe patch v1-0001-Turn-back.. is mainly an undo of part of the 27bc1772fc81\nthat had turned off _analyze_next_tuple..analyze_next_block for external\ncallers. If some extensions are already adapted to the old interface\nfunctions, they are free to still use it.\n\n> And even for non-block based AMs, the new interface basically requires\n> > reimplementing all of analyze.c.\n> .\n> Non-lock base AM needs to just provide an alternative implementation\n> for what acquire_sample_rows() does. 
This seems like reasonable\n> effort for me, and surely not reimplementing all of analyze.c.\n>\nI agree.\n\nRegards,\nPavel Borisov\nSupabase",
"msg_date": "Mon, 8 Apr 2024 11:17:51 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 10:18 AM Pavel Borisov <[email protected]> wrote:\n> On Mon, 8 Apr 2024 at 03:25, Alexander Korotkov <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> On Mon, Apr 8, 2024 at 12:40 AM Andres Freund <[email protected]> wrote:\n>> > On 2024-03-30 23:33:04 +0200, Alexander Korotkov wrote:\n>> > > I've pushed 0001, 0002 and 0006.\n>> >\n>> > I briefly looked at 27bc1772fc81 and I don't think the state post this commit\n>> > makes sense. Before this commit another block based AM could implement analyze\n>> > without much code duplication. Now a large portion of analyze.c has to be\n>> > copied, because they can't stop acquire_sample_rows() from calling\n>> > heapam_scan_analyze_next_block().\n>> >\n>> > I'm quite certain this will break a few out-of-core AMs in a way that can't\n>> > easily be fixed.\n>>\n>> I was under the impression there are not so many out-of-core table\n>> AMs, which have non-dummy analysis implementations. And even if there\n>> are some, duplicating acquire_sample_rows() isn't a big deal.\n>>\n>> But given your feedback, I'd like to propose to keep both options\n>> open. Turn back the block-level API for analyze, but let table-AM\n>> implement its own analyze function. Then existing out-of-core AMs\n>> wouldn't need to do anything (or probably just set the new API method\n>> to NULL).\n>\n> I think that providing both new and old interface functions for block-based and non-block based custom am is an excellent compromise.\n>\n> The patch v1-0001-Turn-back.. is mainly an undo of part of the 27bc1772fc81 that had turned off _analyze_next_tuple..analyze_next_block for external callers. If some extensions are already adapted to the old interface functions, they are free to still use it.\n\nPlease, check this. Instead of keeping two APIs, it generalizes\nacquire_sample_rows(). 
The downside is change of\nAcquireSampleRowsFunc signature, which would need some changes in FDWs\ntoo.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 8 Apr 2024 12:34:09 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, 8 Apr 2024 at 13:34, Alexander Korotkov <[email protected]>\nwrote:\n\n> On Mon, Apr 8, 2024 at 10:18 AM Pavel Borisov <[email protected]>\n> wrote:\n> > On Mon, 8 Apr 2024 at 03:25, Alexander Korotkov <[email protected]>\n> wrote:\n> >>\n> >> Hi,\n> >>\n> >> On Mon, Apr 8, 2024 at 12:40 AM Andres Freund <[email protected]>\n> wrote:\n> >> > On 2024-03-30 23:33:04 +0200, Alexander Korotkov wrote:\n> >> > > I've pushed 0001, 0002 and 0006.\n> >> >\n> >> > I briefly looked at 27bc1772fc81 and I don't think the state post\n> this commit\n> >> > makes sense. Before this commit another block based AM could\n> implement analyze\n> >> > without much code duplication. Now a large portion of analyze.c has\n> to be\n> >> > copied, because they can't stop acquire_sample_rows() from calling\n> >> > heapam_scan_analyze_next_block().\n> >> >\n> >> > I'm quite certain this will break a few out-of-core AMs in a way that\n> can't\n> >> > easily be fixed.\n> >>\n> >> I was under the impression there are not so many out-of-core table\n> >> AMs, which have non-dummy analysis implementations. And even if there\n> >> are some, duplicating acquire_sample_rows() isn't a big deal.\n> >>\n> >> But given your feedback, I'd like to propose to keep both options\n> >> open. Turn back the block-level API for analyze, but let table-AM\n> >> implement its own analyze function. Then existing out-of-core AMs\n> >> wouldn't need to do anything (or probably just set the new API method\n> >> to NULL).\n> >\n> > I think that providing both new and old interface functions for\n> block-based and non-block based custom am is an excellent compromise.\n> >\n> > The patch v1-0001-Turn-back.. is mainly an undo of part of the\n> 27bc1772fc81 that had turned off _analyze_next_tuple..analyze_next_block\n> for external callers. If some extensions are already adapted to the old\n> interface functions, they are free to still use it.\n>\n> Please, check this. 
Instead of keeping two APIs, it generalizes\n> acquire_sample_rows(). The downside is change of\n> AcquireSampleRowsFunc signature, which would need some changes in FDWs\n> too.\n>\nTo me, both approaches v1-0001-Turn-back... and v2-0001-Generalize... and\npatch v2 look good.\n\nPavel.",
"msg_date": "Mon, 8 Apr 2024 13:59:43 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander\n\nOn Mon, 8 Apr 2024 at 13:59, Pavel Borisov <[email protected]> wrote:\n\n>\n>\n> On Mon, 8 Apr 2024 at 13:34, Alexander Korotkov <[email protected]>\n> wrote:\n>\n>> On Mon, Apr 8, 2024 at 10:18 AM Pavel Borisov <[email protected]>\n>> wrote:\n>> > On Mon, 8 Apr 2024 at 03:25, Alexander Korotkov <[email protected]>\n>> wrote:\n>> >>\n>> >> Hi,\n>> >>\n>> >> On Mon, Apr 8, 2024 at 12:40 AM Andres Freund <[email protected]>\n>> wrote:\n>> >> > On 2024-03-30 23:33:04 +0200, Alexander Korotkov wrote:\n>> >> > > I've pushed 0001, 0002 and 0006.\n>> >> >\n>> >> > I briefly looked at 27bc1772fc81 and I don't think the state post\n>> this commit\n>> >> > makes sense. Before this commit another block based AM could\n>> implement analyze\n>> >> > without much code duplication. Now a large portion of analyze.c has\n>> to be\n>> >> > copied, because they can't stop acquire_sample_rows() from calling\n>> >> > heapam_scan_analyze_next_block().\n>> >> >\n>> >> > I'm quite certain this will break a few out-of-core AMs in a way\n>> that can't\n>> >> > easily be fixed.\n>> >>\n>> >> I was under the impression there are not so many out-of-core table\n>> >> AMs, which have non-dummy analysis implementations. And even if there\n>> >> are some, duplicating acquire_sample_rows() isn't a big deal.\n>> >>\n>> >> But given your feedback, I'd like to propose to keep both options\n>> >> open. Turn back the block-level API for analyze, but let table-AM\n>> >> implement its own analyze function. Then existing out-of-core AMs\n>> >> wouldn't need to do anything (or probably just set the new API method\n>> >> to NULL).\n>> >\n>> > I think that providing both new and old interface functions for\n>> block-based and non-block based custom am is an excellent compromise.\n>> >\n>> > The patch v1-0001-Turn-back.. is mainly an undo of part of the\n>> 27bc1772fc81 that had turned off _analyze_next_tuple..analyze_next_block\n>> for external callers. 
If some extensions are already adapted to the old\n>> interface functions, they are free to still use it.\n>>\n>> Please, check this. Instead of keeping two APIs, it generalizes\n>> acquire_sample_rows(). The downside is change of\n>> AcquireSampleRowsFunc signature, which would need some changes in FDWs\n>> too.\n>>\n> To me, both approaches v1-0001-Turn-back... and v2-0001-Generalize... and\n> patch v2 look good.\n>\n> Pavel.\n>\n\nI added some changes in comments to better reflect changes in patch v2. See\na patch v3 (code unchanged from v2)\n\nRegards,\nPavel",
"msg_date": "Mon, 8 Apr 2024 15:15:59 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-08 11:17:51 +0400, Pavel Borisov wrote:\n> On Mon, 8 Apr 2024 at 03:25, Alexander Korotkov <[email protected]>\n> > I was under the impression there are not so many out-of-core table\n> > AMs, which have non-dummy analysis implementations. And even if there\n> > are some, duplicating acquire_sample_rows() isn't a big deal.\n> >\n> > But given your feedback, I'd like to propose to keep both options\n> > open. Turn back the block-level API for analyze, but let table-AM\n> > implement its own analyze function. Then existing out-of-core AMs\n> > wouldn't need to do anything (or probably just set the new API method\n> > to NULL).\n> >\n> I think that providing both new and old interface functions for block-based\n> and non-block based custom am is an excellent compromise.\n\nI don't agree, that way lies an unmanageable API. To me the new API doesn't\nlook well polished either, so it's not a question of a smoother transition or\nsomething like that.\n\nI don't think redesigning extension APIs at this stage of the release cycle\nmakes sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Apr 2024 08:37:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On 2024-04-08 08:37:44 -0700, Andres Freund wrote:\n> On 2024-04-08 11:17:51 +0400, Pavel Borisov wrote:\n> > On Mon, 8 Apr 2024 at 03:25, Alexander Korotkov <[email protected]>\n> > > I was under the impression there are not so many out-of-core table\n> > > AMs, which have non-dummy analysis implementations. And even if there\n> > > are some, duplicating acquire_sample_rows() isn't a big deal.\n> > >\n> > > But given your feedback, I'd like to propose to keep both options\n> > > open. Turn back the block-level API for analyze, but let table-AM\n> > > implement its own analyze function. Then existing out-of-core AMs\n> > > wouldn't need to do anything (or probably just set the new API method\n> > > to NULL).\n> > >\n> > I think that providing both new and old interface functions for block-based\n> > and non-block based custom am is an excellent compromise.\n>\n> I don't agree, that way lies an unmanageable API. To me the new API doesn't\n> look well polished either, so it's not a question of a smoother transition or\n> something like that.\n>\n> I don't think redesigning extension APIs at this stage of the release cycle\n> makes sense.\n\nWait, you already pushed an API redesign? With a design that hasn't even seen\nthe list from what I can tell? Without even mentioning that on the list? You\ngot to be kidding me.\n\n\n",
"msg_date": "Mon, 8 Apr 2024 09:08:51 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 8, 2024, 19:08 Andres Freund <[email protected]> wrote:\n\n> On 2024-04-08 08:37:44 -0700, Andres Freund wrote:\n> > On 2024-04-08 11:17:51 +0400, Pavel Borisov wrote:\n> > > On Mon, 8 Apr 2024 at 03:25, Alexander Korotkov <[email protected]>\n> > > > I was under the impression there are not so many out-of-core table\n> > > > AMs, which have non-dummy analysis implementations. And even if\n> there\n> > > > are some, duplicating acquire_sample_rows() isn't a big deal.\n> > > >\n> > > > But given your feedback, I'd like to propose to keep both options\n> > > > open. Turn back the block-level API for analyze, but let table-AM\n> > > > implement its own analyze function. Then existing out-of-core AMs\n> > > > wouldn't need to do anything (or probably just set the new API method\n> > > > to NULL).\n> > > >\n> > > I think that providing both new and old interface functions for\n> block-based\n> > > and non-block based custom am is an excellent compromise.\n> >\n> > I don't agree, that way lies an unmanageable API. To me the new API\n> doesn't\n> > look well polished either, so it's not a question of a smoother\n> transition or\n> > something like that.\n> >\n> > I don't think redesigning extension APIs at this stage of the release\n> cycle\n> > makes sense.\n>\n> Wait, you already pushed an API redesign? With a design that hasn't even\n> seen\n> the list from what I can tell? Without even mentioning that on the list?\n> You\n> got to be kidding me.\n>\n\nYes, it was my mistake. 
I got rushing trying to fit this to FF, even doing\nsignificant changes just before commit.\nI'll revert this later today.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 8 Apr 2024 19:32:50 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 12:33 PM Alexander Korotkov <[email protected]> wrote:\n> Yes, it was my mistake. I got rushing trying to fit this to FF, even doing significant changes just before commit.\n> I'll revert this later today.\n\nAlexander,\n\nExactly how much is getting reverted here? I see these, all since March 23rd:\n\ndd1f6b0c17 Provide a way block-level table AMs could re-use\nacquire_sample_rows()\n9bd99f4c26 Custom reloptions for table AM\n97ce821e3e Fix the parameters order for\nTableAmRoutine.relation_copy_for_cluster()\n867cc7b6dd Revert \"Custom reloptions for table AM\"\nb1484a3f19 Let table AM insertion methods control index insertion\nc95c25f9af Custom reloptions for table AM\n27bc1772fc Generalize relation analyze in table AM interface\n87985cc925 Allow locking updated tuples in tuple_update() and tuple_delete()\nc35a3fb5e0 Allow table AM tuple_insert() method to return the different slot\n02eb07ea89 Allow table AM to store complex data structures in rd_amcache\n\nI'm not really feeling very good about all of this, because:\n\n- 87985cc925 was previously committed as 11470f544e on March 23, 2023,\nand almost immediately reverted. Now you tried again on March 26,\n2024. I know there was a bunch of rework in the middle, but there are\ntimes in the year that things can be committed other than right before\nthe feature freeze. Like, don't wait a whole year for the next attempt\nand then again do it right before the cutoff.\n\n- The Discussion links in the commit messages do not seem to stand for\nthe proposition that these particular patches ought to be committed in\nthis form. Some of them are just links to the messages where the patch\nwas originally posted, which is probably not against policy or\nanything, but it'd be nicer to see links to versions of the patch with\nwhich people are, in nearby emails, agreeing. 
Even worse, some of\nthese are links to emails where somebody said, \"hey, some earlier\ncommit does not look good.\" In particular,\ndd1f6b0c172a643a73d6b71259fa2d10378b39eb has a discussion link where\nAndres complains about 27bc1772fc814946918a5ac8ccb9b5c5ad0380aa, but\nit's not clear how that justifies the new commit.\n\n- The commit message for 867cc7b6dd says \"This reverts commit\nc95c25f9af4bc77f2f66a587735c50da08c12b37 due to multiple design issues\nspotted after commit.\" That's not a very good justification for then\ntrying again 6 days later with 9bd99f4c26, and it's *definitely* not a\ngood justification for there being no meaningful discussion links in\nthe commit message for 9bd99f4c26. They're just the same links you had\nin the previous attempt, so it's pretty hard for anybody to understand\nwhat got fixed and whether all of the concerns were really addressed.\nJust looking over the commit, it's pretty hard to understand what is\nbeing changed and why: there's not a lot of comment updates, there's\nno documentation changes, and there's not a lot of explanation in the\ncommit message either. Even if this feature is great and all the code\nis perfect now, it's going to be hard for anyone to figure out how to\nuse it.\n\n97ce821e3e looks like a clear bug fix to me, but I wonder if the rest\nof this should all just be reverted, with a ban on ever trying it\nagain after March 1 of any year. I'd like to believe that there are\nonly bookkeeping problems here, and that there was in fact clear\nagreement that all of these changes should be made in this form, and\nthat the commit messages simply failed to reference the most relevant\nemails. But what I fear, especially in view of Andres's remarks, is\nthat these commits were done in haste without adequate consensus, and\nI think that's a serious problem.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Apr 2024 14:54:46 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 9:54 PM Robert Haas <[email protected]> wrote:\n> On Mon, Apr 8, 2024 at 12:33 PM Alexander Korotkov <[email protected]> wrote:\n> > Yes, it was my mistake. I got rushing trying to fit this to FF, even doing significant changes just before commit.\n> > I'll revert this later today.\n\nIt appears to be a non-trivial revert, because 041b96802e already\nrevised the relation analyze after 27bc1772fc. That is, I would need\nto \"backport\" 041b96802e. Sorry, I'm too tired to do this today.\nI'll come back to this tomorrow.\n\n> Alexander,\n>\n> Exactly how much is getting reverted here? I see these, all since March 23rd:\n>\n> dd1f6b0c17 Provide a way block-level table AMs could re-use\n> acquire_sample_rows()\n> 9bd99f4c26 Custom reloptions for table AM\n> 97ce821e3e Fix the parameters order for\n> TableAmRoutine.relation_copy_for_cluster()\n> 867cc7b6dd Revert \"Custom reloptions for table AM\"\n> b1484a3f19 Let table AM insertion methods control index insertion\n> c95c25f9af Custom reloptions for table AM\n> 27bc1772fc Generalize relation analyze in table AM interface\n> 87985cc925 Allow locking updated tuples in tuple_update() and tuple_delete()\n> c35a3fb5e0 Allow table AM tuple_insert() method to return the different slot\n> 02eb07ea89 Allow table AM to store complex data structures in rd_amcache\n\nIt would be discouraging to revert all of this. Some items are very\nsimple, some items get a lot of work. I'll come back tomorrow and\nanswer all your points.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 8 Apr 2024 23:49:46 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 9:54 PM Robert Haas <[email protected]> wrote:\n> On Mon, Apr 8, 2024 at 12:33 PM Alexander Korotkov <[email protected]> wrote:\n> > Yes, it was my mistake. I got rushing trying to fit this to FF, even doing significant changes just before commit.\n> > I'll revert this later today.\n\nThe patch to revert is attached. Given that revert touches the work\ndone in 041b96802e, I think it needs some feedback before push.\n\n> Alexander,\n>\n> Exactly how much is getting reverted here? I see these, all since March 23rd:\n>\n> dd1f6b0c17 Provide a way block-level table AMs could re-use\n> acquire_sample_rows()\n> 9bd99f4c26 Custom reloptions for table AM\n> 97ce821e3e Fix the parameters order for\n> TableAmRoutine.relation_copy_for_cluster()\n> 867cc7b6dd Revert \"Custom reloptions for table AM\"\n> b1484a3f19 Let table AM insertion methods control index insertion\n> c95c25f9af Custom reloptions for table AM\n> 27bc1772fc Generalize relation analyze in table AM interface\n> 87985cc925 Allow locking updated tuples in tuple_update() and tuple_delete()\n> c35a3fb5e0 Allow table AM tuple_insert() method to return the different slot\n> 02eb07ea89 Allow table AM to store complex data structures in rd_amcache\n>\n> I'm not really feeling very good about all of this, because:\n>\n> - 87985cc925 was previously committed as 11470f544e on March 23, 2023,\n> and almost immediately reverted. Now you tried again on March 26,\n> 2024. I know there was a bunch of rework in the middle, but there are\n> times in the year that things can be committed other than right before\n> the feature freeze. Like, don't wait a whole year for the next attempt\n> and then again do it right before the cutoff.\n\nI agree with the facts. But I have a different interpretation on\nthis. The patch was committed as 11470f544e on March 23, 2023, then\nreverted on April 3. 
I've proposed the revised version, but Andres\ncomplained that this is the new API design days before FF. Then the\npatch with this design was published in the thread for the year with\nperiodical rebases. So, I think I expressed my intention with that\ndesign before 2023 FF, nobody prevented me from expressing objections\nor other feedback during the year. Then I realized that 2024 FF is\napproaching and decided to give this another try for pg18.\n\nBut I don't yet see it's wrong with this patch. I waited a year for\nfeedback. I waited 2 days after saying \"I will push this if no\nobjections\". Given your feedback now, I get that it would be better to\ndo another attempt to commit this earlier.\n\nI admit my mistake with dd1f6b0c17. I get rushed trying to fix the\nthings actually making things worse. I apologise for this. But if\nI'm forced to revert 87985cc925 without even hearing any reasonable\ncritics besides imperfection of timing, I feel like this is the\npunishment for my mistake with dd1f6b0c17. Pretty unreasonable\npunishment in my view.\n\n> - The Discussion links in the commit messages do not seem to stand for\n> the proposition that these particular patches ought to be committed in\n> this form. Some of them are just links to the messages where the patch\n> was originally posted, which is probably not against policy or\n> anything, but it'd be nicer to see links to versions of the patch with\n> which people are, in nearby emails, agreeing. 
Even worse, some of\n> these are links to emails where somebody said, \"hey, some earlier\n> commit does not look good.\" In particular,\n> dd1f6b0c172a643a73d6b71259fa2d10378b39eb has a discussion link where\n> Andres complains about 27bc1772fc814946918a5ac8ccb9b5c5ad0380aa, but\n> it's not clear how that justifies the new commit.\n\nI have to repeat again, that I admit my mistake with dd1f6b0c17,\napologize for that, and make my own conclusions to not repeat this.\nBut dd1f6b0c17 seems to be the only one that has a link to the message\nwith complains. I went through the list of commits above, it seems\nthat others have just linked to the first message of the thread.\nProbably, there is a lack of consensus for some of them. But I never\nheard about a policy to link not just the discussion start, but also\nexact messages expressing agreeing. And I didn't see others doing\nthat.\n\n> - The commit message for 867cc7b6dd says \"This reverts commit\n> c95c25f9af4bc77f2f66a587735c50da08c12b37 due to multiple design issues\n> spotted after commit.\" That's not a very good justification for then\n> trying again 6 days later with 9bd99f4c26, and it's *definitely* not a\n> good justification for there being no meaningful discussion links in\n> the commit message for 9bd99f4c26. They're just the same links you had\n> in the previous attempt, so it's pretty hard for anybody to understand\n> what got fixed and whether all of the concerns were really addressed.\n> Just looking over the commit, it's pretty hard to understand what is\n> being changed and why: there's not a lot of comment updates, there's\n> no documentation changes, and there's not a lot of explanation in the\n> commit message either. Even if this feature is great and all the code\n> is perfect now, it's going to be hard for anyone to figure out how to\n> use it.\n\n1) 9bd99f4c26 comprises the reworked patch after working with notes\nfrom Jeff Davis. 
I agree it would be better to wait for him to\nexpress explicit agreement. Before reverting this, I would prefer to\nhear his opinion.\n2) One of the issues here is that table AM API doesn't have\ndocumentation, it has just a very brief page which doesn't go deep\nexplaining particular API methods. I have heard a lot of complains\nabout that from users attempting to write table access methods. It's\nnow too late to complain about that (but if I had a wisdom of now back\nduring pg12 development I would definitely object against table AM API\nbeing committed at that shape). I understand I could be more\nproactive and propose a patch with that documentation.\n\n> 97ce821e3e looks like a clear bug fix to me, but I wonder if the rest\n> of this should all just be reverted, with a ban on ever trying it\n> again after March 1 of any year.\n\nDo you propose a ban from March 1 to the end of any year? I think the\nfirst doesn't make sense, because it leaves only 2 months a year for\nthe work. That would create a potential rush during these 2 month and\ncould serve exactly opposite to the intention. So, I guess this means\na ban from March 1 to the FF of any year. The situation now is quite\nunpleasant for me. So I'm far from repeating this next year.\nHowever, if there should be a formal ban, it should be specified.\nDoes it relate to the patches I've pushed, all patches in this thread,\nall similar patches, all table AM patches, or other API patches?\n\nSurely, I'm an interested party and can't be impartial. But I think\nit would be nice if we introduce some general rules based on this\nexperience. Could we have some API freeze date some time before the\nfeature freeze?\n\n> I'd like to believe that there are\n> only bookkeeping problems here, and that there was in fact clear\n> agreement that all of these changes should be made in this form, and\n> that the commit messages simply failed to reference the most relevant\n> emails. 
But what I fear, especially in view of Andres's remarks, is\n> that these commits were done in haste without adequate consensus, and\n> I think that's a serious problem.\n\nThis thread had a lot of patches for table AM API. My intention for\npg17 was to commit the easiest and least contradictory of them. I\nunderstand there should be more consensus for some of them and\ncommitting dd1f6b0c17 instead of reverting 27bc1772fc was a mistake.\nBut I don't feel good about reverting everything in a row without\nclear feedback.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 10 Apr 2024 15:19:47 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 8:20 AM Alexander Korotkov <[email protected]> wrote:\n> I agree with the facts. But I have a different interpretation on\n> this. The patch was committed as 11470f544e on March 23, 2023, then\n> reverted on April 3. I've proposed the revised version, but Andres\n> complained that this is the new API design days before FF.\n\nWell, his first complaint that your committed patch was full of bugs:\n\nhttps://www.postgresql.org/message-id/20230323003003.plgaxjqahjgkuxrk%40awork3.anarazel.de\n\nWhen you commit a patch and another committer writes a post-commit\nreview saying that your patch has so many serious problems that he\ngave up on reviewing before enumerating all of them, that's a really\nbad sign. That should be a strong signal to you to step back and take\na close look at whether you really understand the area of the code\nthat you're touching well enough to be doing whatever it is that\nyou're doing. If I got a review like that, I would have reverted the\npatch instantly, given up for the release cycle, possibly given up on\nthe patch permanently, and most definitely not tried again to commit\nunless I was absolutely certain that I'd learned a lot in the meantime\n*and* had the agreement of the committer who wrote that review (or\nmaybe some other committer who was acknowledged as an expert in that\narea of the code).\n\nWhat you did instead is try to do a bunch of post-commit fixup in a\ndesperate rush right before feature freeze, to which Andres\nunderstandably objected. But that was your second mistake, not your\nfirst one.\n\n> Then the\n> patch with this design was published in the thread for the year with\n> periodical rebases. So, I think I expressed my intention with that\n> design before 2023 FF, nobody prevented me from expressing objections\n> or other feedback during the year. 
Then I realized that 2024 FF is\n> approaching and decided to give this another try for pg18.\n\nThis doesn't seem to match the facts as I understand them. It appears\nto me that there was no activity on the thread from April until\nNovember. The message in November was not written by you. Your first\npost to the thread after April of 2023 was on March 19, 2024. Five\ndays later you said you wanted to commit. That doesn't look to me like\nyou worked diligently on the patch set throughout the year and other\npeople had reasonable notice that you planned to get the work done\nthis cycle. It looks like you ignored the patch for 11 months and then\ncommitted it without any real further feedback from anyone. True,\nPavel did post and say that he thought the patches were in good shape.\nBut you could hardly take that as evidence that Andres was now content\nthat the problems he'd raised earlier had been fixed, because (a)\nPavel had also been involved beforehand and had not raised the\nconcerns that Andres later raised and (b) Pavel wrote nothing in his\nemail specifically about why he thought your changes or his had\nresolved those concerns. I certainly agree that Andres doesn't always\ngive as much review feedback as I'd like to have from him in, and it's\nalso true that he doesn't always give that feedback as quickly as I'd\nlike to have it ... but you know what?\n\nIt's not Andres's job to make sure my patches are not broken. It's my\njob. That applies to the patches I write, and the patches written by\nother people that I commit. If I commit something and it turns out\nthat it is broken, that's my bad. If I commit something and it turns\nout that it does not have consensus, that is also my bad. It is not\nthe fault of the other people for not helping me get my patches to a\nstate where they are up to project standard. It is my fault, and my\nfault alone, for committing something that was not ready. 
Now that\ndoes not mean that it isn't frustrating when I can't get the help I\nneed. It is extremely frustrating. But the solution is not to commit\nanyway and then blame the other people for not providing feedback.\n\nI mean, committing without explicit agreement from someone else is OK\nif you're pretty sure that you've got everything sorted out correctly.\nBut I don't think that the paper trail here supports the narrative\nthat you worked on this diligently throughout the year and had every\nreason to believe it would be acceptable to the community. If I'd\nlooked at this thread, I would have concluded that you'd abandoned the\nproject. I would have expected that, when you picked it up again,\nthere would be a series of emails over a period of time carefully\nworking through the various issues that had been raised, inviting\nspecific commentary on specific discussion points, and generally\nrefining the work, and then maybe a suggestion of a commit at the end.\nI would not have expected an email or two basically saying \"well,\nseems like it's all fixed now,\" followed by a commit.\n\n> Do you propose a ban from March 1 to the end of any year? I think the\n> first doesn't make sense, because it leaves only 2 months a year for\n> the work. That would create a potential rush during these 2 month and\n> could serve exactly opposite to the intention. So, I guess this means\n> a ban from March 1 to the FF of any year. The situation now is quite\n> unpleasant for me. So I'm far from repeating this next year.\n> However, if there should be a formal ban, it should be specified.\n> Does it relate to the patches I've pushed, all patches in this thread,\n> all similar patches, all table AM patches, or other API patches?\n\nI meant from March 1 to feature freeze, but maybe I should have\nproposed that you shouldn't ever commit these patches. The more I look\nat this, the less happy I am with how you did it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Apr 2024 09:19:15 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Wed, 10 Apr 2024 at 16:20, Alexander Korotkov <[email protected]>\nwrote:\n\n> On Mon, Apr 8, 2024 at 9:54 PM Robert Haas <[email protected]> wrote:\n> > On Mon, Apr 8, 2024 at 12:33 PM Alexander Korotkov <[email protected]>\n> wrote:\n> > > Yes, it was my mistake. I got rushing trying to fit this to FF, even\n> doing significant changes just before commit.\n> > > I'll revert this later today.\n>\n> The patch to revert is attached. Given that revert touches the work\n> done in 041b96802e, I think it needs some feedback before push.\n>\n> > Alexander,\n> >\n> > Exactly how much is getting reverted here? I see these, all since March\n> 23rd:\n> >\n> > dd1f6b0c17 Provide a way block-level table AMs could re-use\n> > acquire_sample_rows()\n> > 9bd99f4c26 Custom reloptions for table AM\n> > 97ce821e3e Fix the parameters order for\n> > TableAmRoutine.relation_copy_for_cluster()\n> > 867cc7b6dd Revert \"Custom reloptions for table AM\"\n> > b1484a3f19 Let table AM insertion methods control index insertion\n> > c95c25f9af Custom reloptions for table AM\n> > 27bc1772fc Generalize relation analyze in table AM interface\n> > 87985cc925 Allow locking updated tuples in tuple_update() and\n> tuple_delete()\n> > c35a3fb5e0 Allow table AM tuple_insert() method to return the different\n> slot\n> > 02eb07ea89 Allow table AM to store complex data structures in rd_amcache\n> >\n> > I'm not really feeling very good about all of this, because:\n> >\n> > - 87985cc925 was previously committed as 11470f544e on March 23, 2023,\n> > and almost immediately reverted. Now you tried again on March 26,\n> > 2024. I know there was a bunch of rework in the middle, but there are\n> > times in the year that things can be committed other than right before\n> > the feature freeze. Like, don't wait a whole year for the next attempt\n> > and then again do it right before the cutoff.\n>\n> I agree with the facts. But I have a different interpretation on\n> this. 
The patch was committed as 11470f544e on March 23, 2023, then\n> reverted on April 3. I've proposed the revised version, but Andres\n> complained that this is the new API design days before FF. Then the\n> patch with this design was published in the thread for the year with\n> periodical rebases. So, I think I expressed my intention with that\n> design before 2023 FF, nobody prevented me from expressing objections\n> or other feedback during the year. Then I realized that 2024 FF is\n> approaching and decided to give this another try for pg18.\n>\n> But I don't yet see it's wrong with this patch. I waited a year for\n> feedback. I waited 2 days after saying \"I will push this if no\n> objections\". Given your feedback now, I get that it would be better to\n> do another attempt to commit this earlier.\n>\n> I admit my mistake with dd1f6b0c17. I get rushed trying to fix the\n> things actually making things worse. I apologise for this. But if\n> I'm forced to revert 87985cc925 without even hearing any reasonable\n> critics besides imperfection of timing, I feel like this is the\n> punishment for my mistake with dd1f6b0c17. Pretty unreasonable\n> punishment in my view.\n>\n> > - The Discussion links in the commit messages do not seem to stand for\n> > the proposition that these particular patches ought to be committed in\n> > this form. Some of them are just links to the messages where the patch\n> > was originally posted, which is probably not against policy or\n> > anything, but it'd be nicer to see links to versions of the patch with\n> > which people are, in nearby emails, agreeing. 
Even worse, some of\n> > these are links to emails where somebody said, \"hey, some earlier\n> > commit does not look good.\" In particular,\n> > dd1f6b0c172a643a73d6b71259fa2d10378b39eb has a discussion link where\n> > Andres complains about 27bc1772fc814946918a5ac8ccb9b5c5ad0380aa, but\n> > it's not clear how that justifies the new commit.\n>\n> I have to repeat again, that I admit my mistake with dd1f6b0c17,\n> apologize for that, and make my own conclusions to not repeat this.\n> But dd1f6b0c17 seems to be the only one that has a link to the message\n> with complains. I went through the list of commits above, it seems\n> that others have just linked to the first message of the thread.\n> Probably, there is a lack of consensus for some of them. But I never\n> heard about a policy to link not just the discussion start, but also\n> exact messages expressing agreeing. And I didn't see others doing\n> that.\n>\n> > - The commit message for 867cc7b6dd says \"This reverts commit\n> > c95c25f9af4bc77f2f66a587735c50da08c12b37 due to multiple design issues\n> > spotted after commit.\" That's not a very good justification for then\n> > trying again 6 days later with 9bd99f4c26, and it's *definitely* not a\n> > good justification for there being no meaningful discussion links in\n> > the commit message for 9bd99f4c26. They're just the same links you had\n> > in the previous attempt, so it's pretty hard for anybody to understand\n> > what got fixed and whether all of the concerns were really addressed.\n> > Just looking over the commit, it's pretty hard to understand what is\n> > being changed and why: there's not a lot of comment updates, there's\n> > no documentation changes, and there's not a lot of explanation in the\n> > commit message either. Even if this feature is great and all the code\n> > is perfect now, it's going to be hard for anyone to figure out how to\n> > use it.\n>\n> 1) 9bd99f4c26 comprises the reworked patch after working with notes\n> from Jeff Davis. 
I agree it would be better to wait for him to\n> express explicit agreement. Before reverting this, I would prefer to\n> hear his opinion.\n> 2) One of the issues here is that table AM API doesn't have\n> documentation, it has just a very brief page which doesn't go deep\n> explaining particular API methods. I have heard a lot of complains\n> about that from users attempting to write table access methods. It's\n> now too late to complain about that (but if I had a wisdom of now back\n> during pg12 development I would definitely object against table AM API\n> being committed at that shape). I understand I could be more\n> proactive and propose a patch with that documentation.\n>\n> > 97ce821e3e looks like a clear bug fix to me, but I wonder if the rest\n> > of this should all just be reverted, with a ban on ever trying it\n> > again after March 1 of any year.\n>\n> Do you propose a ban from March 1 to the end of any year? I think the\n> first doesn't make sense, because it leaves only 2 months a year for\n> the work. That would create a potential rush during these 2 month and\n> could serve exactly opposite to the intention. So, I guess this means\n> a ban from March 1 to the FF of any year. The situation now is quite\n> unpleasant for me. So I'm far from repeating this next year.\n> However, if there should be a formal ban, it should be specified.\n> Does it relate to the patches I've pushed, all patches in this thread,\n> all similar patches, all table AM patches, or other API patches?\n>\n> Surely, I'm an interested party and can't be impartial. But I think\n> it would be nice if we introduce some general rules based on this\n> experience. 
Could we have some API freeze date some time before the\n> feature freeze?\n>\n> > I'd like to believe that there are\n> > only bookkeeping problems here, and that there was in fact clear\n> > agreement that all of these changes should be made in this form, and\n> > that the commit messages simply failed to reference the most relevant\n> > emails. But what I fear, especially in view of Andres's remarks, is\n> > that these commits were done in haste without adequate consensus, and\n> > I think that's a serious problem.\n>\n> This thread had a lot of patches for table AM API. My intention for\n> pg17 was to commit the easiest and least contradictory of them. I\n> understand there should be more consensus for some of them and\n> committing dd1f6b0c17 instead of reverting 27bc1772fc was a mistake.\n> But I don't feel good about reverting everything in a row without\n> clear feedback.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\nIn my view, the actual list of what has raised discussion is:\ndd1f6b0c17 Provide a way block-level table AMs could re-use\nacquire_sample_rows()\n27bc1772fc Generalize relation analyze in table AM interface\n\nProposals to revert the other patches in a wholesale way look to me like an\nill-performed continuation of a discussion [1]. 
I can't believe that \"Let's\nselect which commits close to FF looks worse than the others\" based on\nwhereabouts, not patch contents is a good and productive way for the\ncommunity to use.\n\nAt the same time if Andres, who is the most experienced person in the scope\nof access methods is willing to give his post-commit re-review of any of\nthe committed patches and will recommend some of them reverted, it would be\na good sensible input to act accordingly.\npatch\n\n[1]\nhttps://www.postgresql.org/message-id/flat/39b1e953-6397-44ba-bb18-d3fdd61839c1%40joeconway.com#e5457f348b8ca90150cb9666aea94547\n\nHi, Alexander!On Wed, 10 Apr 2024 at 16:20, Alexander Korotkov <[email protected]> wrote:On Mon, Apr 8, 2024 at 9:54 PM Robert Haas <[email protected]> wrote:\n> On Mon, Apr 8, 2024 at 12:33 PM Alexander Korotkov <[email protected]> wrote:\n> > Yes, it was my mistake. I got rushing trying to fit this to FF, even doing significant changes just before commit.\n> > I'll revert this later today.\n\nThe patch to revert is attached. Given that revert touches the work\ndone in 041b96802e, I think it needs some feedback before push.\n\n> Alexander,\n>\n> Exactly how much is getting reverted here? 
I see these, all since March 23rd:\n>\n> dd1f6b0c17 Provide a way block-level table AMs could re-use\n> acquire_sample_rows()\n> 9bd99f4c26 Custom reloptions for table AM\n> 97ce821e3e Fix the parameters order for\n> TableAmRoutine.relation_copy_for_cluster()\n> 867cc7b6dd Revert \"Custom reloptions for table AM\"\n> b1484a3f19 Let table AM insertion methods control index insertion\n> c95c25f9af Custom reloptions for table AM\n> 27bc1772fc Generalize relation analyze in table AM interface\n> 87985cc925 Allow locking updated tuples in tuple_update() and tuple_delete()\n> c35a3fb5e0 Allow table AM tuple_insert() method to return the different slot\n> 02eb07ea89 Allow table AM to store complex data structures in rd_amcache\n>\n> I'm not really feeling very good about all of this, because:\n>\n> - 87985cc925 was previously committed as 11470f544e on March 23, 2023,\n> and almost immediately reverted. Now you tried again on March 26,\n> 2024. I know there was a bunch of rework in the middle, but there are\n> times in the year that things can be committed other than right before\n> the feature freeze. Like, don't wait a whole year for the next attempt\n> and then again do it right before the cutoff.\n\nI agree with the facts. But I have a different interpretation on\nthis. The patch was committed as 11470f544e on March 23, 2023, then\nreverted on April 3. I've proposed the revised version, but Andres\ncomplained that this is the new API design days before FF. Then the\npatch with this design was published in the thread for the year with\nperiodical rebases. So, I think I expressed my intention with that\ndesign before 2023 FF, nobody prevented me from expressing objections\nor other feedback during the year. Then I realized that 2024 FF is\napproaching and decided to give this another try for pg18.\n\nBut I don't yet see it's wrong with this patch. I waited a year for\nfeedback. I waited 2 days after saying \"I will push this if no\nobjections\". 
Given your feedback now, I get that it would be better to\ndo another attempt to commit this earlier.\n\nI admit my mistake with dd1f6b0c17. I get rushed trying to fix the\nthings actually making things worse. I apologise for this. But if\nI'm forced to revert 87985cc925 without even hearing any reasonable\ncritics besides imperfection of timing, I feel like this is the\npunishment for my mistake with dd1f6b0c17. Pretty unreasonable\npunishment in my view.\n\n> - The Discussion links in the commit messages do not seem to stand for\n> the proposition that these particular patches ought to be committed in\n> this form. Some of them are just links to the messages where the patch\n> was originally posted, which is probably not against policy or\n> anything, but it'd be nicer to see links to versions of the patch with\n> which people are, in nearby emails, agreeing. Even worse, some of\n> these are links to emails where somebody said, \"hey, some earlier\n> commit does not look good.\" In particular,\n> dd1f6b0c172a643a73d6b71259fa2d10378b39eb has a discussion link where\n> Andres complains about 27bc1772fc814946918a5ac8ccb9b5c5ad0380aa, but\n> it's not clear how that justifies the new commit.\n\nI have to repeat again, that I admit my mistake with dd1f6b0c17,\napologize for that, and make my own conclusions to not repeat this.\nBut dd1f6b0c17 seems to be the only one that has a link to the message\nwith complains. I went through the list of commits above, it seems\nthat others have just linked to the first message of the thread.\nProbably, there is a lack of consensus for some of them. But I never\nheard about a policy to link not just the discussion start, but also\nexact messages expressing agreeing. 
And I didn't see others doing\nthat.\n\n> - The commit message for 867cc7b6dd says \"This reverts commit\n> c95c25f9af4bc77f2f66a587735c50da08c12b37 due to multiple design issues\n> spotted after commit.\" That's not a very good justification for then\n> trying again 6 days later with 9bd99f4c26, and it's *definitely* not a\n> good justification for there being no meaningful discussion links in\n> the commit message for 9bd99f4c26. They're just the same links you had\n> in the previous attempt, so it's pretty hard for anybody to understand\n> what got fixed and whether all of the concerns were really addressed.\n> Just looking over the commit, it's pretty hard to understand what is\n> being changed and why: there's not a lot of comment updates, there's\n> no documentation changes, and there's not a lot of explanation in the\n> commit message either. Even if this feature is great and all the code\n> is perfect now, it's going to be hard for anyone to figure out how to\n> use it.\n\n1) 9bd99f4c26 comprises the reworked patch after working with notes\nfrom Jeff Davis. I agree it would be better to wait for him to\nexpress explicit agreement. Before reverting this, I would prefer to\nhear his opinion.\n2) One of the issues here is that table AM API doesn't have\ndocumentation, it has just a very brief page which doesn't go deep\nexplaining particular API methods. I have heard a lot of complains\nabout that from users attempting to write table access methods. It's\nnow too late to complain about that (but if I had a wisdom of now back\nduring pg12 development I would definitely object against table AM API\nbeing committed at that shape). I understand I could be more\nproactive and propose a patch with that documentation.\n\n> 97ce821e3e looks like a clear bug fix to me, but I wonder if the rest\n> of this should all just be reverted, with a ban on ever trying it\n> again after March 1 of any year.\n\nDo you propose a ban from March 1 to the end of any year? 
I think the\nfirst doesn't make sense, because it leaves only 2 months a year for\nthe work. That would create a potential rush during these 2 month and\ncould serve exactly opposite to the intention. So, I guess this means\na ban from March 1 to the FF of any year. The situation now is quite\nunpleasant for me. So I'm far from repeating this next year.\nHowever, if there should be a formal ban, it should be specified.\nDoes it relate to the patches I've pushed, all patches in this thread,\nall similar patches, all table AM patches, or other API patches?\n\nSurely, I'm an interested party and can't be impartial. But I think\nit would be nice if we introduce some general rules based on this\nexperience. Could we have some API freeze date some time before the\nfeature freeze?\n\n> I'd like to believe that there are\n> only bookkeeping problems here, and that there was in fact clear\n> agreement that all of these changes should be made in this form, and\n> that the commit messages simply failed to reference the most relevant\n> emails. But what I fear, especially in view of Andres's remarks, is\n> that these commits were done in haste without adequate consensus, and\n> I think that's a serious problem.\n\nThis thread had a lot of patches for table AM API. My intention for\npg17 was to commit the easiest and least contradictory of them. I\nunderstand there should be more consensus for some of them and\ncommitting dd1f6b0c17 instead of reverting 27bc1772fc was a mistake.\nBut I don't feel good about reverting everything in a row without\nclear feedback.\n\n------\nRegards,\nAlexander KorotkovIn my view, the actual list of what has raised discussion is:dd1f6b0c17 Provide a way block-level table AMs could re-use acquire_sample_rows()27bc1772fc Generalize relation analyze in table AM interfaceProposals to revert the other patches in a wholesale way look to me like an ill-performed continuation of a discussion [1]. 
I can't believe that \"Let's select which commits close to FF looks worse than the others\" based on whereabouts, not patch contents is a good and productive way for the community to use.At the same time if Andres, who is the most experienced person in the scope of access methods is willing to give his post-commit re-review of any of the committed patches and will recommend some of them reverted, it would be a good sensible input to act accordingly.patch [1] https://www.postgresql.org/message-id/flat/39b1e953-6397-44ba-bb18-d3fdd61839c1%40joeconway.com#e5457f348b8ca90150cb9666aea94547",
"msg_date": "Wed, 10 Apr 2024 17:42:51 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On 4/10/24 09:19, Robert Haas wrote:\n> When you commit a patch and another committer writes a post-commit\n> review saying that your patch has so many serious problems that he\n> gave up on reviewing before enumerating all of them, that's a really\n> bad sign. That should be a strong signal to you to step back and take\n> a close look at whether you really understand the area of the code\n> that you're touching well enough to be doing whatever it is that\n> you're doing. If I got a review like that, I would have reverted the\n> patch instantly, given up for the release cycle, possibly given up on\n> the patch permanently, and most definitely not tried again to commit\n> unless I was absolutely certain that I'd learned a lot in the meantime\n> *and* had the agreement of the committer who wrote that review (or\n> maybe some other committer who was acknowledged as an expert in that\n> area of the code).\n\n<snip>\n\n> It's not Andres's job to make sure my patches are not broken. It's my\n> job. That applies to the patches I write, and the patches written by\n> other people that I commit. If I commit something and it turns out\n> that it is broken, that's my bad. If I commit something and it turns\n> out that it does not have consensus, that is also my bad. It is not\n> the fault of the other people for not helping me get my patches to a\n> state where they are up to project standard. It is my fault, and my\n> fault alone, for committing something that was not ready. Now that\n> does not mean that it isn't frustrating when I can't get the help I\n> need. It is extremely frustrating. But the solution is not to commit\n> anyway and then blame the other people for not providing feedback.\n\n+many\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 10 Apr 2024 09:57:30 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 4:19 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Apr 10, 2024 at 8:20 AM Alexander Korotkov <[email protected]> wrote:\n> > I agree with the facts. But I have a different interpretation on\n> > this. The patch was committed as 11470f544e on March 23, 2023, then\n> > reverted on April 3. I've proposed the revised version, but Andres\n> > complained that this is the new API design days before FF.\n>\n> Well, his first complaint that your committed patch was full of bugs:\n>\n> https://www.postgresql.org/message-id/20230323003003.plgaxjqahjgkuxrk%40awork3.anarazel.de\n>\n> When you commit a patch and another committer writes a post-commit\n> review saying that your patch has so many serious problems that he\n> gave up on reviewing before enumerating all of them, that's a really\n> bad sign. That should be a strong signal to you to step back and take\n> a close look at whether you really understand the area of the code\n> that you're touching well enough to be doing whatever it is that\n> you're doing. If I got a review like that, I would have reverted the\n> patch instantly, given up for the release cycle, possibly given up on\n> the patch permanently, and most definitely not tried again to commit\n> unless I was absolutely certain that I'd learned a lot in the meantime\n> *and* had the agreement of the committer who wrote that review (or\n> maybe some other committer who was acknowledged as an expert in that\n> area of the code).\n>\n> What you did instead is try to do a bunch of post-commit fixup in a\n> desperate rush right before feature freeze, to which Andres\n> understandably objected. But that was your second mistake, not your\n> first one.\n>\n> > Then the\n> > patch with this design was published in the thread for the year with\n> > periodical rebases. 
So, I think I expressed my intention with that\n> > design before 2023 FF, nobody prevented me from expressing objections\n> > or other feedback during the year. Then I realized that 2024 FF is\n> > approaching and decided to give this another try for pg18.\n>\n> This doesn't seem to match the facts as I understand them. It appears\n> to me that there was no activity on the thread from April until\n> November. The message in November was not written by you. Your first\n> post to the thread after April of 2023 was on March 19, 2024. Five\n> days later you said you wanted to commit. That doesn't look to me like\n> you worked diligently on the patch set throughout the year and other\n> people had reasonable notice that you planned to get the work done\n> this cycle. It looks like you ignored the patch for 11 months and then\n> committed it without any real further feedback from anyone. True,\n> Pavel did post and say that he thought the patches were in good shape.\n> But you could hardly take that as evidence that Andres was now content\n> that the problems he'd raised earlier had been fixed, because (a)\n> Pavel had also been involved beforehand and had not raised the\n> concerns that Andres later raised and (b) Pavel wrote nothing in his\n> email specifically about why he thought your changes or his had\n> resolved those concerns. I certainly agree that Andres doesn't always\n> give as much review feedback as I'd like to have from him in, and it's\n> also true that he doesn't always give that feedback as quickly as I'd\n> like to have it ... but you know what?\n>\n> It's not Andres's job to make sure my patches are not broken. It's my\n> job. That applies to the patches I write, and the patches written by\n> other people that I commit. If I commit something and it turns out\n> that it is broken, that's my bad. If I commit something and it turns\n> out that it does not have consensus, that is also my bad. 
It is not\n> the fault of the other people for not helping me get my patches to a\n> state where they are up to project standard. It is my fault, and my\n> fault alone, for committing something that was not ready. Now that\n> does not mean that it isn't frustrating when I can't get the help I\n> need. It is extremely frustrating. But the solution is not to commit\n> anyway and then blame the other people for not providing feedback.\n>\n> I mean, committing without explicit agreement from someone else is OK\n> if you're pretty sure that you've got everything sorted out correctly.\n> But I don't think that the paper trail here supports the narrative\n> that you worked on this diligently throughout the year and had every\n> reason to believe it would be acceptable to the community. If I'd\n> looked at this thread, I would have concluded that you'd abandoned the\n> project. I would have expected that, when you picked it up again,\n> there would be a series of emails over a period of time carefully\n> working through the various issues that had been raised, inviting\n> specific commentary on specific discussion points, and generally\n> refining the work, and then maybe a suggestion of a commit at the end.\n> I would not have expected an email or two basically saying \"well,\n> seems like it's all fixed now,\" followed by a commit.\n\nRobert, I appreciate your feedback. I don't say I agree with\neverything. For example, I definitely wasn't going to place the blame\non others for not giving feedback. My point was to show that it\nwasn't so that I've committed that patch without taking feedback into\naccount. 
But arguing on every point doesn't feel reasonable for now.\nI would better share particular conclusions I made:\n1) I shouldn't argue too much about reverting patches especially with\ncommitters more experienced with relevant part of codebase.\n2) The fact that previous feedback is taken into account should be\nexpressed more explicitly everywhere: in comments, commit messages,\nmailing list messages etc.\n\nBut I have to mention that even that I've committed table AM stuff\nclose to the FF, there has been quite amount of depended work\ncommitted. So, revert of these patches is promising to be not\nsomething immediate and easy, which requires just the decision. It\nwould touch others work. And and revert patches might also need\nreview. I get the point that patches got lack of consensus. But in\nterms of efforts (not my efforts) it's probably makes sense to get\nthem some post-commit review.\n\n> > Do you propose a ban from March 1 to the end of any year? I think the\n> > first doesn't make sense, because it leaves only 2 months a year for\n> > the work. That would create a potential rush during these 2 month and\n> > could serve exactly opposite to the intention. So, I guess this means\n> > a ban from March 1 to the FF of any year. The situation now is quite\n> > unpleasant for me. So I'm far from repeating this next year.\n> > However, if there should be a formal ban, it should be specified.\n> > Does it relate to the patches I've pushed, all patches in this thread,\n> > all similar patches, all table AM patches, or other API patches?\n>\n> I meant from March 1 to feature freeze, but maybe I should have\n> proposed that you shouldn't ever commit these patches. The more I look\n> at this, the less happy I am with how you did it.\n\nRobert, look. Last year I went through the arrest for expressing my\nopinion. I that was not what normal arrest should look like, but a\nperiod of survival. My family went through a period of fear, struggle\nand uncertainty. 
Now, we're healthy and safe, but there is still\nuncertainty given asylum seeker status. During all this period, I\nhave to just obey, agree with everything, lie that I apologize about\nthings I don't apologize. I had to do this, because the price of\nexpressing myself was not just my life, but also health, freedom and\nwell-being of my family.\n\nI owe you great respect for all your work for PostgreSQL, and\nespecially for your efforts on getting things organized. But it\nwouldn't work the way you increase my potential punishment and I just\nsay that I'm obey and you're right about everything. You may even\ninitiate the procedure of my exclusion from committers (no idea what\nthe procedure is), ban from the list etc. I see you express many\nvaluable points, but my view is not exactly same as yours. And like a\nconclusion to some as result of discussion not threats.\n\nI feel the sense of blame and fear in latest discussions, and I don't\nlike it. That's OK to place the blame from time to time. But I would\nlike to add here more joy and respect (and I'm sorry I personally\ndidn't do enough in this matter). It's important get things right\netc. But in long term relationships may mean more.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 10 Apr 2024 19:36:19 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-08 14:54:46 -0400, Robert Haas wrote:\n> Exactly how much is getting reverted here? I see these, all since March 23rd:\n\nIMO:\n\n\n> dd1f6b0c17 Provide a way block-level table AMs could re-use\n> acquire_sample_rows()\n\nShould be reverted.\n\n\n> 9bd99f4c26 Custom reloptions for table AM\n\nHm. There are some oddities here:\n\n- It doesn't seem great that relcache.c now needs to know about the default\n values for all kinds of reloptions.\n\n- why is there table_reloptions() and tableam_reloptions()?\n\n- Why does extractRelOptions() need a TableAmRoutine parameter, extracted by a\n caller, instead of doing that work itself?\n\n\n\n> 97ce821e3e Fix the parameters order for\n> TableAmRoutine.relation_copy_for_cluster()\n\nShouldn't be, this is a clear fix.\n\n\n> b1484a3f19 Let table AM insertion methods control index insertion\n\nI'm not sure. I'm not convinced this is right, nor the opposite. If the\ntableam takes control of index insertion, shouldn't nodeModifyTuple know this\nearlier, so it doesn't prepare a bunch of index insertion state? Also,\nthere's pretty much no motivating explanation in the commit.\n\n\n> 27bc1772fc Generalize relation analyze in table AM interface\n\nShould be reverted.\n\n\n> 87985cc925 Allow locking updated tuples in tuple_update() and tuple_delete()\n\nStrongly suspect this should be reverted. The last time this was committed it\nwas far from ready. It's very easy to cause corruption due to subtle bugs in\nthis area.\n\n\n> c35a3fb5e0 Allow table AM tuple_insert() method to return the different slot\n\nIf the AM returns a different slot, who is responsible for cleaning it up? And\nhow is creating a new slot for every insert not going to be a measurable\noverhead?\n\n\n> 02eb07ea89 Allow table AM to store complex data structures in rd_amcache\n\nI am doubtful this is right. Is it really sufficient to have a callback for\nfreeing? 
What happens when relcache entries are swapped as part of a rebuild?\nThat works for \"flat\" caches, but I don't immediately see how it works for\nmore complicated datastructures. At least from the commit message it's hard\nto evaluate how this actually intended to be used.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Apr 2024 09:52:36 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 12:36 PM Alexander Korotkov\n<[email protected]> wrote:\n> But I have to mention that even that I've committed table AM stuff\n> close to the FF, there has been quite amount of depended work\n> committed. So, revert of these patches is promising to be not\n> something immediate and easy, which requires just the decision. It\n> would touch others work. And and revert patches might also need\n> review. I get the point that patches got lack of consensus. But in\n> terms of efforts (not my efforts) it's probably makes sense to get\n> them some post-commit review.\n\nThat is somewhat fair, but it is also a lot of work. There are\nmultiple people asking for you to revert things on multiple threads,\nand figuring out all of the revert requests and trying to come to some\nconsensus about what should be done in each case is going to take an\nenormous amount of time. I know you've done lots of good work on\nPostgreSQL in the past and I respect that, but I think you also have\nto realize that you're asking other people to spend a LOT of time\nfiguring out what to do about the current situation. I see Andres has\nposted more specifically about what he thinks should happen to each of\nthe table AM patches and I am willing to defer to his opinion, but we\nneed to make some quick decisions here to either keep things or take\nthem out. Extensive reworks after feature freeze should not be an\noption that is on the table; that's what makes it a freeze.\n\nI also do not think I really believe that there's been so much stuff\ncommitted that a blanket revert would be all that hard to carry off,\nif that were the option that the community ended up preferring.\n\n> Robert, look. Last year I went through the arrest for expressing my\n> opinion. I that was not what normal arrest should look like, but a\n> period of survival. My family went through a period of fear, struggle\n> and uncertainty. 
Now, we're healthy and safe, but there is still\n> uncertainty given asylum seeker status. During all this period, I\n> have to just obey, agree with everything, lie that I apologize about\n> things I don't apologize. I had to do this, because the price of\n> expressing myself was not just my life, but also health, freedom and\n> well-being of my family.\n>\n> I owe you great respect for all your work for PostgreSQL, and\n> especially for your efforts on getting things organized. But it\n> wouldn't work the way you increase my potential punishment and I just\n> say that I'm obey and you're right about everything. You may even\n> initiate the procedure of my exclusion from committers (no idea what\n> the procedure is), ban from the list etc. I see you express many\n> valuable points, but my view is not exactly same as yours. And like a\n> conclusion to some as result of discussion not threats.\n>\n> I feel the sense of blame and fear in latest discussions, and I don't\n> like it. That's OK to place the blame from time to time. But I would\n> like to add here more joy and respect (and I'm sorry I personally\n> didn't do enough in this matter). It's important get things right\n> etc. But in long term relationships may mean more.\n\nI am not sure how to respond to this. On a personal level, I am sorry\nto hear that you were arrested and, if I can be of some help to you,\nwe can discuss that off-list. However, if you're suggesting that there\nis some kind of equivalence between me criticizing your decisions\nabout what to commit and someone in a position of authority putting\nyou in jail, well, I don't think it's remotely fair to compare those\nthings.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Apr 2024 13:25:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 1:25 PM Robert Haas <[email protected]> wrote:\n> That is somewhat fair, but it is also a lot of work. There are\n> multiple people asking for you to revert things on multiple threads,\n> and figuring out all of the revert requests and trying to come to some\n> consensus about what should be done in each case is going to take an\n> enormous amount of time. I know you've done lots of good work on\n> PostgreSQL in the past and I respect that, but I think you also have\n> to realize that you're asking other people to spend a LOT of time\n> figuring out what to do about the current situation. I see Andres has\n> posted more specifically about what he thinks should happen to each of\n> the table AM patches and I am willing to defer to his opinion, but we\n> need to make some quick decisions here to either keep things or take\n> them out. Extensive reworks after feature freeze should not be an\n> option that is on the table; that's what makes it a freeze.\n\nAlexander has been sharply criticized for acting in haste, pushing\nwork in multiple areas when it was clearly not ready. And that seems\nproportionate to me. I agree that he showed poor judgement in the past\nfew months, and especially in the past few weeks. Not just on one\noccasion, but on several. That must have consequences.\n\n> I also do not think I really believe that there's been so much stuff\n> committed that a blanket revert would be all that hard to carry off,\n> if that were the option that the community ended up preferring.\n\nIt seems to me that emotions are running high right now. I think that\nit would be a mistake to act in haste when determining next steps.\nIt's very important, but it's not very urgent.\n\nI've known Alexander for about 15 years. I think that he deserves some\nconsideration here. Say a week or two, to work through some of the\nmore complicated issues -- and to take a breather. 
I just don't see\nany upside to rushing through this process, given where we are now.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 10 Apr 2024 14:13:02 -0400",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 05:42:51PM +0400, Pavel Borisov wrote:\n> Hi, Alexander!\n> In my view, the actual list of what has raised discussion is:\n> dd1f6b0c17 Provide a way block-level table AMs could re-use acquire_sample_rows\n> ()\n> 27bc1772fc Generalize relation analyze in table AM interface\n> \n> Proposals to revert the other patches in a wholesale way look to me like an\n> ill-performed continuation of a discussion [1]. I can't believe that \"Let's\n\nFor reference this discussion was:\n\n\tI don't dispute that we could do better, and this is just a\n\tsimplistic look based on \"number of commits per day\", but the\n\tattached does put it in perspective to some extent.\n\n> select which commits close to FF looks worse than the others\" based on\n> whereabouts, not patch contents is a good and productive way for the community\n> to use.\n\nI don't know how you can say these patches are being questioned just\nbecause they are near the feature freeze (FF). There are clear\nconcerns, and post-feature freeze is not the time to be evaluating which\npatches pushed near feature freeze need help.\n\nWhat is the huge rush for these patches, and if they were so important,\nwhy was this not done earlier? This can all wait until PG 18. 
If\nSupabase or someone else needs these patches for PG 17, they will need\nto create a patched version of PG 17 with these patches.\n\n> At the same time if Andres, who is the most experienced person in the scope of\n> access methods is willing to give his post-commit re-review of any of the\n> committed patches and will recommend some of them reverted, it would be a good\n> sensible input to act accordingly.\n> patch \n\nSo the patches were rushed, have problems, and now we are requiring\nAndres to stop what he is doing to give immediate feedback --- that is\nnot fair to him.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 10 Apr 2024 15:24:32 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-10 15:19:47 +0300, Alexander Korotkov wrote:\n> On Mon, Apr 8, 2024 at 9:54 PM Robert Haas <[email protected]> wrote:\n> > On Mon, Apr 8, 2024 at 12:33 PM Alexander Korotkov <[email protected]> wrote:\n> > > Yes, it was my mistake. I got rushing trying to fit this to FF, even doing significant changes just before commit.\n> > > I'll revert this later today.\n>\n> The patch to revert is attached. Given that revert touches the work\n> done in 041b96802e, I think it needs some feedback before push.\n\nHm. It's a bit annoying to revert it, you're right. I think on its own the\nrevert looks reasonable from what I've seen so far, will continue looking for\na bit.\n\nI think we'll need to do some cleanup of 041b96802e separately afterwards -\npossibly in 17, possibly in 18. Particularly post-27bc1772fc8\nacquire_sample_rows() was tied hard to heapam, so it made sense for 041b96802e\nto create the stream in acquire_sample_rows() and have\nblock_sampling_read_stream_next() be in analyze.c. But eventually that should\nbe in access/heap/. Compared to 16, the state post the revert does tie\nanalyze.c a bit closer to the internals of the AM than before, but I'm not\nsure the increase matters.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Apr 2024 13:03:42 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 4:03 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-04-10 15:19:47 +0300, Alexander Korotkov wrote:\n> > On Mon, Apr 8, 2024 at 9:54 PM Robert Haas <[email protected]> wrote:\n> > > On Mon, Apr 8, 2024 at 12:33 PM Alexander Korotkov <[email protected]> wrote:\n> > > > Yes, it was my mistake. I got rushing trying to fit this to FF, even doing significant changes just before commit.\n> > > > I'll revert this later today.\n> >\n> > The patch to revert is attached. Given that revert touches the work\n> > done in 041b96802e, I think it needs some feedback before push.\n>\n> Hm. It's a bit annoying to revert it, you're right. I think on its own the\n> revert looks reasonable from what I've seen so far, will continue looking for\n> a bit.\n>\n> I think we'll need to do some cleanup of 041b96802e separately afterwards -\n> possibly in 17, possibly in 18. Particularly post-27bc1772fc8\n> acquire_sample_rows() was tied hard to heapam, so it made sense for 041b96802e\n> to create the stream in acquire_sample_rows() and have\n> block_sampling_read_stream_next() be in analyze.c. But eventually that should\n> be in access/heap/. Compared to 16, the state post the revert does tie\n> analyze.c a bit closer to the internals of the AM than before, but I'm not\n> sure the increase matters.\n\nYes in an earlier version of 041b96802e, I gave the review feedback\nthat the read stream should be pushed down into heap-specific code,\nbut then after 27bc1772fc8, Bilal took the approach of putting the\nread stream code in acquire_sample_rows() since that was no longer\ntable AM-agnostic.\n\nThis thread has been moving pretty fast, so could someone point out\nwhich version of the patch has the modifications to\nacquire_sample_rows() that would be relevant for Bilal (and others\ninvolved in analyze streaming read) to review? Is it\nv1-0001-revert-Generalize-relation-analyze-in-table-AM-in.patch?\n\n- Melanie\n\n\n",
"msg_date": "Wed, 10 Apr 2024 16:24:40 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-10 16:24:40 -0400, Melanie Plageman wrote:\n> This thread has been moving pretty fast, so could someone point out\n> which version of the patch has the modifications to\n> acquire_sample_rows() that would be relevant for Bilal (and others\n> involved in analyze streaming read) to review? Is it\n> v1-0001-revert-Generalize-relation-analyze-in-table-AM-in.patch?\n\nI think so. It's at least what I've been looking at.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Apr 2024 13:33:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 4:33 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-04-10 16:24:40 -0400, Melanie Plageman wrote:\n> > This thread has been moving pretty fast, so could someone point out\n> > which version of the patch has the modifications to\n> > acquire_sample_rows() that would be relevant for Bilal (and others\n> > involved in analyze streaming read) to review? Is it\n> > v1-0001-revert-Generalize-relation-analyze-in-table-AM-in.patch?\n>\n> I think so. It's at least what I've been looking at.\n\nI took a look at this patch, and you're right we will need to do\nfollow-on work with streaming ANALYZE. The streaming read code will\nhave to be moved now that acquire_sample_rows() is table-AM agnostic\nagain.\n\nI don't think there was ever a version that Bilal wrote\nwhere the streaming read code was outside of acquire_sample_rows(). By\nthe time he got that review feedback, 27bc1772fc8 had gone in.\n\nThis brings up a question about the prefetching. We never had to have\nthis discussion for sequential scan streaming read because it didn't\n(and still doesn't) do prefetching. But, if we push the streaming read\ncode down into the heap AM layer, it will be doing the prefetching.\nSo, do we remove the prefetching from acquire_sample_rows() and expect\nother table AMs to implement it themselves or use the streaming read\nAPI?\n\n- Melanie\n\n\n",
"msg_date": "Wed, 10 Apr 2024 16:50:44 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-10 16:50:44 -0400, Melanie Plageman wrote:\n> This brings up a question about the prefetching. We never had to have\n> this discussion for sequential scan streaming read because it didn't\n> (and still doesn't) do prefetching. But, if we push the streaming read\n> code down into the heap AM layer, it will be doing the prefetching.\n> So, do we remove the prefetching from acquire_sample_rows() and expect\n> other table AMs to implement it themselves or use the streaming read\n> API?\n\nThe prefetching added to acquire_sample_rows was quite narrowly tailored to\nsomething heap-like - it pretty much required that block numbers to be 1:1\nwith the actual physical on-disk location for the specific AM. So I think\nit's pretty much required for this to be pushed down.\n\nUsing a read stream is a few lines for something like this, so I'm not worried\nabout it. I guess we could have a default implementation for block based AMs,\nsimilar what we have around table_block_parallelscan_*, but not sure it's\nworth doing that, the complexity is much lower than in the\ntable_block_parallelscan_ case.\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Wed, 10 Apr 2024 14:21:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi Andres,\n\nOn Wed, Apr 10, 2024 at 7:52 PM Andres Freund <[email protected]> wrote:\n> On 2024-04-08 14:54:46 -0400, Robert Haas wrote:\n> > Exactly how much is getting reverted here? I see these, all since March 23rd:\n>\n> IMO:\n>\n>\n> > dd1f6b0c17 Provide a way block-level table AMs could re-use\n> > acquire_sample_rows()\n>\n> Should be reverted.\n>\n>\n> > 9bd99f4c26 Custom reloptions for table AM\n>\n> Hm. There are some oddities here:\n>\n> - It doesn't seem great that relcache.c now needs to know about the default\n> values for all kinds of reloptions.\n>\n> - why is there table_reloptions() and tableam_reloptions()?\n>\n> - Why does extractRelOptions() need a TableAmRoutine parameter, extracted by a\n> caller, instead of doing that work itself?\n>\n>\n>\n> > 97ce821e3e Fix the parameters order for\n> > TableAmRoutine.relation_copy_for_cluster()\n>\n> Shouldn't be, this is a clear fix.\n>\n>\n> > b1484a3f19 Let table AM insertion methods control index insertion\n>\n> I'm not sure. I'm not convinced this is right, nor the opposite. If the\n> tableam takes control of index insertion, shouldn't nodeModifyTuple know this\n> earlier, so it doesn't prepare a bunch of index insertion state? Also,\n> there's pretty much no motivating explanation in the commit.\n>\n>\n> > 27bc1772fc Generalize relation analyze in table AM interface\n>\n> Should be reverted.\n>\n>\n> > 87985cc925 Allow locking updated tuples in tuple_update() and tuple_delete()\n>\n> Strongly suspect this should be reverted. The last time this was committed it\n> was far from ready. It's very easy to cause corruption due to subtle bugs in\n> this area.\n>\n>\n> > c35a3fb5e0 Allow table AM tuple_insert() method to return the different slot\n>\n> If the AM returns a different slot, who is responsible for cleaning it up? 
And\n> how is creating a new slot for every insert not going to be a measurable\n> overhead?\n>\n>\n> > 02eb07ea89 Allow table AM to store complex data structures in rd_amcache\n>\n> I am doubtful this is right. Is it really sufficient to have a callback for\n> freeing? What happens when relcache entries are swapped as part of a rebuild?\n> That works for \"flat\" caches, but I don't immediately see how it works for\n> more complicated datastructures. At least from the commit message it's hard\n> to evaluate how this actually intended to be used.\n\nThank you for your feedback. I've reverted all of above.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 11 Apr 2024 16:26:15 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 5:21 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-04-10 16:50:44 -0400, Melanie Plageman wrote:\n> > This brings up a question about the prefetching. We never had to have\n> > this discussion for sequential scan streaming read because it didn't\n> > (and still doesn't) do prefetching. But, if we push the streaming read\n> > code down into the heap AM layer, it will be doing the prefetching.\n> > So, do we remove the prefetching from acquire_sample_rows() and expect\n> > other table AMs to implement it themselves or use the streaming read\n> > API?\n>\n> The prefetching added to acquire_sample_rows was quite narrowly tailored to\n> something heap-like - it pretty much required that block numbers to be 1:1\n> with the actual physical on-disk location for the specific AM. So I think\n> it's pretty much required for this to be pushed down.\n>\n> Using a read stream is a few lines for something like this, so I'm not worried\n> about it. I guess we could have a default implementation for block based AMs,\n> similar what we have around table_block_parallelscan_*, but not sure it's\n> worth doing that, the complexity is much lower than in the\n> table_block_parallelscan_ case.\n\nThis makes sense.\n\nI am working on pushing streaming ANALYZE into heap AM code, and I ran\ninto a few roadblocks.\n\nIf we want ANALYZE to make the ReadStream object in heap_beginscan()\n(like the read stream implementation of heap sequential and TID range\nscans do), I don't see any way around changing the scan_begin table AM\ncallback to take a BufferAccessStrategy at the least (and perhaps also\nthe BlockSamplerData).\n\nread_stream_begin_relation() doesn't just save the\nBufferAccessStrategy in the ReadStream, it uses it to set various\nother things in the ReadStream object. 
callback_private_data (which in\nANALYZE's case is the BlockSamplerData) is simply saved in the\nReadStream, so it could be set later, but that doesn't sound very\nclean to me.\n\nAs such, it seems like a cleaner alternative would be to add a table\nAM callback for creating a read stream object that takes the\nparameters of read_stream_begin_relation(). But, perhaps it is a bit\nlate for such additions.\n\nIt also opens us up to the question of whether or not sequential scan\nshould use such a callback instead of making the read stream object in\nheap_beginscan().\n\nI am happy to write a patch that does any of the above. But, I want to\nraise these questions, because perhaps I am simply missing an obvious\nalternative solution.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 11 Apr 2024 12:19:09 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Wed, 2024-04-10 at 15:19 +0300, Alexander Korotkov wrote:\n> 1) 9bd99f4c26 comprises the reworked patch after working with notes\n> from Jeff Davis. I agree it would be better to wait for him to\n> express explicit agreement. Before reverting this, I would prefer to\n> hear his opinion.\n\nOn this particular feature, I had tried it in the past myself, and\nthere were a number of minor frustrations and I left it unfinished. I\nquickly recognized that commit c95c25f9af was too simple to work.\n\nCommit 9bd99f4c26 looked substantially better, but I was surprised to\nsee it committed so soon after the redesign. I thought a revert was\nlikely outcome, but I put it on my list of things to review more deeply\nin the next couple weeks so I could give productive feedback.\n\nIt would benefit from more discussion in v18, and I apologize for not\ngetting involved earlier when the patch still could have made it into\nv17.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 11 Apr 2024 10:11:47 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 8:11 PM Jeff Davis <[email protected]> wrote:\n> On Wed, 2024-04-10 at 15:19 +0300, Alexander Korotkov wrote:\n> > 1) 9bd99f4c26 comprises the reworked patch after working with notes\n> > from Jeff Davis. I agree it would be better to wait for him to\n> > express explicit agreement. Before reverting this, I would prefer to\n> > hear his opinion.\n>\n> On this particular feature, I had tried it in the past myself, and\n> there were a number of minor frustrations and I left it unfinished. I\n> quickly recognized that commit c95c25f9af was too simple to work.\n>\n> Commit 9bd99f4c26 looked substantially better, but I was surprised to\n> see it committed so soon after the redesign. I thought a revert was\n> likely outcome, but I put it on my list of things to review more deeply\n> in the next couple weeks so I could give productive feedback.\n\nThank you for your feedback, Jeff.\n\n> It would benefit from more discussion in v18, and I apologize for not\n> getting involved earlier when the patch still could have made it into\n> v17.\n\nI believe you don't have to apologize. It's definitely not your fault\nthat I've committed this patch in this shape.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 11 Apr 2024 20:28:56 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi!\n\nOn Thu, Apr 11, 2024 at 7:19 PM Melanie Plageman\n<[email protected]> wrote:\n> On Wed, Apr 10, 2024 at 5:21 PM Andres Freund <[email protected]> wrote:\n> > On 2024-04-10 16:50:44 -0400, Melanie Plageman wrote:\n> > > This brings up a question about the prefetching. We never had to have\n> > > this discussion for sequential scan streaming read because it didn't\n> > > (and still doesn't) do prefetching. But, if we push the streaming read\n> > > code down into the heap AM layer, it will be doing the prefetching.\n> > > So, do we remove the prefetching from acquire_sample_rows() and expect\n> > > other table AMs to implement it themselves or use the streaming read\n> > > API?\n> >\n> > The prefetching added to acquire_sample_rows was quite narrowly tailored to\n> > something heap-like - it pretty much required that block numbers to be 1:1\n> > with the actual physical on-disk location for the specific AM. So I think\n> > it's pretty much required for this to be pushed down.\n> >\n> > Using a read stream is a few lines for something like this, so I'm not worried\n> > about it. I guess we could have a default implementation for block based AMs,\n> > similar what we have around table_block_parallelscan_*, but not sure it's\n> > worth doing that, the complexity is much lower than in the\n> > table_block_parallelscan_ case.\n>\n> This makes sense.\n>\n> I am working on pushing streaming ANALYZE into heap AM code, and I ran\n> into a few roadblocks.\n>\n> If we want ANALYZE to make the ReadStream object in heap_beginscan()\n> (like the read stream implementation of heap sequential and TID range\n> scans do), I don't see any way around changing the scan_begin table AM\n> callback to take a BufferAccessStrategy at the least (and perhaps also\n> the BlockSamplerData).\n>\n> read_stream_begin_relation() doesn't just save the\n> BufferAccessStrategy in the ReadStream, it uses it to set various\n> other things in the ReadStream object. 
callback_private_data (which in\n> ANALYZE's case is the BlockSamplerData) is simply saved in the\n> ReadStream, so it could be set later, but that doesn't sound very\n> clean to me.\n>\n> As such, it seems like a cleaner alternative would be to add a table\n> AM callback for creating a read stream object that takes the\n> parameters of read_stream_begin_relation(). But, perhaps it is a bit\n> late for such additions.\n>\n> It also opens us up to the question of whether or not sequential scan\n> should use such a callback instead of making the read stream object in\n> heap_beginscan().\n>\n> I am happy to write a patch that does any of the above. But, I want to\n> raise these questions, because perhaps I am simply missing an obvious\n> alternative solution.\n\nI understand that I'm the bad guy of this release, not sure if my\nopinion counts.\n\nBut what is going on here? I hope this work is targeting pg18.\nOtherwise, do I get this right that this post feature-freeze works on\ndesigning a new API? Yes, 27bc1772fc masked the problem. But it was\ncommitted on Mar 30. So that couldn't justify why the proper API\nwasn't designed in time. Are we judging different commits with the\nsame criteria?\n\nIMHO, 041b96802e should be just reverted.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 11 Apr 2024 20:46:02 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 12:19 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Wed, Apr 10, 2024 at 5:21 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2024-04-10 16:50:44 -0400, Melanie Plageman wrote:\n> > > This brings up a question about the prefetching. We never had to have\n> > > this discussion for sequential scan streaming read because it didn't\n> > > (and still doesn't) do prefetching. But, if we push the streaming read\n> > > code down into the heap AM layer, it will be doing the prefetching.\n> > > So, do we remove the prefetching from acquire_sample_rows() and expect\n> > > other table AMs to implement it themselves or use the streaming read\n> > > API?\n> >\n> > The prefetching added to acquire_sample_rows was quite narrowly tailored to\n> > something heap-like - it pretty much required that block numbers to be 1:1\n> > with the actual physical on-disk location for the specific AM. So I think\n> > it's pretty much required for this to be pushed down.\n> >\n> > Using a read stream is a few lines for something like this, so I'm not worried\n> > about it. 
I guess we could have a default implementation for block based AMs,\n> > similar what we have around table_block_parallelscan_*, but not sure it's\n> > worth doing that, the complexity is much lower than in the\n> > table_block_parallelscan_ case.\n>\n> This makes sense.\n>\n> I am working on pushing streaming ANALYZE into heap AM code, and I ran\n> into a few roadblocks.\n>\n> If we want ANALYZE to make the ReadStream object in heap_beginscan()\n> (like the read stream implementation of heap sequential and TID range\n> scans do), I don't see any way around changing the scan_begin table AM\n> callback to take a BufferAccessStrategy at the least (and perhaps also\n> the BlockSamplerData).\n\nI will also say that, had this been 6 months ago, I would probably\nsuggest we restructure ANALYZE's table AM interface to accommodate\nread stream setup and to address a few other things I find odd about\nthe current code. For example, I think creating a scan descriptor for\nthe analyze scan in acquire_sample_rows() is quite odd. It seems like\nit would be better done in the relation_analyze callback. The\nrelation_analyze callback saves some state like the callbacks for\nacquire_sample_rows() and the Buffer Access Strategy. But at least in\nthe heap implementation, it just saves them in static variables in\nanalyze.c. It seems like it would be better to save them in a useful\ndata structure that could be accessed later. We have access to pretty\nmuch everything we need at that point (in the relation_analyze\ncallback). I also think heap's implementation of\ntable_beginscan_analyze() doesn't need most of\nheap_beginscan()/initscan(), so doing this instead of something\nANALYZE specific seems more confusing than helpful.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 11 Apr 2024 13:48:25 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 1:46 PM Alexander Korotkov <[email protected]> wrote:\n> I understand that I'm the bad guy of this release, not sure if my\n> opinion counts.\n>\n> But what is going on here? I hope this work is targeting pg18.\n> Otherwise, do I get this right that this post feature-freeze works on\n> designing a new API? Yes, 27bc1772fc masked the problem. But it was\n> committed on Mar 30. So that couldn't justify why the proper API\n> wasn't designed in time. Are we judging different commits with the\n> same criteria?\n\nI mean, Andres already said that the cleanup was needed possibly in\n17, and possibly in 18.\n\nAs far as fairness is concerned, you'll get no argument from me if you\nsay the streaming read stuff was all committed far later than it\nshould have been. I said that in the very first email I wrote on the\n\"post-feature freeze cleanup\" thread. But if you're going to argue\nthat there's no opportunity for anyone to adjust patches that were\nsideswiped by the reverts of your patches, and that if any such\nadjustments seem advisable we should just revert the sideswiped\npatches entirely, I don't agree with that, and I don't see why anyone\nwould agree with that. 
I think it's fine to have the discussion, and\nif the result of that discussion is that somebody says \"hey, we want\nto do X in 17 for reason Y,\" then we can discuss that proposal on its\nmerits, taking into account the answers to questions like \"why wasn't\nthis done before the freeze?\" and \"is that adjustment more or less\nrisky than just reverting?\" and \"how about we just leave it alone for\nnow and deal with it next release?\".\n\n> IMHO, 041b96802e should be just reverted.\n\nIMHO, it's too early to decide that, because we don't know what change\nconcretely is going to be proposed, and there has been no discussion\nof why that change, whatever it is, belongs in this release or next\nrelease.\n\nI understand that you're probably not feeling great about being asked\nto revert a bunch of stuff here, and I do think it is a fair point to\nmake that we need to be even-handed and not overreact. Just because\nyou had some patches that had some problems doesn't mean that\neverything that got touched by the reverts can or should be whacked\naround a whole bunch more post-freeze, especially since that stuff was\n*also* committed very late, in haste, way closer to feature freeze\nthan it should have been. At the same time, it's also important to\nkeep in mind that our goal here is not to punish people for being bad,\nor to reward them for being good, or really to make any moral\njudgements at all, but to produce a quality release. I'm sure that,\nwhere possible, you'd prefer to fix bugs in a patch you committed\nrather than revert the whole thing as soon as anyone finds any\nproblem. I would also prefer that, both for your patches, and for\nmine. And everyone else deserves that same consideration.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 16:26:52 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-11 20:46:02 +0300, Alexander Korotkov wrote:\n> I hope this work is targeting pg18.\n\nI think anything of the scope discussed by Melanie would be very clearly\ntargeting 18. For 17, I don't know yet whether we should revert the the\nANALYZE streaming read user (041b96802ef), just do a bit of comment polishing,\nor some other small change.\n\nOne oddity is that before 041b96802ef, the opportunities for making the\ninterface cleaner were less apparent, because c6fc50cb4028 increased the\ncoupling between analyze.c and the way the table storage works.\n\n\n> Otherwise, do I get this right that this post feature-freeze works on\n> designing a new API? Yes, 27bc1772fc masked the problem. But it was\n> committed on Mar 30.\n\nNote that there were versions of the patch that were targeting the\npre-27bc1772fc interface.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Apr 2024 14:04:16 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Fri, Apr 12, 2024 at 12:04 AM Andres Freund <[email protected]> wrote:\n> On 2024-04-11 20:46:02 +0300, Alexander Korotkov wrote:\n> > I hope this work is targeting pg18.\n>\n> I think anything of the scope discussed by Melanie would be very clearly\n> targeting 18. For 17, I don't know yet whether we should revert the the\n> ANALYZE streaming read user (041b96802ef), just do a bit of comment polishing,\n> or some other small change.\n>\n> One oddity is that before 041b96802ef, the opportunities for making the\n> interface cleaner were less apparent, because c6fc50cb4028 increased the\n> coupling between analyze.c and the way the table storage works.\n\nThank you for pointing this out about c6fc50cb4028, I've missed this.\n\n> > Otherwise, do I get this right that this post feature-freeze works on\n> > designing a new API? Yes, 27bc1772fc masked the problem. But it was\n> > committed on Mar 30.\n>\n> Note that there were versions of the patch that were targeting the\n> pre-27bc1772fc interface.\n\nSure, I've checked this before writing. It looks quite similar to the\nresult of applying my revert patch [1] to the head.\n\nLet me describe my view over the current situation.\n\n1) If we just apply my revert patch and leave c6fc50cb4028 and\n041b96802ef in the tree, then we get our table AM API narrowed. As\nyou expressed the current API requires block numbers to be 1:1 with\nthe actual physical on-disk location [2]. Not a secret I think the\ncurrent API is quite restrictive. And we're getting the ANALYZE\ninterface narrower than it was since 737a292b5de. Frankly speaking, I\ndon't think this is acceptable.\n\n2) Pushing down the read stream and prefetch to heap am is related to\ndifficulties [3], [4]. That's quite a significant piece of work to be\ndone post FF.\n\nIn token of all of the above, is the in-tree state that bad? 
(if we\nabstract the way 27bc1772fc and dd1f6b0c17 were committed).\n\nThe in-tree state provides quite a general API for analyze, supporting\neven non-block storages. There is a way to reuse existing\nacquire_sample_rows() for table AMs, which have block numbers 1:1 with\nthe actual physical on-disk location. It requires some cleanup for\ncomments and docs, but does not require us to redesing the API post\nFF.\n\nLinks.\n1. https://www.postgresql.org/message-id/CAPpHfdvuT6DnguzaV-M1UQ2whYGDojaNU%3D-%3DiHc0A7qo9HBEJw%40mail.gmail.com\n2. https://www.postgresql.org/message-id/20240410212117.mxsldz2w6htrl36v%40awork3.anarazel.de\n3. https://www.postgresql.org/message-id/CAAKRu_ZxU6hucckrT1SOJxKfyN7q-K4KU1y62GhDwLBZWG%2BROg%40mail.gmail.com\n4. https://www.postgresql.org/message-id/CAAKRu_YkphAPNbBR2jcLqnxGhDEWTKhYfLFY%3D0R_oG5LHBH7Gw%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 12 Apr 2024 01:04:03 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 11:27 PM Robert Haas <[email protected]> wrote:\n> On Thu, Apr 11, 2024 at 1:46 PM Alexander Korotkov <[email protected]> wrote:\n> > I understand that I'm the bad guy of this release, not sure if my\n> > opinion counts.\n> >\n> > But what is going on here? I hope this work is targeting pg18.\n> > Otherwise, do I get this right that this post feature-freeze works on\n> > designing a new API? Yes, 27bc1772fc masked the problem. But it was\n> > committed on Mar 30. So that couldn't justify why the proper API\n> > wasn't designed in time. Are we judging different commits with the\n> > same criteria?\n>\n> I mean, Andres already said that the cleanup was needed possibly in\n> 17, and possibly in 18.\n>\n> As far as fairness is concerned, you'll get no argument from me if you\n> say the streaming read stuff was all committed far later than it\n> should have been. I said that in the very first email I wrote on the\n> \"post-feature freeze cleanup\" thread. But if you're going to argue\n> that there's no opportunity for anyone to adjust patches that were\n> sideswiped by the reverts of your patches, and that if any such\n> adjustments seem advisable we should just revert the sideswiped\n> patches entirely, I don't agree with that, and I don't see why anyone\n> would agree with that. I think it's fine to have the discussion, and\n> if the result of that discussion is that somebody says \"hey, we want\n> to do X in 17 for reason Y,\" then we can discuss that proposal on its\n> merits, taking into account the answers to questions like \"why wasn't\n> this done before the freeze?\" and \"is that adjustment more or less\n> risky than just reverting?\" and \"how about we just leave it alone for\n> now and deal with it next release?\".\n\nI don't think 041b96802e could be sideswiped by 27bc1772fc. The \"Use\nstreaming I/O in ANALYZE\" patch has the same issue before 27bc1772fc,\nwhich was committed on Mar 30. 
So, in the worst case 27bc1772fc\nsteals a week of work. I can imagine without 27bc1772fc , a new API\ncould be proposed days before FF. This means I saved patch authors\nfrom what you name in my case \"desperate rush\". Huh!\n\n> > IMHO, 041b96802e should be just reverted.\n>\n> IMHO, it's too early to decide that, because we don't know what change\n> concretely is going to be proposed, and there has been no discussion\n> of why that change, whatever it is, belongs in this release or next\n> release.\n>\n> I understand that you're probably not feeling great about being asked\n> to revert a bunch of stuff here, and I do think it is a fair point to\n> make that we need to be even-handed and not overreact. Just because\n> you had some patches that had some problems doesn't mean that\n> everything that got touched by the reverts can or should be whacked\n> around a whole bunch more post-freeze, especially since that stuff was\n> *also* committed very late, in haste, way closer to feature freeze\n> than it should have been. At the same time, it's also important to\n> keep in mind that our goal here is not to punish people for being bad,\n> or to reward them for being good, or really to make any moral\n> judgements at all, but to produce a quality release. I'm sure that,\n> where possible, you'd prefer to fix bugs in a patch you committed\n> rather than revert the whole thing as soon as anyone finds any\n> problem. I would also prefer that, both for your patches, and for\n> mine. And everyone else deserves that same consideration.\n\nI expressed my thoughts about producing a better release without a\ndesperate rush post-FF in my reply to Andres [2].\n\nLinks.\n1. https://www.postgresql.org/message-id/CA%2BTgmobZUnJQaaGkuoeo22Sydf9%3DmX864W11yZKd6sv-53-aEQ%40mail.gmail.com\n2. https://www.postgresql.org/message-id/CAPpHfdt%2BcCj6j6cR5AyBThP6SyDf6wxAz4dU-0NdXjfpiFca7Q%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 12 Apr 2024 01:30:09 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 6:04 PM Alexander Korotkov <[email protected]> wrote:\n>\n> On Fri, Apr 12, 2024 at 12:04 AM Andres Freund <[email protected]> wrote:\n> > On 2024-04-11 20:46:02 +0300, Alexander Korotkov wrote:\n> > > I hope this work is targeting pg18.\n> >\n> > I think anything of the scope discussed by Melanie would be very clearly\n> > targeting 18. For 17, I don't know yet whether we should revert the the\n> > ANALYZE streaming read user (041b96802ef), just do a bit of comment polishing,\n> > or some other small change.\n> >\n> > One oddity is that before 041b96802ef, the opportunities for making the\n> > interface cleaner were less apparent, because c6fc50cb4028 increased the\n> > coupling between analyze.c and the way the table storage works.\n>\n> Thank you for pointing this out about c6fc50cb4028, I've missed this.\n>\n> > > Otherwise, do I get this right that this post feature-freeze works on\n> > > designing a new API? Yes, 27bc1772fc masked the problem. But it was\n> > > committed on Mar 30.\n> >\n> > Note that there were versions of the patch that were targeting the\n> > pre-27bc1772fc interface.\n>\n> Sure, I've checked this before writing. It looks quite similar to the\n> result of applying my revert patch [1] to the head.\n>\n> Let me describe my view over the current situation.\n>\n> 1) If we just apply my revert patch and leave c6fc50cb4028 and\n> 041b96802ef in the tree, then we get our table AM API narrowed. As\n> you expressed the current API requires block numbers to be 1:1 with\n> the actual physical on-disk location [2]. Not a secret I think the\n> current API is quite restrictive. And we're getting the ANALYZE\n> interface narrower than it was since 737a292b5de. Frankly speaking, I\n> don't think this is acceptable.\n>\n> 2) Pushing down the read stream and prefetch to heap am is related to\n> difficulties [3], [4]. 
That's quite a significant piece of work to be\n> done post FF.\n\nI had operated under the assumption that we needed to push the\nstreaming read code into heap AM because that is what we did for\nsequential scan, but now that I think about it, I don't see why we\nwould have to. Bilal's patch pre-27bc1772fc did not do this. But I\nthink the code in acquire_sample_rows() isn't more tied to heap AM\nafter 041b96802ef than it was before it. Are you of the opinion that\nthe code with 041b96802ef ties acquire_sample_rows() more closely to\nheap format?\n\n- Melanie\n\n\n",
"msg_date": "Fri, 12 Apr 2024 13:48:30 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi, Melanie!\n\nOn Fri, Apr 12, 2024 at 8:48 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Thu, Apr 11, 2024 at 6:04 PM Alexander Korotkov <[email protected]> wrote:\n> >\n> > On Fri, Apr 12, 2024 at 12:04 AM Andres Freund <[email protected]> wrote:\n> > > On 2024-04-11 20:46:02 +0300, Alexander Korotkov wrote:\n> > > > I hope this work is targeting pg18.\n> > >\n> > > I think anything of the scope discussed by Melanie would be very clearly\n> > > targeting 18. For 17, I don't know yet whether we should revert the the\n> > > ANALYZE streaming read user (041b96802ef), just do a bit of comment polishing,\n> > > or some other small change.\n> > >\n> > > One oddity is that before 041b96802ef, the opportunities for making the\n> > > interface cleaner were less apparent, because c6fc50cb4028 increased the\n> > > coupling between analyze.c and the way the table storage works.\n> >\n> > Thank you for pointing this out about c6fc50cb4028, I've missed this.\n> >\n> > > > Otherwise, do I get this right that this post feature-freeze works on\n> > > > designing a new API? Yes, 27bc1772fc masked the problem. But it was\n> > > > committed on Mar 30.\n> > >\n> > > Note that there were versions of the patch that were targeting the\n> > > pre-27bc1772fc interface.\n> >\n> > Sure, I've checked this before writing. It looks quite similar to the\n> > result of applying my revert patch [1] to the head.\n> >\n> > Let me describe my view over the current situation.\n> >\n> > 1) If we just apply my revert patch and leave c6fc50cb4028 and\n> > 041b96802ef in the tree, then we get our table AM API narrowed. As\n> > you expressed the current API requires block numbers to be 1:1 with\n> > the actual physical on-disk location [2]. Not a secret I think the\n> > current API is quite restrictive. And we're getting the ANALYZE\n> > interface narrower than it was since 737a292b5de. 
Frankly speaking, I\n> > don't think this is acceptable.\n> >\n> > 2) Pushing down the read stream and prefetch to heap am is related to\n> > difficulties [3], [4]. That's quite a significant piece of work to be\n> > done post FF.\n>\n> I had operated under the assumption that we needed to push the\n> streaming read code into heap AM because that is what we did for\n> sequential scan, but now that I think about it, I don't see why we\n> would have to. Bilal's patch pre-27bc1772fc did not do this. But I\n> think the code in acquire_sample_rows() isn't more tied to heap AM\n> after 041b96802ef than it was before it. Are you of the opinion that\n> the code with 041b96802ef ties acquire_sample_rows() more closely to\n> heap format?\n\nYes, I think so. Table AM API deals with TIDs and block numbers, but\ndoesn't force on what they actually mean. For example, in ZedStore\n[1], data is stored on per-column B-trees, where TID used in table AM\nis just a logical key of that B-trees. Similarly, blockNumber is a\nrange for B-trees.\n\nc6fc50cb4028 and 041b96802ef are putting to acquire_sample_rows() an\nassumption that we are sampling physical blocks as they are stored in\ndata files. That couldn't anymore be some \"logical\" block numbers\nwith meaning only table AM implementation knows. That was pointed out\nby Andres [2]. I'm not sure if ZedStore is alive, but there could be\nother table AM implementations like this, or other implementations in\ndevelopment, etc. Anyway, I don't feel good about narrowing the API,\nwhich is there from pg12.\n\nLinks.\n1. https://www.pgcon.org/events/pgcon_2020/sessions/session/44/slides/13/Zedstore-PGCon2020-Virtual.pdf\n2. https://www.postgresql.org/message-id/20240410212117.mxsldz2w6htrl36v%40awork3.anarazel.de\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 13 Apr 2024 12:28:38 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Sat, Apr 13, 2024 at 5:28 AM Alexander Korotkov <[email protected]> wrote:\n> Yes, I think so. Table AM API deals with TIDs and block numbers, but\n> doesn't force on what they actually mean. For example, in ZedStore\n> [1], data is stored on per-column B-trees, where TID used in table AM\n> is just a logical key of that B-trees. Similarly, blockNumber is a\n> range for B-trees.\n>\n> c6fc50cb4028 and 041b96802ef are putting to acquire_sample_rows() an\n> assumption that we are sampling physical blocks as they are stored in\n> data files. That couldn't anymore be some \"logical\" block numbers\n> with meaning only table AM implementation knows. That was pointed out\n> by Andres [2]. I'm not sure if ZedStore is alive, but there could be\n> other table AM implementations like this, or other implementations in\n> development, etc. Anyway, I don't feel good about narrowing the API,\n> which is there from pg12.\n\nI spent some time looking at this. I think it's valid to complain\nabout the tighter coupling, but c6fc50cb4028 is there starting in v14,\nso I don't think I understand why the situation after 041b96802ef is\nmaterially worse than what we've had for the last few releases. I\nthink it is worse in the sense that, before, you could dodge the\nproblem without defining USE_PREFETCH, and now you can't, but I don't\nthink we can regard nonphysical block numbers as a supported scenario\non that basis.\n\nBut maybe I'm not correctly understanding the situation?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Apr 2024 11:36:21 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, 15 Apr 2024 at 19:36, Robert Haas <[email protected]> wrote:\n\n> On Sat, Apr 13, 2024 at 5:28 AM Alexander Korotkov <[email protected]>\n> wrote:\n> > Yes, I think so. Table AM API deals with TIDs and block numbers, but\n> > doesn't force on what they actually mean. For example, in ZedStore\n> > [1], data is stored on per-column B-trees, where TID used in table AM\n> > is just a logical key of that B-trees. Similarly, blockNumber is a\n> > range for B-trees.\n> >\n> > c6fc50cb4028 and 041b96802ef are putting to acquire_sample_rows() an\n> > assumption that we are sampling physical blocks as they are stored in\n> > data files. That couldn't anymore be some \"logical\" block numbers\n> > with meaning only table AM implementation knows. That was pointed out\n> > by Andres [2]. I'm not sure if ZedStore is alive, but there could be\n> > other table AM implementations like this, or other implementations in\n> > development, etc. Anyway, I don't feel good about narrowing the API,\n> > which is there from pg12.\n>\n> I spent some time looking at this. I think it's valid to complain\n> about the tighter coupling, but c6fc50cb4028 is there starting in v14,\n> so I don't think I understand why the situation after 041b96802ef is\n> materially worse than what we've had for the last few releases. I\n> think it is worse in the sense that, before, you could dodge the\n> problem without defining USE_PREFETCH, and now you can't, but I don't\n> think we can regard nonphysical block numbers as a supported scenario\n> on that basis.\n>\n> But maybe I'm not correctly understanding the situation?\n>\nHi, Robert!\n\nIn my understanding, the downside of 041b96802ef is bringing read_stream*\nthings from being heap-only-related up to the level\nof acquire_sample_rows() that is not supposed to be tied to heap. 
And\nchanging *_analyze_next_block() function signature to use ReadStream\nexplicitly in the signature.\n\nRegards,\nPavel.\n\nOn Mon, 15 Apr 2024 at 19:36, Robert Haas <[email protected]> wrote:On Sat, Apr 13, 2024 at 5:28 AM Alexander Korotkov <[email protected]> wrote:\n> Yes, I think so. Table AM API deals with TIDs and block numbers, but\n> doesn't force on what they actually mean. For example, in ZedStore\n> [1], data is stored on per-column B-trees, where TID used in table AM\n> is just a logical key of that B-trees. Similarly, blockNumber is a\n> range for B-trees.\n>\n> c6fc50cb4028 and 041b96802ef are putting to acquire_sample_rows() an\n> assumption that we are sampling physical blocks as they are stored in\n> data files. That couldn't anymore be some \"logical\" block numbers\n> with meaning only table AM implementation knows. That was pointed out\n> by Andres [2]. I'm not sure if ZedStore is alive, but there could be\n> other table AM implementations like this, or other implementations in\n> development, etc. Anyway, I don't feel good about narrowing the API,\n> which is there from pg12.\n\nI spent some time looking at this. I think it's valid to complain\nabout the tighter coupling, but c6fc50cb4028 is there starting in v14,\nso I don't think I understand why the situation after 041b96802ef is\nmaterially worse than what we've had for the last few releases. I\nthink it is worse in the sense that, before, you could dodge the\nproblem without defining USE_PREFETCH, and now you can't, but I don't\nthink we can regard nonphysical block numbers as a supported scenario\non that basis.\n\nBut maybe I'm not correctly understanding the situation?Hi, Robert!In my understanding, the downside of 041b96802ef is bringing read_stream* things from being heap-only-related up to the level of acquire_sample_rows() that is not supposed to be tied to heap. And changing *_analyze_next_block() function signature to use ReadStream explicitly in the signature. Regards,Pavel.",
"msg_date": "Mon, 15 Apr 2024 20:37:39 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn Mon, 15 Apr 2024 at 18:36, Robert Haas <[email protected]> wrote:\n>\n> On Sat, Apr 13, 2024 at 5:28 AM Alexander Korotkov <[email protected]> wrote:\n> > Yes, I think so. Table AM API deals with TIDs and block numbers, but\n> > doesn't force on what they actually mean. For example, in ZedStore\n> > [1], data is stored on per-column B-trees, where TID used in table AM\n> > is just a logical key of that B-trees. Similarly, blockNumber is a\n> > range for B-trees.\n> >\n> > c6fc50cb4028 and 041b96802ef are putting to acquire_sample_rows() an\n> > assumption that we are sampling physical blocks as they are stored in\n> > data files. That couldn't anymore be some \"logical\" block numbers\n> > with meaning only table AM implementation knows. That was pointed out\n> > by Andres [2]. I'm not sure if ZedStore is alive, but there could be\n> > other table AM implementations like this, or other implementations in\n> > development, etc. Anyway, I don't feel good about narrowing the API,\n> > which is there from pg12.\n>\n> I spent some time looking at this. I think it's valid to complain\n> about the tighter coupling, but c6fc50cb4028 is there starting in v14,\n> so I don't think I understand why the situation after 041b96802ef is\n> materially worse than what we've had for the last few releases. I\n> think it is worse in the sense that, before, you could dodge the\n> problem without defining USE_PREFETCH, and now you can't, but I don't\n> think we can regard nonphysical block numbers as a supported scenario\n> on that basis.\n\nI agree with you but I did not understand one thing. If out-of-core\nAMs are used, does not all block sampling logic (BlockSampler_Init(),\nBlockSampler_Next() etc.) need to be edited as well since these\nfunctions assume block numbers are actual physical on-disk location,\nright? 
I mean if the block number is something different than the\nactual physical on-disk location, the acquire_sample_rows() function\nlooks wrong to me before c6fc50cb4028 as well.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 15 Apr 2024 19:41:19 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 15, 2024 at 12:37 PM Pavel Borisov <[email protected]> wrote:\n> In my understanding, the downside of 041b96802ef is bringing read_stream* things from being heap-only-related up to the level of acquire_sample_rows() that is not supposed to be tied to heap. And changing *_analyze_next_block() function signature to use ReadStream explicitly in the signature.\n\nI don't think that really clarifies anything. The ReadStream is\nbasically just acting as a wrapper for a stream of block numbers, and\nthe API took a BlockNumber before. So why does it make any difference?\n\nIf I understand correctly, Alexander thinks that, before 041b96802ef,\nthe block number didn't necessarily have to be the physical block\nnumber on disk, but could instead be any 32-bit quantity that the\ntable AM wanted to pack into the block number. But I don't think\nthat's true, because acquire_sample_rows() was already passing those\nblock numbers to PrefetchBuffer(), which already requires physical\nblock numbers.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Apr 2024 14:08:51 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 15, 2024 at 12:41 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> I agree with you but I did not understand one thing. If out-of-core\n> AMs are used, does not all block sampling logic (BlockSampler_Init(),\n> BlockSampler_Next() etc.) need to be edited as well since these\n> functions assume block numbers are actual physical on-disk location,\n> right? I mean if the block number is something different than the\n> actual physical on-disk location, the acquire_sample_rows() function\n> looks wrong to me before c6fc50cb4028 as well.\n\nYes, this is also a problem with trying to use non-physical block\nnumbers. We can hypothesize an AM where it works out OK in practice,\nsay because there are always exactly the same number of logical block\nnumbers as there are physical block numbers. Or, because there are\nalways more logical block numbers than physical block numbers, but for\nsome reason the table AM author doesn't care because they believe that\nin the target use case for their AM the data distribution will be\nsufficiently uniform that sampling only low-numbered blocks won't\nreally hurt anything.\n\nBut that does seem a bit strained. In practice, I suspect that table\nAMs that use logical block numbers might want to replace this line\nfrom acquire_sample_rows() with a call to a tableam method that\nreturns the number of logical blocks:\n\n totalblocks = RelationGetNumberOfBlocks(onerel);\n\nBut even that does not seem like enough, because my guess would be\nthat a lot of table AMs would end up with a sparse logical block\nspace. For instance, you might create a logical block number sequence\nthat starts at 0 and just counts up towards 2^32 and eventually either\nwraps around or errors out. Each new tuple gets the next TID that\nisn't yet used. 
Well, what's going to happen eventually in a lot of\nworkloads is that the low-numbered logical blocks are going to be\nmostly or entirely empty, and the data is going to be clustered in the\nones that are nearer to the highest logical block number that's so far\nbeen assigned. So, then, as you say, you'd want to replace the whole\nBlockSampler thing entirely.\n\nThat said, I find it a little bit hard to know what people are already\ndoing or realistically might try to do with table AMs. If somebody\nsays they have a table AM where the number of logical block numbers\nequals the number of physical block numbers (or is somewhat larger but\nin a way that doesn't really matter) and the existing block sampling\nlogic works well enough, I can't really disprove that. It puts awfully\ntight limits on what the AM can be doing, but, OK, sometimes people\nwant to develop AMs for very specific purposes. However, because of\nthe prefetching thing, I think even that fairly narrow use case was\nalready broken before 041b96802efa33d2bc9456f2ad946976b92b5ae1. So I\njust don't really see how that commit made anything worse in any way\nthat really matters.\n\nBut maybe it did. People often find extremely creative ways of working\naround the limitations of the core interfaces. I think it could be the\ncase that someone found a clever way of dodging all of these problems\nand had something that was working well enough that they were happy\nwith it, and now they can't make it work after the changes for some\nreason. If that someone is reading this thread and wants to spell that\nout, we can consider whether there's some relief that we could give to\nthat person, *especially* if they can demonstrate that they raised the\nalarm before the commit went in. 
But in the absence of that, my\ncurrent belief is that nonphysical block numbers were never a\nsupported scenario; hence, the idea that\n041b96802efa33d2bc9456f2ad946976b92b5ae1 should be reverted for\nde-supporting them ought to be rejected.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Apr 2024 15:09:47 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
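Robert's point above about sparse logical block spaces can be made concrete with a small standalone C sketch (not PostgreSQL code; the names and the toy PRNG are invented for illustration): a uniform sampler over [0, totalblocks) works fine when the block number space is dense and physical, but if tuples cluster near the highest assigned logical block number, most of the sample lands on empty blocks.

```c
/*
 * Toy sketch, not PostgreSQL code (all names invented): why uniform block
 * sampling assumes a dense, physical block number space.  ANALYZE's
 * BlockSampler draws block numbers uniformly from [0, totalblocks).  With
 * a dense space every sampled block holds data; with a sparse "logical"
 * space where tuples cluster near the highest assigned number, most of
 * the sample lands on empty blocks.
 */
#include <assert.h>
#include <stdint.h>

/* Deterministic xorshift PRNG so the sketch is reproducible. */
static uint32_t rng_state = 12345;

static uint32_t
xorshift32(void)
{
	uint32_t	x = rng_state;

	x ^= x << 13;
	x ^= x >> 17;
	x ^= x << 5;
	return rng_state = x;
}

/* Draw nsamples block numbers uniformly from [0, totalblocks). */
static void
sample_block_numbers(uint32_t totalblocks, int nsamples, uint32_t *out)
{
	for (int i = 0; i < nsamples; i++)
		out[i] = xorshift32() % totalblocks;
}

/*
 * Count sampled blocks that contain data, pretending all tuples live in
 * logical blocks >= first_used (a sparse logical block space).
 */
static int
count_nonempty(const uint32_t *blocks, int n, uint32_t first_used)
{
	int			hits = 0;

	for (int i = 0; i < n; i++)
		if (blocks[i] >= first_used)
			hits++;
	return hits;
}
```

With a dense space (data from block 0 up) every sampled block is useful; if the data sits only in blocks >= 900 of a 1000-block logical space, roughly 90% of the sample is wasted on empty blocks.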
{
"msg_contents": "On Mon, 15 Apr 2024 at 22:09, Robert Haas <[email protected]> wrote:\n\n> On Mon, Apr 15, 2024 at 12:37 PM Pavel Borisov <[email protected]>\n> wrote:\n> > In my understanding, the downside of 041b96802ef is bringing\n> read_stream* things from being heap-only-related up to the level of\n> acquire_sample_rows() that is not supposed to be tied to heap. And changing\n> *_analyze_next_block() function signature to use ReadStream explicitly in\n> the signature.\n>\n> I don't think that really clarifies anything. The ReadStream is\n> basically just acting as a wrapper for a stream of block numbers, and\n> the API took a BlockNumber before. So why does it make any difference?\n>\n> If I understand correctly, Alexander thinks that, before 041b96802ef,\n> the block number didn't necessarily have to be the physical block\n> number on disk, but could instead be any 32-bit quantity that the\n> table AM wanted to pack into the block number. But I don't think\n> that's true, because acquire_sample_rows() was already passing those\n> block numbers to PrefetchBuffer(), which already requires physical\n> block numbers.\n>\n\nHi, Robert!\n\nWhy it makes a difference looks a little bit unclear to me, I can't comment\non this. I noticed that before 041b96802ef we had a block number and block\nsampler state that tied acquire_sample_rows() to the actual block\nstructure. After we have the whole struct ReadStream which doesn't comprise\njust a wrapper for the same variables, but the state that ties\nacquire_sample_rows() to the streaming read algorithm (and heap). 
Yes, we\ndon't have other access methods other than heap implemented for analyze\nroutine, so the patch works anyway, but from the view on\nacquire_sample_rows() as a general method that is intended to have\ndifferent implementations in the future it doesn't look good.\n\nIt's my impression on 041b96802ef, please forgive me if I haven't\nunderstood something.\n\nRegards,\nPavel Borisov\nSupabase\n\n",
"msg_date": "Mon, 15 Apr 2024 23:14:01 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-15 23:14:01 +0400, Pavel Borisov wrote:\n> Why it makes a difference looks a little bit unclear to me, I can't comment\n> on this. I noticed that before 041b96802ef we had a block number and block\n> sampler state that tied acquire_sample_rows() to the actual block\n> structure.\n\nThat, and the prefetch calls actually translating the block numbers 1:1 to\nphysical locations within the underlying file.\n\nAnd before 041b96802ef they were tied much more closely by the direct calls to\nheapam added in 27bc1772fc81.\n\n\n> After we have the whole struct ReadStream which doesn't comprise just a\n> wrapper for the same variables, but the state that ties\n> acquire_sample_rows() to the streaming read algorithm (and heap).\n\nYes ... ? I don't see how that is a meaningful difference to the state as of\n27bc1772fc81. Nor fundamentally worse than the state 27bc1772fc81^, given\nthat we already issued requests for specific blocks in the file.\n\nThat said, I don't like the state after applying\nhttps://postgr.es/m/CAPpHfdvuT6DnguzaV-M1UQ2whYGDojaNU%3D-%3DiHc0A7qo9HBEJw%40mail.gmail.com\nbecause there's too much coupling. Hence talking about needing to iterate on\nthe interface in some form, earlier in the thread.\n\n\nWhat are you actually arguing for here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Apr 2024 12:47:52 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 15, 2024 at 3:47 PM Andres Freund <[email protected]> wrote:\n> That said, I don't like the state after applying\n> https://postgr.es/m/CAPpHfdvuT6DnguzaV-M1UQ2whYGDojaNU%3D-%3DiHc0A7qo9HBEJw%40mail.gmail.com\n> because there's too much coupling. Hence talking about needing to iterate on\n> the interface in some form, earlier in the thread.\n\nMmph, I can't follow what the actual state of things is here. Are we\nwaiting for Alexander to push that patch? Is he waiting for somebody\nto sign off on that patch? Do you want that patch applied, not\napplied, or applied with some set of modifications?\n\nI find the discussion of \"too much coupling\" too abstract. I want to\nget down to specific proposals for what we should change, or not\nchange.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Apr 2024 16:02:00 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-12 01:04:03 +0300, Alexander Korotkov wrote:\n> 1) If we just apply my revert patch and leave c6fc50cb4028 and\n> 041b96802ef in the tree, then we get our table AM API narrowed. As\n> you expressed the current API requires block numbers to be 1:1 with\n> the actual physical on-disk location [2]. Not a secret I think the\n> current API is quite restrictive. And we're getting the ANALYZE\n> interface narrower than it was since 737a292b5de. Frankly speaking, I\n> don't think this is acceptable.\n\nAs others already pointed out, c6fc50cb4028 was committed quite a while\nago. I'm fairly unhappy about c6fc50cb4028, fwiw, but didn't realize that\nuntil it was too late.\n\n\n> In token of all of the above, is the in-tree state that bad? (if we\n> abstract the way 27bc1772fc and dd1f6b0c17 were committed).\n\nTo me the 27bc1772fc doesn't make much sense on its own. You added calls\ndirectly to heapam internals to a file in src/backend/commands/, that just\ndoesn't make sense.\n\nLeaving that aside, I think the interface isn't good on its own:\ntable_relation_analyze() doesn't actually do anything, it just sets callbacks,\nthat then later are called from analyze.c, which doesn't at all fit to the\nname of the callback/function. I realize that this is kinda cribbed from the\nFDW code, but I don't think that is a particularly good excuse.\n\nI don't think dd1f6b0c17 improves the situation, at all. 
It sets global\nvariables to redirect how an individual acquire_sample_rows invocation\nworks:\nvoid\nblock_level_table_analyze(Relation relation,\n\t\t\t\t\t\t AcquireSampleRowsFunc *func,\n\t\t\t\t\t\t BlockNumber *totalpages,\n\t\t\t\t\t\t BufferAccessStrategy bstrategy,\n\t\t\t\t\t\t ScanAnalyzeNextBlockFunc scan_analyze_next_block_cb,\n\t\t\t\t\t\t ScanAnalyzeNextTupleFunc scan_analyze_next_tuple_cb)\n{\n\t*func = acquire_sample_rows;\n\t*totalpages = RelationGetNumberOfBlocks(relation);\n\tvac_strategy = bstrategy;\n\tscan_analyze_next_block = scan_analyze_next_block_cb;\n\tscan_analyze_next_tuple = scan_analyze_next_tuple_cb;\n}\n\nNotably it does so within the ->relation_analyze tableam callback, which does\n*NOT* actually do anything other than returning a callback. So if\n->relation_analyze() for another relation is called, the acquire_sample_rows()\nfor the earlier relation will do something different. Note that this isn't a\ntheoretical risk: acquire_inherited_sample_rows() actually collects the\nacquirefunc for all the inherited relations before calling acquirefunc.\n\nThis is honestly leaving me somewhat speechless.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Apr 2024 13:10:57 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
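The global-variable hazard Andres describes above can be reproduced in miniature. The following is a toy sketch (invented names, not PostgreSQL code): a relation_analyze-style hook merely stashes a callback in a global, so when acquirefuncs for several relations are collected first and invoked later, every invocation sees whichever callback was registered last.

```c
/*
 * Toy reproduction of the hazard described above (invented names, not
 * PostgreSQL code): a relation_analyze-style hook that merely stashes a
 * callback in a global.  If the acquirefuncs for several relations are
 * collected first and called later -- as acquire_inherited_sample_rows()
 * does -- every call sees whichever callback was registered last.
 */
#include <assert.h>

typedef int (*NextTupleFunc) (void);
typedef int (*AcquireFunc) (void);

/* Global redirection, mimicking the reverted approach. */
static NextTupleFunc scan_analyze_next_tuple;

static int
heap_next_tuple(void)
{
	return 1;					/* stand-in for the heap AM */
}

static int
other_next_tuple(void)
{
	return 2;					/* stand-in for some other AM */
}

/* The shared acquirefunc: consults the global at call time. */
static int
acquire_sample_rows(void)
{
	return scan_analyze_next_tuple();
}

/* relation_analyze(): only sets the global and returns the acquirefunc. */
static AcquireFunc
relation_analyze(NextTupleFunc cb)
{
	scan_analyze_next_tuple = cb;
	return acquire_sample_rows;
}
```

Registering the second "relation" silently changes the behavior of the first relation's acquirefunc, which is exactly the inherited-sampling breakage described above.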
{
"msg_contents": "Hi,\n\nOn 2024-04-15 16:02:00 -0400, Robert Haas wrote:\n> On Mon, Apr 15, 2024 at 3:47 PM Andres Freund <[email protected]> wrote:\n> > That said, I don't like the state after applying\n> > https://postgr.es/m/CAPpHfdvuT6DnguzaV-M1UQ2whYGDojaNU%3D-%3DiHc0A7qo9HBEJw%40mail.gmail.com\n> > because there's too much coupling. Hence talking about needing to iterate on\n> > the interface in some form, earlier in the thread.\n> \n> Mmph, I can't follow what the actual state of things is here. Are we\n> waiting for Alexander to push that patch? Is he waiting for somebody\n> to sign off on that patch?\n\nI think Alexander is arguing that we shouldn't revert 27bc1772fc & dd1f6b0c17\nin 17. I already didn't think that was an option, because I didn't like the\nadded interfaces, but now am even more certain, given how broken dd1f6b0c17\nseems to be:\nhttps://postgr.es/m/20240415201057.khoyxbwwxfgzomeo%40awork3.anarazel.de\n\n\n> Do you want that patch applied, not applied, or applied with some set of\n> modifications?\n\nI think we should apply Alexander's proposed revert and then separately\ndiscuss what we should do about 041b96802ef.\n\n\n> I find the discussion of \"too much coupling\" too abstract. I want to\n> get down to specific proposals for what we should change, or not\n> change.\n\nI think it's a bit hard to propose something concrete until we've decided\nwhether we'll revert 27bc1772fc & dd1f6b0c17.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Apr 2024 13:17:50 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 15, 2024 at 11:11 PM Andres Freund <[email protected]> wrote:\n> On 2024-04-12 01:04:03 +0300, Alexander Korotkov wrote:\n> > 1) If we just apply my revert patch and leave c6fc50cb4028 and\n> > 041b96802ef in the tree, then we get our table AM API narrowed. As\n> > you expressed the current API requires block numbers to be 1:1 with\n> > the actual physical on-disk location [2]. Not a secret I think the\n> > current API is quite restrictive. And we're getting the ANALYZE\n> > interface narrower than it was since 737a292b5de. Frankly speaking, I\n> > don't think this is acceptable.\n>\n> As others already pointed out, c6fc50cb4028 was committed quite a while\n> ago. I'm fairly unhappy about c6fc50cb4028, fwiw, but didn't realize that\n> until it was too late.\n\n+1\n\n> > In token of all of the above, is the in-tree state that bad? (if we\n> > abstract the way 27bc1772fc and dd1f6b0c17 were committed).\n>\n> To me the 27bc1772fc doesn't make much sense on its own. You added calls\n> directly to heapam internals to a file in src/backend/commands/, that just\n> doesn't make sense.\n>\n> Leaving that aside, I think the interface isn't good on its own:\n> table_relation_analyze() doesn't actually do anything, it just sets callbacks,\n> that then later are called from analyze.c, which doesn't at all fit to the\n> name of the callback/function. I realize that this is kinda cribbed from the\n> FDW code, but I don't think that is a particularly good excuse.\n>\n> I don't think dd1f6b0c17 improves the situation, at all. 
It sets global\n> variables to redirect how an individual acquire_sample_rows invocation\n> works:\n> void\n> block_level_table_analyze(Relation relation,\n> AcquireSampleRowsFunc *func,\n> BlockNumber *totalpages,\n> BufferAccessStrategy bstrategy,\n> ScanAnalyzeNextBlockFunc scan_analyze_next_block_cb,\n> ScanAnalyzeNextTupleFunc scan_analyze_next_tuple_cb)\n> {\n> *func = acquire_sample_rows;\n> *totalpages = RelationGetNumberOfBlocks(relation);\n> vac_strategy = bstrategy;\n> scan_analyze_next_block = scan_analyze_next_block_cb;\n> scan_analyze_next_tuple = scan_analyze_next_tuple_cb;\n> }\n>\n> Notably it does so within the ->relation_analyze tableam callback, which does\n> *NOT* not actually do anything other than returning a callback. So if\n> ->relation_analyze() for another relation is called, the acquire_sample_rows()\n> for the earlier relation will do something different. Note that this isn't a\n> theoretical risk, acquire_inherited_sample_rows() actually collects the\n> acquirefunc for all the inherited relations before calling acquirefunc.\n\nYou're right. No sense trying to fix this. Reverted.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 16 Apr 2024 13:33:53 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Mon, Apr 15, 2024 at 11:17 PM Andres Freund <[email protected]> wrote:\n> On 2024-04-15 16:02:00 -0400, Robert Haas wrote:\n> > Do you want that patch applied, not applied, or applied with some set of\n> > modifications?\n>\n> I think we should apply Alexander's proposed revert and then separately\n> discuss what we should do about 041b96802ef.\n\nTaking a closer look at acquire_sample_rows(), I think it would be\ngood if the table AM implementation would care about block-level (or\nwhatever-level) sampling, so that acquire_sample_rows() just fetches\ntuples one-by-one from the table AM implementation without any care about\nblocks. Possibly table_beginscan_analyze() could take an argument with the\ntarget number of tuples; then those tuples are just fetched with\ntable_scan_analyze_next_tuple(). What do you think?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 16 Apr 2024 13:52:26 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Tue, 16 Apr 2024 at 14:52, Alexander Korotkov <[email protected]>\nwrote:\n\n> On Mon, Apr 15, 2024 at 11:17 PM Andres Freund <[email protected]> wrote:\n> > On 2024-04-15 16:02:00 -0400, Robert Haas wrote:\n> > > Do you want that patch applied, not applied, or applied with some set\n> of\n> > > modifications?\n> >\n> > I think we should apply Alexander's proposed revert and then separately\n> > discuss what we should do about 041b96802ef.\n>\n> Taking a closer look at acquire_sample_rows(), I think it would be\n> good if table AM implementation would care about block-level (or\n> whatever-level) sampling. So that acquire_sample_rows() just fetches\n> tuples one-by-one from table AM implementation without any care about\n> blocks. Possible table_beginscan_analyze() could take an argument of\n> target number of tuples, then those tuples are just fetches with\n> table_scan_analyze_next_tuple(). What do you think?\n>\nHi, Alexander!\n\nI like the idea of splitting abstraction levels for:\n1. acquirefuncs (FDW or physical table)\n2. new specific row fetch functions (alike to existing\n_scan_analyze_next_tuple()), that could be AM-specific.\n\nThen scan_analyze_next_block() or another iteration algorithm would be\ncontained inside table AM implementation of _scan_analyze_next_tuple().\n\nSo, init of scan state would be inside table AM implementation of\n_beginscan_analyze(). Scan state (like BlockSamplerData or other state that\ncould be custom for AM) could be transferred from _beginscan_analyze() to\n_scan_analyze_next_tuple() by some opaque AM-specific data structure. 
If so\nwe may also need an AM-specific table_endscan_analyze to clean it up.\n\nRegards,\nPavel\n\n",
"msg_date": "Tue, 16 Apr 2024 16:10:38 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
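The shape of the interface proposed above — a beginscan-style call that knows the target tuple count, tuples pulled one by one, the sampling state opaque to analyze.c, and an endscan-style cleanup — can be sketched as follows. These names are hypothetical paraphrases of the proposal, not an actual PostgreSQL API.

```c
/*
 * Interface sketch only -- hypothetical names paraphrasing the proposal
 * above: the AM owns its sampling behind an opaque scan state created by
 * a beginscan-style call that knows the target tuple count, tuples are
 * pulled one by one, and an endscan-style call cleans the state up.
 */
#include <assert.h>
#include <stdlib.h>

typedef struct AnalyzeScanState
{
	int			targrows;		/* target sample size requested by ANALYZE */
	int			delivered;		/* stand-in for AM-private sampling state,
								 * e.g. a BlockSamplerData for block AMs */
} AnalyzeScanState;

/* table_beginscan_analyze(): the AM sets up its own sampling. */
static AnalyzeScanState *
analyze_beginscan(int targrows)
{
	AnalyzeScanState *state = malloc(sizeof(AnalyzeScanState));

	state->targrows = targrows;
	state->delivered = 0;
	return state;
}

/* table_scan_analyze_next_tuple(): returns 1 while a sample tuple exists. */
static int
analyze_next_tuple(AnalyzeScanState *state)
{
	if (state->delivered >= state->targrows)
		return 0;
	state->delivered++;
	return 1;
}

/* table_endscan_analyze(): AM-specific cleanup of the opaque state. */
static void
analyze_endscan(AnalyzeScanState *state)
{
	free(state);
}

/* acquire_sample_rows() core loop: no block numbers in sight. */
static int
acquire_sample_rows_sketch(int targrows)
{
	AnalyzeScanState *state = analyze_beginscan(targrows);
	int			numrows = 0;

	while (analyze_next_tuple(state))
		numrows++;
	analyze_endscan(state);
	return numrows;
}
```

The point of the shape is that block-level iteration (or whatever-level iteration) lives entirely behind analyze_next_tuple(), so the caller never touches block numbers.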
{
"msg_contents": "On Tue, Apr 16, 2024 at 6:52 AM Alexander Korotkov <[email protected]> wrote:\n> Taking a closer look at acquire_sample_rows(), I think it would be\n> good if table AM implementation would care about block-level (or\n> whatever-level) sampling. So that acquire_sample_rows() just fetches\n> tuples one-by-one from table AM implementation without any care about\n> blocks. Possible table_beginscan_analyze() could take an argument of\n> target number of tuples, then those tuples are just fetches with\n> table_scan_analyze_next_tuple(). What do you think?\n\nAndres is the expert here, but FWIW, that plan seems reasonable to me.\nOne downside is that every block-based tableam is going to end up with\na very similar implementation, which is kind of something I don't like\nabout the tableam API in general: if you want to make something that\nis basically heap plus a little bit of special sauce, you have to copy\na mountain of code. Right now we don't really care about that problem,\nbecause we don't have any other tableams in core, but if we ever do, I\nthink we're going to find ourselves very unhappy with that aspect of\nthings. But maybe now is not the time to start worrying. That problem\nisn't unique to analyze, and giving out-of-core tableams the\nflexibility to do what they want is better than not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Apr 2024 08:31:24 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On 2024-04-16 13:33:53 +0300, Alexander Korotkov wrote:\n> Reverted.\n\nThanks!\n\n\n",
"msg_date": "Tue, 16 Apr 2024 08:58:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-16 08:31:24 -0400, Robert Haas wrote:\n> On Tue, Apr 16, 2024 at 6:52 AM Alexander Korotkov <[email protected]> wrote:\n> > Taking a closer look at acquire_sample_rows(), I think it would be\n> > good if table AM implementation would care about block-level (or\n> > whatever-level) sampling. So that acquire_sample_rows() just fetches\n> > tuples one-by-one from table AM implementation without any care about\n> > blocks. Possible table_beginscan_analyze() could take an argument of\n> > target number of tuples, then those tuples are just fetches with\n> > table_scan_analyze_next_tuple(). What do you think?\n> \n> Andres is the expert here, but FWIW, that plan seems reasonable to me.\n> One downside is that every block-based tableam is going to end up with\n> a very similar implementation, which is kind of something I don't like\n> about the tableam API in general: if you want to make something that\n> is basically heap plus a little bit of special sauce, you have to copy\n> a mountain of code. Right now we don't really care about that problem,\n> because we don't have any other tableams in core, but if we ever do, I\n> think we're going to find ourselves very unhappy with that aspect of\n> things. But maybe now is not the time to start worrying. That problem\n> isn't unique to analyze, and giving out-of-core tableams the\n> flexibility to do what they want is better than not.\n\nI think that can partially be addressed by having more \"block oriented AM\"\nhelpers in core, like we have for table_block_parallelscan*. Doesn't work for\neverything, but should for something like analyze.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Apr 2024 08:59:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
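Andres's "block oriented AM helpers in core" idea above can be sketched as a shared driver that a block-based AM reuses by supplying a per-block callback — loosely modeled on the table_block_parallelscan_* naming, with all names here hypothetical.

```c
/*
 * Hypothetical sketch of the "block oriented AM helpers in core" idea,
 * loosely modeled on the table_block_parallelscan_* naming: a shared
 * sampling driver in core that a block-based AM reuses by supplying a
 * per-block callback, instead of copying the whole loop.
 */
#include <assert.h>

typedef struct BlockAnalyzeOps
{
	/* return 1 if block blkno was scanned, filling *ntuples */
	int			(*scan_block) (unsigned blkno, int *ntuples);
} BlockAnalyzeOps;

/* Shared helper: iterate over blocks, delegate per-block work to the AM. */
static int
table_block_analyze_sample(const BlockAnalyzeOps *ops, unsigned nblocks)
{
	int			total = 0;

	for (unsigned blkno = 0; blkno < nblocks; blkno++)
	{
		int			n = 0;

		if (ops->scan_block(blkno, &n))
			total += n;
	}
	return total;
}

/* A toy "AM" reporting 10 tuples per even-numbered block. */
static int
toy_scan_block(unsigned blkno, int *ntuples)
{
	if (blkno % 2 != 0)
		return 0;
	*ntuples = 10;
	return 1;
}
```

An AM that is "heap plus a little special sauce" would then implement only the callback, addressing the copy-a-mountain-of-code concern raised earlier in the thread.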
{
"msg_contents": "On Thu, Nov 23, 2023 at 1:43 PM Alexander Korotkov <[email protected]>\nwrote:\n\n> Hello PostgreSQL Hackers,\n>\n> I am pleased to submit a series of patches related to the Table Access\n> Method (AM) interface, which I initially announced during my talk at\n> PGCon 2023 [1]. These patches are primarily designed to support the\n> OrioleDB engine, but I believe they could be beneficial for other\n> table AM implementations as well.\n>\n> The focus of these patches is to introduce more flexibility and\n> capabilities into the Table AM interface. This is particularly\n> relevant for advanced use cases like index-organized tables,\n> alternative MVCC implementations, etc.\n>\n\nHi Alexander and great to see some action around in the table access method\ninterface.\n\nSorry for being late to the game, but wondering a few things about the\npatches, but I'll start with the first one that caught my eye.\n\n0007-Allow-table-AM-tuple_insert-method-to-return-the--v1.patch\n>\n> This allows table AM to return a native tuple slot, which is aware of\n> table AM-specific system attributes.\n>\n\nThis patch seems straightforward enough, but from reading the surrounding\ncode and trying to understand the context I am wondering a few things.\nReading the thread, I am unsure if this will go in or not, but just wanted\nto point out a concern I had. My apologies if I am raising an issue that is\nalready resolved.\n\nAFAICT, the general contract for working with table tuple slots is creating\nthem for a particular purpose, filling it in, and then passing around a\npointer to it. Since the slot is created using a \"source\" implementation,\nthe \"source\" is responsible for memory allocation and also other updates to\nthe state. 
Please correct me if I have misunderstood how this is intended\nto work, but this seems like a good API since it avoids\nunnecessary allocation and, in particular, unbounded creation of new slots\naffecting memory usage while a query is executing. For a plan you want to\nexecute, you just make sure that you have slots of the right kind in each\nplan node and there is no need to dynamically allocate more slots. If you\nwant one for the table access method, just make sure to fetch the slot\ncallbacks from the table access method and use those correctly. As a result,\nthe number of slots used during execution is bounded.\n\nAssuming that I've understood it correctly, if a TTS is then created and\npassed to tuple_insert, and it needs to return a different slot, this\nraises two questions:\n\n - As Andres pointed out: who is responsible for taking care of and\n   dealing with the cleanup of the returned slot here? Note that this is not\n   just a matter of releasing memory, there are other stateful things that\n   they might need to deal with that the TAM may have created in the slot. For\n   this, some sort of callback is needed and the tuple_insert implementation\n   needs to call that correctly.\n - The dual is the cleanup of the \"original\" slot passed in: a slot of a\n   particular kind is passed in and you need to deal with this correctly to\n   release the resources allocated by the original slot, using some sort of\n   callback.\n\nFor both these cases, the question is what cleanup function to call.\n\nIn most cases, the slot comes from a subplan and is not dynamically\nallocated, i.e., it cannot just use release() since it is reused later. 
For\nexample, for ExecScanFetch the slot ss_ScanTupleSlot is returned, which is\nthen used with tuple_insert (unless I've misread the code), which is\ntypically cleared, not released.\n\nIf clear() is used instead, and you clear this slot as part of inserting a\ntuple, you can instead clear a premature intermediate result\n(ss_ScanTupleSlot, in the example above), which can cause strange issues if\nthis result is needed later.\n\nSo, given that the dynamic allocation of new slots is unbounded within a\nquery and it is complicated to make sure that slots are\ncleared/reset/released correctly depending on context, this seems to be\nhard to get to work correctly and not risk introducing bugs. IMHO, it would\nbe preferable to have a very simple contract where you init, set, clear,\nand release the slot to avoid bugs creeping into the code, which is what\nthe PostgreSQL code mostly has now.\n\nSo, the question here is why changing the slot implementation is needed. I\ndo not know the details of OrioleDB, but this slot is immediately used\nwith ExecInsertIndexTuples() after the call in nodeModifyTable. If the need\nis to pass information from the TAM to the IAM then it might be better to\nstore this information in the execution state. Is there a case where the\ncorrect slot is not created, then fixing that location might be better.\n(I've noticed that the copyFrom code has a somewhat naïve assumption of\nwhat slot implementation should be used, but that is a separate discussion.)\n\nBest wishes,\nMats Kindahl",
"msg_date": "Fri, 24 May 2024 08:36:32 +0200",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "Hi,\n\nOn Tue, 16 Apr 2024 at 12:34, Alexander Korotkov <[email protected]> wrote:\n>\n> You're right. No sense trying to fix this. Reverted.\n\nI just noticed that this revert (commit 6377e12a) seems to have\nintroduced two comment blocks atop TableAmRoutine's\nscan_analyze_next_block, and I can't find a clear reason why these are\ntwo separate comment blocks.\nFurthermore, both comment blocks seemingly talk about different\nimplementations of a block-based analyze functionality, and I don't\nhave the time to analyze which of these comments is authoritative and\nwhich are misplaced or obsolete.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 21 Jun 2024 18:36:48 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table AM Interface Enhancements"
},
{
"msg_contents": "On Fri, Jun 21, 2024 at 7:37 PM Matthias van de Meent\n<[email protected]> wrote:\n> On Tue, 16 Apr 2024 at 12:34, Alexander Korotkov <[email protected]> wrote:\n> >\n> > You're right. No sense trying to fix this. Reverted.\n>\n> I just noticed that this revert (commit 6377e12a) seems to have\n> introduced two comment blocks atop TableAmRoutine's\n> scan_analyze_next_block, and I can't find a clear reason why these are\n> two separate comment blocks.\n> Furthermore, both comment blocks seemingly talk about different\n> implementations of a block-based analyze functionality, and I don't\n> have the time to analyze which of these comments is authorative and\n> which are misplaced or obsolete.\n\nThank you, I've just removed the first comment. It contains\nheap-specific information and has been copied here from\nheapam_scan_analyze_next_block().\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sat, 22 Jun 2024 16:18:51 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table AM Interface Enhancements"
}
] |
[
{
"msg_contents": "\n\n",
"msg_date": "Thu, 23 Nov 2023 22:48:45 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Does pg support to write a new buffer cache as an extension?"
}
] |
[
{
"msg_contents": "Hi,\n\nOver in [0] and [1] there are patches that touch on the topic of\n'natural ordering' index retrieval, and [2] also touches on the topic.\nFor those patches, I've been looking at how the planner and executor\nindicate to index AMs that they expect the output to be ordered, and\nhow this ordering should work.\nI've mostly found how it works for index_key opr constant, but I've\nyet to find a good mental model for how the planner handles indexes\nthat can expose the 'intrinsic order' of data, i.e. indexes with\n`amcanorder=true`, because there is very little (if any) real\ndocumentation on what is expected from indexes when they advertise\ncertain features, and how the executor signals to the AM that it wants\nto make use of those features.\n\nFor example, btree ignores any ordering scan keys that it is given in\nbtrescan, which seems fine for btree because the ordering of a btree\nis static (and no other order than that is expected apart from its\nreverse order), but this becomes problematic for other indexes that\ncould return ordered data but would prefer not to have to go through\nthe motions of making sure the return order is 100% correct, rather\nthan a k-sorted sequence, or just the matches to the quals (like\nGIST). Is returning index scan results in (reverse) natural order not\noptional but required with amcanorder? If it is required, why is the\nam indicator called 'canorder' instead of 'willorder', 'doesorder' or\n'isordered'?\n\nAlternatively, if an am should be using the order scan keys from\n.amrescan and natural order scans also get scan keys, is there some\nplace where I can find the selection process for ordered index AMs, and\nhow this ordering should be interpreted? 
There are no good examples available\nin core code because btree is special-cased, and there are no other\nin-tree AMs that have facilities where both `amcanorderbyop` and\n`amcanorder` are set.\n\nI did read through indexam.sgml, but that does not give a clear answer\non this distinction of 'amcanorder' having required ordered results or\nnot, nor on how to interpret amrescan's orderbys argument. I also\nlooked at planner code where it interacts with amcanorder /\namcanorderbyop, but I couldn't wrap my head around its interactions\nwith indexes, either (more specifically, the ordering part of those\nindexes) due to the complexity of the planner and the many layers that\nthe various concepts are passed through. The README in\nbackend/optimizer didn't answer this question for me, either.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/message-id/flat/EB2AF704-70FC-4D73-A97A-A7884A0381B5%40kleczek.org\n[1] https://www.postgresql.org/message-id/flat/CAH2-Wz%3DksvN_sjcnD1%2BBt-WtifRA5ok48aDYnq3pkKhxgMQpcw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/e70fa091-e338-1598-9de4-6d0ef6b693e2%40enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Nov 2023 18:16:31 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questions regarding Index AMs and natural ordering"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 9:16 AM Matthias van de Meent\n<[email protected]> wrote:\n> For example, btree ignores any ordering scan keys that it is given in\n> btrescan, which seems fine for btree because the ordering of a btree\n> is static (and no other order than that is expected apart from it's\n> reverse order), but this becomes problematic for other indexes that\n> could return ordered data but would prefer not to have to go through\n> the motions of making sure the return order is 100% correct, rather\n> than a k-sorted sequence, or just the matches to the quals (like\n> GIST). Is returning index scan results in (reverse) natural order not\n> optional but required with amcanorder? If it is required, why is the\n> am indicator called 'canorder' instead of 'willorder', 'doesorder' or\n> 'isordered'?\n\nI don't know. I have a hard time imagining an index AM that is\namcanorder=true that isn't either nbtree, or something very similar\n(so similar that it seems unlikely that anybody would actually go to\nthe trouble of implementing it from scratch).\n\nYou didn't mention support for merge joins. That's one of the defining\ncharacteristics of an amcanorder=true index AM, since an\n\"ammarkpos/amrestrpos function need only be provided if the access\nmethod supports ordered scans\". It's hard to imagine how that could\nwork with a loosely ordered index. It seems to imply that the scan\nmust always work with a simple linear order.\n\nCases where the planner uses a merge join often involve an index path\nwith an \"interesting sort order\" that \"enables\" the merge join.\nPerhaps most of the alternative plans (that were almost as cheap as\nthe merge join plan) would have had to scan the same index in the same\nway anyway, so it ends up making sense to use a merge join. The merge\njoin can get ordered results from the index \"at no extra cost\". 
(All\nof this is implicit, of course -- the actual reason why the planner\nchose the merge join plan is because it worked out to be the cheapest\nplan.)\n\nMy impression is that this structure is fairly well baked in -- which\nthe optimizer/README goes into. That is, the planner likes to think of\npaths as having an intrinsic order that merge joins can take advantage\nof -- merge joins tend to win by being \"globally optimal\" without\nbeing \"locally optimal\". So generating extra index paths that don't\nhave any intrinsic order (but can be ordered using a special kind of\nindex scan) seems like it might be awkward to integrate.\n\nI'm far from an expert on the planner, so take anything that I say\nabout it with a grain of salt.\n\n> Alternatively, if an am should be using the order scan keys from\n> .amrescan and natural order scans also get scan keys, is there some\n> place I find the selection process for ordered index AMs, and how this\n> ordering should be interepreted? There are no good examples available\n> in core code because btree is special-cased, and there are no other\n> in-tree AMs that have facilities where both `amcanorderbyop` and\n> `amcanorder` are set.\n\nThe general notion of a data type's sort order comes from its default\nbtree operator class, so the whole idea of a generic sort order is\ndeeply tied to the nbtree AM. That's why we sometimes have btree\noperator classes for types that you'd never actually want to index (at\nleast not using a btree index).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 23 Nov 2023 10:51:51 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions regarding Index AMs and natural ordering"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Thu, Nov 23, 2023 at 9:16 AM Matthias van de Meent\n> <[email protected]> wrote:\n>> Is returning index scan results in (reverse) natural order not\n>> optional but required with amcanorder? If it is required, why is the\n>> am indicator called 'canorder' instead of 'willorder', 'doesorder' or\n>> 'isordered'?\n\n> I don't know. I have a hard time imagining an index AM that is\n> amcanorder=true that isn't either nbtree, or something very similar\n> (so similar that it seems unlikely that anybody would actually go to\n> the trouble of implementing it from scratch).\n\nAgreed on that, but I don't have that hard a time imagining cases\nwhere it might be useful for btree not to guarantee ordered output.\nIIRC, it currently has to do extra pushups to ensure that behavior\nin ScalarArrayOp cases. We've not bothered to expand the planner\ninfrastructure to distinguish \"could, but doesn't\" paths for btree\nscans, but that's more about it not being a priority than because it\nwouldn't make sense. If we did put work into that, we'd probably\ngenerate multiple indexscan Paths for the same index and same index\nconditions, some of which are marked with sort ordering PathKeys and\nsome of which aren't (and have a flag that would eventually tell the\nindex AM not to bother with sorting at runtime).\n\n> The general notion of a data type's sort order comes from its default\n> btree operator class, so the whole idea of a generic sort order is\n> deeply tied to the nbtree AM.\n\nRight. There hasn't been a reason to decouple that, just as our\nnotions of hashing are tied to the hash AM. This doesn't entirely\nforeclose other AMs that handle sorted output, but it constrains\nthem to use btree's opclasses to represent the ordering.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Nov 2023 14:15:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions regarding Index AMs and natural ordering"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 11:15 AM Tom Lane <[email protected]> wrote:\n> Agreed on that, but I don't have that hard a time imagining cases\n> where it might be useful for btree not to guarantee ordered output.\n> IIRC, it currently has to do extra pushups to ensure that behavior\n> in ScalarArrayOp cases. We've not bothered to expand the planner\n> infrastructure to distinguish \"could, but doesn't\" paths for btree\n> scans, but that's more about it not being a priority than because it\n> wouldn't make sense.\n\nI'm glad that you brought that up, because it's an important issue for\nmy ScalarArrayOp patch (which Matthias referenced). My patch simply\nremoves this restriction from the planner -- now ScalarArrayOp clauses\naren't treated as a special case when generating path keys. This works\nin all cases because the patch generalizes nbtree's approach to native\nScalarArrayOp execution, allowing index scans to work as one\ncontinuous index scan in all cases.\n\nAs you might recall, the test case that exercises the issue is:\n\nSELECT thousand, tenthous FROM tenk1\nWHERE thousand < 2 AND tenthous IN (1001,3000)\nORDER BY thousand;\n\nIt doesn't actually make much sense to execute this as two primitive\nindex scans, though. The most efficient approach is to perform a\nsingle index descent, while still being able to use a true index qual\nfor \"tenthous\" (since using a filter qual is so much slower due to the\noverhead of accessing the heap just to filter out non-matching\ntuples). That's what the patch does.\n\nThis would be true even without the \"ORDER BY\" -- accessing the leaf\npage once is faster than accessing it twice (same goes for the root).\nSo I see no principled reason why we'd ever really want to start a\nprimitive index scan that wasn't \"anchored to an equality constraint\".\nThis is reliably faster, while also preserving index sort order,\nalmost as a side-effect.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 23 Nov 2023 11:45:07 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions regarding Index AMs and natural ordering"
},
{
"msg_contents": "On Thu, 23 Nov 2023 at 19:52, Peter Geoghegan <[email protected]> wrote:\n>\n> On Thu, Nov 23, 2023 at 9:16 AM Matthias van de Meent\n> <[email protected]> wrote:\n> > For example, btree ignores any ordering scan keys that it is given in\n> > btrescan, which seems fine for btree because the ordering of a btree\n> > is static (and no other order than that is expected apart from it's\n> > reverse order), but this becomes problematic for other indexes that\n> > could return ordered data but would prefer not to have to go through\n> > the motions of making sure the return order is 100% correct, rather\n> > than a k-sorted sequence, or just the matches to the quals (like\n> > GIST). Is returning index scan results in (reverse) natural order not\n> > optional but required with amcanorder? If it is required, why is the\n> > am indicator called 'canorder' instead of 'willorder', 'doesorder' or\n> > 'isordered'?\n>\n> I don't know. I have a hard time imagining an index AM that is\n> amcanorder=true that isn't either nbtree, or something very similar\n> (so similar that it seems unlikely that anybody would actually go to\n> the trouble of implementing it from scratch).\n\nWell, BRIN (with minmax opclasses) could return ordered results if it\nneeds to (see [0]; though that implements it as a distinct plan node).\nOrdering the tuples correctly takes quite some effort, but is quite\nlikely to use less effort and/or scratch space than a table/bitmap\nscan + sort, because we won't have to manage all tuples of the table\nat the same time. 
However, it would be extremely expensive if the\nplanner expects this to always return the data in btree order.\n\nFor GIST with the btree_gist opclasses it is even easier to return\nordered results (patch over at [1]), but even then it prefers not to\nhave to make a strict ordering as it adds overhead vs 'normal' index\nscans.\n\nAlso, was that a confirmation that amcanorder is a requirement for the\nAM to return data in index order (unless amrescan's orderbys is not\nnull), or just a comment on the reason for the name of 'amcanorder'\nbeing unclear?\n\n> You didn't mention support for merge joins. That's one of the defining\n> characteristics of an amcanorder=true index AM, since an\n> \"ammarkpos/amrestrpos function need only be provided if the access\n> method supports ordered scans\". It's hard to imagine how that could\n> work with a loosely ordered index. It seems to imply that the scan\n> must always work with a simple linear order.\n\nI probably didn't think of merge join support because 'merge join' is\nnot mentioned as such in the index AM API - I knew of\nammarkpos/amrestrpos, but hadn't yet gone into detail about what they're\nused for.\n\n> Cases where the planner uses a merge join often involve an index path\n> with an \"interesting sort order\" that \"enables\" the merge join.\n> Perhaps most of the alternative plans (that were almost as cheap as\n> the merge join plan) would have had to scan the same index in the same\n> way anyway, so it ends up making sense to use a merge join. The merge\n> join can get ordered results from the index \"at no extra cost\". (All\n> of this is implicit, of course -- the actual reason why the planner\n> chose the merge join plan is because it worked out to be the cheapest\n> plan.)\n\nCouldn't the merge join (or scan node) use a tuple store to return to\nsome earlier point in the index scan when a native version of markpos\nis not easily supported? 
It would add (potentially very significant)\nIO overhead, but it would also allow merge joins on ordered paths that\ncurrently don't have a natural way of marking their position.\n\n> > Alternatively, if an am should be using the order scan keys from\n> > .amrescan and natural order scans also get scan keys, is there some\n> > place I find the selection process for ordered index AMs, and how this\n> > ordering should be interepreted? There are no good examples available\n> > in core code because btree is special-cased, and there are no other\n> > in-tree AMs that have facilities where both `amcanorderbyop` and\n> > `amcanorder` are set.\n>\n> The general notion of a data type's sort order comes from its default\n> btree operator class, so the whole idea of a generic sort order is\n> deeply tied to the nbtree AM. That's why we sometimes have btree\n> operator classes for types that you'd never actually want to index (at\n> least not using a btree index).\n\nYes, the part where btree opclasses determine a type's ordering is\nclear. But what I'm looking for is \"how do I, as an index AM\nimplementation, get the signal that I need to return column-ordered\ndata?\" If that signal is \"index AM marked amcanorder == index must\nreturn ordered data\", then that's suboptimal for the index AM writer,\nbut understandable. If amcanorder does not imply always ordered\nretrieval, then I'd like to know how it is signaled to the AM. But as\nof yet I've not found a clear and conclusive answer either way.\n\nKind regards,\n\nMatthias van de Meent.\n\n[0] https://www.postgresql.org/message-id/flat/e70fa091-e338-1598-9de4-6d0ef6b693e2%40enterprisedb.com\n[1] https://www.postgresql.org/message-id/flat/EB2AF704-70FC-4D73-A97A-A7884A0381B5%40kleczek.org\n\n\n",
"msg_date": "Fri, 24 Nov 2023 17:44:42 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions regarding Index AMs and natural ordering"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 8:44 AM Matthias van de Meent\n<[email protected]> wrote:\n> Also, was that a confirmation that amcanorder is a requirement for the\n> AM to return data in index order (unless amrescan's orderbys is not\n> null), or just a comment on the reason for the name of 'amcanorder'\n> being unclear?\n\nIt was both, I suppose.\n\n> > Cases where the planner uses a merge join often involve an index path\n> > with an \"interesting sort order\" that \"enables\" the merge join.\n> > Perhaps most of the alternative plans (that were almost as cheap as\n> > the merge join plan) would have had to scan the same index in the same\n> > way anyway, so it ends up making sense to use a merge join. The merge\n> > join can get ordered results from the index \"at no extra cost\". (All\n> > of this is implicit, of course -- the actual reason why the planner\n> > chose the merge join plan is because it worked out to be the cheapest\n> > plan.)\n>\n> Couldn't the merge join (or scan node) use a tuple store to return to\n> some earlier point in the index scan when a native version of markpos\n> is not easily supported?\n\nYou can materialize any executor node, allowing it to be accessed in\nrandom order, as required by merge joins (in many cases). But any\nindex AM that relies on that clearly isn't an amcanorder=true index\nAM, which is what you asked about.\n\nWhether or not you should actually care about whether your index AM\ncan meet the expectations that the API has for amcanorder=true index\nAMs is far from clear. In the end the design has to make sense, and\nintegrate into the existing API in some way or other -- but the\ndetails are likely to depend on things that nobody thought of just\nyet. 
I don't think that it's all that useful to discuss it while\neverything remains so abstract.\n\n> It would add (potentially very significant)\n> IO overhead, but it would also allow merge joins on ordered paths that\n> currently don't have a natural way of marking their position.\n\nI don't know. Maybe it's possible. It might even be practically\nachievable, while delivering a compelling benefit to users. This is\nthe kind of thing that I don't tend to speculate about, at least not\nbefore I have more concrete information about what is possible through\nsome kind of prototyping process.\n\n> Yes, the part where btree opclasses determine a type's ordering is\n> clear. But what I'm looking for is \"how do I, as an index AM\n> implementation, get the signal that I need to return column-ordered\n> data?\" If that signal is \"index AM marked amcanorder == index must\n> return ordered data\", then that's suboptimal for the index AM writer,\n> but understandable. If amcanorder does not imply always ordered\n> retrieval, then I'd like to know how it is signaled to the AM. But as\n> of yet I've not found a clear and conclusive answer either way.\n\nI suppose that amcanorder=true cannot mean that, since we have the\nSAOP path key thing (at least for now). But that wasn't true until bug\nfix commit 807a40c5, so prior to that point I guess it was tacitly the\ncase that B-Tree scans always returned ordered results.\n\nHere we are trying to divine the intentions of an \"abstract\" API by\ndiscussing highly obscure bugs that were either bugs in nbtree, or\nbugs in what we once expected of nbtree. Seems more than a bit silly\nto me.\n\nI'm not suggesting that the idea of an abstract API doesn't need to be\ntaken seriously at all -- far from it. Just that \"case law precedent\"\ncan play an important role in how it is interpreted. The fact that\nsome things remain ambiguous isn't necessarily a problem that needs to\nbe solved. 
If there is a real problem, then IMV it should always start\nwith a concrete complaint about how the API doesn't support certain\nrequirements. In other words, I'm of the opinion that such a complaint\nshould end with the API, as opposed to starting with it.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 24 Nov 2023 10:12:35 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions regarding Index AMs and natural ordering"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Fri, Nov 24, 2023 at 8:44 AM Matthias van de Meent\n> <[email protected]> wrote:\n>> Yes, the part where btree opclasses determine a type's ordering is\n>> clear. But what I'm looking for is \"how do I, as an index AM\n>> implementation, get the signal that I need to return column-ordered\n>> data?\" If that signal is \"index AM marked amcanorder == index must\n>> return ordered data\", then that's suboptimal for the index AM writer,\n>> but understandable. If amcanorder does not imply always ordered\n>> retrieval, then I'd like to know how it is signaled to the AM. But as\n>> of yet I've not found a clear and conclusive answer either way.\n\n> I suppose that amcanorder=true cannot mean that, since we have the\n> SAOP path key thing (at least for now).\n\nAs things stand, amcanorder definitely means that the index always\nreturns ordered data, since the planner will unconditionally apply\npathkeys to the indexscan Paths generated for it (see plancat.c's\nget_relation_info which sets up info->sortopfamily, and\nbuild_index_pathkeys which uses that). We could reconsider that\ndefinition if there were a reason to, but so far it hasn't seemed\ninteresting. The hack in 807a40c5 is a hack, without a doubt, but\nthat doesn't necessarily mean we should spend time on generalizing it,\nand even less that we should have done so in 2012. Maybe by now there\nare non-core index AMs that have cases where it's worth being pickier.\nWe'd have to invent some API that allows the index AM to have a say in\nwhat pathkeys get applied.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Nov 2023 13:58:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions regarding Index AMs and natural ordering"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 10:58 AM Tom Lane <[email protected]> wrote:\n> Peter Geoghegan <[email protected]> writes:\n> > I suppose that amcanorder=true cannot mean that, since we have the\n> > SAOP path key thing (at least for now).\n>\n> As things stand, amcanorder definitely means that the index always\n> returns ordered data, since the planner will unconditionally apply\n> pathkeys to the indexscan Paths generated for it (see plancat.c's\n> get_relation_info which sets up info->sortopfamily, and\n> build_index_pathkeys which uses that). We could reconsider that\n> definition if there were a reason to, but so far it hasn't seemed\n> interesting. The hack in 807a40c5 is a hack, without a doubt, but\n> that doesn't necessarily mean we should spend time on generalizing it,\n> and even less that we should have done so in 2012.\n\nThat is certainly my preferred interpretation. Not least because I am\nin the process of removing the hack completely, which has shown very\nsignificant benefits for queries with SAOPs that now get to depend on\nthe sort order in some crucial way.\n\n> Maybe by now there\n> are non-core index AMs that have cases where it's worth being pickier.\n> We'd have to invent some API that allows the index AM to have a say in\n> what pathkeys get applied.\n\nMatthias and I recently discussed a design sketched by James Coleman\nsome years back, which Matthias seemed particularly interested in:\n\nhttps://www.postgresql.org/message-id/flat/CAAaqYe-SsHgXKXPpjn7WCTUnB_RQSxXjpSaJd32aA%3DRquv0AgQ%40mail.gmail.com\n\nJames' test case benefits from my own patch in the obvious way: it can\nuse SAOP index quals, while still being able to get an ordered scan\nthat terminates early via a LIMIT. But the design he sketched\nproposes to go much further than that -- it's far more targeted. His\ndesign reconstructs a useful sort order by \"multiplexing\" different\nparts of the index (for different SAOP constants), merging together\nmultiple streams of ordered tuples into one stream. This means that\nthe index produces tuples in a useful order, sufficient to terminate\nthe scan early -- but it's a sort order that doesn't match the index\nat all. Obviously, that's a totally novel idea.\n\nIt's possible that something like that might just make sense -- it's\ntheoretically optimal, at least. My guess is that if it really did\nhappen then the planner would treat it like the special case that it\nis. It very much reminds me of loose index scan, where the index AM\nAPI has to be revised so that high level semantic information can be\npushed down into the index AM.\n\nIf something like that can offer stellar performance, that just isn't\nachievable in any other way, then I guess it's worth accepting the\ncost of an uglified index AM API. Whether or not such a feature really\ncan be that compelling likely depends on a myriad of factors that we\ncannot possibly anticipate ahead of time. There are just too many\nthings in this general area that *might* make sense someday.\n\nAs we discussed back in 2022, I think that MDAM style \"skip scan\"\n(meaning support for skipping around an index given a query with\n\"missing key predicates\") shouldn't be a special case in the planner\n-- only costing needs to know anything about skipping. In general I\nfind that it's most useful to classify all of these techniques\naccording to whether or not they are compatible with the orthodox\ndefinition of amcanorder that you described. In other words, to\nclassify them based on whether they involve pushing down high level\nsemantic information to the index AM.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 24 Nov 2023 11:43:36 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions regarding Index AMs and natural ordering"
},
{
"msg_contents": "On Fri, 24 Nov 2023, 19:58 Tom Lane, <[email protected]> wrote:\n>\n> Peter Geoghegan <[email protected]> writes:\n> > On Fri, Nov 24, 2023 at 8:44 AM Matthias van de Meent\n> > <[email protected]> wrote:\n> >> Yes, the part where btree opclasses determine a type's ordering is\n> >> clear. But what I'm looking for is \"how do I, as an index AM\n> >> implementation, get the signal that I need to return column-ordered\n> >> data?\" If that signal is \"index AM marked amcanorder == index must\n> >> return ordered data\", then that's suboptimal for the index AM writer,\n> >> but understandable. If amcanorder does not imply always ordered\n> >> retrieval, then I'd like to know how it is signaled to the AM. But as\n> >> of yet I've not found a clear and conclusive answer either way.\n>\n> > I suppose that amcanorder=true cannot mean that, since we have the\n> > SAOP path key thing (at least for now).\n>\n> As things stand, amcanorder definitely means that the index always\n> returns ordered data, since the planner will unconditionally apply\n> pathkeys to the indexscan Paths generated for it (see plancat.c's\n> get_relation_info which sets up info->sortopfamily, and\n> build_index_pathkeys which uses that). We could reconsider that\n> definition if there were a reason to, but so far it hasn't seemed\n> interesting.\n\nFor GIST there is now a case for improving the support for optionally\nordered retrieval, as there is a patch that tries to hack ORDER BY\nsupport into GIST. Right now that patch applies (what I consider) a\nhack by implicitly adding an operator ordering clause for ORDER BY\ncolumn with the column type's btree ordering operator, but with\nimproved APIs that shouldn't need such a hacky approach.\n\n> The hack in 807a40c5 is a hack, without a doubt, but\n> that doesn't necessarily mean we should spend time on generalizing it,\n> and even less that we should have done so in 2012. Maybe by now there\n> are non-core index AMs that have cases where it's worth being pickier.\n> We'd have to invent some API that allows the index AM to have a say in\n> what pathkeys get applied.\n\nI think that would be quite useful, as it would allow indexes to\nreturn ordered results in other orders than the defined key order, and\nit would allow e.g. BRIN to run its sort for ordered retrieval inside\nthe index scan node (rather than requiring its own sort node type).\n\nCC: Tomas, maybe you have some ideas about this as well? What was the\nreason for moving BRIN-assisted sort into its own node? Was there more\nto it than \"BRIN currently doesn't have amgettuple, and amgettuple\ncan't always be used\"?\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 27 Nov 2023 15:55:08 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions regarding Index AMs and natural ordering"
}
]