[ { "msg_contents": "Hello hackers,\n\nPlease find the proposal for Conflict Detection and Resolution (CDR)\nfor Logical replication.\n<Thanks to Nisha, Hou-San, and Amit who helped in figuring out the\nbelow details.>\n\nIntroduction\n================\nIn case the node is subscribed to multiple providers, or when local\nwrites happen on a subscriber, conflicts can arise for the incoming\nchanges. CDR is the mechanism to automatically detect and resolve\nthese conflicts depending on the application and configurations.\nCDR is not applicable for the initial table sync. If locally, there\nexists conflicting data on the table, the table sync worker will fail.\nPlease find the details on CDR in apply worker for INSERT, UPDATE and\nDELETE operations:\n\nINSERT\n================\nTo resolve INSERT conflict on subscriber, it is important to find out\nthe conflicting row (if any) before we attempt an insertion. The\nindexes or search preference for the same will be:\nFirst check for replica identity (RI) index.\n - if not found, check for the primary key (PK) index.\n - if not found, then check for unique indexes (individual ones or\nadded by unique constraints)\n - if unique index also not found, skip CDR\n\nNote: if no RI index, PK, or unique index is found but\nREPLICA_IDENTITY_FULL is defined, CDR will still be skipped.\nThe reason being that even though a row can be identified with\nREPLICAT_IDENTITY_FULL, such tables are allowed to have duplicate\nrows. Hence, we should not go for conflict detection in such a case.\n\nIn case of replica identity ‘nothing’ and in absence of any suitable\nindex (as defined above), CDR will be skipped for INSERT.\n\nConflict Type:\n----------------\ninsert_exists: A conflict is detected when the table has the same\nvalue for a key column as the new value in the incoming row.\n\nConflict Resolution\n----------------\na) latest_timestamp_wins: The change with later commit timestamp wins.\nb) earliest_timestamp_wins: The change with earlier commit timestamp wins.\nc) apply: Always apply the remote change.\nd) skip: Remote change is skipped.\ne) error: Error out on conflict. Replication is stopped, manual\naction is needed.\n\nThe change will be converted to 'UPDATE' and applied if the decision\nis in favor of applying remote change.\n\nIt is important to have commit timestamp info available on subscriber\nwhen latest_timestamp_wins or earliest_timestamp_wins method is chosen\nas resolution method. Thus ‘track_commit_timestamp’ must be enabled\non subscriber, in absence of which, configuring the said\ntimestamp-based resolution methods will result in error.\n\nNote: If the user has chosen the latest or earliest_timestamp_wins,\nand the remote and local timestamps are the same, then it will go by\nsystem identifier. The change with a higher system identifier will\nwin. This will ensure that the same change is picked on all the nodes.\n\n UPDATE\n================\n\nConflict Detection Method:\n--------------------------------\nOrigin conflict detection: The ‘origin’ info is used to detect\nconflict which can be obtained from commit-timestamp generated for\nincoming txn at the source node. To compare remote’s origin with the\nlocal’s origin, we must have origin information for local txns as well\nwhich can be obtained from commit-timestamp after enabling\n‘track_commit_timestamp’ locally.\nThe one drawback here is the ‘origin’ information cannot be obtained\nonce the row is frozen and the commit-timestamp info is removed by\nvacuum. 
For a frozen row, conflicts cannot be raised, and thus the\nincoming changes will be applied in all the cases.\n\nConflict Types:\n----------------\na) update_differ: The origin of an incoming update's key row differs\nfrom the local row i.e.; the row has already been updated locally or\nby different nodes.\nb) update_missing: The row with the same value as that incoming\nupdate's key does not exist. Remote is trying to update a row which\ndoes not exist locally.\nc) update_deleted: The row with the same value as that incoming\nupdate's key does not exist. The row is already deleted. This conflict\ntype is generated only if the deleted row is still detectable i.e., it\nis not removed by VACUUM yet. If the row is removed by VACUUM already,\nit cannot detect this conflict. It will detect it as update_missing\nand will follow the default or configured resolver of update_missing\nitself.\n\n Conflict Resolutions:\n----------------\na) latest_timestamp_wins: The change with later commit timestamp\nwins. Can be used for ‘update_differ’.\nb) earliest_timestamp_wins: The change with earlier commit\ntimestamp wins. Can be used for ‘update_differ’.\nc) apply: The remote change is always applied. Can be used for\n‘update_differ’.\nd) apply_or_skip: Remote change is converted to INSERT and is\napplied. If the complete row cannot be constructed from the info\nprovided by the publisher, then the change is skipped. Can be used for\n‘update_missing’ or ‘update_deleted’.\ne) apply_or_error: Remote change is converted to INSERT and is\napplied. If the complete row cannot be constructed from the info\nprovided by the publisher, then error is raised. Can be used for\n‘update_missing’ or ‘update_deleted’.\nf) skip: Remote change is skipped and local one is retained. Can be\nused for any conflict type.\ng) error: Error out of conflict. Replication is stopped, manual\naction is needed. Can be used for any conflict type.\n\n To support UPDATE CDR, the presence of either replica identity Index\nor primary key is required on target node. Update CDR will not be\nsupported in absence of replica identity index or primary key even\nthough REPLICA IDENTITY FULL is set. Please refer to \"UPDATE\" in\n\"Noteworthey Scenarios\" section in [1] for further details.\n\nDELETE\n================\nConflict Type:\n----------------\ndelete_missing: An incoming delete is trying to delete a row on a\ntarget node which does not exist.\n\nConflict Resolutions:\n----------------\na) error : Error out on conflict. Replication is stopped, manual\naction is needed.\nb) skip : The remote change is skipped.\n\nConfiguring Conflict Resolution:\n------------------------------------------------\nThere are two parts when it comes to configuring CDR:\n\na) Enabling/Disabling conflict detection.\nb) Configuring conflict resolvers for different conflict types.\n\n Users can sometimes create multiple subscriptions on the same node,\nsubscribing to different tables to improve replication performance by\nstarting multiple apply workers. If the tables in one subscription are\nless likely to cause conflict, then it is possible that user may want\nconflict detection disabled for that subscription to avoid detection\nlatency while enabling it for other subscriptions. This generates a\nrequirement to make ‘conflict detection’ configurable per\nsubscription. While the conflict resolver configuration can remain\nglobal. 
All the subscriptions which opt for ‘conflict detection’ will\nfollow global conflict resolver configuration.\n\nTo implement the above, subscription commands will be changed to have\none more parameter 'conflict_resolution=on/off', default will be OFF.\n\nTo configure global resolvers, new DDL command will be introduced:\n\nCONFLICT RESOLVER ON <conflict_type> IS <conflict_resolver>\n\n-------------------------\n\nApart from the above three main operations and resolver configuration,\nthere are more conflict types like primary-key updates, multiple\nunique constraints etc and some special scenarios to be considered.\nComplete design details can be found in [1].\n\n[1]: https://wiki.postgresql.org/wiki/Conflict_Detection_and_Resolution\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 23 May 2024 12:06:51 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Conflict Detection and Resolution" }, { "msg_contents": "On 5/23/24 08:36, shveta malik wrote:\n> Hello hackers,\n> \n> Please find the proposal for Conflict Detection and Resolution (CDR)\n> for Logical replication.\n> <Thanks to Nisha, Hou-San, and Amit who helped in figuring out the\n> below details.>\n> \n> Introduction\n> ================\n> In case the node is subscribed to multiple providers, or when local\n> writes happen on a subscriber, conflicts can arise for the incoming\n> changes. CDR is the mechanism to automatically detect and resolve\n> these conflicts depending on the application and configurations.\n> CDR is not applicable for the initial table sync. If locally, there\n> exists conflicting data on the table, the table sync worker will fail.\n> Please find the details on CDR in apply worker for INSERT, UPDATE and\n> DELETE operations:\n> \n\nWhich architecture are you aiming for? Here you talk about multiple\nproviders, but the wiki page mentions active-active. I'm not sure how\nmuch this matters, but it might.\n\nAlso, what kind of consistency you expect from this? Because none of\nthese simple conflict resolution methods can give you the regular\nconsistency models we're used to, AFAICS.\n\n> INSERT\n> ================\n> To resolve INSERT conflict on subscriber, it is important to find out\n> the conflicting row (if any) before we attempt an insertion. The\n> indexes or search preference for the same will be:\n> First check for replica identity (RI) index.\n> - if not found, check for the primary key (PK) index.\n> - if not found, then check for unique indexes (individual ones or\n> added by unique constraints)\n> - if unique index also not found, skip CDR\n> \n> Note: if no RI index, PK, or unique index is found but\n> REPLICA_IDENTITY_FULL is defined, CDR will still be skipped.\n> The reason being that even though a row can be identified with\n> REPLICAT_IDENTITY_FULL, such tables are allowed to have duplicate\n> rows. Hence, we should not go for conflict detection in such a case.\n> \n\nIt's not clear to me why would REPLICA_IDENTITY_FULL mean the table is\nallowed to have duplicate values? 
It just means the upstream is sending\nthe whole original row, there can still be a PK/UNIQUE index on both the\npublisher and subscriber.\n\n> In case of replica identity ‘nothing’ and in absence of any suitable\n> index (as defined above), CDR will be skipped for INSERT.\n> \n> Conflict Type:\n> ----------------\n> insert_exists: A conflict is detected when the table has the same\n> value for a key column as the new value in the incoming row.\n> \n> Conflict Resolution\n> ----------------\n> a) latest_timestamp_wins: The change with later commit timestamp wins.\n> b) earliest_timestamp_wins: The change with earlier commit timestamp wins.\n> c) apply: Always apply the remote change.\n> d) skip: Remote change is skipped.\n> e) error: Error out on conflict. Replication is stopped, manual\n> action is needed.\n> \n\nWhy not to have some support for user-defined conflict resolution\nmethods, allowing to do more complex stuff (e.g. merging the rows in\nsome way, perhaps even with datatype-specific behavior)?\n\n> The change will be converted to 'UPDATE' and applied if the decision\n> is in favor of applying remote change.\n> \n> It is important to have commit timestamp info available on subscriber\n> when latest_timestamp_wins or earliest_timestamp_wins method is chosen\n> as resolution method. Thus ‘track_commit_timestamp’ must be enabled\n> on subscriber, in absence of which, configuring the said\n> timestamp-based resolution methods will result in error.\n> \n> Note: If the user has chosen the latest or earliest_timestamp_wins,\n> and the remote and local timestamps are the same, then it will go by\n> system identifier. The change with a higher system identifier will\n> win. This will ensure that the same change is picked on all the nodes.\n\nHow is this going to deal with the fact that commit LSN and timestamps\nmay not correlate perfectly? That is, commits may happen with LSN1 <\nLSN2 but with T1 > T2.\n\n> \n> UPDATE\n> ================\n> \n> Conflict Detection Method:\n> --------------------------------\n> Origin conflict detection: The ‘origin’ info is used to detect\n> conflict which can be obtained from commit-timestamp generated for\n> incoming txn at the source node. To compare remote’s origin with the\n> local’s origin, we must have origin information for local txns as well\n> which can be obtained from commit-timestamp after enabling\n> ‘track_commit_timestamp’ locally.\n> The one drawback here is the ‘origin’ information cannot be obtained\n> once the row is frozen and the commit-timestamp info is removed by\n> vacuum. For a frozen row, conflicts cannot be raised, and thus the\n> incoming changes will be applied in all the cases.\n> \n> Conflict Types:\n> ----------------\n> a) update_differ: The origin of an incoming update's key row differs\n> from the local row i.e.; the row has already been updated locally or\n> by different nodes.\n> b) update_missing: The row with the same value as that incoming\n> update's key does not exist. Remote is trying to update a row which\n> does not exist locally.\n> c) update_deleted: The row with the same value as that incoming\n> update's key does not exist. The row is already deleted. This conflict\n> type is generated only if the deleted row is still detectable i.e., it\n> is not removed by VACUUM yet. If the row is removed by VACUUM already,\n> it cannot detect this conflict. 
It will detect it as update_missing\n> and will follow the default or configured resolver of update_missing\n> itself.\n> \n\nI don't understand the why should update_missing or update_deleted be\ndifferent, especially considering it's not detected reliably. And also\nthat even if we happen to find the row the associated TOAST data may\nhave already been removed. So why would this matter?\n\n\n> Conflict Resolutions:\n> ----------------\n> a) latest_timestamp_wins: The change with later commit timestamp\n> wins. Can be used for ‘update_differ’.\n> b) earliest_timestamp_wins: The change with earlier commit\n> timestamp wins. Can be used for ‘update_differ’.\n> c) apply: The remote change is always applied. Can be used for\n> ‘update_differ’.\n> d) apply_or_skip: Remote change is converted to INSERT and is\n> applied. If the complete row cannot be constructed from the info\n> provided by the publisher, then the change is skipped. Can be used for\n> ‘update_missing’ or ‘update_deleted’.\n> e) apply_or_error: Remote change is converted to INSERT and is\n> applied. If the complete row cannot be constructed from the info\n> provided by the publisher, then error is raised. Can be used for\n> ‘update_missing’ or ‘update_deleted’.\n> f) skip: Remote change is skipped and local one is retained. Can be\n> used for any conflict type.\n> g) error: Error out of conflict. Replication is stopped, manual\n> action is needed. Can be used for any conflict type.\n> \n> To support UPDATE CDR, the presence of either replica identity Index\n> or primary key is required on target node. Update CDR will not be\n> supported in absence of replica identity index or primary key even\n> though REPLICA IDENTITY FULL is set. Please refer to \"UPDATE\" in\n> \"Noteworthey Scenarios\" section in [1] for further details.\n> \n> DELETE\n> ================\n> Conflict Type:\n> ----------------\n> delete_missing: An incoming delete is trying to delete a row on a\n> target node which does not exist.\n> \n> Conflict Resolutions:\n> ----------------\n> a) error : Error out on conflict. Replication is stopped, manual\n> action is needed.\n> b) skip : The remote change is skipped.\n> \n> Configuring Conflict Resolution:\n> ------------------------------------------------\n> There are two parts when it comes to configuring CDR:\n> \n> a) Enabling/Disabling conflict detection.\n> b) Configuring conflict resolvers for different conflict types.\n> \n> Users can sometimes create multiple subscriptions on the same node,\n> subscribing to different tables to improve replication performance by\n> starting multiple apply workers. If the tables in one subscription are\n> less likely to cause conflict, then it is possible that user may want\n> conflict detection disabled for that subscription to avoid detection\n> latency while enabling it for other subscriptions. This generates a\n> requirement to make ‘conflict detection’ configurable per\n> subscription. While the conflict resolver configuration can remain\n> global. All the subscriptions which opt for ‘conflict detection’ will\n> follow global conflict resolver configuration.\n> \n> To implement the above, subscription commands will be changed to have\n> one more parameter 'conflict_resolution=on/off', default will be OFF.\n> \n> To configure global resolvers, new DDL command will be introduced:\n> \n> CONFLICT RESOLVER ON <conflict_type> IS <conflict_resolver>\n> \n\nI very much doubt we want a single global conflict resolver, or even one\nresolver per subscription. 
It seems like a very table-specific thing.\n\nAlso, doesn't all this whole design ignore the concurrency between\npublishers? Isn't this problematic considering the commit timestamps may\ngo backwards (for a given publisher), which means the conflict\nresolution is not deterministic (as it depends on how exactly it\ninterleaves)?\n\n\n> -------------------------\n> \n> Apart from the above three main operations and resolver configuration,\n> there are more conflict types like primary-key updates, multiple\n> unique constraints etc and some special scenarios to be considered.\n> Complete design details can be found in [1].\n> \n> [1]: https://wiki.postgresql.org/wiki/Conflict_Detection_and_Resolution\n> \n\nHmmm, not sure it's good to have a \"complete\" design on wiki, and only\nsome subset posted to the mailing list. I haven't compared what the\ndifferences are, though.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 May 2024 23:09:18 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Sat, May 25, 2024 at 2:39 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 5/23/24 08:36, shveta malik wrote:\n> > Hello hackers,\n> >\n> > Please find the proposal for Conflict Detection and Resolution (CDR)\n> > for Logical replication.\n> > <Thanks to Nisha, Hou-San, and Amit who helped in figuring out the\n> > below details.>\n> >\n> > Introduction\n> > ================\n> > In case the node is subscribed to multiple providers, or when local\n> > writes happen on a subscriber, conflicts can arise for the incoming\n> > changes. CDR is the mechanism to automatically detect and resolve\n> > these conflicts depending on the application and configurations.\n> > CDR is not applicable for the initial table sync. If locally, there\n> > exists conflicting data on the table, the table sync worker will fail.\n> > Please find the details on CDR in apply worker for INSERT, UPDATE and\n> > DELETE operations:\n> >\n>\n> Which architecture are you aiming for? Here you talk about multiple\n> providers, but the wiki page mentions active-active. I'm not sure how\n> much this matters, but it might.\n\nCurrently, we are working for multi providers case but ideally it\nshould work for active-active also. During further discussion and\nimplementation phase, if we find that, there are cases which will not\nwork in straight-forward way for active-active, then our primary focus\nwill remain to first implement it for multiple providers architecture.\n\n>\n> Also, what kind of consistency you expect from this? Because none of\n> these simple conflict resolution methods can give you the regular\n> consistency models we're used to, AFAICS.\n\nCan you please explain a little bit more on this.\n\n>\n> > INSERT\n> > ================\n> > To resolve INSERT conflict on subscriber, it is important to find out\n> > the conflicting row (if any) before we attempt an insertion. 
The\n> > indexes or search preference for the same will be:\n> > First check for replica identity (RI) index.\n> > - if not found, check for the primary key (PK) index.\n> > - if not found, then check for unique indexes (individual ones or\n> > added by unique constraints)\n> > - if unique index also not found, skip CDR\n> >\n> > Note: if no RI index, PK, or unique index is found but\n> > REPLICA_IDENTITY_FULL is defined, CDR will still be skipped.\n> > The reason being that even though a row can be identified with\n> > REPLICAT_IDENTITY_FULL, such tables are allowed to have duplicate\n> > rows. Hence, we should not go for conflict detection in such a case.\n> >\n>\n> It's not clear to me why would REPLICA_IDENTITY_FULL mean the table is\n> allowed to have duplicate values? It just means the upstream is sending\n> the whole original row, there can still be a PK/UNIQUE index on both the\n> publisher and subscriber.\n\nYes, right. Sorry for confusion. I meant the same i.e. in absence of\n'RI index, PK, or unique index', tables can have duplicates. So even\nin presence of Replica-identity (FULL in this case) but in absence of\nunique/primary index, CDR will be skipped for INSERT.\n\n>\n> > In case of replica identity ‘nothing’ and in absence of any suitable\n> > index (as defined above), CDR will be skipped for INSERT.\n> >\n> > Conflict Type:\n> > ----------------\n> > insert_exists: A conflict is detected when the table has the same\n> > value for a key column as the new value in the incoming row.\n> >\n> > Conflict Resolution\n> > ----------------\n> > a) latest_timestamp_wins: The change with later commit timestamp wins.\n> > b) earliest_timestamp_wins: The change with earlier commit timestamp wins.\n> > c) apply: Always apply the remote change.\n> > d) skip: Remote change is skipped.\n> > e) error: Error out on conflict. Replication is stopped, manual\n> > action is needed.\n> >\n>\n> Why not to have some support for user-defined conflict resolution\n> methods, allowing to do more complex stuff (e.g. merging the rows in\n> some way, perhaps even with datatype-specific behavior)?\n\nInitially, for the sake of simplicity, we are targeting to support\nbuilt-in resolvers. But we have a plan to work on user-defined\nresolvers as well. We shall propose that separately.\n\n>\n> > The change will be converted to 'UPDATE' and applied if the decision\n> > is in favor of applying remote change.\n> >\n> > It is important to have commit timestamp info available on subscriber\n> > when latest_timestamp_wins or earliest_timestamp_wins method is chosen\n> > as resolution method. Thus ‘track_commit_timestamp’ must be enabled\n> > on subscriber, in absence of which, configuring the said\n> > timestamp-based resolution methods will result in error.\n> >\n> > Note: If the user has chosen the latest or earliest_timestamp_wins,\n> > and the remote and local timestamps are the same, then it will go by\n> > system identifier. The change with a higher system identifier will\n> > win. This will ensure that the same change is picked on all the nodes.\n>\n> How is this going to deal with the fact that commit LSN and timestamps\n> may not correlate perfectly? 
That is, commits may happen with LSN1 <\n> LSN2 but with T1 > T2.\n\nAre you pointing to the issue where a session/txn has taken\n'xactStopTimestamp' timestamp earlier but is delayed to insert record\nin XLOG, while another session/txn which has taken timestamp slightly\nlater succeeded to insert the record IN XLOG sooner than the session1,\nmaking LSN and Timestamps out of sync? Going by this scenario, the\ncommit-timestamp may not be reflective of actual commits and thus\ntimestamp-based resolvers may take wrong decisions. Or do you mean\nsomething else?\n\nIf this is the problem you are referring to, then I think this needs a\nfix at the publisher side. Let me think more about it . Kindly let me\nknow if you have ideas on how to tackle it.\n\n> >\n> > UPDATE\n> > ================\n> >\n> > Conflict Detection Method:\n> > --------------------------------\n> > Origin conflict detection: The ‘origin’ info is used to detect\n> > conflict which can be obtained from commit-timestamp generated for\n> > incoming txn at the source node. To compare remote’s origin with the\n> > local’s origin, we must have origin information for local txns as well\n> > which can be obtained from commit-timestamp after enabling\n> > ‘track_commit_timestamp’ locally.\n> > The one drawback here is the ‘origin’ information cannot be obtained\n> > once the row is frozen and the commit-timestamp info is removed by\n> > vacuum. For a frozen row, conflicts cannot be raised, and thus the\n> > incoming changes will be applied in all the cases.\n> >\n> > Conflict Types:\n> > ----------------\n> > a) update_differ: The origin of an incoming update's key row differs\n> > from the local row i.e.; the row has already been updated locally or\n> > by different nodes.\n> > b) update_missing: The row with the same value as that incoming\n> > update's key does not exist. Remote is trying to update a row which\n> > does not exist locally.\n> > c) update_deleted: The row with the same value as that incoming\n> > update's key does not exist. The row is already deleted. This conflict\n> > type is generated only if the deleted row is still detectable i.e., it\n> > is not removed by VACUUM yet. If the row is removed by VACUUM already,\n> > it cannot detect this conflict. It will detect it as update_missing\n> > and will follow the default or configured resolver of update_missing\n> > itself.\n> >\n>\n> I don't understand the why should update_missing or update_deleted be\n> different, especially considering it's not detected reliably. And also\n> that even if we happen to find the row the associated TOAST data may\n> have already been removed. So why would this matter?\n\nHere, we are trying to tackle the case where the row is 'recently'\ndeleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\nwant to opt for a different resolution in such a case as against the\none where the corresponding row was not even present in the first\nplace. The case where the row was deleted long back may not fall into\nthis category as there are higher chances that they have been removed\nby vacuum and can be considered equivalent to the update_ missing\ncase.\n\nRegarding \"TOAST column\" for deleted row cases, we may need to dig\nmore. Thanks for bringing this case. Let me analyze more here.\n\n>\n> > Conflict Resolutions:\n> > ----------------\n> > a) latest_timestamp_wins: The change with later commit timestamp\n> > wins. Can be used for ‘update_differ’.\n> > b) earliest_timestamp_wins: The change with earlier commit\n> > timestamp wins. 
Can be used for ‘update_differ’.\n> > c) apply: The remote change is always applied. Can be used for\n> > ‘update_differ’.\n> > d) apply_or_skip: Remote change is converted to INSERT and is\n> > applied. If the complete row cannot be constructed from the info\n> > provided by the publisher, then the change is skipped. Can be used for\n> > ‘update_missing’ or ‘update_deleted’.\n> > e) apply_or_error: Remote change is converted to INSERT and is\n> > applied. If the complete row cannot be constructed from the info\n> > provided by the publisher, then error is raised. Can be used for\n> > ‘update_missing’ or ‘update_deleted’.\n> > f) skip: Remote change is skipped and local one is retained. Can be\n> > used for any conflict type.\n> > g) error: Error out of conflict. Replication is stopped, manual\n> > action is needed. Can be used for any conflict type.\n> >\n> > To support UPDATE CDR, the presence of either replica identity Index\n> > or primary key is required on target node. Update CDR will not be\n> > supported in absence of replica identity index or primary key even\n> > though REPLICA IDENTITY FULL is set. Please refer to \"UPDATE\" in\n> > \"Noteworthey Scenarios\" section in [1] for further details.\n> >\n> > DELETE\n> > ================\n> > Conflict Type:\n> > ----------------\n> > delete_missing: An incoming delete is trying to delete a row on a\n> > target node which does not exist.\n> >\n> > Conflict Resolutions:\n> > ----------------\n> > a) error : Error out on conflict. Replication is stopped, manual\n> > action is needed.\n> > b) skip : The remote change is skipped.\n> >\n> > Configuring Conflict Resolution:\n> > ------------------------------------------------\n> > There are two parts when it comes to configuring CDR:\n> >\n> > a) Enabling/Disabling conflict detection.\n> > b) Configuring conflict resolvers for different conflict types.\n> >\n> > Users can sometimes create multiple subscriptions on the same node,\n> > subscribing to different tables to improve replication performance by\n> > starting multiple apply workers. If the tables in one subscription are\n> > less likely to cause conflict, then it is possible that user may want\n> > conflict detection disabled for that subscription to avoid detection\n> > latency while enabling it for other subscriptions. This generates a\n> > requirement to make ‘conflict detection’ configurable per\n> > subscription. While the conflict resolver configuration can remain\n> > global. All the subscriptions which opt for ‘conflict detection’ will\n> > follow global conflict resolver configuration.\n> >\n> > To implement the above, subscription commands will be changed to have\n> > one more parameter 'conflict_resolution=on/off', default will be OFF.\n> >\n> > To configure global resolvers, new DDL command will be introduced:\n> >\n> > CONFLICT RESOLVER ON <conflict_type> IS <conflict_resolver>\n> >\n>\n> I very much doubt we want a single global conflict resolver, or even one\n> resolver per subscription. It seems like a very table-specific thing.\n\nEven we thought about this. We feel that even if we go for table based\nor subscription based resolvers configuration, there may be use case\nscenarios where the user is not interested in configuring resolvers\nfor each table and thus may want to give global ones. Thus, we should\nprovide a way for users to do global configuration. Thus we started\nwith global one. I have noted your point here and would also like to\nknow the opinion of others. We are open to discussion. 
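Just to illustrate the two options side by side (the syntax below is only a rough sketch, not the final grammar):

-- global, as in the current proposal:
CONFLICT RESOLVER ON update_differ IS latest_timestamp_wins;

-- a possible table-level variant (purely hypothetical, would need separate design):
ALTER TABLE tbl1 CONFLICT RESOLVER ON update_differ IS skip;
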
We can either\nopt for any of these 2 options (global or table) or we can opt for\nboth global and table/sub based one.\n\n>\n> Also, doesn't all this whole design ignore the concurrency between\n> publishers? Isn't this problematic considering the commit timestamps may\n> go backwards (for a given publisher), which means the conflict\n> resolution is not deterministic (as it depends on how exactly it\n> interleaves)?\n>\n>\n> > -------------------------\n> >\n> > Apart from the above three main operations and resolver configuration,\n> > there are more conflict types like primary-key updates, multiple\n> > unique constraints etc and some special scenarios to be considered.\n> > Complete design details can be found in [1].\n> >\n> > [1]: https://wiki.postgresql.org/wiki/Conflict_Detection_and_Resolution\n> >\n>\n> Hmmm, not sure it's good to have a \"complete\" design on wiki, and only\n> some subset posted to the mailing list. I haven't compared what the\n> differences are, though.\n\nIt would have been difficult to mention all the details in email\n(including examples and corner scenarios) and thus we thought that it\nwill be better to document everything in wiki page for the time being.\nWe can keep on discussing the design and all the scenarios on need\nbasis (before implementation phase of that part) and thus eventually\neverything will come in email on hackers. With out first patch, we\nplan to provide everything in a README as well.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 27 May 2024 11:18:53 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, May 27, 2024 at 11:19 AM shveta malik <[email protected]> wrote:\n>\n> On Sat, May 25, 2024 at 2:39 AM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > On 5/23/24 08:36, shveta malik wrote:\n> > > Hello hackers,\n> > >\n> > > Please find the proposal for Conflict Detection and Resolution (CDR)\n> > > for Logical replication.\n> > > <Thanks to Nisha, Hou-San, and Amit who helped in figuring out the\n> > > below details.>\n> > >\n> > > Introduction\n> > > ================\n> > > In case the node is subscribed to multiple providers, or when local\n> > > writes happen on a subscriber, conflicts can arise for the incoming\n> > > changes. CDR is the mechanism to automatically detect and resolve\n> > > these conflicts depending on the application and configurations.\n> > > CDR is not applicable for the initial table sync. If locally, there\n> > > exists conflicting data on the table, the table sync worker will fail.\n> > > Please find the details on CDR in apply worker for INSERT, UPDATE and\n> > > DELETE operations:\n> > >\n> >\n> > Which architecture are you aiming for? Here you talk about multiple\n> > providers, but the wiki page mentions active-active. I'm not sure how\n> > much this matters, but it might.\n>\n> Currently, we are working for multi providers case but ideally it\n> should work for active-active also. During further discussion and\n> implementation phase, if we find that, there are cases which will not\n> work in straight-forward way for active-active, then our primary focus\n> will remain to first implement it for multiple providers architecture.\n>\n> >\n> > Also, what kind of consistency you expect from this? 
Because none of\n> > these simple conflict resolution methods can give you the regular\n> > consistency models we're used to, AFAICS.\n>\n> Can you please explain a little bit more on this.\n>\n> >\n> > > INSERT\n> > > ================\n> > > To resolve INSERT conflict on subscriber, it is important to find out\n> > > the conflicting row (if any) before we attempt an insertion. The\n> > > indexes or search preference for the same will be:\n> > > First check for replica identity (RI) index.\n> > > - if not found, check for the primary key (PK) index.\n> > > - if not found, then check for unique indexes (individual ones or\n> > > added by unique constraints)\n> > > - if unique index also not found, skip CDR\n> > >\n> > > Note: if no RI index, PK, or unique index is found but\n> > > REPLICA_IDENTITY_FULL is defined, CDR will still be skipped.\n> > > The reason being that even though a row can be identified with\n> > > REPLICAT_IDENTITY_FULL, such tables are allowed to have duplicate\n> > > rows. Hence, we should not go for conflict detection in such a case.\n> > >\n> >\n> > It's not clear to me why would REPLICA_IDENTITY_FULL mean the table is\n> > allowed to have duplicate values? It just means the upstream is sending\n> > the whole original row, there can still be a PK/UNIQUE index on both the\n> > publisher and subscriber.\n>\n> Yes, right. Sorry for confusion. I meant the same i.e. in absence of\n> 'RI index, PK, or unique index', tables can have duplicates. So even\n> in presence of Replica-identity (FULL in this case) but in absence of\n> unique/primary index, CDR will be skipped for INSERT.\n>\n> >\n> > > In case of replica identity ‘nothing’ and in absence of any suitable\n> > > index (as defined above), CDR will be skipped for INSERT.\n> > >\n> > > Conflict Type:\n> > > ----------------\n> > > insert_exists: A conflict is detected when the table has the same\n> > > value for a key column as the new value in the incoming row.\n> > >\n> > > Conflict Resolution\n> > > ----------------\n> > > a) latest_timestamp_wins: The change with later commit timestamp wins.\n> > > b) earliest_timestamp_wins: The change with earlier commit timestamp wins.\n> > > c) apply: Always apply the remote change.\n> > > d) skip: Remote change is skipped.\n> > > e) error: Error out on conflict. Replication is stopped, manual\n> > > action is needed.\n> > >\n> >\n> > Why not to have some support for user-defined conflict resolution\n> > methods, allowing to do more complex stuff (e.g. merging the rows in\n> > some way, perhaps even with datatype-specific behavior)?\n>\n> Initially, for the sake of simplicity, we are targeting to support\n> built-in resolvers. But we have a plan to work on user-defined\n> resolvers as well. We shall propose that separately.\n>\n> >\n> > > The change will be converted to 'UPDATE' and applied if the decision\n> > > is in favor of applying remote change.\n> > >\n> > > It is important to have commit timestamp info available on subscriber\n> > > when latest_timestamp_wins or earliest_timestamp_wins method is chosen\n> > > as resolution method. Thus ‘track_commit_timestamp’ must be enabled\n> > > on subscriber, in absence of which, configuring the said\n> > > timestamp-based resolution methods will result in error.\n> > >\n> > > Note: If the user has chosen the latest or earliest_timestamp_wins,\n> > > and the remote and local timestamps are the same, then it will go by\n> > > system identifier. The change with a higher system identifier will\n> > > win. 
This will ensure that the same change is picked on all the nodes.\n> >\n> > How is this going to deal with the fact that commit LSN and timestamps\n> > may not correlate perfectly? That is, commits may happen with LSN1 <\n> > LSN2 but with T1 > T2.\n>\n> Are you pointing to the issue where a session/txn has taken\n> 'xactStopTimestamp' timestamp earlier but is delayed to insert record\n> in XLOG, while another session/txn which has taken timestamp slightly\n> later succeeded to insert the record IN XLOG sooner than the session1,\n> making LSN and Timestamps out of sync? Going by this scenario, the\n> commit-timestamp may not be reflective of actual commits and thus\n> timestamp-based resolvers may take wrong decisions. Or do you mean\n> something else?\n>\n> If this is the problem you are referring to, then I think this needs a\n> fix at the publisher side. Let me think more about it . Kindly let me\n> know if you have ideas on how to tackle it.\n>\n> > >\n> > > UPDATE\n> > > ================\n> > >\n> > > Conflict Detection Method:\n> > > --------------------------------\n> > > Origin conflict detection: The ‘origin’ info is used to detect\n> > > conflict which can be obtained from commit-timestamp generated for\n> > > incoming txn at the source node. To compare remote’s origin with the\n> > > local’s origin, we must have origin information for local txns as well\n> > > which can be obtained from commit-timestamp after enabling\n> > > ‘track_commit_timestamp’ locally.\n> > > The one drawback here is the ‘origin’ information cannot be obtained\n> > > once the row is frozen and the commit-timestamp info is removed by\n> > > vacuum. For a frozen row, conflicts cannot be raised, and thus the\n> > > incoming changes will be applied in all the cases.\n> > >\n> > > Conflict Types:\n> > > ----------------\n> > > a) update_differ: The origin of an incoming update's key row differs\n> > > from the local row i.e.; the row has already been updated locally or\n> > > by different nodes.\n> > > b) update_missing: The row with the same value as that incoming\n> > > update's key does not exist. Remote is trying to update a row which\n> > > does not exist locally.\n> > > c) update_deleted: The row with the same value as that incoming\n> > > update's key does not exist. The row is already deleted. This conflict\n> > > type is generated only if the deleted row is still detectable i.e., it\n> > > is not removed by VACUUM yet. If the row is removed by VACUUM already,\n> > > it cannot detect this conflict. It will detect it as update_missing\n> > > and will follow the default or configured resolver of update_missing\n> > > itself.\n> > >\n> >\n> > I don't understand the why should update_missing or update_deleted be\n> > different, especially considering it's not detected reliably. And also\n> > that even if we happen to find the row the associated TOAST data may\n> > have already been removed. So why would this matter?\n>\n> Here, we are trying to tackle the case where the row is 'recently'\n> deleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\n> want to opt for a different resolution in such a case as against the\n> one where the corresponding row was not even present in the first\n> place. The case where the row was deleted long back may not fall into\n> this category as there are higher chances that they have been removed\n> by vacuum and can be considered equivalent to the update_ missing\n> case.\n>\n> Regarding \"TOAST column\" for deleted row cases, we may need to dig\n> more. 
Thanks for bringing this case. Let me analyze more here.\n>\nI tested a simple case with a table with one TOAST column and found\nthat when a tuple with a TOAST column is deleted, both the tuple and\ncorresponding pg_toast entries are marked as ‘deleted’ (dead) but not\nremoved immediately. The main tuple and respective pg_toast entry are\npermanently deleted only during vacuum. First, the main table’s dead\ntuples are vacuumed, followed by the secondary TOAST relation ones (if\navailable).\nPlease let us know if you have a specific scenario in mind where the\nTOAST column data is deleted immediately upon ‘delete’ operation,\nrather than during vacuum, which we are missing.\n\nThanks,\nNisha\n\n\n", "msg_date": "Tue, 28 May 2024 14:47:16 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Sat, May 25, 2024 at 2:39 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 5/23/24 08:36, shveta malik wrote:\n> >\n> > Conflict Resolution\n> > ----------------\n> > a) latest_timestamp_wins: The change with later commit timestamp wins.\n> > b) earliest_timestamp_wins: The change with earlier commit timestamp wins.\n> > c) apply: Always apply the remote change.\n> > d) skip: Remote change is skipped.\n> > e) error: Error out on conflict. Replication is stopped, manual\n> > action is needed.\n> >\n>\n> Why not to have some support for user-defined conflict resolution\n> methods, allowing to do more complex stuff (e.g. merging the rows in\n> some way, perhaps even with datatype-specific behavior)?\n>\n> > The change will be converted to 'UPDATE' and applied if the decision\n> > is in favor of applying remote change.\n> >\n> > It is important to have commit timestamp info available on subscriber\n> > when latest_timestamp_wins or earliest_timestamp_wins method is chosen\n> > as resolution method. Thus ‘track_commit_timestamp’ must be enabled\n> > on subscriber, in absence of which, configuring the said\n> > timestamp-based resolution methods will result in error.\n> >\n> > Note: If the user has chosen the latest or earliest_timestamp_wins,\n> > and the remote and local timestamps are the same, then it will go by\n> > system identifier. The change with a higher system identifier will\n> > win. This will ensure that the same change is picked on all the nodes.\n>\n> How is this going to deal with the fact that commit LSN and timestamps\n> may not correlate perfectly? That is, commits may happen with LSN1 <\n> LSN2 but with T1 > T2.\n>\n\nOne of the possible scenarios discussed at pgconf.dev with Tomas for\nthis was as follows:\n\nSay there are two publisher nodes PN1, PN2, and subscriber node SN3.\nThe logical replication is configured such that a subscription on SN3\nhas publications from both PN1 and PN2. For example, SN3 (sub) -> PN1,\nPN2 (p1, p2)\n\nNow, on PN1, we have the following operations that update the same row:\n\nT1\nUpdate-1 on table t1 at LSN1 (1000) on time (200)\n\nT2\nUpdate-2 on table t1 at LSN2 (2000) on time (100)\n\nThen in parallel, we have the following operation on node PN2 that\nupdates the same row as Update-1, and Update-2 on node PN1.\n\nT3\nUpdate-3 on table t1 at LSN(1500) on time (150)\n\nBy theory, we can have a different state on subscribers depending on\nthe order of updates arriving at SN3 which shouldn't happen. 
Say, the\norder in which they reach SN3 is: Update-1, Update-2, Update-3 then\nthe final row we have is by Update-3 considering we have configured\nlast_update_wins as a conflict resolution method. Now, consider the\nother order: Update-1, Update-3, Update-2, in this case, the final\nrow will be by Update-2 because when we try to apply Update-3, it will\ngenerate a conflict and as per the resolution method\n(last_update_wins) we need to retain Update-1.\n\nOn further thinking, the operations on node-1 PN-1 as defined above\nseem impossible because one of the Updates needs to wait for the other\nto write a commit record. So the commits may happen with LSN1 < LSN2\nbut with T1 > T2 but they can't be on the same row due to locks. So,\nthe order of apply should still be consistent. Am, I missing\nsomething?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 3 Jun 2024 13:00:17 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, May 27, 2024 at 11:19 AM shveta malik <[email protected]> wrote:\n>\n> On Sat, May 25, 2024 at 2:39 AM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > >\n> > > Conflict Resolution\n> > > ----------------\n> > > a) latest_timestamp_wins: The change with later commit timestamp wins.\n> > > b) earliest_timestamp_wins: The change with earlier commit timestamp wins.\n\nCan you share the use case of \"earliest_timestamp_wins\" resolution\nmethod? It seems after the initial update on the local node, it will\nnever allow remote update to succeed which sounds a bit odd. Jan has\nshared this and similar concerns about this resolution method, so I\nhave added him to the email as well.\n\n> > >\n> > > Conflict Types:\n> > > ----------------\n> > > a) update_differ: The origin of an incoming update's key row differs\n> > > from the local row i.e.; the row has already been updated locally or\n> > > by different nodes.\n> > > b) update_missing: The row with the same value as that incoming\n> > > update's key does not exist. Remote is trying to update a row which\n> > > does not exist locally.\n> > > c) update_deleted: The row with the same value as that incoming\n> > > update's key does not exist. The row is already deleted. This conflict\n> > > type is generated only if the deleted row is still detectable i.e., it\n> > > is not removed by VACUUM yet. If the row is removed by VACUUM already,\n> > > it cannot detect this conflict. It will detect it as update_missing\n> > > and will follow the default or configured resolver of update_missing\n> > > itself.\n> > >\n> >\n> > I don't understand the why should update_missing or update_deleted be\n> > different, especially considering it's not detected reliably. And also\n> > that even if we happen to find the row the associated TOAST data may\n> > have already been removed. So why would this matter?\n>\n> Here, we are trying to tackle the case where the row is 'recently'\n> deleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\n> want to opt for a different resolution in such a case as against the\n> one where the corresponding row was not even present in the first\n> place. 
The case where the row was deleted long back may not fall into\n> this category as there are higher chances that they have been removed\n> by vacuum and can be considered equivalent to the update_ missing\n> case.\n>\n\nI think to make 'update_deleted' work, we need another scan with a\ndifferent snapshot type to find the recently deleted row. I don't know\nif it is a good idea to scan the index twice with different snapshots,\nso for the sake of simplicity, can we consider 'updated_deleted' same\nas 'update_missing'? If we think it is an important case to consider\nthen we can try to accomplish this once we finalize the\ndesign/implementation of other resolution methods.\n\n> > >\n> > > To implement the above, subscription commands will be changed to have\n> > > one more parameter 'conflict_resolution=on/off', default will be OFF.\n> > >\n> > > To configure global resolvers, new DDL command will be introduced:\n> > >\n> > > CONFLICT RESOLVER ON <conflict_type> IS <conflict_resolver>\n> > >\n> >\n> > I very much doubt we want a single global conflict resolver, or even one\n> > resolver per subscription. It seems like a very table-specific thing.\n>\n\n+1 to make it a table-level configuration but we probably need\nsomething at the global level as well such that by default if users\ndon't define anything at table-level global-level configuration will\nbe used.\n\n>\n> >\n> > Also, doesn't all this whole design ignore the concurrency between\n> > publishers? Isn't this problematic considering the commit timestamps may\n> > go backwards (for a given publisher), which means the conflict\n> > resolution is not deterministic (as it depends on how exactly it\n> > interleaves)?\n> >\n\nI am not able to imagine the cases you are worried about. Can you\nplease be specific? Is it similar to the case I described in\nyesterday's email [1]?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JTMiBOoGqkt%3DaLPLU8Rs45ihbLhXaGHsz8XC76%2BOG3%2BQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 4 Jun 2024 09:37:09 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 4, 2024 at 9:37 AM Amit Kapila <[email protected]> wrote:\n>\n> > >\n> > > >\n> > > > Conflict Resolution\n> > > > ----------------\n> > > > a) latest_timestamp_wins: The change with later commit timestamp wins.\n> > > > b) earliest_timestamp_wins: The change with earlier commit timestamp wins.\n>\n> Can you share the use case of \"earliest_timestamp_wins\" resolution\n> method? It seems after the initial update on the local node, it will\n> never allow remote update to succeed which sounds a bit odd. Jan has\n> shared this and similar concerns about this resolution method, so I\n> have added him to the email as well.\n\nI do not have the exact scenario for this. But I feel, if 2 nodes are\nconcurrently inserting different data against a primary key, then some\nusers may have preferences that retain the row which was inserted\nearlier. It is no different from latest_timestamp_wins. 
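As a made-up example (the table, values and timestamps here are illustrative only):

Node A: INSERT INTO t VALUES (1, 'from A');   -- commit timestamp 10:00:01
Node B: INSERT INTO t VALUES (1, 'from B');   -- commit timestamp 10:00:05

With insert_exists resolved by earliest_timestamp_wins, both nodes end up converging on node A's row, while latest_timestamp_wins would make both converge on node B's row.
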
It totally\ndepends upon what kind of application and requirement the user may\nhave, based on which, he may discard the later coming rows (specially\nfor INSERT case).\n\n> > > > Conflict Types:\n> > > > ----------------\n> > > > a) update_differ: The origin of an incoming update's key row differs\n> > > > from the local row i.e.; the row has already been updated locally or\n> > > > by different nodes.\n> > > > b) update_missing: The row with the same value as that incoming\n> > > > update's key does not exist. Remote is trying to update a row which\n> > > > does not exist locally.\n> > > > c) update_deleted: The row with the same value as that incoming\n> > > > update's key does not exist. The row is already deleted. This conflict\n> > > > type is generated only if the deleted row is still detectable i.e., it\n> > > > is not removed by VACUUM yet. If the row is removed by VACUUM already,\n> > > > it cannot detect this conflict. It will detect it as update_missing\n> > > > and will follow the default or configured resolver of update_missing\n> > > > itself.\n> > > >\n> > >\n> > > I don't understand the why should update_missing or update_deleted be\n> > > different, especially considering it's not detected reliably. And also\n> > > that even if we happen to find the row the associated TOAST data may\n> > > have already been removed. So why would this matter?\n> >\n> > Here, we are trying to tackle the case where the row is 'recently'\n> > deleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\n> > want to opt for a different resolution in such a case as against the\n> > one where the corresponding row was not even present in the first\n> > place. The case where the row was deleted long back may not fall into\n> > this category as there are higher chances that they have been removed\n> > by vacuum and can be considered equivalent to the update_ missing\n> > case.\n> >\n>\n> I think to make 'update_deleted' work, we need another scan with a\n> different snapshot type to find the recently deleted row. I don't know\n> if it is a good idea to scan the index twice with different snapshots,\n> so for the sake of simplicity, can we consider 'updated_deleted' same\n> as 'update_missing'? If we think it is an important case to consider\n> then we can try to accomplish this once we finalize the\n> design/implementation of other resolution methods.\n\nI think it is important for scenarios when data is being updated and\ndeleted concurrently. But yes, I agree that implementation may have\nsome performance hit for this case. We can tackle this scenario at a\nlater stage.\n\n> > > >\n> > > > To implement the above, subscription commands will be changed to have\n> > > > one more parameter 'conflict_resolution=on/off', default will be OFF.\n> > > >\n> > > > To configure global resolvers, new DDL command will be introduced:\n> > > >\n> > > > CONFLICT RESOLVER ON <conflict_type> IS <conflict_resolver>\n> > > >\n> > >\n> > > I very much doubt we want a single global conflict resolver, or even one\n> > > resolver per subscription. It seems like a very table-specific thing.\n> >\n>\n> +1 to make it a table-level configuration but we probably need\n> something at the global level as well such that by default if users\n> don't define anything at table-level global-level configuration will\n> be used.\n>\n> >\n> > >\n> > > Also, doesn't all this whole design ignore the concurrency between\n> > > publishers? 
Isn't this problematic considering the commit timestamps may\n> > > go backwards (for a given publisher), which means the conflict\n> > > resolution is not deterministic (as it depends on how exactly it\n> > > interleaves)?\n> > >\n>\n> I am not able to imagine the cases you are worried about. Can you\n> please be specific? Is it similar to the case I described in\n> yesterday's email [1]?\n>\n> [1] - https://www.postgresql.org/message-id/CAA4eK1JTMiBOoGqkt%3DaLPLU8Rs45ihbLhXaGHsz8XC76%2BOG3%2BQ%40mail.gmail.com\n>\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 5 Jun 2024 09:12:08 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "Hi,\r\n\r\nThis time at PGconf.dev[1], we had some discussions regarding this\r\nproject. The proposed approach is to split the work into two main\r\ncomponents. The first part focuses on conflict detection, which aims to\r\nidentify and report conflicts in logical replication. This feature will\r\nenable users to monitor the unexpected conflicts that may occur. The\r\nsecond part involves the actual conflict resolution. Here, we will provide\r\nbuilt-in resolutions for each conflict and allow user to choose which\r\nresolution will be used for which conflict(as described in the initial\r\nemail of this thread).\r\n \r\nOf course, we are open to alternative ideas and suggestions, and the\r\nstrategy above can be changed based on ongoing discussions and feedback\r\nreceived.\r\n \r\nHere is the patch of the first part work, which adds a new parameter\r\ndetect_conflict for CREATE and ALTER subscription commands. This new\r\nparameter will decide if subscription will go for conflict detection. By\r\ndefault, conflict detection will be off for a subscription.\r\n \r\nWhen conflict detection is enabled, additional logging is triggered in the\r\nfollowing conflict scenarios:\r\n \r\n* updating a row that was previously modified by another origin.\r\n* The tuple to be updated is not found.\r\n* The tuple to be deleted is not found.\r\n \r\nWhile there exist other conflict types in logical replication, such as an\r\nincoming insert conflicting with an existing row due to a primary key or\r\nunique index, these cases already result in constraint violation errors.\r\nTherefore, additional conflict detection for these cases is currently\r\nomitted to minimize potential overhead. However, the pre-detection for\r\nconflict in these error cases is still essential to support automatic\r\nconflict resolution in the future.\r\n\r\n[1] https://2024.pgconf.dev/\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 5 Jun 2024 06:31:59 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 5, 2024 at 9:12 AM shveta malik <[email protected]> wrote:\n>\n> On Tue, Jun 4, 2024 at 9:37 AM Amit Kapila <[email protected]> wrote:\n> >\n> > > >\n> > > > >\n> > > > > Conflict Resolution\n> > > > > ----------------\n> > > > > a) latest_timestamp_wins: The change with later commit timestamp wins.\n> > > > > b) earliest_timestamp_wins: The change with earlier commit timestamp wins.\n> >\n> > Can you share the use case of \"earliest_timestamp_wins\" resolution\n> > method? It seems after the initial update on the local node, it will\n> > never allow remote update to succeed which sounds a bit odd. 
Jan has\n> > shared this and similar concerns about this resolution method, so I\n> > have added him to the email as well.\n>\n> I do not have the exact scenario for this. But I feel, if 2 nodes are\n> concurrently inserting different data against a primary key, then some\n> users may have preferences that retain the row which was inserted\n> earlier. It is no different from latest_timestamp_wins. It totally\n> depends upon what kind of application and requirement the user may\n> have, based on which, he may discard the later coming rows (specially\n> for INSERT case).\n\nI haven't read the complete design yet, but have we discussed how we\nplan to deal with clock drift if we use timestamp-based conflict\nresolution? For example, a user might insert something conflicting on\nnode1 first and then on node2. However, due to clock drift, the\ntimestamp from node2 might appear earlier. In this case, if we choose\n\"earliest timestamp wins,\" we would keep the changes from node2.\n\nI haven't fully considered if this would cause any problems, but users\nmight detect this issue. For instance, a client machine might send a\nchange to node1 first and then, upon confirmation, send it to node2.\nIf the clocks on node1 and node2 are not synchronized, the changes\nmight appear in a different order. Does this seem like a potential\nproblem?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Jun 2024 18:52:23 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 4, 2024 at 9:37 AM Amit Kapila <[email protected]> wrote:\n>\n> Can you share the use case of \"earliest_timestamp_wins\" resolution\n> method? It seems after the initial update on the local node, it will\n> never allow remote update to succeed which sounds a bit odd. Jan has\n> shared this and similar concerns about this resolution method, so I\n> have added him to the email as well.\n>\nI can not think of a use case exactly in this context but it's very\ncommon to have such a use case while designing a distributed\napplication with multiple clients. For example, when we are doing git\npush concurrently from multiple clients it is expected that the\nearliest commit wins.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Jun 2024 19:29:19 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 5, 2024 at 7:29 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jun 4, 2024 at 9:37 AM Amit Kapila <[email protected]> wrote:\n> >\n> > Can you share the use case of \"earliest_timestamp_wins\" resolution\n> > method? It seems after the initial update on the local node, it will\n> > never allow remote update to succeed which sounds a bit odd. Jan has\n> > shared this and similar concerns about this resolution method, so I\n> > have added him to the email as well.\n> >\n> I can not think of a use case exactly in this context but it's very\n> common to have such a use case while designing a distributed\n> application with multiple clients. For example, when we are doing git\n> push concurrently from multiple clients it is expected that the\n> earliest commit wins.\n>\n\nOkay, I think it mostly boils down to something like what Shveta\nmentioned where Inserts for a primary key can use\n\"earliest_timestamp_wins\" resolution method [1]. 
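As a concrete example (hypothetical table, shown only to illustrate
the primary-key insert case):

  CREATE TABLE t (id int PRIMARY KEY, val text);

  -- Node A, commit timestamp T1:
  INSERT INTO t VALUES (1, 'from A');

  -- Node B, commit timestamp T2, with T1 < T2:
  INSERT INTO t VALUES (1, 'from B');

  -- with earliest_timestamp_wins the later commit loses the
  -- insert_exists conflict and both nodes converge on (1, 'from A');
  -- latest_timestamp_wins would instead converge on (1, 'from B')
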
So, it seems useful\nto support this method as well.\n\n[1] - https://www.postgresql.org/message-id/CAJpy0uC4riK8e6hQt8jcU%2BnXYmRRjnbFEapYNbmxVYjENxTw2g%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Jun 2024 15:43:33 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 6, 2024 at 3:43 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 7:29 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Tue, Jun 4, 2024 at 9:37 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > Can you share the use case of \"earliest_timestamp_wins\" resolution\n> > > method? It seems after the initial update on the local node, it will\n> > > never allow remote update to succeed which sounds a bit odd. Jan has\n> > > shared this and similar concerns about this resolution method, so I\n> > > have added him to the email as well.\n> > >\n> > I can not think of a use case exactly in this context but it's very\n> > common to have such a use case while designing a distributed\n> > application with multiple clients. For example, when we are doing git\n> > push concurrently from multiple clients it is expected that the\n> > earliest commit wins.\n> >\n>\n> Okay, I think it mostly boils down to something like what Shveta\n> mentioned where Inserts for a primary key can use\n> \"earliest_timestamp_wins\" resolution method [1]. So, it seems useful\n> to support this method as well.\n\nCorrect, but we still need to think about how to make it work\ncorrectly in the presence of a clock skew as I mentioned in one of my\nprevious emails.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 15:49:55 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 5, 2024 at 7:29 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jun 4, 2024 at 9:37 AM Amit Kapila <[email protected]> wrote:\n> >\n> > Can you share the use case of \"earliest_timestamp_wins\" resolution\n> > method? It seems after the initial update on the local node, it will\n> > never allow remote update to succeed which sounds a bit odd. Jan has\n> > shared this and similar concerns about this resolution method, so I\n> > have added him to the email as well.\n> >\n> I can not think of a use case exactly in this context but it's very\n> common to have such a use case while designing a distributed\n> application with multiple clients. For example, when we are doing git\n> push concurrently from multiple clients it is expected that the\n> earliest commit wins.\n>\n\nHere are more use cases of the \"earliest_timestamp_wins\" resolution method:\n1) Applications where the record of first occurrence of an event is\nimportant. 
For example, sensor based applications like earthquake\ndetection systems, capturing the first seismic wave's time is crucial.\n2) Scheduling systems, like appointment booking, prioritize the\nearliest request when handling concurrent ones.\n3) In contexts where maintaining chronological order is important -\n a) Social media platforms display comments ensuring that the\nearliest ones are visible first.\n b) Finance transaction processing systems rely on timestamps to\nprioritize the processing of transactions, ensuring that the earliest\ntransaction is handled first\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Thu, 6 Jun 2024 17:15:51 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 6, 2024 at 5:16 PM Nisha Moond <[email protected]> wrote:\n\n> >\n>\n> Here are more use cases of the \"earliest_timestamp_wins\" resolution method:\n> 1) Applications where the record of first occurrence of an event is\n> important. For example, sensor based applications like earthquake\n> detection systems, capturing the first seismic wave's time is crucial.\n> 2) Scheduling systems, like appointment booking, prioritize the\n> earliest request when handling concurrent ones.\n> 3) In contexts where maintaining chronological order is important -\n> a) Social media platforms display comments ensuring that the\n> earliest ones are visible first.\n> b) Finance transaction processing systems rely on timestamps to\n> prioritize the processing of transactions, ensuring that the earliest\n> transaction is handled first\n>\n\nThanks for sharing examples. However, these scenarios would be handled by\nthe application and not during replication. What we are discussing here is\nthe timestamp when a row was updated/inserted/deleted (or rather when the\ntransaction that updated row committed/became visible) and not a DML on\ncolumn which is of type timestamp. Some implementations use a hidden\ntimestamp column but that's different from a user column which captures\ntimestamp of (say) an event. The conflict resolution will be based on the\ntimestamp when that column's value was recorded in the database which may\nbe different from the value of the column itself.\n\nIf we use the transaction commit timestamp as basis for resolution, a\ntransaction where multiple rows conflict may end up with different rows\naffected by that transaction being resolved differently. Say three\ntransactions T1, T2 and T3 on separate origins with timestamps t1, t2, and\nt3 respectively changed rows r1, r2 and r2, r3 and r1, r4 respectively.\nChanges to r1 and r2 will conflict. Let's say T2 and T3 are applied first\nand then T1 is applied. If t2 < t1 < t3, r1 will end up with version of T3\nand r2 will end up with version of T1 after applying all the three\ntransactions. Would that introduce an inconsistency between r1 and r2?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Jun 6, 2024 at 5:16 PM Nisha Moond <[email protected]> wrote:\n>\n\nHere are more use cases of the \"earliest_timestamp_wins\" resolution method:\n1) Applications where the record of first occurrence of an event is\nimportant. 
For example, sensor based applications like earthquake\ndetection systems, capturing the first seismic wave's time is crucial.\n2) Scheduling systems, like appointment booking, prioritize the\nearliest request when handling concurrent ones.\n3) In contexts where maintaining chronological order is important -\n  a) Social media platforms display comments ensuring that the\nearliest ones are visible first.\n  b) Finance transaction processing systems rely on timestamps to\nprioritize the processing of transactions, ensuring that the earliest\ntransaction is handled firstThanks for sharing examples. However, these scenarios would be handled by the application and not during replication. What we are discussing here is the timestamp when a row was updated/inserted/deleted (or rather when the transaction that updated row committed/became visible) and not a DML on column which is of type timestamp. Some implementations use a hidden timestamp column but that's different from a user column which captures timestamp of (say) an event. The conflict resolution will be based on the timestamp when that column's value was recorded in the database which may be different from the value of the column itself.If we use the transaction commit timestamp as basis for resolution, a transaction where multiple rows conflict may end up with different rows affected by that transaction being resolved differently. Say three transactions T1, T2 and T3 on separate origins with timestamps t1, t2, and t3 respectively changed rows r1, r2 and r2, r3 and r1, r4 respectively. Changes to r1 and r2 will conflict. Let's say T2 and T3 are applied first and then T1 is applied. If t2 < t1 < t3, r1 will end up with version of T3 and r2 will end up with version of T1 after applying all the three transactions. Would that introduce an inconsistency between r1 and r2?-- Best Wishes,Ashutosh Bapat", "msg_date": "Fri, 7 Jun 2024 17:38:49 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On 5/27/24 07:48, shveta malik wrote:\n> On Sat, May 25, 2024 at 2:39 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 5/23/24 08:36, shveta malik wrote:\n>>> Hello hackers,\n>>>\n>>> Please find the proposal for Conflict Detection and Resolution (CDR)\n>>> for Logical replication.\n>>> <Thanks to Nisha, Hou-San, and Amit who helped in figuring out the\n>>> below details.>\n>>>\n>>> Introduction\n>>> ================\n>>> In case the node is subscribed to multiple providers, or when local\n>>> writes happen on a subscriber, conflicts can arise for the incoming\n>>> changes. CDR is the mechanism to automatically detect and resolve\n>>> these conflicts depending on the application and configurations.\n>>> CDR is not applicable for the initial table sync. If locally, there\n>>> exists conflicting data on the table, the table sync worker will fail.\n>>> Please find the details on CDR in apply worker for INSERT, UPDATE and\n>>> DELETE operations:\n>>>\n>>\n>> Which architecture are you aiming for? Here you talk about multiple\n>> providers, but the wiki page mentions active-active. I'm not sure how\n>> much this matters, but it might.\n> \n> Currently, we are working for multi providers case but ideally it\n> should work for active-active also. 
During further discussion and\n> implementation phase, if we find that, there are cases which will not\n> work in straight-forward way for active-active, then our primary focus\n> will remain to first implement it for multiple providers architecture.\n> \n>>\n>> Also, what kind of consistency you expect from this? Because none of\n>> these simple conflict resolution methods can give you the regular\n>> consistency models we're used to, AFAICS.\n> \n> Can you please explain a little bit more on this.\n> \n\nI was referring to the well established consistency models / isolation\nlevels, e.g. READ COMMITTED or SNAPSHOT ISOLATION. This determines what\nguarantees the application developer can expect, what anomalies can\nhappen, etc.\n\nI don't think any such isolation level can be implemented with a simple\nconflict resolution methods like last-update-wins etc. For example,\nconsider an active-active where both nodes do\n\n UPDATE accounts SET balance=balance+1000 WHERE id=1\n\nThis will inevitably lead to a conflict, and while the last-update-wins\nresolves this \"consistently\" on both nodes (e.g. ending with the same\nresult), it's essentially a lost update.\n\nThis is a very simplistic example of course, I recall there are various\nmore complex examples involving foreign keys, multi-table transactions,\nconstraints, etc. But in principle it's a manifestation of the same\ninherent limitation of conflict detection and resolution etc.\n\nSimilarly, I believe this affects not just active-active, but also the\ncase where one node aggregates data from multiple publishers. Maybe not\nto the same extent / it might be fine for that use case, but you said\nthe end goal is to use this for active-active. So I'm wondering what's\nthe plan, there.\n\nIf I'm writing an application for active-active using this conflict\nhandling, what assumptions can I make? Will Can I just do stuff as if on\na single node, or do I need to be super conscious about the zillion ways\nthings can misbehave in a distributed system?\n\nMy personal opinion is that the closer this will be to the regular\nconsistency levels, the better. If past experience taught me anything,\nit's very hard to predict how distributed systems with eventual\nconsistency behave, and even harder to actually test the application in\nsuch environment.\n\nIn any case, if there are any differences compared to the usual\nbehavior, it needs to be very clearly explained in the docs.\n\n>>\n>>> INSERT\n>>> ================\n>>> To resolve INSERT conflict on subscriber, it is important to find out\n>>> the conflicting row (if any) before we attempt an insertion. The\n>>> indexes or search preference for the same will be:\n>>> First check for replica identity (RI) index.\n>>> - if not found, check for the primary key (PK) index.\n>>> - if not found, then check for unique indexes (individual ones or\n>>> added by unique constraints)\n>>> - if unique index also not found, skip CDR\n>>>\n>>> Note: if no RI index, PK, or unique index is found but\n>>> REPLICA_IDENTITY_FULL is defined, CDR will still be skipped.\n>>> The reason being that even though a row can be identified with\n>>> REPLICAT_IDENTITY_FULL, such tables are allowed to have duplicate\n>>> rows. Hence, we should not go for conflict detection in such a case.\n>>>\n>>\n>> It's not clear to me why would REPLICA_IDENTITY_FULL mean the table is\n>> allowed to have duplicate values? 
It just means the upstream is sending\n>> the whole original row, there can still be a PK/UNIQUE index on both the\n>> publisher and subscriber.\n> \n> Yes, right. Sorry for confusion. I meant the same i.e. in absence of\n> 'RI index, PK, or unique index', tables can have duplicates. So even\n> in presence of Replica-identity (FULL in this case) but in absence of\n> unique/primary index, CDR will be skipped for INSERT.\n> \n>>\n>>> In case of replica identity ‘nothing’ and in absence of any suitable\n>>> index (as defined above), CDR will be skipped for INSERT.\n>>>\n>>> Conflict Type:\n>>> ----------------\n>>> insert_exists: A conflict is detected when the table has the same\n>>> value for a key column as the new value in the incoming row.\n>>>\n>>> Conflict Resolution\n>>> ----------------\n>>> a) latest_timestamp_wins: The change with later commit timestamp wins.\n>>> b) earliest_timestamp_wins: The change with earlier commit timestamp wins.\n>>> c) apply: Always apply the remote change.\n>>> d) skip: Remote change is skipped.\n>>> e) error: Error out on conflict. Replication is stopped, manual\n>>> action is needed.\n>>>\n>>\n>> Why not to have some support for user-defined conflict resolution\n>> methods, allowing to do more complex stuff (e.g. merging the rows in\n>> some way, perhaps even with datatype-specific behavior)?\n> \n> Initially, for the sake of simplicity, we are targeting to support\n> built-in resolvers. But we have a plan to work on user-defined\n> resolvers as well. We shall propose that separately.\n> \n>>\n>>> The change will be converted to 'UPDATE' and applied if the decision\n>>> is in favor of applying remote change.\n>>>\n>>> It is important to have commit timestamp info available on subscriber\n>>> when latest_timestamp_wins or earliest_timestamp_wins method is chosen\n>>> as resolution method. Thus ‘track_commit_timestamp’ must be enabled\n>>> on subscriber, in absence of which, configuring the said\n>>> timestamp-based resolution methods will result in error.\n>>>\n>>> Note: If the user has chosen the latest or earliest_timestamp_wins,\n>>> and the remote and local timestamps are the same, then it will go by\n>>> system identifier. The change with a higher system identifier will\n>>> win. This will ensure that the same change is picked on all the nodes.\n>>\n>> How is this going to deal with the fact that commit LSN and timestamps\n>> may not correlate perfectly? That is, commits may happen with LSN1 <\n>> LSN2 but with T1 > T2.\n> \n> Are you pointing to the issue where a session/txn has taken\n> 'xactStopTimestamp' timestamp earlier but is delayed to insert record\n> in XLOG, while another session/txn which has taken timestamp slightly\n> later succeeded to insert the record IN XLOG sooner than the session1,\n> making LSN and Timestamps out of sync? Going by this scenario, the\n> commit-timestamp may not be reflective of actual commits and thus\n> timestamp-based resolvers may take wrong decisions. Or do you mean\n> something else?\n> \n> If this is the problem you are referring to, then I think this needs a\n> fix at the publisher side. Let me think more about it . Kindly let me\n> know if you have ideas on how to tackle it.\n> \n\nYes, this is the issue I'm talking about. 
We're acquiring the timestamp\nwhen not holding the lock to reserve space in WAL, so the LSN and the\ncommit LSN may not actually correlate.\n\nConsider this example I discussed with Amit last week:\n\nnode A:\n\n XACT1: UPDATE t SET v = 1; LSN1 / T1\n\n XACT2: UPDATE t SET v = 2; LSN2 / T2\n\nnode B\n\n XACT3: UPDATE t SET v = 3; LSN3 / T3\n\nAnd assume LSN1 < LSN2, T1 > T2 (i.e. the commit timestamp inversion),\nand T2 < T3 < T1. Now consider that the messages may arrive in different\norders, due to async replication. Unfortunately, this would lead to\ndifferent results of the conflict resolution:\n\n XACT1 - XACT2 - XACT3 => v=3 (T3 wins)\n\n XACT3 - XACT1 - XACT2 => v=2 (T2 wins)\n\nNow, I realize there's a flaw in this example - the (T1 > T2) inversion\ncan't actually happen, because these transactions have a dependency, and\nthus won't commit concurrently. XACT1 will complete the commit, because\nXACT2 starts to commit. And with monotonic clock (which is a requirement\nfor any timestamp-based resolution), that should guarantee (T1 < T2).\n\nHowever, I doubt this is sufficient to declare victory. It's more likely\nthat there still are problems, but the examples are likely more complex\n(changes to multiple tables, etc.).\n\nI vaguely remember there were more issues with timestamp inversion, but\nthose might have been related to parallel apply etc.\n\n>>>\n>>> UPDATE\n>>> ================\n>>>\n>>> Conflict Detection Method:\n>>> --------------------------------\n>>> Origin conflict detection: The ‘origin’ info is used to detect\n>>> conflict which can be obtained from commit-timestamp generated for\n>>> incoming txn at the source node. To compare remote’s origin with the\n>>> local’s origin, we must have origin information for local txns as well\n>>> which can be obtained from commit-timestamp after enabling\n>>> ‘track_commit_timestamp’ locally.\n>>> The one drawback here is the ‘origin’ information cannot be obtained\n>>> once the row is frozen and the commit-timestamp info is removed by\n>>> vacuum. For a frozen row, conflicts cannot be raised, and thus the\n>>> incoming changes will be applied in all the cases.\n>>>\n>>> Conflict Types:\n>>> ----------------\n>>> a) update_differ: The origin of an incoming update's key row differs\n>>> from the local row i.e.; the row has already been updated locally or\n>>> by different nodes.\n>>> b) update_missing: The row with the same value as that incoming\n>>> update's key does not exist. Remote is trying to update a row which\n>>> does not exist locally.\n>>> c) update_deleted: The row with the same value as that incoming\n>>> update's key does not exist. The row is already deleted. This conflict\n>>> type is generated only if the deleted row is still detectable i.e., it\n>>> is not removed by VACUUM yet. If the row is removed by VACUUM already,\n>>> it cannot detect this conflict. It will detect it as update_missing\n>>> and will follow the default or configured resolver of update_missing\n>>> itself.\n>>>\n>>\n>> I don't understand the why should update_missing or update_deleted be\n>> different, especially considering it's not detected reliably. And also\n>> that even if we happen to find the row the associated TOAST data may\n>> have already been removed. So why would this matter?\n> \n> Here, we are trying to tackle the case where the row is 'recently'\n> deleted i.e. concurrent UPDATE and DELETE on pub and sub. 
User may\n> want to opt for a different resolution in such a case as against the\n> one where the corresponding row was not even present in the first\n> place. The case where the row was deleted long back may not fall into\n> this category as there are higher chances that they have been removed\n> by vacuum and can be considered equivalent to the update_ missing\n> case.\n> \n\nMy point is that if we can't detect the difference reliably, it's not\nvery useful. Consider this example:\n\nNode A:\n\n T1: INSERT INTO t (id, value) VALUES (1,1);\n\n T2: DELETE FROM t WHERE id = 1;\n\nNode B:\n\n T3: UPDATE t SET value = 2 WHERE id = 1;\n\nThe \"correct\" order of received messages on a third node is T1-T3-T2.\nBut we may also see T1-T2-T3 and T3-T1-T2, e.g. due to network issues\nand so on. For T1-T2-T3 the right decision is to discard the update,\nwhile for T3-T1-T2 it's to either wait for the INSERT or wait for the\ninsert to arrive.\n\nBut if we misdetect the situation, we either end up with a row that\nshouldn't be there, or losing an update.\n\n> Regarding \"TOAST column\" for deleted row cases, we may need to dig\n> more. Thanks for bringing this case. Let me analyze more here.\n> \n>>\n>>> Conflict Resolutions:\n>>> ----------------\n>>> a) latest_timestamp_wins: The change with later commit timestamp\n>>> wins. Can be used for ‘update_differ’.\n>>> b) earliest_timestamp_wins: The change with earlier commit\n>>> timestamp wins. Can be used for ‘update_differ’.\n>>> c) apply: The remote change is always applied. Can be used for\n>>> ‘update_differ’.\n>>> d) apply_or_skip: Remote change is converted to INSERT and is\n>>> applied. If the complete row cannot be constructed from the info\n>>> provided by the publisher, then the change is skipped. Can be used for\n>>> ‘update_missing’ or ‘update_deleted’.\n>>> e) apply_or_error: Remote change is converted to INSERT and is\n>>> applied. If the complete row cannot be constructed from the info\n>>> provided by the publisher, then error is raised. Can be used for\n>>> ‘update_missing’ or ‘update_deleted’.\n>>> f) skip: Remote change is skipped and local one is retained. Can be\n>>> used for any conflict type.\n>>> g) error: Error out of conflict. Replication is stopped, manual\n>>> action is needed. Can be used for any conflict type.\n>>>\n>>> To support UPDATE CDR, the presence of either replica identity Index\n>>> or primary key is required on target node. Update CDR will not be\n>>> supported in absence of replica identity index or primary key even\n>>> though REPLICA IDENTITY FULL is set. Please refer to \"UPDATE\" in\n>>> \"Noteworthey Scenarios\" section in [1] for further details.\n>>>\n>>> DELETE\n>>> ================\n>>> Conflict Type:\n>>> ----------------\n>>> delete_missing: An incoming delete is trying to delete a row on a\n>>> target node which does not exist.\n>>>\n>>> Conflict Resolutions:\n>>> ----------------\n>>> a) error : Error out on conflict. Replication is stopped, manual\n>>> action is needed.\n>>> b) skip : The remote change is skipped.\n>>>\n>>> Configuring Conflict Resolution:\n>>> ------------------------------------------------\n>>> There are two parts when it comes to configuring CDR:\n>>>\n>>> a) Enabling/Disabling conflict detection.\n>>> b) Configuring conflict resolvers for different conflict types.\n>>>\n>>> Users can sometimes create multiple subscriptions on the same node,\n>>> subscribing to different tables to improve replication performance by\n>>> starting multiple apply workers. 
If the tables in one subscription are\n>>> less likely to cause conflict, then it is possible that user may want\n>>> conflict detection disabled for that subscription to avoid detection\n>>> latency while enabling it for other subscriptions. This generates a\n>>> requirement to make ‘conflict detection’ configurable per\n>>> subscription. While the conflict resolver configuration can remain\n>>> global. All the subscriptions which opt for ‘conflict detection’ will\n>>> follow global conflict resolver configuration.\n>>>\n>>> To implement the above, subscription commands will be changed to have\n>>> one more parameter 'conflict_resolution=on/off', default will be OFF.\n>>>\n>>> To configure global resolvers, new DDL command will be introduced:\n>>>\n>>> CONFLICT RESOLVER ON <conflict_type> IS <conflict_resolver>\n>>>\n>>\n>> I very much doubt we want a single global conflict resolver, or even one\n>> resolver per subscription. It seems like a very table-specific thing.\n> \n> Even we thought about this. We feel that even if we go for table based\n> or subscription based resolvers configuration, there may be use case\n> scenarios where the user is not interested in configuring resolvers\n> for each table and thus may want to give global ones. Thus, we should\n> provide a way for users to do global configuration. Thus we started\n> with global one. I have noted your point here and would also like to\n> know the opinion of others. We are open to discussion. We can either\n> opt for any of these 2 options (global or table) or we can opt for\n> both global and table/sub based one.\n> \n\nI have no problem with a default / global conflict handler, as long as\nthere's a way to override this per table. This is especially important\nfor cases with custom conflict handler at table / column level.\n\n>>\n>> Also, doesn't all this whole design ignore the concurrency between\n>> publishers? Isn't this problematic considering the commit timestamps may\n>> go backwards (for a given publisher), which means the conflict\n>> resolution is not deterministic (as it depends on how exactly it\n>> interleaves)?\n>>\n>>\n>>> -------------------------\n>>>\n>>> Apart from the above three main operations and resolver configuration,\n>>> there are more conflict types like primary-key updates, multiple\n>>> unique constraints etc and some special scenarios to be considered.\n>>> Complete design details can be found in [1].\n>>>\n>>> [1]: https://wiki.postgresql.org/wiki/Conflict_Detection_and_Resolution\n>>>\n>>\n>> Hmmm, not sure it's good to have a \"complete\" design on wiki, and only\n>> some subset posted to the mailing list. I haven't compared what the\n>> differences are, though.\n> \n> It would have been difficult to mention all the details in email\n> (including examples and corner scenarios) and thus we thought that it\n> will be better to document everything in wiki page for the time being.\n> We can keep on discussing the design and all the scenarios on need\n> basis (before implementation phase of that part) and thus eventually\n> everything will come in email on hackers. 
With out first patch, we\n> plan to provide everything in a README as well.\n> \n\nThe challenge with having this on wiki is that it's unlikely people will\nnotice any changes made to the wiki.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:37:58 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On 5/28/24 11:17, Nisha Moond wrote:\n> On Mon, May 27, 2024 at 11:19 AM shveta malik <[email protected]> wrote:\n>>\n>> On Sat, May 25, 2024 at 2:39 AM Tomas Vondra\n>> <[email protected]> wrote:\n>>>\n>>> ...\n>>>\n>>> I don't understand the why should update_missing or update_deleted be\n>>> different, especially considering it's not detected reliably. And also\n>>> that even if we happen to find the row the associated TOAST data may\n>>> have already been removed. So why would this matter?\n>>\n>> Here, we are trying to tackle the case where the row is 'recently'\n>> deleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\n>> want to opt for a different resolution in such a case as against the\n>> one where the corresponding row was not even present in the first\n>> place. The case where the row was deleted long back may not fall into\n>> this category as there are higher chances that they have been removed\n>> by vacuum and can be considered equivalent to the update_ missing\n>> case.\n>>\n>> Regarding \"TOAST column\" for deleted row cases, we may need to dig\n>> more. Thanks for bringing this case. Let me analyze more here.\n>>\n> I tested a simple case with a table with one TOAST column and found\n> that when a tuple with a TOAST column is deleted, both the tuple and\n> corresponding pg_toast entries are marked as ‘deleted’ (dead) but not\n> removed immediately. The main tuple and respective pg_toast entry are\n> permanently deleted only during vacuum. First, the main table’s dead\n> tuples are vacuumed, followed by the secondary TOAST relation ones (if\n> available).\n> Please let us know if you have a specific scenario in mind where the\n> TOAST column data is deleted immediately upon ‘delete’ operation,\n> rather than during vacuum, which we are missing.\n> \n\nI'm pretty sure you can vacuum the TOAST table directly, which means\nyou'll end up with a deleted tuple with TOAST pointers, but with the\nTOAST entries already gone.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:40:53 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "\n\nOn 6/3/24 09:30, Amit Kapila wrote:\n> On Sat, May 25, 2024 at 2:39 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 5/23/24 08:36, shveta malik wrote:\n>>>\n>>> Conflict Resolution\n>>> ----------------\n>>> a) latest_timestamp_wins: The change with later commit timestamp wins.\n>>> b) earliest_timestamp_wins: The change with earlier commit timestamp wins.\n>>> c) apply: Always apply the remote change.\n>>> d) skip: Remote change is skipped.\n>>> e) error: Error out on conflict. Replication is stopped, manual\n>>> action is needed.\n>>>\n>>\n>> Why not to have some support for user-defined conflict resolution\n>> methods, allowing to do more complex stuff (e.g. 
merging the rows in\n>> some way, perhaps even with datatype-specific behavior)?\n>>\n>>> The change will be converted to 'UPDATE' and applied if the decision\n>>> is in favor of applying remote change.\n>>>\n>>> It is important to have commit timestamp info available on subscriber\n>>> when latest_timestamp_wins or earliest_timestamp_wins method is chosen\n>>> as resolution method. Thus ‘track_commit_timestamp’ must be enabled\n>>> on subscriber, in absence of which, configuring the said\n>>> timestamp-based resolution methods will result in error.\n>>>\n>>> Note: If the user has chosen the latest or earliest_timestamp_wins,\n>>> and the remote and local timestamps are the same, then it will go by\n>>> system identifier. The change with a higher system identifier will\n>>> win. This will ensure that the same change is picked on all the nodes.\n>>\n>> How is this going to deal with the fact that commit LSN and timestamps\n>> may not correlate perfectly? That is, commits may happen with LSN1 <\n>> LSN2 but with T1 > T2.\n>>\n> \n> One of the possible scenarios discussed at pgconf.dev with Tomas for\n> this was as follows:\n> \n> Say there are two publisher nodes PN1, PN2, and subscriber node SN3.\n> The logical replication is configured such that a subscription on SN3\n> has publications from both PN1 and PN2. For example, SN3 (sub) -> PN1,\n> PN2 (p1, p2)\n> \n> Now, on PN1, we have the following operations that update the same row:\n> \n> T1\n> Update-1 on table t1 at LSN1 (1000) on time (200)\n> \n> T2\n> Update-2 on table t1 at LSN2 (2000) on time (100)\n> \n> Then in parallel, we have the following operation on node PN2 that\n> updates the same row as Update-1, and Update-2 on node PN1.\n> \n> T3\n> Update-3 on table t1 at LSN(1500) on time (150)\n> \n> By theory, we can have a different state on subscribers depending on\n> the order of updates arriving at SN3 which shouldn't happen. Say, the\n> order in which they reach SN3 is: Update-1, Update-2, Update-3 then\n> the final row we have is by Update-3 considering we have configured\n> last_update_wins as a conflict resolution method. Now, consider the\n> other order: Update-1, Update-3, Update-2, in this case, the final\n> row will be by Update-2 because when we try to apply Update-3, it will\n> generate a conflict and as per the resolution method\n> (last_update_wins) we need to retain Update-1.\n> \n> On further thinking, the operations on node-1 PN-1 as defined above\n> seem impossible because one of the Updates needs to wait for the other\n> to write a commit record. So the commits may happen with LSN1 < LSN2\n> but with T1 > T2 but they can't be on the same row due to locks. So,\n> the order of apply should still be consistent. Am, I missing\n> something?\n> \n\nSorry, I should have read your message before responding a couple\nminutes ago. I think you're right this exact example can't happen, due\nto the dependency between transactions.\n\nBut as I wrote, I'm not quite convinced this means there are not other\nissues with this way of resolving conflicts. 
It's more likely a more\ncomplex scenario is required.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:48:55 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Jun 7, 2024 at 5:39 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Jun 6, 2024 at 5:16 PM Nisha Moond <[email protected]> wrote:\n>>\n>> >\n>>\n>> Here are more use cases of the \"earliest_timestamp_wins\" resolution method:\n>> 1) Applications where the record of first occurrence of an event is\n>> important. For example, sensor based applications like earthquake\n>> detection systems, capturing the first seismic wave's time is crucial.\n>> 2) Scheduling systems, like appointment booking, prioritize the\n>> earliest request when handling concurrent ones.\n>> 3) In contexts where maintaining chronological order is important -\n>> a) Social media platforms display comments ensuring that the\n>> earliest ones are visible first.\n>> b) Finance transaction processing systems rely on timestamps to\n>> prioritize the processing of transactions, ensuring that the earliest\n>> transaction is handled first\n>\n>\n> Thanks for sharing examples. However, these scenarios would be handled by the application and not during replication. What we are discussing here is the timestamp when a row was updated/inserted/deleted (or rather when the transaction that updated row committed/became visible) and not a DML on column which is of type timestamp. Some implementations use a hidden timestamp column but that's different from a user column which captures timestamp of (say) an event. The conflict resolution will be based on the timestamp when that column's value was recorded in the database which may be different from the value of the column itself.\n>\n\nIt depends on how these operations are performed. For example, the\nappointment booking system could be prioritized via a transaction\nupdating a row with columns emp_name, emp_id, reserved, time_slot.\nNow, if two employees at different geographical locations try to book\nthe calendar, the earlier transaction will win.\n\n> If we use the transaction commit timestamp as basis for resolution, a transaction where multiple rows conflict may end up with different rows affected by that transaction being resolved differently. Say three transactions T1, T2 and T3 on separate origins with timestamps t1, t2, and t3 respectively changed rows r1, r2 and r2, r3 and r1, r4 respectively. Changes to r1 and r2 will conflict. Let's say T2 and T3 are applied first and then T1 is applied. If t2 < t1 < t3, r1 will end up with version of T3 and r2 will end up with version of T1 after applying all the three transactions.\n>\n\nAre you telling the results based on latest_timestamp_wins? If so,\nthen it is correct. OTOH, if the user has configured\n\"earliest_timestamp_wins\" resolution method, then we should end up\nwith a version of r1 from T1 because t1 < t3. Also, due to the same\nreason, we should have version r2 from T2.\n\n>\n Would that introduce an inconsistency between r1 and r2?\n>\n\nAs per my understanding, this shouldn't be an inconsistency. 
Won't it\nbe true even when the transactions are performed on a single node with\nthe same timing?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 8 Jun 2024 15:52:32 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Jun 7, 2024 at 6:08 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 5/27/24 07:48, shveta malik wrote:\n> > On Sat, May 25, 2024 at 2:39 AM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> Which architecture are you aiming for? Here you talk about multiple\n> >> providers, but the wiki page mentions active-active. I'm not sure how\n> >> much this matters, but it might.\n> >\n> > Currently, we are working for multi providers case but ideally it\n> > should work for active-active also. During further discussion and\n> > implementation phase, if we find that, there are cases which will not\n> > work in straight-forward way for active-active, then our primary focus\n> > will remain to first implement it for multiple providers architecture.\n> >\n> >>\n> >> Also, what kind of consistency you expect from this? Because none of\n> >> these simple conflict resolution methods can give you the regular\n> >> consistency models we're used to, AFAICS.\n> >\n> > Can you please explain a little bit more on this.\n> >\n>\n> I was referring to the well established consistency models / isolation\n> levels, e.g. READ COMMITTED or SNAPSHOT ISOLATION. This determines what\n> guarantees the application developer can expect, what anomalies can\n> happen, etc.\n>\n> I don't think any such isolation level can be implemented with a simple\n> conflict resolution methods like last-update-wins etc. For example,\n> consider an active-active where both nodes do\n>\n> UPDATE accounts SET balance=balance+1000 WHERE id=1\n>\n> This will inevitably lead to a conflict, and while the last-update-wins\n> resolves this \"consistently\" on both nodes (e.g. ending with the same\n> result), it's essentially a lost update.\n>\n\nThe idea to solve such conflicts is using the delta apply technique\nwhere the delta from both sides will be applied to the respective\ncolumns. We do plan to target this as a separate patch. Now, if the\nbasic conflict resolution and delta apply both can't go in one\nrelease, we shall document such cases clearly to avoid misuse of the\nfeature.\n\n> This is a very simplistic example of course, I recall there are various\n> more complex examples involving foreign keys, multi-table transactions,\n> constraints, etc. But in principle it's a manifestation of the same\n> inherent limitation of conflict detection and resolution etc.\n>\n> Similarly, I believe this affects not just active-active, but also the\n> case where one node aggregates data from multiple publishers. Maybe not\n> to the same extent / it might be fine for that use case,\n>\n\nI am not sure how much it is a problem for general logical replication\nsolution but we do intend to work on solving such problems in\nstep-wise manner. Trying to attempt everything in one patch doesn't\nseem advisable to me.\n\n>\n but you said\n> the end goal is to use this for active-active. 
So I'm wondering what's\n> the plan, there.\n>\n\nI think at this stage we are not ready for active-active because\nleaving aside this feature we need many other features like\nreplication of all commands/objects (DDL replication, replicate large\nobjects, etc.), Global sequences, some sort of global two_phase\ntransaction management for data consistency, etc. So, it would be\nbetter to consider logical replication cases intending to extend it\nfor active-active when we have other required pieces.\n\n> If I'm writing an application for active-active using this conflict\n> handling, what assumptions can I make? Will Can I just do stuff as if on\n> a single node, or do I need to be super conscious about the zillion ways\n> things can misbehave in a distributed system?\n>\n> My personal opinion is that the closer this will be to the regular\n> consistency levels, the better. If past experience taught me anything,\n> it's very hard to predict how distributed systems with eventual\n> consistency behave, and even harder to actually test the application in\n> such environment.\n>\n\nI don't think in any way this can enable users to start writing\napplications for active-active workloads. For something like what you\nare saying, we probably need a global transaction manager (or a global\ntwo_pc) as well to allow transactions to behave as they are on\nsingle-node or achieve similar consistency levels. With such\ntransaction management, we can allow transactions to commit on a node\nonly when it doesn't lead to a conflict on the peer node.\n\n> In any case, if there are any differences compared to the usual\n> behavior, it needs to be very clearly explained in the docs.\n>\n\nI agree that docs should be clear about the cases that this can and\ncan't support.\n\n> >>\n> >> How is this going to deal with the fact that commit LSN and timestamps\n> >> may not correlate perfectly? That is, commits may happen with LSN1 <\n> >> LSN2 but with T1 > T2.\n> >\n> > Are you pointing to the issue where a session/txn has taken\n> > 'xactStopTimestamp' timestamp earlier but is delayed to insert record\n> > in XLOG, while another session/txn which has taken timestamp slightly\n> > later succeeded to insert the record IN XLOG sooner than the session1,\n> > making LSN and Timestamps out of sync? Going by this scenario, the\n> > commit-timestamp may not be reflective of actual commits and thus\n> > timestamp-based resolvers may take wrong decisions. Or do you mean\n> > something else?\n> >\n> > If this is the problem you are referring to, then I think this needs a\n> > fix at the publisher side. Let me think more about it . Kindly let me\n> > know if you have ideas on how to tackle it.\n> >\n>\n> Yes, this is the issue I'm talking about. We're acquiring the timestamp\n> when not holding the lock to reserve space in WAL, so the LSN and the\n> commit LSN may not actually correlate.\n>\n> Consider this example I discussed with Amit last week:\n>\n> node A:\n>\n> XACT1: UPDATE t SET v = 1; LSN1 / T1\n>\n> XACT2: UPDATE t SET v = 2; LSN2 / T2\n>\n> node B\n>\n> XACT3: UPDATE t SET v = 3; LSN3 / T3\n>\n> And assume LSN1 < LSN2, T1 > T2 (i.e. the commit timestamp inversion),\n> and T2 < T3 < T1. Now consider that the messages may arrive in different\n> orders, due to async replication. 
Unfortunately, this would lead to\n> different results of the conflict resolution:\n>\n> XACT1 - XACT2 - XACT3 => v=3 (T3 wins)\n>\n> XACT3 - XACT1 - XACT2 => v=2 (T2 wins)\n>\n> Now, I realize there's a flaw in this example - the (T1 > T2) inversion\n> can't actually happen, because these transactions have a dependency, and\n> thus won't commit concurrently. XACT1 will complete the commit, because\n> XACT2 starts to commit. And with monotonic clock (which is a requirement\n> for any timestamp-based resolution), that should guarantee (T1 < T2).\n>\n> However, I doubt this is sufficient to declare victory. It's more likely\n> that there still are problems, but the examples are likely more complex\n> (changes to multiple tables, etc.).\n>\n\nFair enough, I think we need to analyze this more to find actual\nproblems or in some way try to prove that there is no problem.\n\n> I vaguely remember there were more issues with timestamp inversion, but\n> those might have been related to parallel apply etc.\n>\n\nOkay, so considering there are problems due to timestamp inversion, I\nthink the solution to that problem would probably be somehow\ngenerating commit LSN and timestamp in order. I don't have a solution\nat this stage but will think more both on the actual problem and\nsolution. In the meantime, if you get a chance to refer to the place\nwhere you have seen such a problem please try to share the same with\nus. It would be helpful.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 10 Jun 2024 14:24:24 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Jun 7, 2024 at 6:08 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> >>>\n> >>> UPDATE\n> >>> ================\n> >>>\n> >>> Conflict Detection Method:\n> >>> --------------------------------\n> >>> Origin conflict detection: The ‘origin’ info is used to detect\n> >>> conflict which can be obtained from commit-timestamp generated for\n> >>> incoming txn at the source node. To compare remote’s origin with the\n> >>> local’s origin, we must have origin information for local txns as well\n> >>> which can be obtained from commit-timestamp after enabling\n> >>> ‘track_commit_timestamp’ locally.\n> >>> The one drawback here is the ‘origin’ information cannot be obtained\n> >>> once the row is frozen and the commit-timestamp info is removed by\n> >>> vacuum. For a frozen row, conflicts cannot be raised, and thus the\n> >>> incoming changes will be applied in all the cases.\n> >>>\n> >>> Conflict Types:\n> >>> ----------------\n> >>> a) update_differ: The origin of an incoming update's key row differs\n> >>> from the local row i.e.; the row has already been updated locally or\n> >>> by different nodes.\n> >>> b) update_missing: The row with the same value as that incoming\n> >>> update's key does not exist. Remote is trying to update a row which\n> >>> does not exist locally.\n> >>> c) update_deleted: The row with the same value as that incoming\n> >>> update's key does not exist. The row is already deleted. This conflict\n> >>> type is generated only if the deleted row is still detectable i.e., it\n> >>> is not removed by VACUUM yet. If the row is removed by VACUUM already,\n> >>> it cannot detect this conflict. 
It will detect it as update_missing\n> >>> and will follow the default or configured resolver of update_missing\n> >>> itself.\n> >>>\n> >>\n> >> I don't understand the why should update_missing or update_deleted be\n> >> different, especially considering it's not detected reliably. And also\n> >> that even if we happen to find the row the associated TOAST data may\n> >> have already been removed. So why would this matter?\n> >\n> > Here, we are trying to tackle the case where the row is 'recently'\n> > deleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\n> > want to opt for a different resolution in such a case as against the\n> > one where the corresponding row was not even present in the first\n> > place. The case where the row was deleted long back may not fall into\n> > this category as there are higher chances that they have been removed\n> > by vacuum and can be considered equivalent to the update_ missing\n> > case.\n> >\n>\n> My point is that if we can't detect the difference reliably, it's not\n> very useful. Consider this example:\n>\n> Node A:\n>\n> T1: INSERT INTO t (id, value) VALUES (1,1);\n>\n> T2: DELETE FROM t WHERE id = 1;\n>\n> Node B:\n>\n> T3: UPDATE t SET value = 2 WHERE id = 1;\n>\n> The \"correct\" order of received messages on a third node is T1-T3-T2.\n> But we may also see T1-T2-T3 and T3-T1-T2, e.g. due to network issues\n> and so on. For T1-T2-T3 the right decision is to discard the update,\n> while for T3-T1-T2 it's to either wait for the INSERT or wait for the\n> insert to arrive.\n>\n> But if we misdetect the situation, we either end up with a row that\n> shouldn't be there, or losing an update.\n\nDoesn't the above example indicate that 'update_deleted' should also\nbe considered a necessary conflict type? Please see the possibilities\nof conflicts in all three cases:\n\n\nThe \"correct\" order of receiving messages on node C (as suggested\nabove) is T1-T3-T2 (case1)\n----------\nT1 will insert the row.\nT3 will have update_differ conflict; latest_timestamp wins or apply\nwill apply it. earliest_timestamp_wins or skip will skip it.\nT2 will delete the row (irrespective of whether the update happened or not).\nEnd Result: No Data.\n\nT1-T2-T3\n----------\nT1 will insert the row.\nT2 will delete the row.\nT3 will have conflict update_deleted. If it is 'update_deleted', the\nchances are that the resolver set here is to 'skip' (default is also\n'skip' in this case).\n\nIf vacuum has deleted that row (or if we don't support\n'update_deleted' conflict), it will be 'update_missing' conflict. In\nthat case, the user may end up inserting the row if resolver chosen is\nin favor of apply (which seems an obvious choice for 'update_missing'\nconflict; default is also 'apply_or_skip').\n\nEnd result:\nRow inserted with 'update_missing'.\nRow correctly skipped with 'update_deleted' (assuming the obvious\nchoice seems to be 'skip' for update_deleted case).\n\nSo it seems that with 'update_deleted' conflict, there are higher\nchances of opting for right decision here (which is to discard the\nupdate), as 'update_deleted' conveys correct info to the user. The\n'update_missing' OTOH does not convey correct info and user may end up\ninserting the data by choosing apply favoring resolvers for\n'update_missing'. 
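With the globally configurable resolver syntax proposed earlier (not
implemented yet, shown only to illustrate how the two conflict types
could be configured differently), that difference would look roughly
like:

  -- proposed syntax, subject to change:
  CONFLICT RESOLVER ON update_deleted IS skip;
  CONFLICT RESOLVER ON update_missing IS apply_or_skip;
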
Again, we get benefit of 'update_deleted' for\n*recently* deleted rows only.\n\nT3-T1-T2\n----------\nT3 may end up inserting the record if the resolver is in favor of\n'apply' and all the columns are received from remote.\nT1 will have' insert_exists' conflict and thus may either overwrite\n'updated' values or may leave the data as is (based on whether\nresolver is in favor of apply or not)\nT2 will end up deleting it.\nEnd Result: No Data.\n\nI feel for second case (and similar cases), 'update_deleted' serves a\nbetter conflict type.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 10 Jun 2024 16:26:57 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Jun 7, 2024 at 6:10 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> >>> I don't understand the why should update_missing or update_deleted be\n> >>> different, especially considering it's not detected reliably. And also\n> >>> that even if we happen to find the row the associated TOAST data may\n> >>> have already been removed. So why would this matter?\n> >>\n> >> Here, we are trying to tackle the case where the row is 'recently'\n> >> deleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\n> >> want to opt for a different resolution in such a case as against the\n> >> one where the corresponding row was not even present in the first\n> >> place. The case where the row was deleted long back may not fall into\n> >> this category as there are higher chances that they have been removed\n> >> by vacuum and can be considered equivalent to the update_ missing\n> >> case.\n> >>\n> >> Regarding \"TOAST column\" for deleted row cases, we may need to dig\n> >> more. Thanks for bringing this case. Let me analyze more here.\n> >>\n> > I tested a simple case with a table with one TOAST column and found\n> > that when a tuple with a TOAST column is deleted, both the tuple and\n> > corresponding pg_toast entries are marked as ‘deleted’ (dead) but not\n> > removed immediately. The main tuple and respective pg_toast entry are\n> > permanently deleted only during vacuum. First, the main table’s dead\n> > tuples are vacuumed, followed by the secondary TOAST relation ones (if\n> > available).\n> > Please let us know if you have a specific scenario in mind where the\n> > TOAST column data is deleted immediately upon ‘delete’ operation,\n> > rather than during vacuum, which we are missing.\n> >\n>\n> I'm pretty sure you can vacuum the TOAST table directly, which means\n> you'll end up with a deleted tuple with TOAST pointers, but with the\n> TOAST entries already gone.\n>\n\nIt is true that for a deleted row, its toast entries can be vacuumed\nearlier than the original/parent row, but we do not need to be\nconcerned about that to raise 'update_deleted'. To raise an\n'update_deleted' conflict, it is sufficient to know that the row has\nbeen deleted and not yet vacuumed, regardless of the presence or\nabsence of its toast entries. Once this is determined, we need to\nbuild the tuple from remote data and apply it (provided resolver is\nsuch that). 
If the tuple cannot be fully constructed from the remote\ndata, the apply operation will either be skipped or an error will be\nraised, depending on whether the user has chosen the apply_or_skip or\napply_or_error option.\n\nIn cases where the table has toast columns but the remote data does\nnot include the toast-column entry (when the toast column is\nunmodified and not part of the replica identity), the resolution for\n'update_deleted' will be no worse than for 'update_missing'. That is,\nfor both the cases, we can not construct full tuple and thus the\noperation either needs to be skipped or error needs to be raised.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 10 Jun 2024 16:28:59 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On 6/10/24 10:54, Amit Kapila wrote:\n> On Fri, Jun 7, 2024 at 6:08 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 5/27/24 07:48, shveta malik wrote:\n>>> On Sat, May 25, 2024 at 2:39 AM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>>\n>>>> Which architecture are you aiming for? Here you talk about multiple\n>>>> providers, but the wiki page mentions active-active. I'm not sure how\n>>>> much this matters, but it might.\n>>>\n>>> Currently, we are working for multi providers case but ideally it\n>>> should work for active-active also. During further discussion and\n>>> implementation phase, if we find that, there are cases which will not\n>>> work in straight-forward way for active-active, then our primary focus\n>>> will remain to first implement it for multiple providers architecture.\n>>>\n>>>>\n>>>> Also, what kind of consistency you expect from this? Because none of\n>>>> these simple conflict resolution methods can give you the regular\n>>>> consistency models we're used to, AFAICS.\n>>>\n>>> Can you please explain a little bit more on this.\n>>>\n>>\n>> I was referring to the well established consistency models / isolation\n>> levels, e.g. READ COMMITTED or SNAPSHOT ISOLATION. This determines what\n>> guarantees the application developer can expect, what anomalies can\n>> happen, etc.\n>>\n>> I don't think any such isolation level can be implemented with a simple\n>> conflict resolution methods like last-update-wins etc. For example,\n>> consider an active-active where both nodes do\n>>\n>> UPDATE accounts SET balance=balance+1000 WHERE id=1\n>>\n>> This will inevitably lead to a conflict, and while the last-update-wins\n>> resolves this \"consistently\" on both nodes (e.g. ending with the same\n>> result), it's essentially a lost update.\n>>\n> \n> The idea to solve such conflicts is using the delta apply technique\n> where the delta from both sides will be applied to the respective\n> columns. We do plan to target this as a separate patch. Now, if the\n> basic conflict resolution and delta apply both can't go in one\n> release, we shall document such cases clearly to avoid misuse of the\n> feature.\n> \n\nPerhaps, but it's not like having delta conflict resolution (or even\nCRDT as a more generic variant) would lead to a regular consistency\nmodel in a distributed system. At least I don't think it can achieve\nthat, because of the asynchronicity.\n\nConsider a table with \"CHECK (amount < 1000)\" constraint, and an update\nthat sets (amount = amount + 900) on two nodes. 
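Spelled out (hypothetical table, both nodes starting from the same
row):

  CREATE TABLE accounts (
    id      int PRIMARY KEY,
    amount  int CHECK (amount < 1000)
  );
  INSERT INTO accounts VALUES (1, 50);

  -- concurrently, on both node A and node B:
  UPDATE accounts SET amount = amount + 900 WHERE id = 1;

  -- each node ends up with amount = 950 locally, but applying the
  -- remote delta (+900) on top of that would give 1850 and violate
  -- the CHECK constraint
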
AFAIK there's no way to\nreconcile this using delta (or any other other) conflict resolution.\n\nWhich does not mean we should not have some form of conflict resolution,\nas long as we know what the goal is. I simply don't want to spend time\nworking on this, add a lot of complex code, and then realize it doesn't\ngive us a consistency model that makes sense.\n\nWhich leads me back to my original question - what is the consistency\nmodel this you expect to get from this (possibly when combined with some\nother pieces?)?\n\n>> This is a very simplistic example of course, I recall there are various\n>> more complex examples involving foreign keys, multi-table transactions,\n>> constraints, etc. But in principle it's a manifestation of the same\n>> inherent limitation of conflict detection and resolution etc.\n>>\n>> Similarly, I believe this affects not just active-active, but also the\n>> case where one node aggregates data from multiple publishers. Maybe not\n>> to the same extent / it might be fine for that use case,\n>>\n> \n> I am not sure how much it is a problem for general logical replication\n> solution but we do intend to work on solving such problems in\n> step-wise manner. Trying to attempt everything in one patch doesn't\n> seem advisable to me.\n> \n\nI didn't say it needs to be done in one patch. I asked for someone to\nexplain what is the goal - consistency model observed by the users.\n\n>>\n> but you said\n>> the end goal is to use this for active-active. So I'm wondering what's\n>> the plan, there.\n>>\n> \n> I think at this stage we are not ready for active-active because\n> leaving aside this feature we need many other features like\n> replication of all commands/objects (DDL replication, replicate large\n> objects, etc.), Global sequences, some sort of global two_phase\n> transaction management for data consistency, etc. So, it would be\n> better to consider logical replication cases intending to extend it\n> for active-active when we have other required pieces.\n> \n\nWe're not ready for active-active, sure. And I'm not saying a conflict\nresolution would make us ready. The question is what consistency model\nwe'd like to get from the active-active, and whether conflict resolution\ncan get us there ...\n\nAs for the other missing bits (DDL replication, large objects, global\nsequences), I think those are somewhat independent of the question I'm\nasking. And some of the stuff is also somewhat optional - for example I\nthink it'd be fine to not support large objects or global sequences.\n\n>> If I'm writing an application for active-active using this conflict\n>> handling, what assumptions can I make? Will Can I just do stuff as if on\n>> a single node, or do I need to be super conscious about the zillion ways\n>> things can misbehave in a distributed system?\n>>\n>> My personal opinion is that the closer this will be to the regular\n>> consistency levels, the better. If past experience taught me anything,\n>> it's very hard to predict how distributed systems with eventual\n>> consistency behave, and even harder to actually test the application in\n>> such environment.\n>>\n> \n> I don't think in any way this can enable users to start writing\n> applications for active-active workloads. For something like what you\n> are saying, we probably need a global transaction manager (or a global\n> two_pc) as well to allow transactions to behave as they are on\n> single-node or achieve similar consistency levels. 
With such\n> transaction management, we can allow transactions to commit on a node\n> only when it doesn't lead to a conflict on the peer node.\n> \n\nBut the wiki linked in the first message says:\n\n CDR is an important and necessary feature for active-active\n replication.\n\nBut if I understand your response, you're saying active-active should\nprobably use global transaction manager etc. which would prevent\nconflicts - but seems to make CDR unnecessary. Or do I understand it wrong?\n\nFWIW I don't think we'd need global components, there are ways to do\ndistributed snapshots using timestamps (for example), which would give\nus snapshot isolation.\n\n\n>> In any case, if there are any differences compared to the usual\n>> behavior, it needs to be very clearly explained in the docs.\n>>\n> \n> I agree that docs should be clear about the cases that this can and\n> can't support.\n> \n>>>>\n>>>> How is this going to deal with the fact that commit LSN and timestamps\n>>>> may not correlate perfectly? That is, commits may happen with LSN1 <\n>>>> LSN2 but with T1 > T2.\n>>>\n>>> Are you pointing to the issue where a session/txn has taken\n>>> 'xactStopTimestamp' timestamp earlier but is delayed to insert record\n>>> in XLOG, while another session/txn which has taken timestamp slightly\n>>> later succeeded to insert the record IN XLOG sooner than the session1,\n>>> making LSN and Timestamps out of sync? Going by this scenario, the\n>>> commit-timestamp may not be reflective of actual commits and thus\n>>> timestamp-based resolvers may take wrong decisions. Or do you mean\n>>> something else?\n>>>\n>>> If this is the problem you are referring to, then I think this needs a\n>>> fix at the publisher side. Let me think more about it . Kindly let me\n>>> know if you have ideas on how to tackle it.\n>>>\n>>\n>> Yes, this is the issue I'm talking about. We're acquiring the timestamp\n>> when not holding the lock to reserve space in WAL, so the LSN and the\n>> commit LSN may not actually correlate.\n>>\n>> Consider this example I discussed with Amit last week:\n>>\n>> node A:\n>>\n>> XACT1: UPDATE t SET v = 1; LSN1 / T1\n>>\n>> XACT2: UPDATE t SET v = 2; LSN2 / T2\n>>\n>> node B\n>>\n>> XACT3: UPDATE t SET v = 3; LSN3 / T3\n>>\n>> And assume LSN1 < LSN2, T1 > T2 (i.e. the commit timestamp inversion),\n>> and T2 < T3 < T1. Now consider that the messages may arrive in different\n>> orders, due to async replication. Unfortunately, this would lead to\n>> different results of the conflict resolution:\n>>\n>> XACT1 - XACT2 - XACT3 => v=3 (T3 wins)\n>>\n>> XACT3 - XACT1 - XACT2 => v=2 (T2 wins)\n>>\n>> Now, I realize there's a flaw in this example - the (T1 > T2) inversion\n>> can't actually happen, because these transactions have a dependency, and\n>> thus won't commit concurrently. XACT1 will complete the commit, because\n>> XACT2 starts to commit. And with monotonic clock (which is a requirement\n>> for any timestamp-based resolution), that should guarantee (T1 < T2).\n>>\n>> However, I doubt this is sufficient to declare victory. 
It's more likely\n>> that there still are problems, but the examples are likely more complex\n>> (changes to multiple tables, etc.).\n>>\n> \n> Fair enough, I think we need to analyze this more to find actual\n> problems or in some way try to prove that there is no problem.\n> \n>> I vaguely remember there were more issues with timestamp inversion, but\n>> those might have been related to parallel apply etc.\n>>\n> \n> Okay, so considering there are problems due to timestamp inversion, I\n> think the solution to that problem would probably be somehow\n> generating commit LSN and timestamp in order. I don't have a solution\n> at this stage but will think more both on the actual problem and\n> solution. In the meantime, if you get a chance to refer to the place\n> where you have seen such a problem please try to share the same with\n> us. It would be helpful.\n> \n\nI think the solution to this would be to acquire the timestamp while\nreserving the space (because that happens in LSN order). The clock would\nneed to be monotonic (easy enough with CLOCK_MONOTONIC), but also cheap.\nAFAIK this is the main problem why it's being done outside the critical\nsection, because gettimeofday() may be quite expensive. There's a\nconcept of hybrid clock, combining \"time\" and logical counter, which I\nthink might be useful independently of CDR ...\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 10 Jun 2024 13:42:13 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "\n\nOn 6/10/24 12:56, shveta malik wrote:\n> On Fri, Jun 7, 2024 at 6:08 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>>>>>\n>>>>> UPDATE\n>>>>> ================\n>>>>>\n>>>>> Conflict Detection Method:\n>>>>> --------------------------------\n>>>>> Origin conflict detection: The ‘origin’ info is used to detect\n>>>>> conflict which can be obtained from commit-timestamp generated for\n>>>>> incoming txn at the source node. To compare remote’s origin with the\n>>>>> local’s origin, we must have origin information for local txns as well\n>>>>> which can be obtained from commit-timestamp after enabling\n>>>>> ‘track_commit_timestamp’ locally.\n>>>>> The one drawback here is the ‘origin’ information cannot be obtained\n>>>>> once the row is frozen and the commit-timestamp info is removed by\n>>>>> vacuum. For a frozen row, conflicts cannot be raised, and thus the\n>>>>> incoming changes will be applied in all the cases.\n>>>>>\n>>>>> Conflict Types:\n>>>>> ----------------\n>>>>> a) update_differ: The origin of an incoming update's key row differs\n>>>>> from the local row i.e.; the row has already been updated locally or\n>>>>> by different nodes.\n>>>>> b) update_missing: The row with the same value as that incoming\n>>>>> update's key does not exist. Remote is trying to update a row which\n>>>>> does not exist locally.\n>>>>> c) update_deleted: The row with the same value as that incoming\n>>>>> update's key does not exist. The row is already deleted. This conflict\n>>>>> type is generated only if the deleted row is still detectable i.e., it\n>>>>> is not removed by VACUUM yet. If the row is removed by VACUUM already,\n>>>>> it cannot detect this conflict. 
It will detect it as update_missing\n>>>>> and will follow the default or configured resolver of update_missing\n>>>>> itself.\n>>>>>\n>>>>\n>>>> I don't understand the why should update_missing or update_deleted be\n>>>> different, especially considering it's not detected reliably. And also\n>>>> that even if we happen to find the row the associated TOAST data may\n>>>> have already been removed. So why would this matter?\n>>>\n>>> Here, we are trying to tackle the case where the row is 'recently'\n>>> deleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\n>>> want to opt for a different resolution in such a case as against the\n>>> one where the corresponding row was not even present in the first\n>>> place. The case where the row was deleted long back may not fall into\n>>> this category as there are higher chances that they have been removed\n>>> by vacuum and can be considered equivalent to the update_ missing\n>>> case.\n>>>\n>>\n>> My point is that if we can't detect the difference reliably, it's not\n>> very useful. Consider this example:\n>>\n>> Node A:\n>>\n>> T1: INSERT INTO t (id, value) VALUES (1,1);\n>>\n>> T2: DELETE FROM t WHERE id = 1;\n>>\n>> Node B:\n>>\n>> T3: UPDATE t SET value = 2 WHERE id = 1;\n>>\n>> The \"correct\" order of received messages on a third node is T1-T3-T2.\n>> But we may also see T1-T2-T3 and T3-T1-T2, e.g. due to network issues\n>> and so on. For T1-T2-T3 the right decision is to discard the update,\n>> while for T3-T1-T2 it's to either wait for the INSERT or wait for the\n>> insert to arrive.\n>>\n>> But if we misdetect the situation, we either end up with a row that\n>> shouldn't be there, or losing an update.\n> \n> Doesn't the above example indicate that 'update_deleted' should also\n> be considered a necessary conflict type? Please see the possibilities\n> of conflicts in all three cases:\n> \n> \n> The \"correct\" order of receiving messages on node C (as suggested\n> above) is T1-T3-T2 (case1)\n> ----------\n> T1 will insert the row.\n> T3 will have update_differ conflict; latest_timestamp wins or apply\n> will apply it. earliest_timestamp_wins or skip will skip it.\n> T2 will delete the row (irrespective of whether the update happened or not).\n> End Result: No Data.\n> \n> T1-T2-T3\n> ----------\n> T1 will insert the row.\n> T2 will delete the row.\n> T3 will have conflict update_deleted. If it is 'update_deleted', the\n> chances are that the resolver set here is to 'skip' (default is also\n> 'skip' in this case).\n> \n> If vacuum has deleted that row (or if we don't support\n> 'update_deleted' conflict), it will be 'update_missing' conflict. In\n> that case, the user may end up inserting the row if resolver chosen is\n> in favor of apply (which seems an obvious choice for 'update_missing'\n> conflict; default is also 'apply_or_skip').\n> \n> End result:\n> Row inserted with 'update_missing'.\n> Row correctly skipped with 'update_deleted' (assuming the obvious\n> choice seems to be 'skip' for update_deleted case).\n> \n> So it seems that with 'update_deleted' conflict, there are higher\n> chances of opting for right decision here (which is to discard the\n> update), as 'update_deleted' conveys correct info to the user. The\n> 'update_missing' OTOH does not convey correct info and user may end up\n> inserting the data by choosing apply favoring resolvers for\n> 'update_missing'. 
Again, we get benefit of 'update_deleted' for\n> *recently* deleted rows only.\n> \n> T3-T1-T2\n> ----------\n> T3 may end up inserting the record if the resolver is in favor of\n> 'apply' and all the columns are received from remote.\n> T1 will have' insert_exists' conflict and thus may either overwrite\n> 'updated' values or may leave the data as is (based on whether\n> resolver is in favor of apply or not)\n> T2 will end up deleting it.\n> End Result: No Data.\n> \n> I feel for second case (and similar cases), 'update_deleted' serves a\n> better conflict type.\n> \n\nTrue, but this is pretty much just a restatement of the example, right?\n\nThe point I was trying to make is that this hinges on the ability to\ndetect the correct conflict type. And if vacuum can swoop in and remove\nthe recently deleted tuples (which I believe can happen at any time,\nright?), then that's not guaranteed, because we won't see the deleted\ntuple anymore. Or am I missing something?\n\nAlso, can the resolver even convert the UPDATE into INSERT and proceed?\nMaybe with REPLICA IDENTITY FULL? Otherwise the row might be incomplete,\nmissing required columns etc. In which case it'd have to wait for the\nactual INSERT to arrive - which would work for actual update_missing,\nwhere the row may be delayed due to network issues. But if that's a\nmistake due to vacuum removing the deleted tuple, it'll wait forever.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 10 Jun 2024 13:54:57 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jun 10, 2024 at 5:24 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 6/10/24 12:56, shveta malik wrote:\n> > On Fri, Jun 7, 2024 at 6:08 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >>>>>\n> >>>>> UPDATE\n> >>>>> ================\n> >>>>>\n> >>>>> Conflict Detection Method:\n> >>>>> --------------------------------\n> >>>>> Origin conflict detection: The ‘origin’ info is used to detect\n> >>>>> conflict which can be obtained from commit-timestamp generated for\n> >>>>> incoming txn at the source node. To compare remote’s origin with the\n> >>>>> local’s origin, we must have origin information for local txns as well\n> >>>>> which can be obtained from commit-timestamp after enabling\n> >>>>> ‘track_commit_timestamp’ locally.\n> >>>>> The one drawback here is the ‘origin’ information cannot be obtained\n> >>>>> once the row is frozen and the commit-timestamp info is removed by\n> >>>>> vacuum. For a frozen row, conflicts cannot be raised, and thus the\n> >>>>> incoming changes will be applied in all the cases.\n> >>>>>\n> >>>>> Conflict Types:\n> >>>>> ----------------\n> >>>>> a) update_differ: The origin of an incoming update's key row differs\n> >>>>> from the local row i.e.; the row has already been updated locally or\n> >>>>> by different nodes.\n> >>>>> b) update_missing: The row with the same value as that incoming\n> >>>>> update's key does not exist. Remote is trying to update a row which\n> >>>>> does not exist locally.\n> >>>>> c) update_deleted: The row with the same value as that incoming\n> >>>>> update's key does not exist. The row is already deleted. This conflict\n> >>>>> type is generated only if the deleted row is still detectable i.e., it\n> >>>>> is not removed by VACUUM yet. 
If the row is removed by VACUUM already,\n> >>>>> it cannot detect this conflict. It will detect it as update_missing\n> >>>>> and will follow the default or configured resolver of update_missing\n> >>>>> itself.\n> >>>>>\n> >>>>\n> >>>> I don't understand the why should update_missing or update_deleted be\n> >>>> different, especially considering it's not detected reliably. And also\n> >>>> that even if we happen to find the row the associated TOAST data may\n> >>>> have already been removed. So why would this matter?\n> >>>\n> >>> Here, we are trying to tackle the case where the row is 'recently'\n> >>> deleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\n> >>> want to opt for a different resolution in such a case as against the\n> >>> one where the corresponding row was not even present in the first\n> >>> place. The case where the row was deleted long back may not fall into\n> >>> this category as there are higher chances that they have been removed\n> >>> by vacuum and can be considered equivalent to the update_ missing\n> >>> case.\n> >>>\n> >>\n> >> My point is that if we can't detect the difference reliably, it's not\n> >> very useful. Consider this example:\n> >>\n> >> Node A:\n> >>\n> >> T1: INSERT INTO t (id, value) VALUES (1,1);\n> >>\n> >> T2: DELETE FROM t WHERE id = 1;\n> >>\n> >> Node B:\n> >>\n> >> T3: UPDATE t SET value = 2 WHERE id = 1;\n> >>\n> >> The \"correct\" order of received messages on a third node is T1-T3-T2.\n> >> But we may also see T1-T2-T3 and T3-T1-T2, e.g. due to network issues\n> >> and so on. For T1-T2-T3 the right decision is to discard the update,\n> >> while for T3-T1-T2 it's to either wait for the INSERT or wait for the\n> >> insert to arrive.\n> >>\n> >> But if we misdetect the situation, we either end up with a row that\n> >> shouldn't be there, or losing an update.\n> >\n> > Doesn't the above example indicate that 'update_deleted' should also\n> > be considered a necessary conflict type? Please see the possibilities\n> > of conflicts in all three cases:\n> >\n> >\n> > The \"correct\" order of receiving messages on node C (as suggested\n> > above) is T1-T3-T2 (case1)\n> > ----------\n> > T1 will insert the row.\n> > T3 will have update_differ conflict; latest_timestamp wins or apply\n> > will apply it. earliest_timestamp_wins or skip will skip it.\n> > T2 will delete the row (irrespective of whether the update happened or not).\n> > End Result: No Data.\n> >\n> > T1-T2-T3\n> > ----------\n> > T1 will insert the row.\n> > T2 will delete the row.\n> > T3 will have conflict update_deleted. If it is 'update_deleted', the\n> > chances are that the resolver set here is to 'skip' (default is also\n> > 'skip' in this case).\n> >\n> > If vacuum has deleted that row (or if we don't support\n> > 'update_deleted' conflict), it will be 'update_missing' conflict. In\n> > that case, the user may end up inserting the row if resolver chosen is\n> > in favor of apply (which seems an obvious choice for 'update_missing'\n> > conflict; default is also 'apply_or_skip').\n> >\n> > End result:\n> > Row inserted with 'update_missing'.\n> > Row correctly skipped with 'update_deleted' (assuming the obvious\n> > choice seems to be 'skip' for update_deleted case).\n> >\n> > So it seems that with 'update_deleted' conflict, there are higher\n> > chances of opting for right decision here (which is to discard the\n> > update), as 'update_deleted' conveys correct info to the user. 
The\n> > 'update_missing' OTOH does not convey correct info and user may end up\n> > inserting the data by choosing apply favoring resolvers for\n> > 'update_missing'. Again, we get benefit of 'update_deleted' for\n> > *recently* deleted rows only.\n> >\n> > T3-T1-T2\n> > ----------\n> > T3 may end up inserting the record if the resolver is in favor of\n> > 'apply' and all the columns are received from remote.\n> > T1 will have' insert_exists' conflict and thus may either overwrite\n> > 'updated' values or may leave the data as is (based on whether\n> > resolver is in favor of apply or not)\n> > T2 will end up deleting it.\n> > End Result: No Data.\n> >\n> > I feel for second case (and similar cases), 'update_deleted' serves a\n> > better conflict type.\n> >\n>\n> True, but this is pretty much just a restatement of the example, right?\n>\n> The point I was trying to make is that this hinges on the ability to\n> detect the correct conflict type. And if vacuum can swoop in and remove\n> the recently deleted tuples (which I believe can happen at any time,\n> right?), then that's not guaranteed, because we won't see the deleted\n> tuple anymore.\n\nYes, that's correct. However, many cases could benefit from the\nupdate_deleted conflict type if it can be implemented reliably. That's\nwhy we wanted to give it a try. But if we can't achieve predictable\nresults with it, I'm fine to drop this approach and conflict_type. We\ncan consider a better design in the future that doesn't depend on\nnon-vacuumed entries and provides a more robust method for identifying\ndeleted rows.\n\n> Also, can the resolver even convert the UPDATE into INSERT and proceed?\n> Maybe with REPLICA IDENTITY FULL?\n\nYes, it can, as long as the row doesn't contain toasted data. Without\ntoasted data, the new tuple is fully logged. However, if the row does\ncontain toasted data, the new tuple won't log it completely. In such a\ncase, REPLICA IDENTITY FULL becomes a requirement to ensure we have\nall the data necessary to create the row on the target side. In\nabsence of RI full and with row lacking toasted data, the operation\nwill be skipped or error will be raised.\n\n> Otherwise the row might be incomplete,\n> missing required columns etc. In which case it'd have to wait for the\n> actual INSERT to arrive - which would work for actual update_missing,\n> where the row may be delayed due to network issues. But if that's a\n> mistake due to vacuum removing the deleted tuple, it'll wait forever.\n\nEven in case of 'update_missing', we do not intend to wait for 'actual\ninsert' to arrive, as it is not guaranteed if the 'insert' will arrive\nor not. And thus we plan to skip or error out (based on user's\nconfiguration) if a complete row can not be created for insertion.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 11 Jun 2024 14:05:32 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Sat, Jun 8, 2024 at 3:52 PM Amit Kapila <[email protected]> wrote:\n\n> On Fri, Jun 7, 2024 at 5:39 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Thu, Jun 6, 2024 at 5:16 PM Nisha Moond <[email protected]>\n> wrote:\n> >>\n> >> >\n> >>\n> >> Here are more use cases of the \"earliest_timestamp_wins\" resolution\n> method:\n> >> 1) Applications where the record of first occurrence of an event is\n> >> important. 
For example, sensor based applications like earthquake\n> >> detection systems, capturing the first seismic wave's time is crucial.\n> >> 2) Scheduling systems, like appointment booking, prioritize the\n> >> earliest request when handling concurrent ones.\n> >> 3) In contexts where maintaining chronological order is important -\n> >> a) Social media platforms display comments ensuring that the\n> >> earliest ones are visible first.\n> >> b) Finance transaction processing systems rely on timestamps to\n> >> prioritize the processing of transactions, ensuring that the earliest\n> >> transaction is handled first\n> >\n> >\n> > Thanks for sharing examples. However, these scenarios would be handled\n> by the application and not during replication. What we are discussing here\n> is the timestamp when a row was updated/inserted/deleted (or rather when\n> the transaction that updated row committed/became visible) and not a DML on\n> column which is of type timestamp. Some implementations use a hidden\n> timestamp column but that's different from a user column which captures\n> timestamp of (say) an event. The conflict resolution will be based on the\n> timestamp when that column's value was recorded in the database which may\n> be different from the value of the column itself.\n> >\n>\n> It depends on how these operations are performed. For example, the\n> appointment booking system could be prioritized via a transaction\n> updating a row with columns emp_name, emp_id, reserved, time_slot.\n> Now, if two employees at different geographical locations try to book\n> the calendar, the earlier transaction will win.\n>\n\nI doubt that it would be that simple. The application will have to\nintervene and tell one of the employees that their reservation has failed.\nIt looks natural that the first one to reserve the room should get the\nreservation, but implementing that is more complex than resolving a\nconflict in the database. In fact, mostly it will be handled outside\ndatabase.\n\n\n>\n> > If we use the transaction commit timestamp as basis for resolution, a\n> transaction where multiple rows conflict may end up with different rows\n> affected by that transaction being resolved differently. Say three\n> transactions T1, T2 and T3 on separate origins with timestamps t1, t2, and\n> t3 respectively changed rows r1, r2 and r2, r3 and r1, r4 respectively.\n> Changes to r1 and r2 will conflict. Let's say T2 and T3 are applied first\n> and then T1 is applied. If t2 < t1 < t3, r1 will end up with version of T3\n> and r2 will end up with version of T1 after applying all the three\n> transactions.\n> >\n>\n> Are you telling the results based on latest_timestamp_wins? If so,\n> then it is correct. OTOH, if the user has configured\n> \"earliest_timestamp_wins\" resolution method, then we should end up\n> with a version of r1 from T1 because t1 < t3. Also, due to the same\n> reason, we should have version r2 from T2.\n>\n> >\n> Would that introduce an inconsistency between r1 and r2?\n> >\n>\n> As per my understanding, this shouldn't be an inconsistency. Won't it\n> be true even when the transactions are performed on a single node with\n> the same timing?\n>\n>\nThe inconsistency will arise irrespective of conflict resolution method. On\na single system effects of whichever transaction runs last will be visible\nentirely. 
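
For illustration only, the per-row outcome described in that example
can be reproduced on a single node with a crude stand-in for a
latest_timestamp_wins resolver (table, origin labels and timestamps
below are made up; t2 < t1 < t3 and the apply order is T2, T3, T1 as
in the example):

CREATE TABLE r (key text PRIMARY KEY, origin text, ts timestamptz);

-- T2 (t2) changed r2 and r3
INSERT INTO r VALUES ('r2', 'T2', '2024-06-07 10:00'),
                     ('r3', 'T2', '2024-06-07 10:00')
ON CONFLICT (key) DO UPDATE SET origin = EXCLUDED.origin, ts = EXCLUDED.ts
WHERE EXCLUDED.ts > r.ts;

-- T3 (t3) changed r1 and r4
INSERT INTO r VALUES ('r1', 'T3', '2024-06-07 10:02'),
                     ('r4', 'T3', '2024-06-07 10:02')
ON CONFLICT (key) DO UPDATE SET origin = EXCLUDED.origin, ts = EXCLUDED.ts
WHERE EXCLUDED.ts > r.ts;

-- T1 (t1) changed r1 and r2
INSERT INTO r VALUES ('r1', 'T1', '2024-06-07 10:01'),
                     ('r2', 'T1', '2024-06-07 10:01')
ON CONFLICT (key) DO UPDATE SET origin = EXCLUDED.origin, ts = EXCLUDED.ts
WHERE EXCLUDED.ts > r.ts;

SELECT key, origin FROM r ORDER BY key;
-- r1 keeps T3's version (t3 > t1) while r2 ends up with T1's version
-- (t1 > t2): the effects of T1 and T3 are interleaved on a per-row basis.
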
But in the example above the node where T1, T2, and T3 (from\n*different*) origins) are applied, we might end up with a situation where\nsome changes from T1 are applied whereas some changes from T3 are applied.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Sat, Jun 8, 2024 at 3:52 PM Amit Kapila <[email protected]> wrote:On Fri, Jun 7, 2024 at 5:39 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Jun 6, 2024 at 5:16 PM Nisha Moond <[email protected]> wrote:\n>>\n>> >\n>>\n>> Here are more use cases of the \"earliest_timestamp_wins\" resolution method:\n>> 1) Applications where the record of first occurrence of an event is\n>> important. For example, sensor based applications like earthquake\n>> detection systems, capturing the first seismic wave's time is crucial.\n>> 2) Scheduling systems, like appointment booking, prioritize the\n>> earliest request when handling concurrent ones.\n>> 3) In contexts where maintaining chronological order is important -\n>>   a) Social media platforms display comments ensuring that the\n>> earliest ones are visible first.\n>>   b) Finance transaction processing systems rely on timestamps to\n>> prioritize the processing of transactions, ensuring that the earliest\n>> transaction is handled first\n>\n>\n> Thanks for sharing examples. However, these scenarios would be handled by the application and not during replication. What we are discussing here is the timestamp when a row was updated/inserted/deleted (or rather when the transaction that updated row committed/became visible) and not a DML on column which is of type timestamp. Some implementations use a hidden timestamp column but that's different from a user column which captures timestamp of (say) an event. The conflict resolution will be based on the timestamp when that column's value was recorded in the database which may be different from the value of the column itself.\n>\n\nIt depends on how these operations are performed. For example, the\nappointment booking system could be prioritized via a transaction\nupdating a row with columns emp_name, emp_id, reserved, time_slot.\nNow, if two employees at different geographical locations try to book\nthe calendar, the earlier transaction will win.I doubt that it would be that simple. The application will have to intervene and tell one of the employees that their reservation has failed. It looks natural that the first one to reserve the room should get the reservation, but implementing that is more complex than resolving a conflict in the database. In fact, mostly it will be handled outside database. \n\n> If we use the transaction commit timestamp as basis for resolution, a transaction where multiple rows conflict may end up with different rows affected by that transaction being resolved differently. Say three transactions T1, T2 and T3 on separate origins with timestamps t1, t2, and t3 respectively changed rows r1, r2 and r2, r3 and r1, r4 respectively. Changes to r1 and r2 will conflict. Let's say T2 and T3 are applied first and then T1 is applied. If t2 < t1 < t3, r1 will end up with version of T3 and r2 will end up with version of T1 after applying all the three transactions.\n>\n\nAre you telling the results based on latest_timestamp_wins? If so,\nthen it is correct. OTOH, if the user has configured\n\"earliest_timestamp_wins\" resolution method, then we should end up\nwith a version of r1 from T1 because t1 < t3. 
Also, due to the same\nreason, we should have version r2 from T2.\n\n>\n Would that introduce an inconsistency between r1 and r2?\n>\n\nAs per my understanding, this shouldn't be an inconsistency. Won't it\nbe true even when the transactions are performed on a single node with\nthe same timing?\nThe inconsistency will arise irrespective of conflict resolution method. On a single system effects of whichever transaction runs last will be visible entirely. But in the example above the node where T1, T2, and T3 (from *different*) origins) are applied, we might end up with a situation where some changes from T1 are applied whereas some changes from T3 are applied. -- Best Wishes,Ashutosh Bapat", "msg_date": "Tue, 11 Jun 2024 15:12:30 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "\n\nOn 6/11/24 10:35, shveta malik wrote:\n> On Mon, Jun 10, 2024 at 5:24 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>>\n>>\n>> On 6/10/24 12:56, shveta malik wrote:\n>>> On Fri, Jun 7, 2024 at 6:08 PM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>>\n>>>>>>>\n>>>>>>> UPDATE\n>>>>>>> ================\n>>>>>>>\n>>>>>>> Conflict Detection Method:\n>>>>>>> --------------------------------\n>>>>>>> Origin conflict detection: The ‘origin’ info is used to detect\n>>>>>>> conflict which can be obtained from commit-timestamp generated for\n>>>>>>> incoming txn at the source node. To compare remote’s origin with the\n>>>>>>> local’s origin, we must have origin information for local txns as well\n>>>>>>> which can be obtained from commit-timestamp after enabling\n>>>>>>> ‘track_commit_timestamp’ locally.\n>>>>>>> The one drawback here is the ‘origin’ information cannot be obtained\n>>>>>>> once the row is frozen and the commit-timestamp info is removed by\n>>>>>>> vacuum. For a frozen row, conflicts cannot be raised, and thus the\n>>>>>>> incoming changes will be applied in all the cases.\n>>>>>>>\n>>>>>>> Conflict Types:\n>>>>>>> ----------------\n>>>>>>> a) update_differ: The origin of an incoming update's key row differs\n>>>>>>> from the local row i.e.; the row has already been updated locally or\n>>>>>>> by different nodes.\n>>>>>>> b) update_missing: The row with the same value as that incoming\n>>>>>>> update's key does not exist. Remote is trying to update a row which\n>>>>>>> does not exist locally.\n>>>>>>> c) update_deleted: The row with the same value as that incoming\n>>>>>>> update's key does not exist. The row is already deleted. This conflict\n>>>>>>> type is generated only if the deleted row is still detectable i.e., it\n>>>>>>> is not removed by VACUUM yet. If the row is removed by VACUUM already,\n>>>>>>> it cannot detect this conflict. It will detect it as update_missing\n>>>>>>> and will follow the default or configured resolver of update_missing\n>>>>>>> itself.\n>>>>>>>\n>>>>>>\n>>>>>> I don't understand the why should update_missing or update_deleted be\n>>>>>> different, especially considering it's not detected reliably. And also\n>>>>>> that even if we happen to find the row the associated TOAST data may\n>>>>>> have already been removed. So why would this matter?\n>>>>>\n>>>>> Here, we are trying to tackle the case where the row is 'recently'\n>>>>> deleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\n>>>>> want to opt for a different resolution in such a case as against the\n>>>>> one where the corresponding row was not even present in the first\n>>>>> place. 
The case where the row was deleted long back may not fall into\n>>>>> this category as there are higher chances that they have been removed\n>>>>> by vacuum and can be considered equivalent to the update_ missing\n>>>>> case.\n>>>>>\n>>>>\n>>>> My point is that if we can't detect the difference reliably, it's not\n>>>> very useful. Consider this example:\n>>>>\n>>>> Node A:\n>>>>\n>>>> T1: INSERT INTO t (id, value) VALUES (1,1);\n>>>>\n>>>> T2: DELETE FROM t WHERE id = 1;\n>>>>\n>>>> Node B:\n>>>>\n>>>> T3: UPDATE t SET value = 2 WHERE id = 1;\n>>>>\n>>>> The \"correct\" order of received messages on a third node is T1-T3-T2.\n>>>> But we may also see T1-T2-T3 and T3-T1-T2, e.g. due to network issues\n>>>> and so on. For T1-T2-T3 the right decision is to discard the update,\n>>>> while for T3-T1-T2 it's to either wait for the INSERT or wait for the\n>>>> insert to arrive.\n>>>>\n>>>> But if we misdetect the situation, we either end up with a row that\n>>>> shouldn't be there, or losing an update.\n>>>\n>>> Doesn't the above example indicate that 'update_deleted' should also\n>>> be considered a necessary conflict type? Please see the possibilities\n>>> of conflicts in all three cases:\n>>>\n>>>\n>>> The \"correct\" order of receiving messages on node C (as suggested\n>>> above) is T1-T3-T2 (case1)\n>>> ----------\n>>> T1 will insert the row.\n>>> T3 will have update_differ conflict; latest_timestamp wins or apply\n>>> will apply it. earliest_timestamp_wins or skip will skip it.\n>>> T2 will delete the row (irrespective of whether the update happened or not).\n>>> End Result: No Data.\n>>>\n>>> T1-T2-T3\n>>> ----------\n>>> T1 will insert the row.\n>>> T2 will delete the row.\n>>> T3 will have conflict update_deleted. If it is 'update_deleted', the\n>>> chances are that the resolver set here is to 'skip' (default is also\n>>> 'skip' in this case).\n>>>\n>>> If vacuum has deleted that row (or if we don't support\n>>> 'update_deleted' conflict), it will be 'update_missing' conflict. In\n>>> that case, the user may end up inserting the row if resolver chosen is\n>>> in favor of apply (which seems an obvious choice for 'update_missing'\n>>> conflict; default is also 'apply_or_skip').\n>>>\n>>> End result:\n>>> Row inserted with 'update_missing'.\n>>> Row correctly skipped with 'update_deleted' (assuming the obvious\n>>> choice seems to be 'skip' for update_deleted case).\n>>>\n>>> So it seems that with 'update_deleted' conflict, there are higher\n>>> chances of opting for right decision here (which is to discard the\n>>> update), as 'update_deleted' conveys correct info to the user. The\n>>> 'update_missing' OTOH does not convey correct info and user may end up\n>>> inserting the data by choosing apply favoring resolvers for\n>>> 'update_missing'. 
Again, we get benefit of 'update_deleted' for\n>>> *recently* deleted rows only.\n>>>\n>>> T3-T1-T2\n>>> ----------\n>>> T3 may end up inserting the record if the resolver is in favor of\n>>> 'apply' and all the columns are received from remote.\n>>> T1 will have' insert_exists' conflict and thus may either overwrite\n>>> 'updated' values or may leave the data as is (based on whether\n>>> resolver is in favor of apply or not)\n>>> T2 will end up deleting it.\n>>> End Result: No Data.\n>>>\n>>> I feel for second case (and similar cases), 'update_deleted' serves a\n>>> better conflict type.\n>>>\n>>\n>> True, but this is pretty much just a restatement of the example, right?\n>>\n>> The point I was trying to make is that this hinges on the ability to\n>> detect the correct conflict type. And if vacuum can swoop in and remove\n>> the recently deleted tuples (which I believe can happen at any time,\n>> right?), then that's not guaranteed, because we won't see the deleted\n>> tuple anymore.\n> \n> Yes, that's correct. However, many cases could benefit from the\n> update_deleted conflict type if it can be implemented reliably. That's\n> why we wanted to give it a try. But if we can't achieve predictable\n> results with it, I'm fine to drop this approach and conflict_type. We\n> can consider a better design in the future that doesn't depend on\n> non-vacuumed entries and provides a more robust method for identifying\n> deleted rows.\n> \n\nI agree having a separate update_deleted conflict would be beneficial,\nI'm not arguing against that - my point is actually that I think this\nconflict type is required, and that it needs to be detected reliably.\n\nI'm not sure dropping update_deleted entirely would be a good idea,\nthough. It pretty much guarantees making the wrong decision at least\nsometimes. But at least it's predictable and users are more likely to\nnotice that (compared to update_delete working on well-behaving systems,\nand then failing when a node starts lagging or something).\n\nThat's my opinion, though, and I don't intend to stay in the way. But I\nthink the solution is not that difficult - something needs to prevent\ncleanup of recently dead tuples (until the \"relevant\" changes are\nreceived and applied from other nodes). I don't know if that could be\ndone based on information we have for subscriptions, or if we need\nsomething new.\n\n>> Also, can the resolver even convert the UPDATE into INSERT and proceed?\n>> Maybe with REPLICA IDENTITY FULL?\n> \n> Yes, it can, as long as the row doesn't contain toasted data. Without\n> toasted data, the new tuple is fully logged. However, if the row does\n> contain toasted data, the new tuple won't log it completely. In such a\n> case, REPLICA IDENTITY FULL becomes a requirement to ensure we have\n> all the data necessary to create the row on the target side. In\n> absence of RI full and with row lacking toasted data, the operation\n> will be skipped or error will be raised.\n> \n>> Otherwise the row might be incomplete,\n>> missing required columns etc. In which case it'd have to wait for the\n>> actual INSERT to arrive - which would work for actual update_missing,\n>> where the row may be delayed due to network issues. But if that's a\n>> mistake due to vacuum removing the deleted tuple, it'll wait forever.\n> \n> Even in case of 'update_missing', we do not intend to wait for 'actual\n> insert' to arrive, as it is not guaranteed if the 'insert' will arrive\n> or not. 
And thus we plan to skip or error out (based on user's\n> configuration) if a complete row can not be created for insertion.\n> \n\nIf the UPDATE contains all the columns and can be turned into an INSERT,\nthen that seems reasonable. But I don't see how skipping it could work\nin general (except for some very simple / specific use cases). I'm not\nsure if you suggest to skip just the one UPDATE or transaction as a\nwhole, but it seems to me either of those options could easily lead to\nall kinds of inconsistencies and user confusion.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 11 Jun 2024 16:14:09 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 11, 2024 at 7:44 PM Tomas Vondra\n<[email protected]> wrote:\n\n> > Yes, that's correct. However, many cases could benefit from the\n> > update_deleted conflict type if it can be implemented reliably. That's\n> > why we wanted to give it a try. But if we can't achieve predictable\n> > results with it, I'm fine to drop this approach and conflict_type. We\n> > can consider a better design in the future that doesn't depend on\n> > non-vacuumed entries and provides a more robust method for identifying\n> > deleted rows.\n> >\n>\n> I agree having a separate update_deleted conflict would be beneficial,\n> I'm not arguing against that - my point is actually that I think this\n> conflict type is required, and that it needs to be detected reliably.\n>\n\nWhen working with a distributed system, we must accept some form of\neventual consistency model. However, it's essential to design a\npredictable and acceptable behavior. For example, if a change is a\nresult of a previous operation (such as an update on node B triggered\nafter observing an operation on node A), we can say that the operation\non node A happened before the operation on node B. Conversely, if\noperations on nodes A and B are independent, we consider them\nconcurrent.\n\nIn distributed systems, clock skew is a known issue. To establish a\nconsistency model, we need to ensure it guarantees the\n\"happens-before\" relationship. Consider a scenario with three nodes:\nNodeA, NodeB, and NodeC. If NodeA sends changes to NodeB, and\nsubsequently NodeB makes changes, and then both NodeA's and NodeB's\nchanges are sent to NodeC, the clock skew might make NodeB's changes\nappear to have occurred before NodeA's changes. However, we should\nmaintain data that indicates NodeB's changes were triggered after\nNodeA's changes arrived at NodeB. This implies that logically, NodeB's\nchanges happened after NodeA's changes, despite what the timestamps\nsuggest.\n\nA common method to handle such cases is using vector clocks for\nconflict resolution. \"Vector clocks\" allow us to track the causal\nrelationships between changes across nodes, ensuring that we can\ncorrectly order events and resolve conflicts in a manner that respects\nthe \"happens-before\" relationship. 
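
For illustration of the comparison rule only (not a proposal for how
such clocks would be stored or wired into the apply worker), a vector
clock can be kept as one counter per node and compared like this; the
fixed array layout and the function below are assumptions made up for
the example:

-- position i of the array holds node i's counter; equal lengths assumed
CREATE FUNCTION vc_happens_before(a int[], b int[]) RETURNS boolean
LANGUAGE sql IMMUTABLE AS $$
  -- a happened before b iff every component of a is <= the corresponding
  -- component of b and at least one is strictly smaller
  SELECT bool_and(av <= bv) AND bool_or(av < bv)
  FROM unnest(a, b) AS u(av, bv)
$$;

-- NodeB updated the row after seeing NodeA's change: causally ordered
SELECT vc_happens_before(ARRAY[1,0,0], ARRAY[1,1,0]);  -- true

-- neither clock dominates the other: the changes are concurrent, and
-- only then does a tie-breaker (commit timestamp, node id, ...) decide
SELECT vc_happens_before(ARRAY[2,0,0], ARRAY[1,1,0]),  -- false
       vc_happens_before(ARRAY[1,1,0], ARRAY[2,0,0]);  -- false
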
This method helps maintain\nconsistency and predictability in the system despite issues like clock\nskew.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:02:50 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jun 10, 2024 at 5:12 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 6/10/24 10:54, Amit Kapila wrote:\n> > On Fri, Jun 7, 2024 at 6:08 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> On 5/27/24 07:48, shveta malik wrote:\n> >>> On Sat, May 25, 2024 at 2:39 AM Tomas Vondra\n> >>> <[email protected]> wrote:\n> >>>>\n> >>>> Which architecture are you aiming for? Here you talk about multiple\n> >>>> providers, but the wiki page mentions active-active. I'm not sure how\n> >>>> much this matters, but it might.\n> >>>\n> >>> Currently, we are working for multi providers case but ideally it\n> >>> should work for active-active also. During further discussion and\n> >>> implementation phase, if we find that, there are cases which will not\n> >>> work in straight-forward way for active-active, then our primary focus\n> >>> will remain to first implement it for multiple providers architecture.\n> >>>\n> >>>>\n> >>>> Also, what kind of consistency you expect from this? Because none of\n> >>>> these simple conflict resolution methods can give you the regular\n> >>>> consistency models we're used to, AFAICS.\n> >>>\n> >>> Can you please explain a little bit more on this.\n> >>>\n> >>\n> >> I was referring to the well established consistency models / isolation\n> >> levels, e.g. READ COMMITTED or SNAPSHOT ISOLATION. This determines what\n> >> guarantees the application developer can expect, what anomalies can\n> >> happen, etc.\n> >>\n> >> I don't think any such isolation level can be implemented with a simple\n> >> conflict resolution methods like last-update-wins etc. For example,\n> >> consider an active-active where both nodes do\n> >>\n> >> UPDATE accounts SET balance=balance+1000 WHERE id=1\n> >>\n> >> This will inevitably lead to a conflict, and while the last-update-wins\n> >> resolves this \"consistently\" on both nodes (e.g. ending with the same\n> >> result), it's essentially a lost update.\n> >>\n> >\n> > The idea to solve such conflicts is using the delta apply technique\n> > where the delta from both sides will be applied to the respective\n> > columns. We do plan to target this as a separate patch. Now, if the\n> > basic conflict resolution and delta apply both can't go in one\n> > release, we shall document such cases clearly to avoid misuse of the\n> > feature.\n> >\n>\n> Perhaps, but it's not like having delta conflict resolution (or even\n> CRDT as a more generic variant) would lead to a regular consistency\n> model in a distributed system. At least I don't think it can achieve\n> that, because of the asynchronicity.\n>\n> Consider a table with \"CHECK (amount < 1000)\" constraint, and an update\n> that sets (amount = amount + 900) on two nodes. AFAIK there's no way to\n> reconcile this using delta (or any other other) conflict resolution.\n>\n\nRight, in such a case an error will be generated and I agree that we\ncan't always reconcile the updates on different nodes and some data\nloss is unavoidable with or without conflict resolution.\n\n> Which does not mean we should not have some form of conflict resolution,\n> as long as we know what the goal is. 
I simply don't want to spend time\n> working on this, add a lot of complex code, and then realize it doesn't\n> give us a consistency model that makes sense.\n>\n> Which leads me back to my original question - what is the consistency\n> model this you expect to get from this (possibly when combined with some\n> other pieces?)?\n>\n\nI don't think this feature per se (or some additional features like\ndelta apply) can help with improving/changing the consistency model\nour current logical replication module provides (which as per my\nunderstanding is an eventual consistency model). This feature will\nhelp with reducing the number of cases where manual intervention is\nrequired with configurable way to resolve conflicts. For example, for\nprimary key violation ERRORs, or when we intentionally overwrite the\ndata even when there is conflicting data present from different\norigin, or for cases we simply skip the remote data when there is a\nconflict in the local node.\n\nTo achieve consistent reads on all nodes we either need a distributed\ntransaction using a two-phase commit with some sort of quorum\nprotocol, or a sharded database with multiple primaries each\nresponsible for a unique partition of the data, or some other way. The\ncurrent proposal doesn't intend to implement any of those.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 Jun 2024 11:22:39 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 11, 2024 at 7:44 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 6/11/24 10:35, shveta malik wrote:\n> > On Mon, Jun 10, 2024 at 5:24 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >>\n> >>\n> >> On 6/10/24 12:56, shveta malik wrote:\n> >>> On Fri, Jun 7, 2024 at 6:08 PM Tomas Vondra\n> >>> <[email protected]> wrote:\n> >>>>\n> >>>>>>>\n> >>>>>>> UPDATE\n> >>>>>>> ================\n> >>>>>>>\n> >>>>>>> Conflict Detection Method:\n> >>>>>>> --------------------------------\n> >>>>>>> Origin conflict detection: The ‘origin’ info is used to detect\n> >>>>>>> conflict which can be obtained from commit-timestamp generated for\n> >>>>>>> incoming txn at the source node. To compare remote’s origin with the\n> >>>>>>> local’s origin, we must have origin information for local txns as well\n> >>>>>>> which can be obtained from commit-timestamp after enabling\n> >>>>>>> ‘track_commit_timestamp’ locally.\n> >>>>>>> The one drawback here is the ‘origin’ information cannot be obtained\n> >>>>>>> once the row is frozen and the commit-timestamp info is removed by\n> >>>>>>> vacuum. For a frozen row, conflicts cannot be raised, and thus the\n> >>>>>>> incoming changes will be applied in all the cases.\n> >>>>>>>\n> >>>>>>> Conflict Types:\n> >>>>>>> ----------------\n> >>>>>>> a) update_differ: The origin of an incoming update's key row differs\n> >>>>>>> from the local row i.e.; the row has already been updated locally or\n> >>>>>>> by different nodes.\n> >>>>>>> b) update_missing: The row with the same value as that incoming\n> >>>>>>> update's key does not exist. Remote is trying to update a row which\n> >>>>>>> does not exist locally.\n> >>>>>>> c) update_deleted: The row with the same value as that incoming\n> >>>>>>> update's key does not exist. The row is already deleted. This conflict\n> >>>>>>> type is generated only if the deleted row is still detectable i.e., it\n> >>>>>>> is not removed by VACUUM yet. 
If the row is removed by VACUUM already,\n> >>>>>>> it cannot detect this conflict. It will detect it as update_missing\n> >>>>>>> and will follow the default or configured resolver of update_missing\n> >>>>>>> itself.\n> >>>>>>>\n> >>>>>>\n> >>>>>> I don't understand the why should update_missing or update_deleted be\n> >>>>>> different, especially considering it's not detected reliably. And also\n> >>>>>> that even if we happen to find the row the associated TOAST data may\n> >>>>>> have already been removed. So why would this matter?\n> >>>>>\n> >>>>> Here, we are trying to tackle the case where the row is 'recently'\n> >>>>> deleted i.e. concurrent UPDATE and DELETE on pub and sub. User may\n> >>>>> want to opt for a different resolution in such a case as against the\n> >>>>> one where the corresponding row was not even present in the first\n> >>>>> place. The case where the row was deleted long back may not fall into\n> >>>>> this category as there are higher chances that they have been removed\n> >>>>> by vacuum and can be considered equivalent to the update_ missing\n> >>>>> case.\n> >>>>>\n> >>>>\n> >>>> My point is that if we can't detect the difference reliably, it's not\n> >>>> very useful. Consider this example:\n> >>>>\n> >>>> Node A:\n> >>>>\n> >>>> T1: INSERT INTO t (id, value) VALUES (1,1);\n> >>>>\n> >>>> T2: DELETE FROM t WHERE id = 1;\n> >>>>\n> >>>> Node B:\n> >>>>\n> >>>> T3: UPDATE t SET value = 2 WHERE id = 1;\n> >>>>\n> >>>> The \"correct\" order of received messages on a third node is T1-T3-T2.\n> >>>> But we may also see T1-T2-T3 and T3-T1-T2, e.g. due to network issues\n> >>>> and so on. For T1-T2-T3 the right decision is to discard the update,\n> >>>> while for T3-T1-T2 it's to either wait for the INSERT or wait for the\n> >>>> insert to arrive.\n> >>>>\n> >>>> But if we misdetect the situation, we either end up with a row that\n> >>>> shouldn't be there, or losing an update.\n> >>>\n> >>> Doesn't the above example indicate that 'update_deleted' should also\n> >>> be considered a necessary conflict type? Please see the possibilities\n> >>> of conflicts in all three cases:\n> >>>\n> >>>\n> >>> The \"correct\" order of receiving messages on node C (as suggested\n> >>> above) is T1-T3-T2 (case1)\n> >>> ----------\n> >>> T1 will insert the row.\n> >>> T3 will have update_differ conflict; latest_timestamp wins or apply\n> >>> will apply it. earliest_timestamp_wins or skip will skip it.\n> >>> T2 will delete the row (irrespective of whether the update happened or not).\n> >>> End Result: No Data.\n> >>>\n> >>> T1-T2-T3\n> >>> ----------\n> >>> T1 will insert the row.\n> >>> T2 will delete the row.\n> >>> T3 will have conflict update_deleted. If it is 'update_deleted', the\n> >>> chances are that the resolver set here is to 'skip' (default is also\n> >>> 'skip' in this case).\n> >>>\n> >>> If vacuum has deleted that row (or if we don't support\n> >>> 'update_deleted' conflict), it will be 'update_missing' conflict. 
In\n> >>> that case, the user may end up inserting the row if resolver chosen is\n> >>> in favor of apply (which seems an obvious choice for 'update_missing'\n> >>> conflict; default is also 'apply_or_skip').\n> >>>\n> >>> End result:\n> >>> Row inserted with 'update_missing'.\n> >>> Row correctly skipped with 'update_deleted' (assuming the obvious\n> >>> choice seems to be 'skip' for update_deleted case).\n> >>>\n> >>> So it seems that with 'update_deleted' conflict, there are higher\n> >>> chances of opting for right decision here (which is to discard the\n> >>> update), as 'update_deleted' conveys correct info to the user. The\n> >>> 'update_missing' OTOH does not convey correct info and user may end up\n> >>> inserting the data by choosing apply favoring resolvers for\n> >>> 'update_missing'. Again, we get benefit of 'update_deleted' for\n> >>> *recently* deleted rows only.\n> >>>\n> >>> T3-T1-T2\n> >>> ----------\n> >>> T3 may end up inserting the record if the resolver is in favor of\n> >>> 'apply' and all the columns are received from remote.\n> >>> T1 will have' insert_exists' conflict and thus may either overwrite\n> >>> 'updated' values or may leave the data as is (based on whether\n> >>> resolver is in favor of apply or not)\n> >>> T2 will end up deleting it.\n> >>> End Result: No Data.\n> >>>\n> >>> I feel for second case (and similar cases), 'update_deleted' serves a\n> >>> better conflict type.\n> >>>\n> >>\n> >> True, but this is pretty much just a restatement of the example, right?\n> >>\n> >> The point I was trying to make is that this hinges on the ability to\n> >> detect the correct conflict type. And if vacuum can swoop in and remove\n> >> the recently deleted tuples (which I believe can happen at any time,\n> >> right?), then that's not guaranteed, because we won't see the deleted\n> >> tuple anymore.\n> >\n> > Yes, that's correct. However, many cases could benefit from the\n> > update_deleted conflict type if it can be implemented reliably. That's\n> > why we wanted to give it a try. But if we can't achieve predictable\n> > results with it, I'm fine to drop this approach and conflict_type. We\n> > can consider a better design in the future that doesn't depend on\n> > non-vacuumed entries and provides a more robust method for identifying\n> > deleted rows.\n> >\n>\n> I agree having a separate update_deleted conflict would be beneficial,\n> I'm not arguing against that - my point is actually that I think this\n> conflict type is required, and that it needs to be detected reliably.\n>\n> I'm not sure dropping update_deleted entirely would be a good idea,\n> though. It pretty much guarantees making the wrong decision at least\n> sometimes. But at least it's predictable and users are more likely to\n> notice that (compared to update_delete working on well-behaving systems,\n> and then failing when a node starts lagging or something).\n>\n> That's my opinion, though, and I don't intend to stay in the way. But I\n> think the solution is not that difficult - something needs to prevent\n> cleanup of recently dead tuples (until the \"relevant\" changes are\n> received and applied from other nodes). I don't know if that could be\n> done based on information we have for subscriptions, or if we need\n> something new.\n\nI agree that without update_deleted, there are higher chances of\nmaking incorrect decisions in some cases. But not sure if relying on\ndelaying vacuum from removing such rows is a full proof plan. 
We\ncannot predict if or when \"relevant\" changes will occur, so how long\nshould we delay the vacuum?\nTo address this problem, we may need a completely different approach.\nOne solution could be to store deleted rows in a separate table\n(dead-rows-table) so we can consult that table for any deleted entries\nat any time. Additionally, we would need methods to purge older data\nfrom the dead-rows-table to prevent it from growing too large. This\nwould be a substantial project on its own, so we can aim to implement\nsome initial and simple conflict resolution methods first before\ntackling this more complex solution.\n\n> >> Also, can the resolver even convert the UPDATE into INSERT and proceed?\n> >> Maybe with REPLICA IDENTITY FULL?\n> >\n> > Yes, it can, as long as the row doesn't contain toasted data. Without\n> > toasted data, the new tuple is fully logged. However, if the row does\n> > contain toasted data, the new tuple won't log it completely. In such a\n> > case, REPLICA IDENTITY FULL becomes a requirement to ensure we have\n> > all the data necessary to create the row on the target side. In\n> > absence of RI full and with row lacking toasted data, the operation\n> > will be skipped or error will be raised.\n> >\n> >> Otherwise the row might be incomplete,\n> >> missing required columns etc. In which case it'd have to wait for the\n> >> actual INSERT to arrive - which would work for actual update_missing,\n> >> where the row may be delayed due to network issues. But if that's a\n> >> mistake due to vacuum removing the deleted tuple, it'll wait forever.\n> >\n> > Even in case of 'update_missing', we do not intend to wait for 'actual\n> > insert' to arrive, as it is not guaranteed if the 'insert' will arrive\n> > or not. And thus we plan to skip or error out (based on user's\n> > configuration) if a complete row can not be created for insertion.\n> >\n>\n> If the UPDATE contains all the columns and can be turned into an INSERT,\n> then that seems reasonable. But I don't see how skipping it could work\n> in general (except for some very simple / specific use cases). I'm not\n> sure if you suggest to skip just the one UPDATE or transaction as a\n> whole, but it seems to me either of those options could easily lead to\n> all kinds of inconsistencies and user confusion.\n\nConflict resolution is row-based, meaning that whatever action we\nchoose (error or skip) applies to the specific change rather than the\nentire transaction. I'm not sure if waiting indefinitely for an INSERT\nto arrive is a good idea, as the node that triggered the INSERT might\nbe down for an extended period. At best, we could provide a\nconfiguration parameter using which the apply worker waits for a\nspecified time period for the INSERT to arrive before either skipping\nor throwing an error.\n\nThat said, even if we error out or skip and log without waiting for\nthe INSERT, we won't introduce any new inconsistencies. This is the\ncurrent behavior on pg-HEAD. 
But with options like apply_or_skip and\napply_or_error, we have a better chance of resolving conflicts by\nconstructing the complete row internally, without user's intervention.\nThere will still be some cases where we can't fully reconstruct the\nrow, but in those instances, the behavior won't be any worse than the\ncurrent pg-HEAD.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:22:40 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "\n\nOn 6/12/24 06:32, Dilip Kumar wrote:\n> On Tue, Jun 11, 2024 at 7:44 PM Tomas Vondra\n> <[email protected]> wrote:\n> \n>>> Yes, that's correct. However, many cases could benefit from the\n>>> update_deleted conflict type if it can be implemented reliably. That's\n>>> why we wanted to give it a try. But if we can't achieve predictable\n>>> results with it, I'm fine to drop this approach and conflict_type. We\n>>> can consider a better design in the future that doesn't depend on\n>>> non-vacuumed entries and provides a more robust method for identifying\n>>> deleted rows.\n>>>\n>>\n>> I agree having a separate update_deleted conflict would be beneficial,\n>> I'm not arguing against that - my point is actually that I think this\n>> conflict type is required, and that it needs to be detected reliably.\n>>\n> \n> When working with a distributed system, we must accept some form of\n> eventual consistency model.\n\nI'm not sure this is necessarily true. There are distributed databases\nimplementing (or aiming to) regular consistency models, without eventual\nconsistency. I'm not saying it's easy, but it shows eventual consistency\nis not the only option.\n\n> However, it's essential to design a\n> predictable and acceptable behavior. For example, if a change is a\n> result of a previous operation (such as an update on node B triggered\n> after observing an operation on node A), we can say that the operation\n> on node A happened before the operation on node B. Conversely, if\n> operations on nodes A and B are independent, we consider them\n> concurrent.\n> \n\nRight. And this is precisely the focus or my questions - understanding\nwhat behavior we aim for / expect in the end. Or said differently, what\nanomalies / weird behavior would be considered expected.\n\nBecause that's important both for discussions about feasibility, etc.\nAnd also for evaluation / reviews of the patch.\n\n> In distributed systems, clock skew is a known issue. To establish a\n> consistency model, we need to ensure it guarantees the\n> \"happens-before\" relationship. Consider a scenario with three nodes:\n> NodeA, NodeB, and NodeC. If NodeA sends changes to NodeB, and\n> subsequently NodeB makes changes, and then both NodeA's and NodeB's\n> changes are sent to NodeC, the clock skew might make NodeB's changes\n> appear to have occurred before NodeA's changes. However, we should\n> maintain data that indicates NodeB's changes were triggered after\n> NodeA's changes arrived at NodeB. This implies that logically, NodeB's\n> changes happened after NodeA's changes, despite what the timestamps\n> suggest.\n> \n> A common method to handle such cases is using vector clocks for\n> conflict resolution. \"Vector clocks\" allow us to track the causal\n> relationships between changes across nodes, ensuring that we can\n> correctly order events and resolve conflicts in a manner that respects\n> the \"happens-before\" relationship. 
This method helps maintain\n> consistency and predictability in the system despite issues like clock\n> skew.\n> \n\nI'm familiar with the concept of vector clock (or logical clock in\ngeneral), but it's not clear to me how you plan to use this in the\ncontext of the conflict handling. Can you elaborate/explain?\n\nThe way I see it, conflict handling is pretty tightly coupled with\nregular commit timestamps and MVCC in general. How would you use vector\nclock to change that?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 12 Jun 2024 13:56:39 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 12, 2024 at 5:26 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> >> I agree having a separate update_deleted conflict would be beneficial,\n> >> I'm not arguing against that - my point is actually that I think this\n> >> conflict type is required, and that it needs to be detected reliably.\n> >>\n> >\n> > When working with a distributed system, we must accept some form of\n> > eventual consistency model.\n>\n> I'm not sure this is necessarily true. There are distributed databases\n> implementing (or aiming to) regular consistency models, without eventual\n> consistency. I'm not saying it's easy, but it shows eventual consistency\n> is not the only option.\n\nRight, that statement might not be completely accurate. Based on the\nCAP theorem, when a network partition is unavoidable and availability\nis expected, we often choose an eventual consistency model. However,\nclaiming that a fully consistent model is impossible in any\ndistributed system is incorrect, as it can be achieved using\nmechanisms like Two-Phase Commit.\n\nWe must also accept that our PostgreSQL replication mechanism does not\nguarantee a fully consistent model. Even with synchronous commit, it\nonly waits for the WAL to be replayed on the standby but does not\nchange the commit decision based on other nodes. This means, at most,\nwe can only guarantee \"Read Your Write\" consistency.\n\n> > However, it's essential to design a\n> > predictable and acceptable behavior. For example, if a change is a\n> > result of a previous operation (such as an update on node B triggered\n> > after observing an operation on node A), we can say that the operation\n> > on node A happened before the operation on node B. Conversely, if\n> > operations on nodes A and B are independent, we consider them\n> > concurrent.\n> >\n>\n> Right. And this is precisely the focus or my questions - understanding\n> what behavior we aim for / expect in the end. Or said differently, what\n> anomalies / weird behavior would be considered expected.\n\n> Because that's important both for discussions about feasibility, etc.\n> And also for evaluation / reviews of the patch.\n\n+1\n\n> > In distributed systems, clock skew is a known issue. To establish a\n> > consistency model, we need to ensure it guarantees the\n> > \"happens-before\" relationship. Consider a scenario with three nodes:\n> > NodeA, NodeB, and NodeC. If NodeA sends changes to NodeB, and\n> > subsequently NodeB makes changes, and then both NodeA's and NodeB's\n> > changes are sent to NodeC, the clock skew might make NodeB's changes\n> > appear to have occurred before NodeA's changes. 
However, we should\n> > maintain data that indicates NodeB's changes were triggered after\n> > NodeA's changes arrived at NodeB. This implies that logically, NodeB's\n> > changes happened after NodeA's changes, despite what the timestamps\n> > suggest.\n> >\n> > A common method to handle such cases is using vector clocks for\n> > conflict resolution. \"Vector clocks\" allow us to track the causal\n> > relationships between changes across nodes, ensuring that we can\n> > correctly order events and resolve conflicts in a manner that respects\n> > the \"happens-before\" relationship. This method helps maintain\n> > consistency and predictability in the system despite issues like clock\n> > skew.\n> >\n>\n> I'm familiar with the concept of vector clock (or logical clock in\n> general), but it's not clear to me how you plan to use this in the\n> context of the conflict handling. Can you elaborate/explain?\n>\n> The way I see it, conflict handling is pretty tightly coupled with\n> regular commit timestamps and MVCC in general. How would you use vector\n> clock to change that?\n\nThe issue with using commit timestamps is that, when multiple nodes\nare involved, the commit timestamp won't accurately represent the\nactual order of operations. There's no reliable way to determine the\nperfect order of each operation happening on different nodes roughly\nsimultaneously unless we use some globally synchronized counter.\nGenerally, that order might not cause real issues unless one operation\nis triggered by a previous operation, and relying solely on physical\ntimestamps would not detect that correctly.\n\nWe need some sort of logical counter, such as a vector clock, which\nmight be an independent counter on each node but can perfectly track\nthe causal order. For example, if NodeA observes an operation from\nNodeB with a counter value of X, NodeA will adjust its counter to X+1.\nThis ensures that if NodeA has seen an operation from NodeB, its next\noperation will appear to have occurred after NodeB's operation.\n\nI admit that I haven't fully thought through how we could design such\nversion tracking in our logical replication protocol or how it would\nfit into our system. However, my point is that we need to consider\nsomething beyond commit timestamps to achieve reliable ordering.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 10:22:01 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 5, 2024 at 3:32 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Hi,\n>\n> This time at PGconf.dev[1], we had some discussions regarding this\n> project. The proposed approach is to split the work into two main\n> components. The first part focuses on conflict detection, which aims to\n> identify and report conflicts in logical replication. This feature will\n> enable users to monitor the unexpected conflicts that may occur. The\n> second part involves the actual conflict resolution. 
Here, we will provide\n> built-in resolutions for each conflict and allow user to choose which\n> resolution will be used for which conflict(as described in the initial\n> email of this thread).\n\nI agree with this direction that we focus on conflict detection (and\nlogging) first and then develop conflict resolution on top of that.\n\n>\n> Of course, we are open to alternative ideas and suggestions, and the\n> strategy above can be changed based on ongoing discussions and feedback\n> received.\n>\n> Here is the patch of the first part work, which adds a new parameter\n> detect_conflict for CREATE and ALTER subscription commands. This new\n> parameter will decide if subscription will go for conflict detection. By\n> default, conflict detection will be off for a subscription.\n>\n> When conflict detection is enabled, additional logging is triggered in the\n> following conflict scenarios:\n>\n> * updating a row that was previously modified by another origin.\n> * The tuple to be updated is not found.\n> * The tuple to be deleted is not found.\n>\n> While there exist other conflict types in logical replication, such as an\n> incoming insert conflicting with an existing row due to a primary key or\n> unique index, these cases already result in constraint violation errors.\n\nWhat does detect_conflict being true actually mean to users? I\nunderstand that detect_conflict being true could introduce some\noverhead to detect conflicts. But in terms of conflict detection, even\nif detect_confict is false, we detect some conflicts such as\nconcurrent inserts with the same key. Once we introduce the complete\nconflict detection feature, I'm not sure there is a case where a user\nwants to detect only some particular types of conflict.\n\n> Therefore, additional conflict detection for these cases is currently\n> omitted to minimize potential overhead. However, the pre-detection for\n> conflict in these error cases is still essential to support automatic\n> conflict resolution in the future.\n\nI feel that we should log all types of conflict in an uniform way. For\nexample, with detect_conflict being true, the update_differ conflict\nis reported as \"conflict %s detected on relation \"%s\"\", whereas\nconcurrent inserts with the same key is reported as \"duplicate key\nvalue violates unique constraint \"%s\"\", which could confuse users.\nIdeally, I think that we log such conflict detection details (table\nname, column name, conflict type, etc) to somewhere (e.g. a table or\nserver logs) so that the users can resolve them manually.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 15:11:21 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 13, 2024 at 11:41 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 3:32 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > This time at PGconf.dev[1], we had some discussions regarding this\n> > project. The proposed approach is to split the work into two main\n> > components. The first part focuses on conflict detection, which aims to\n> > identify and report conflicts in logical replication. This feature will\n> > enable users to monitor the unexpected conflicts that may occur. The\n> > second part involves the actual conflict resolution. 
Here, we will provide\n> > built-in resolutions for each conflict and allow user to choose which\n> > resolution will be used for which conflict(as described in the initial\n> > email of this thread).\n>\n> I agree with this direction that we focus on conflict detection (and\n> logging) first and then develop conflict resolution on top of that.\n>\n> >\n> > Of course, we are open to alternative ideas and suggestions, and the\n> > strategy above can be changed based on ongoing discussions and feedback\n> > received.\n> >\n> > Here is the patch of the first part work, which adds a new parameter\n> > detect_conflict for CREATE and ALTER subscription commands. This new\n> > parameter will decide if subscription will go for conflict detection. By\n> > default, conflict detection will be off for a subscription.\n> >\n> > When conflict detection is enabled, additional logging is triggered in the\n> > following conflict scenarios:\n> >\n> > * updating a row that was previously modified by another origin.\n> > * The tuple to be updated is not found.\n> > * The tuple to be deleted is not found.\n> >\n> > While there exist other conflict types in logical replication, such as an\n> > incoming insert conflicting with an existing row due to a primary key or\n> > unique index, these cases already result in constraint violation errors.\n>\n> What does detect_conflict being true actually mean to users? I\n> understand that detect_conflict being true could introduce some\n> overhead to detect conflicts. But in terms of conflict detection, even\n> if detect_confict is false, we detect some conflicts such as\n> concurrent inserts with the same key. Once we introduce the complete\n> conflict detection feature, I'm not sure there is a case where a user\n> wants to detect only some particular types of conflict.\n>\n\nYou are right that users would wish to detect the conflicts and\nprobably the extra effort would only be in the 'update_differ' case\nwhere we need to consult committs module and that we will only do when\n'track_commit_timestamp' is true. BTW, I think for Inserts with\nprimary/unique key violation, we should catch the ERROR and log it. If\nwe want to log the conflicts in a separate table then do we want to do\nthat in the catch block after getting pk violation or do an extra scan\nbefore 'INSERT' to find the conflict? I think logging would need extra\ncost especially if we want to LOG it in some table as you are\nsuggesting below that may need some option.\n\n> > Therefore, additional conflict detection for these cases is currently\n> > omitted to minimize potential overhead. However, the pre-detection for\n> > conflict in these error cases is still essential to support automatic\n> > conflict resolution in the future.\n>\n> I feel that we should log all types of conflict in an uniform way. For\n> example, with detect_conflict being true, the update_differ conflict\n> is reported as \"conflict %s detected on relation \"%s\"\", whereas\n> concurrent inserts with the same key is reported as \"duplicate key\n> value violates unique constraint \"%s\"\", which could confuse users.\n> Ideally, I think that we log such conflict detection details (table\n> name, column name, conflict type, etc) to somewhere (e.g. 
a table or\n> server logs) so that the users can resolve them manually.\n>\n\nIt is good to think if there is a value in providing in\npg_conflicts_history kind of table which will have details of\nconflicts that occurred and then we can extend it to have resolutions.\nI feel we can anyway LOG the conflicts by default. Updating a separate\ntable with conflicts should be done by default or with a knob is a\npoint to consider.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:58:26 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On 23.05.24 08:36, shveta malik wrote:\n> Conflict Resolution\n> ----------------\n> a) latest_timestamp_wins: The change with later commit timestamp wins.\n> b) earliest_timestamp_wins: The change with earlier commit timestamp wins.\n> c) apply: Always apply the remote change.\n> d) skip: Remote change is skipped.\n> e) error: Error out on conflict. Replication is stopped, manual\n> action is needed.\n\nYou might be aware of pglogical, which has similar conflict resolution \nmodes, but they appear to be spelled a bit different. It might be worth \nreviewing this, so that we don't unnecessarily introduce differences.\n\nhttps://github.com/2ndquadrant/pglogical?tab=readme-ov-file#conflicts\n\nThere might also be other inspiration to be found related to this in \npglogical documentation or code.\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 14:45:50 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On 2024-Jun-07, Tomas Vondra wrote:\n\n> On 6/3/24 09:30, Amit Kapila wrote:\n> > On Sat, May 25, 2024 at 2:39 AM Tomas Vondra <[email protected]> wrote:\n\n> >> How is this going to deal with the fact that commit LSN and timestamps\n> >> may not correlate perfectly? That is, commits may happen with LSN1 <\n> >> LSN2 but with T1 > T2.\n\n> But as I wrote, I'm not quite convinced this means there are not other\n> issues with this way of resolving conflicts. It's more likely a more\n> complex scenario is required.\n\nJan Wieck approached me during pgconf.dev to reproach me of this\nproblem. He also said he had some code to fix-up the commit TS\nafterwards somehow, to make the sequence monotonically increasing.\nPerhaps we should consider that, to avoid any problems caused by the\ndifference between LSN order and TS order. It might be quite\nnightmarish to try to make the system work correctly without\nreasonable constraints of that nature.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 13 Jun 2024 15:30:33 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On 6/13/24 7:28 AM, Amit Kapila wrote:\r\n\r\n> You are right that users would wish to detect the conflicts and\r\n> probably the extra effort would only be in the 'update_differ' case\r\n> where we need to consult committs module and that we will only do when\r\n> 'track_commit_timestamp' is true. BTW, I think for Inserts with\r\n> primary/unique key violation, we should catch the ERROR and log it. If\r\n> we want to log the conflicts in a separate table then do we want to do\r\n> that in the catch block after getting pk violation or do an extra scan\r\n> before 'INSERT' to find the conflict? 
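(To make the two detection options above concrete, here is a plain-SQL illustration; the toy table is assumed, and neither statement reflects actual patch code:)

CREATE TABLE t (id int PRIMARY KEY, value int);
INSERT INTO t (id, value) VALUES (1, 1);

-- Option 1: apply the remote INSERT and rely on the existing error,
-- catching it afterwards to log the conflict:
INSERT INTO t (id, value) VALUES (1, 2);
-- ERROR:  duplicate key value violates unique constraint "t_pkey"

-- Option 2: an extra scan on the key before applying, so the conflict is
-- detected (and can be logged) without hitting the error first:
SELECT EXISTS (SELECT 1 FROM t WHERE id = 1) AS insert_exists;
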
I think logging would need extra\r\n> cost especially if we want to LOG it in some table as you are\r\n> suggesting below that may need some option.\r\n> \r\n>>> Therefore, additional conflict detection for these cases is currently\r\n>>> omitted to minimize potential overhead. However, the pre-detection for\r\n>>> conflict in these error cases is still essential to support automatic\r\n>>> conflict resolution in the future.\r\n>>\r\n>> I feel that we should log all types of conflict in an uniform way. For\r\n>> example, with detect_conflict being true, the update_differ conflict\r\n>> is reported as \"conflict %s detected on relation \"%s\"\", whereas\r\n>> concurrent inserts with the same key is reported as \"duplicate key\r\n>> value violates unique constraint \"%s\"\", which could confuse users.\r\n>> Ideally, I think that we log such conflict detection details (table\r\n>> name, column name, conflict type, etc) to somewhere (e.g. a table or\r\n>> server logs) so that the users can resolve them manually.\r\n>>\r\n> \r\n> It is good to think if there is a value in providing in\r\n> pg_conflicts_history kind of table which will have details of\r\n> conflicts that occurred and then we can extend it to have resolutions.\r\n> I feel we can anyway LOG the conflicts by default. Updating a separate\r\n> table with conflicts should be done by default or with a knob is a\r\n> point to consider.\r\n\r\n+1 for logging conflicts uniformly, but I would +100 to exposing the log \r\nin a way that's easy for the user to query (whether it's a system view \r\nor a stat table). Arguably, I'd say that would be the most important \r\nfeature to come out of this effort.\r\n\r\nRemoving how conflicts are resolved, users want to know exactly what row \r\nhad a conflict, and users from other database systems that have dealt \r\nwith these issues will have tooling to be able to review and analyze if \r\na conflicts occur. This data is typically stored in a queryable table, \r\nwith data retained for N days. When you add in automatic conflict \r\nresolution, users then want to have a record of how the conflict was \r\nresolved, in case they need to manually update it.\r\n\r\nHaving this data in a table also gives the user opportunity to \r\nunderstand conflict stats (e.g. conflict rates) and potentially identify \r\nportions of the application and other parts of the system to optimize. \r\nIt also makes it easier to import to downstream systems that may perform \r\nfurther analysis on conflict resolution, or alarm if a conflict rate \r\nexceeds a certain threshold.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Thu, 13 Jun 2024 13:48:41 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, May 23, 2024 at 2:37 AM shveta malik <[email protected]> wrote:\n> c) update_deleted: The row with the same value as that incoming\n> update's key does not exist. The row is already deleted. This conflict\n> type is generated only if the deleted row is still detectable i.e., it\n> is not removed by VACUUM yet. If the row is removed by VACUUM already,\n> it cannot detect this conflict. It will detect it as update_missing\n> and will follow the default or configured resolver of update_missing\n> itself.\n\nI think this design is categorically unacceptable. It amounts to\ndesigning a feature that works except when it doesn't. 
I'm not exactly\nsure how the proposal should be changed to avoid depending on the\ntiming of VACUUM, but I think it's absolutely not OK to depend on the\ntiming of VACUUm -- or, really, this is going to depend on the timing\nof HOT-pruning, which will often happen almost instantly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 14:39:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 13, 2024 at 7:00 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jun-07, Tomas Vondra wrote:\n>\n> > On 6/3/24 09:30, Amit Kapila wrote:\n> > > On Sat, May 25, 2024 at 2:39 AM Tomas Vondra <[email protected]> wrote:\n>\n> > >> How is this going to deal with the fact that commit LSN and timestamps\n> > >> may not correlate perfectly? That is, commits may happen with LSN1 <\n> > >> LSN2 but with T1 > T2.\n>\n> > But as I wrote, I'm not quite convinced this means there are not other\n> > issues with this way of resolving conflicts. It's more likely a more\n> > complex scenario is required.\n>\n> Jan Wieck approached me during pgconf.dev to reproach me of this\n> problem. He also said he had some code to fix-up the commit TS\n> afterwards somehow, to make the sequence monotonically increasing.\n> Perhaps we should consider that, to avoid any problems caused by the\n> difference between LSN order and TS order. It might be quite\n> nightmarish to try to make the system work correctly without\n> reasonable constraints of that nature.\n>\n\nI agree with this but the problem Jan was worried about was not\ndirectly reproducible in what the PostgreSQL provides at least that is\nwhat I understood then. We are also unable to think of a concrete\nscenario where this is a problem but we are planning to spend more\ntime deriving a test to reproducible the problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 14 Jun 2024 15:58:39 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 13, 2024 at 11:18 PM Jonathan S. Katz <[email protected]> wrote:\n>\n> On 6/13/24 7:28 AM, Amit Kapila wrote:\n> >>\n> >> I feel that we should log all types of conflict in an uniform way. For\n> >> example, with detect_conflict being true, the update_differ conflict\n> >> is reported as \"conflict %s detected on relation \"%s\"\", whereas\n> >> concurrent inserts with the same key is reported as \"duplicate key\n> >> value violates unique constraint \"%s\"\", which could confuse users.\n> >> Ideally, I think that we log such conflict detection details (table\n> >> name, column name, conflict type, etc) to somewhere (e.g. a table or\n> >> server logs) so that the users can resolve them manually.\n> >>\n> >\n> > It is good to think if there is a value in providing in\n> > pg_conflicts_history kind of table which will have details of\n> > conflicts that occurred and then we can extend it to have resolutions.\n> > I feel we can anyway LOG the conflicts by default. Updating a separate\n> > table with conflicts should be done by default or with a knob is a\n> > point to consider.\n>\n> +1 for logging conflicts uniformly, but I would +100 to exposing the log\n> in a way that's easy for the user to query (whether it's a system view\n> or a stat table). 
Arguably, I'd say that would be the most important\n> feature to come out of this effort.\n>\n\nWe can have both the system view and a stats table. The system view\ncould have some sort of cumulative stats data like how many times a\nparticular conflict had occurred and the table would provide detailed\ninformation about the conflict. The one challenge I see in providing a\ntable is in its cleanup mechanism. We could prove a partitioned table\nsuch that users can truncate/drop the not needed partitions or provide\na non-partitioned table where users can delete the old data in which\ncase they generate a work for auto vacuum.\n\n> Removing how conflicts are resolved, users want to know exactly what row\n> had a conflict, and users from other database systems that have dealt\n> with these issues will have tooling to be able to review and analyze if\n> a conflicts occur. This data is typically stored in a queryable table,\n> with data retained for N days. When you add in automatic conflict\n> resolution, users then want to have a record of how the conflict was\n> resolved, in case they need to manually update it.\n>\n> Having this data in a table also gives the user opportunity to\n> understand conflict stats (e.g. conflict rates) and potentially identify\n> portions of the application and other parts of the system to optimize.\n> It also makes it easier to import to downstream systems that may perform\n> further analysis on conflict resolution, or alarm if a conflict rate\n> exceeds a certain threshold.\n>\n\nAgreed those are good use cases to store conflict history.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 14 Jun 2024 16:24:45 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Jun 14, 2024 at 12:10 AM Robert Haas <[email protected]> wrote:\n>\n> On Thu, May 23, 2024 at 2:37 AM shveta malik <[email protected]> wrote:\n> > c) update_deleted: The row with the same value as that incoming\n> > update's key does not exist. The row is already deleted. This conflict\n> > type is generated only if the deleted row is still detectable i.e., it\n> > is not removed by VACUUM yet. If the row is removed by VACUUM already,\n> > it cannot detect this conflict. It will detect it as update_missing\n> > and will follow the default or configured resolver of update_missing\n> > itself.\n>\n> I think this design is categorically unacceptable. It amounts to\n> designing a feature that works except when it doesn't. I'm not exactly\n> sure how the proposal should be changed to avoid depending on the\n> timing of VACUUM, but I think it's absolutely not OK to depend on the\n> timing of VACUUm -- or, really, this is going to depend on the timing\n> of HOT-pruning, which will often happen almost instantly.\n>\n\nAgreed, above Tomas has speculated to have a way to avoid vacuum\ncleaning dead tuples until the required changes are received and\napplied. Shveta also mentioned another way to have deads-store (say a\ntable where deleted rows are stored for resolution) [1] which is\nsimilar to a technique used by some other databases. There is an\nagreement to not rely on Vacuum to detect such a conflict but the\nalternative is not clear. Currently, we are thinking to consider such\na conflict type as update_missing (The row with the same value as that\nincoming update's key does not exist.). 
This is how the current HEAD\ncode behaves and LOGs the information (logical replication did not\nfind row to be updated ..).\n\n[1] - https://www.postgresql.org/message-id/CAJpy0uCov4JfZJeOvY0O21_gk9bcgNUDp4jf8%2BBbMp%2BEAv8cVQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 14 Jun 2024 16:59:28 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thursday, June 13, 2024 8:46 PM Peter Eisentraut <[email protected]> wrote:\r\n> \r\n> On 23.05.24 08:36, shveta malik wrote:\r\n> > Conflict Resolution\r\n> > ----------------\r\n> > a) latest_timestamp_wins: The change with later commit timestamp\r\n> wins.\r\n> > b) earliest_timestamp_wins: The change with earlier commit timestamp\r\n> wins.\r\n> > c) apply: Always apply the remote change.\r\n> > d) skip: Remote change is skipped.\r\n> > e) error: Error out on conflict. Replication is stopped, manual\r\n> > action is needed.\r\n> \r\n> You might be aware of pglogical, which has similar conflict resolution modes,\r\n> but they appear to be spelled a bit different. It might be worth reviewing this,\r\n> so that we don't unnecessarily introduce differences.\r\n\r\nRight. Some of the proposed resolution names are different from pglogical's\r\nwhile the functionalities are the same. The following is the comparison with\r\npglogical:\r\n\r\n latest_timestamp_wins(proposal) - last_update_wins(pglogical)\r\n earliest_timestamp_wins(proposal) - first_update_wins(pglogical)\r\n apply(proposal) - apply_remote(pglogical)\r\n skip(proposal) - keep_local(pglogical)\r\n\r\nI personally think the pglogical's names read more naturally. But others may\r\nhave different opinions on this.\r\n\r\n> \r\n> https://github.com/2ndquadrant/pglogical?tab=readme-ov-file#conflicts\r\n> \r\n> There might also be other inspiration to be found related to this in pglogical\r\n> documentation or code.\r\n\r\nAnother difference is that we allow users to specify different resolutions for\r\ndifferent conflicts, while pglogical allows specifying one resolution for all conflict.\r\nI think the proposed approach offers more flexibility to users, which seems more\r\nfavorable to me.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Fri, 14 Jun 2024 11:47:38 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Conflict Detection and Resolution" }, { "msg_contents": "\n\nOn 6/14/24 13:29, Amit Kapila wrote:\n> On Fri, Jun 14, 2024 at 12:10 AM Robert Haas <[email protected]> wrote:\n>>\n>> On Thu, May 23, 2024 at 2:37 AM shveta malik <[email protected]> wrote:\n>>> c) update_deleted: The row with the same value as that incoming\n>>> update's key does not exist. The row is already deleted. This conflict\n>>> type is generated only if the deleted row is still detectable i.e., it\n>>> is not removed by VACUUM yet. If the row is removed by VACUUM already,\n>>> it cannot detect this conflict. It will detect it as update_missing\n>>> and will follow the default or configured resolver of update_missing\n>>> itself.\n>>\n>> I think this design is categorically unacceptable. It amounts to\n>> designing a feature that works except when it doesn't. 
I'm not exactly\n>> sure how the proposal should be changed to avoid depending on the\n>> timing of VACUUM, but I think it's absolutely not OK to depend on the\n>> timing of VACUUm -- or, really, this is going to depend on the timing\n>> of HOT-pruning, which will often happen almost instantly.\n>>\n> \n> Agreed, above Tomas has speculated to have a way to avoid vacuum\n> cleaning dead tuples until the required changes are received and\n> applied. Shveta also mentioned another way to have deads-store (say a\n> table where deleted rows are stored for resolution) [1] which is\n> similar to a technique used by some other databases. There is an\n> agreement to not rely on Vacuum to detect such a conflict but the\n> alternative is not clear.\n\nI'm not sure I'd say I \"speculated\" about it - it's not like we don't\nhave ways to hold off cleanup for a while for various reasons\n(long-running query, replication slot, hot-standby feedback, ...).\n\nHow exactly would that be implemented I don't know, but it seems like a\nfar simpler approach than inventing a new \"dead store\". It'd need logic\nto let the vacuum to cleanup the stuff no longer needed, but so would\nthe dead store I think.\n\n> Currently, we are thinking to consider such\n> a conflict type as update_missing (The row with the same value as that\n> incoming update's key does not exist.). This is how the current HEAD\n> code behaves and LOGs the information (logical replication did not\n> find row to be updated ..).\n> \n\nI thought the agreement was we need both conflict types to get sensible\nbehavior, so proceeding with just the update_missing (mostly because we\ndon't know how to detect these conflicts reliably) seems like maybe not\nbe the right direction ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 17 Jun 2024 00:48:13 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "\n\nOn 6/13/24 06:52, Dilip Kumar wrote:\n> On Wed, Jun 12, 2024 at 5:26 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>>>> I agree having a separate update_deleted conflict would be beneficial,\n>>>> I'm not arguing against that - my point is actually that I think this\n>>>> conflict type is required, and that it needs to be detected reliably.\n>>>>\n>>>\n>>> When working with a distributed system, we must accept some form of\n>>> eventual consistency model.\n>>\n>> I'm not sure this is necessarily true. There are distributed databases\n>> implementing (or aiming to) regular consistency models, without eventual\n>> consistency. I'm not saying it's easy, but it shows eventual consistency\n>> is not the only option.\n>\n> Right, that statement might not be completely accurate. Based on the\n> CAP theorem, when a network partition is unavoidable and availability\n> is expected, we often choose an eventual consistency model. However,\n> claiming that a fully consistent model is impossible in any\n> distributed system is incorrect, as it can be achieved using\n> mechanisms like Two-Phase Commit.\n>\n> We must also accept that our PostgreSQL replication mechanism does not\n> guarantee a fully consistent model. Even with synchronous commit, it\n> only waits for the WAL to be replayed on the standby but does not\n> change the commit decision based on other nodes. 
This means, at most,\n> we can only guarantee \"Read Your Write\" consistency.\n>\nPerhaps, but even accepting eventual consistency does not absolve us\nfrom actually defining what that means, ensuring it's sensible enough to\nbe practical/usable, and that it actually converges to a consistent\nstate (that's essentially the problem of the update conflict types,\nbecause misdetecting it results in diverging results).\n\n>>> However, it's essential to design a\n>>> predictable and acceptable behavior. For example, if a change is a\n>>> result of a previous operation (such as an update on node B triggered\n>>> after observing an operation on node A), we can say that the operation\n>>> on node A happened before the operation on node B. Conversely, if\n>>> operations on nodes A and B are independent, we consider them\n>>> concurrent.\n>>>\n>>\n>> Right. And this is precisely the focus or my questions - understanding\n>> what behavior we aim for / expect in the end. Or said differently, what\n>> anomalies / weird behavior would be considered expected.\n>\n>> Because that's important both for discussions about feasibility, etc.\n>> And also for evaluation / reviews of the patch.\n>\n> +1\n>\n>>> In distributed systems, clock skew is a known issue. To establish a\n>>> consistency model, we need to ensure it guarantees the\n>>> \"happens-before\" relationship. Consider a scenario with three nodes:\n>>> NodeA, NodeB, and NodeC. If NodeA sends changes to NodeB, and\n>>> subsequently NodeB makes changes, and then both NodeA's and NodeB's\n>>> changes are sent to NodeC, the clock skew might make NodeB's changes\n>>> appear to have occurred before NodeA's changes. However, we should\n>>> maintain data that indicates NodeB's changes were triggered after\n>>> NodeA's changes arrived at NodeB. This implies that logically, NodeB's\n>>> changes happened after NodeA's changes, despite what the timestamps\n>>> suggest.\n>>>\n>>> A common method to handle such cases is using vector clocks for\n>>> conflict resolution. \"Vector clocks\" allow us to track the causal\n>>> relationships between changes across nodes, ensuring that we can\n>>> correctly order events and resolve conflicts in a manner that respects\n>>> the \"happens-before\" relationship. This method helps maintain\n>>> consistency and predictability in the system despite issues like clock\n>>> skew.\n>>>\n>>\n>> I'm familiar with the concept of vector clock (or logical clock in\n>> general), but it's not clear to me how you plan to use this in the\n>> context of the conflict handling. Can you elaborate/explain?\n>>\n>> The way I see it, conflict handling is pretty tightly coupled with\n>> regular commit timestamps and MVCC in general. How would you use vector\n>> clock to change that?\n>\n> The issue with using commit timestamps is that, when multiple nodes\n> are involved, the commit timestamp won't accurately represent the\n> actual order of operations. There's no reliable way to determine the\n> perfect order of each operation happening on different nodes roughly\n> simultaneously unless we use some globally synchronized counter.\n> Generally, that order might not cause real issues unless one operation\n> is triggered by a previous operation, and relying solely on physical\n> timestamps would not detect that correctly.\n>\nThis whole conflict detection / resolution proposal is based on using\ncommit timestamps. 
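(Side note for readers: the commit timestamp infrastructure referred to here already exists and can be inspected directly once track_commit_timestamp is enabled, for example against the toy table from the earlier scenario:)

SHOW track_commit_timestamp;   -- must be 'on' for any timestamp-based resolution

SELECT id, value,
       pg_xact_commit_timestamp(xmin) AS last_commit_ts
FROM t
WHERE id = 1;
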
Aren't you suggesting it can't really work with\ncommit timestamps?\n\nFWIW there are ways to builds distributed consistency with timestamps,\nas long as it's monotonic - e.g. clock-SI does that. It's not perfect,\nbut it shows it's possible.\n\nHowever, I'm not we have to go there - it depends on what the goal is.\nFor a one-directional replication (multiple nodes replicating to the\nsame target) it might be sufficient if the conflict resolution is\n\"deterministic\" (e.g. not dependent on the order in which the changes\nare applied). I'm not sure, but it's why I asked what's the goal in my\nvery first message in this thread.\n\n> We need some sort of logical counter, such as a vector clock, which\n> might be an independent counter on each node but can perfectly track\n> the causal order. For example, if NodeA observes an operation from\n> NodeB with a counter value of X, NodeA will adjust its counter to X+1.\n> This ensures that if NodeA has seen an operation from NodeB, its next\n> operation will appear to have occurred after NodeB's operation.\n>\n> I admit that I haven't fully thought through how we could design such\n> version tracking in our logical replication protocol or how it would\n> fit into our system. However, my point is that we need to consider\n> something beyond commit timestamps to achieve reliable ordering.\n>\n\nI can't really respond to this as there's no suggestion how it would be\nimplemented in the patch discussed in this thread.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 17 Jun 2024 02:08:01 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jun 17, 2024 at 4:18 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 6/14/24 13:29, Amit Kapila wrote:\n> > On Fri, Jun 14, 2024 at 12:10 AM Robert Haas <[email protected]> wrote:\n> >>\n> >> On Thu, May 23, 2024 at 2:37 AM shveta malik <[email protected]> wrote:\n> >>> c) update_deleted: The row with the same value as that incoming\n> >>> update's key does not exist. The row is already deleted. This conflict\n> >>> type is generated only if the deleted row is still detectable i.e., it\n> >>> is not removed by VACUUM yet. If the row is removed by VACUUM already,\n> >>> it cannot detect this conflict. It will detect it as update_missing\n> >>> and will follow the default or configured resolver of update_missing\n> >>> itself.\n> >>\n> >> I think this design is categorically unacceptable. It amounts to\n> >> designing a feature that works except when it doesn't. I'm not exactly\n> >> sure how the proposal should be changed to avoid depending on the\n> >> timing of VACUUM, but I think it's absolutely not OK to depend on the\n> >> timing of VACUUm -- or, really, this is going to depend on the timing\n> >> of HOT-pruning, which will often happen almost instantly.\n> >>\n> >\n> > Agreed, above Tomas has speculated to have a way to avoid vacuum\n> > cleaning dead tuples until the required changes are received and\n> > applied. Shveta also mentioned another way to have deads-store (say a\n> > table where deleted rows are stored for resolution) [1] which is\n> > similar to a technique used by some other databases. 
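(Purely to make the "deads-store" idea concrete -- nothing like this exists today, and every name below is invented for illustration:)

CREATE TABLE dead_rows_store (
    relid             oid,           -- table the deleted row belonged to
    key_values        jsonb,         -- replica identity / key of the deleted row
    deleted_by_origin text,          -- origin that performed the DELETE
    delete_commit_ts  timestamptz    -- commit timestamp of the DELETE
);
-- An incoming UPDATE that finds no live row could consult this store to
-- distinguish update_deleted from update_missing, with old entries removed
-- by a retention policy rather than by VACUUM.
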
There is an\n> > agreement to not rely on Vacuum to detect such a conflict but the\n> > alternative is not clear.\n>\n> I'm not sure I'd say I \"speculated\" about it - it's not like we don't\n> have ways to hold off cleanup for a while for various reasons\n> (long-running query, replication slot, hot-standby feedback, ...).\n>\n> How exactly would that be implemented I don't know, but it seems like a\n> far simpler approach than inventing a new \"dead store\". It'd need logic\n> to let the vacuum to cleanup the stuff no longer needed, but so would\n> the dead store I think.\n>\n\nThe difference w.r.t the existing mechanisms for holding deleted data\nis that we don't know whether we need to hold off the vacuum from\ncleaning up the rows because we can't say with any certainty whether\nother nodes will perform any conflicting operations in the future.\nUsing the example we discussed,\nNode A:\n T1: INSERT INTO t (id, value) VALUES (1,1);\n T2: DELETE FROM t WHERE id = 1;\n\nNode B:\n T3: UPDATE t SET value = 2 WHERE id = 1;\n\nSay the order of receiving the commands is T1-T2-T3. We can't predict\nwhether we will ever get T-3, so on what basis shall we try to prevent\nvacuum from removing the deleted row? The one factor could be time,\nsay we define a new parameter vacuum_committs_age which would indicate\nthat we will allow rows to be removed only if the modified time of the\ntuple as indicated by committs module is greater than the\nvacuum_committs_age. This needs more analysis if we want to pursue\nthis direction.\n\nOTOH, in the existing mechanisms, there is a common factor among all\nwhich is that we know that there is some event that requires data to\nbe present. For example, with a long-running query, we know that the\ndeleted/updated row is still visible for some running query. For\nreplication slots, we know that the client will acknowledge the\nfeedback in terms of LSN using which we can allow vacuum to remove\nrows. Similar to these hot_standby_feedback allows the vacuum to\nprevent row removal based on current activity (the xid horizons\nrequired by queries on standby) on hot_standby.\n\n> > Currently, we are thinking to consider such\n> > a conflict type as update_missing (The row with the same value as that\n> > incoming update's key does not exist.). This is how the current HEAD\n> > code behaves and LOGs the information (logical replication did not\n> > find row to be updated ..).\n> >\n>\n> I thought the agreement was we need both conflict types to get sensible\n> behavior, so proceeding with just the update_missing (mostly because we\n> don't know how to detect these conflicts reliably) seems like maybe not\n> be the right direction ...\n>\n\nFair enough. I am also not in favor of ignoring this but if as a first\nstep, we want to improve our current conflict detection mechanism and\nprovide the stats or conflict information in some catalog or view, we\ncan do that even if update_delete is not detected. For example, as of\nnow, we only detect update_missing and simply LOG it at DEBUG1 level.\nAdditionally, we can detect update_differ (the row updated by a\ndifferent origin) and have some stats. 
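(A hypothetical shape for the pg_conflicts_history idea mentioned earlier, combined with the partition-based cleanup suggestion -- illustration only, not a proposed catalog definition:)

CREATE TABLE pg_conflicts_history (
    subid            oid,            -- subscription
    relid            oid,            -- target relation
    conflict_type    text,           -- 'insert_exists', 'update_differ', ...
    remote_origin    text,
    remote_commit_ts timestamptz,
    local_commit_ts  timestamptz,
    key_values       jsonb,
    logged_at        timestamptz DEFAULT now()
) PARTITION BY RANGE (logged_at);

-- daily partitions that users can drop to discard old conflict data cheaply
CREATE TABLE pg_conflicts_history_2024_06_17
    PARTITION OF pg_conflicts_history
    FOR VALUES FROM ('2024-06-17') TO ('2024-06-18');
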
We seem to have some agreement\nthat conflict detection and stats about the same could be the first\nstep.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:12:46 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jun 17, 2024 at 5:38 AM Tomas Vondra\n<[email protected]> wrote:\n>\n\n> > The issue with using commit timestamps is that, when multiple nodes\n> > are involved, the commit timestamp won't accurately represent the\n> > actual order of operations. There's no reliable way to determine the\n> > perfect order of each operation happening on different nodes roughly\n> > simultaneously unless we use some globally synchronized counter.\n> > Generally, that order might not cause real issues unless one operation\n> > is triggered by a previous operation, and relying solely on physical\n> > timestamps would not detect that correctly.\n> >\n> This whole conflict detection / resolution proposal is based on using\n> commit timestamps. Aren't you suggesting it can't really work with\n> commit timestamps?\n>\n> FWIW there are ways to builds distributed consistency with timestamps,\n> as long as it's monotonic - e.g. clock-SI does that. It's not perfect,\n> but it shows it's possible.\n\nHmm, I see that clock-SI does this by delaying the transaction when it\ndetects the clock skew.\n\n> However, I'm not we have to go there - it depends on what the goal is.\n> For a one-directional replication (multiple nodes replicating to the\n> same target) it might be sufficient if the conflict resolution is\n> \"deterministic\" (e.g. not dependent on the order in which the changes\n> are applied). I'm not sure, but it's why I asked what's the goal in my\n> very first message in this thread.\n\nI'm not completely certain about this. Even in one directional\nreplication if multiple nodes are sending data how can we guarantee\ndeterminism in the presence of clock skew if we are not using some\nother mechanism like logical counters or something like what clock-SI\nis doing? I don't want to insist on using any specific solution here.\nHowever, I noticed that we haven't addressed how we plan to manage\nclock skew, which is my primary concern. I believe that if multiple\nnodes are involved and we're receiving data from them with\nunsynchronized clocks, ensuring determinism about their order will\nrequire us to take some measures to handle that.\n\n> > We need some sort of logical counter, such as a vector clock, which\n> > might be an independent counter on each node but can perfectly track\n> > the causal order. For example, if NodeA observes an operation from\n> > NodeB with a counter value of X, NodeA will adjust its counter to X+1.\n> > This ensures that if NodeA has seen an operation from NodeB, its next\n> > operation will appear to have occurred after NodeB's operation.\n> >\n> > I admit that I haven't fully thought through how we could design such\n> > version tracking in our logical replication protocol or how it would\n> > fit into our system. However, my point is that we need to consider\n> > something beyond commit timestamps to achieve reliable ordering.\n> >\n>\n> I can't really respond to this as there's no suggestion how it would be\n> implemented in the patch discussed in this thread.\n>\nNo worries, I'll consider whether finding such a solution is feasible\nfor our situation. 
Thank you!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jun 2024 13:47:53 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 12, 2024 at 10:03 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jun 11, 2024 at 7:44 PM Tomas Vondra\n> <[email protected]> wrote:\n>\n> > > Yes, that's correct. However, many cases could benefit from the\n> > > update_deleted conflict type if it can be implemented reliably. That's\n> > > why we wanted to give it a try. But if we can't achieve predictable\n> > > results with it, I'm fine to drop this approach and conflict_type. We\n> > > can consider a better design in the future that doesn't depend on\n> > > non-vacuumed entries and provides a more robust method for identifying\n> > > deleted rows.\n> > >\n> >\n> > I agree having a separate update_deleted conflict would be beneficial,\n> > I'm not arguing against that - my point is actually that I think this\n> > conflict type is required, and that it needs to be detected reliably.\n> >\n>\n> When working with a distributed system, we must accept some form of\n> eventual consistency model. However, it's essential to design a\n> predictable and acceptable behavior. For example, if a change is a\n> result of a previous operation (such as an update on node B triggered\n> after observing an operation on node A), we can say that the operation\n> on node A happened before the operation on node B. Conversely, if\n> operations on nodes A and B are independent, we consider them\n> concurrent.\n>\n> In distributed systems, clock skew is a known issue. To establish a\n> consistency model, we need to ensure it guarantees the\n> \"happens-before\" relationship. Consider a scenario with three nodes:\n> NodeA, NodeB, and NodeC. If NodeA sends changes to NodeB, and\n> subsequently NodeB makes changes, and then both NodeA's and NodeB's\n> changes are sent to NodeC, the clock skew might make NodeB's changes\n> appear to have occurred before NodeA's changes. However, we should\n> maintain data that indicates NodeB's changes were triggered after\n> NodeA's changes arrived at NodeB. This implies that logically, NodeB's\n> changes happened after NodeA's changes, despite what the timestamps\n> suggest.\n>\n> A common method to handle such cases is using vector clocks for\n> conflict resolution.\n>\n\nI think the unbounded size of the vector could be a problem to store\nfor each event. However, while researching previous discussions, it\ncame to our notice that we have discussed this topic in the past as\nwell in the context of standbys. For recovery_min_apply_delay, we\ndecided the clock skew is not a problem as the settings of this\nparameter are much larger than typical time deviations between servers\nas mentioned in docs. Similarly for casual reads [1], there was a\nproposal to introduce max_clock_skew parameter and suggesting the user\nto make sure to have NTP set up correctly. We have tried to check\nother databases (like Ora and BDR) where CDR is implemented but didn't\nfind anything specific to clock skew. So, I propose to go with a GUC\nlike max_clock_skew such that if the difference of time between the\nincoming transaction's commit time and the local time is more than\nmax_clock_skew then we raise an ERROR. 
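(A rough SQL-level illustration of the proposed check -- max_clock_skew does not exist as a parameter today, the 5-second value is arbitrary, and the real check would live in the apply worker rather than in SQL:)

SELECT v.remote_commit_ts,
       clock_timestamp() AS local_time,
       abs(extract(epoch FROM v.remote_commit_ts - clock_timestamp())) > 5
           AS would_raise_error
FROM (VALUES ('2024-06-17 10:00:30+00'::timestamptz)) AS v(remote_commit_ts);
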
It is not clear to me that\nputting bigger effort into clock skew is worth especially when other\nsystems providing CDR feature (like Ora or BDR) for decades have not\ndone anything like vector clocks. It is possible that this is less of\na problem w.r.t CDR and just detecting the anomaly in clock skew is\ngood enough.\n\n[1] - https://www.postgresql.org/message-id/flat/CAEepm%3D1iiEzCVLD%3DRoBgtZSyEY1CR-Et7fRc9prCZ9MuTz3pWg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 17 Jun 2024 15:22:58 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jun 17, 2024 at 1:42 AM Amit Kapila <[email protected]> wrote:\n> The difference w.r.t the existing mechanisms for holding deleted data\n> is that we don't know whether we need to hold off the vacuum from\n> cleaning up the rows because we can't say with any certainty whether\n> other nodes will perform any conflicting operations in the future.\n> Using the example we discussed,\n> Node A:\n> T1: INSERT INTO t (id, value) VALUES (1,1);\n> T2: DELETE FROM t WHERE id = 1;\n>\n> Node B:\n> T3: UPDATE t SET value = 2 WHERE id = 1;\n>\n> Say the order of receiving the commands is T1-T2-T3. We can't predict\n> whether we will ever get T-3, so on what basis shall we try to prevent\n> vacuum from removing the deleted row?\n\nThe problem arises because T2 and T3 might be applied out of order on\nsome nodes. Once either one of them has been applied on every node, no\nfurther conflicts are possible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:21:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thursday, June 13, 2024 2:11 PM Masahiko Sawada <[email protected]> wrote:\r\n\r\nHi,\r\n\r\n> On Wed, Jun 5, 2024 at 3:32 PM Zhijie Hou (Fujitsu) <[email protected]>\r\n> wrote:\r\n> >\r\n> > This time at PGconf.dev[1], we had some discussions regarding this\r\n> > project. The proposed approach is to split the work into two main\r\n> > components. The first part focuses on conflict detection, which aims\r\n> > to identify and report conflicts in logical replication. This feature\r\n> > will enable users to monitor the unexpected conflicts that may occur.\r\n> > The second part involves the actual conflict resolution. Here, we will\r\n> > provide built-in resolutions for each conflict and allow user to\r\n> > choose which resolution will be used for which conflict(as described\r\n> > in the initial email of this thread).\r\n> \r\n> I agree with this direction that we focus on conflict detection (and\r\n> logging) first and then develop conflict resolution on top of that.\r\n\r\nThanks for your reply !\r\n\r\n> \r\n> >\r\n> > Of course, we are open to alternative ideas and suggestions, and the\r\n> > strategy above can be changed based on ongoing discussions and\r\n> > feedback received.\r\n> >\r\n> > Here is the patch of the first part work, which adds a new parameter\r\n> > detect_conflict for CREATE and ALTER subscription commands. 
This new\r\n> > parameter will decide if subscription will go for conflict detection.\r\n> > By default, conflict detection will be off for a subscription.\r\n> >\r\n> > When conflict detection is enabled, additional logging is triggered in\r\n> > the following conflict scenarios:\r\n> >\r\n> > * updating a row that was previously modified by another origin.\r\n> > * The tuple to be updated is not found.\r\n> > * The tuple to be deleted is not found.\r\n> >\r\n> > While there exist other conflict types in logical replication, such as\r\n> > an incoming insert conflicting with an existing row due to a primary\r\n> > key or unique index, these cases already result in constraint violation errors.\r\n> \r\n> What does detect_conflict being true actually mean to users? I understand that\r\n> detect_conflict being true could introduce some overhead to detect conflicts.\r\n> But in terms of conflict detection, even if detect_confict is false, we detect\r\n> some conflicts such as concurrent inserts with the same key. Once we\r\n> introduce the complete conflict detection feature, I'm not sure there is a case\r\n> where a user wants to detect only some particular types of conflict.\r\n> \r\n> > Therefore, additional conflict detection for these cases is currently\r\n> > omitted to minimize potential overhead. However, the pre-detection for\r\n> > conflict in these error cases is still essential to support automatic\r\n> > conflict resolution in the future.\r\n> \r\n> I feel that we should log all types of conflict in an uniform way. For example,\r\n> with detect_conflict being true, the update_differ conflict is reported as\r\n> \"conflict %s detected on relation \"%s\"\", whereas concurrent inserts with the\r\n> same key is reported as \"duplicate key value violates unique constraint \"%s\"\",\r\n> which could confuse users.\r\n\r\nDo you mean it's ok to add a pre-check before applying the INSERT, which will\r\nverify if the remote tuple violates any unique constraints, and if it violates\r\nthen we log a conflict message ? I thought about this but was slightly\r\nworried about the extra cost it would bring. OTOH, if we think it's acceptable,\r\nwe could do that since the cost is there only when detect_conflict is enabled.\r\n\r\nI also thought of logging such a conflict message in pg_catch(), but I think we\r\nlack some necessary info(relation, index name, column name) at the catch block.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n\r\n\r\n", "msg_date": "Tue, 18 Jun 2024 02:14:16 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jun 17, 2024 at 3:23 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jun 12, 2024 at 10:03 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Tue, Jun 11, 2024 at 7:44 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >\n> > > > Yes, that's correct. However, many cases could benefit from the\n> > > > update_deleted conflict type if it can be implemented reliably. That's\n> > > > why we wanted to give it a try. But if we can't achieve predictable\n> > > > results with it, I'm fine to drop this approach and conflict_type. 
We\n> > > > can consider a better design in the future that doesn't depend on\n> > > > non-vacuumed entries and provides a more robust method for identifying\n> > > > deleted rows.\n> > > >\n> > >\n> > > I agree having a separate update_deleted conflict would be beneficial,\n> > > I'm not arguing against that - my point is actually that I think this\n> > > conflict type is required, and that it needs to be detected reliably.\n> > >\n> >\n> > When working with a distributed system, we must accept some form of\n> > eventual consistency model. However, it's essential to design a\n> > predictable and acceptable behavior. For example, if a change is a\n> > result of a previous operation (such as an update on node B triggered\n> > after observing an operation on node A), we can say that the operation\n> > on node A happened before the operation on node B. Conversely, if\n> > operations on nodes A and B are independent, we consider them\n> > concurrent.\n> >\n> > In distributed systems, clock skew is a known issue. To establish a\n> > consistency model, we need to ensure it guarantees the\n> > \"happens-before\" relationship. Consider a scenario with three nodes:\n> > NodeA, NodeB, and NodeC. If NodeA sends changes to NodeB, and\n> > subsequently NodeB makes changes, and then both NodeA's and NodeB's\n> > changes are sent to NodeC, the clock skew might make NodeB's changes\n> > appear to have occurred before NodeA's changes. However, we should\n> > maintain data that indicates NodeB's changes were triggered after\n> > NodeA's changes arrived at NodeB. This implies that logically, NodeB's\n> > changes happened after NodeA's changes, despite what the timestamps\n> > suggest.\n> >\n> > A common method to handle such cases is using vector clocks for\n> > conflict resolution.\n> >\n>\n> I think the unbounded size of the vector could be a problem to store\n> for each event. However, while researching previous discussions, it\n> came to our notice that we have discussed this topic in the past as\n> well in the context of standbys. For recovery_min_apply_delay, we\n> decided the clock skew is not a problem as the settings of this\n> parameter are much larger than typical time deviations between servers\n> as mentioned in docs. Similarly for casual reads [1], there was a\n> proposal to introduce max_clock_skew parameter and suggesting the user\n> to make sure to have NTP set up correctly. We have tried to check\n> other databases (like Ora and BDR) where CDR is implemented but didn't\n> find anything specific to clock skew. So, I propose to go with a GUC\n> like max_clock_skew such that if the difference of time between the\n> incoming transaction's commit time and the local time is more than\n> max_clock_skew then we raise an ERROR. It is not clear to me that\n> putting bigger effort into clock skew is worth especially when other\n> systems providing CDR feature (like Ora or BDR) for decades have not\n> done anything like vector clocks. It is possible that this is less of\n> a problem w.r.t CDR and just detecting the anomaly in clock skew is\n> good enough.\n\nI believe that if we've accepted this solution elsewhere, then we can\nalso consider the same. Basically, we're allowing the application to\nset its tolerance for clock skew. And, if the skew exceeds that\ntolerance, it's the application's responsibility to synchronize;\notherwise, an error will occur. 
This approach seems reasonable.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Jun 2024 10:17:32 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 18, 2024 at 10:17 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Mon, Jun 17, 2024 at 3:23 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jun 12, 2024 at 10:03 AM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > On Tue, Jun 11, 2024 at 7:44 PM Tomas Vondra\n> > > <[email protected]> wrote:\n> > >\n> > > > > Yes, that's correct. However, many cases could benefit from the\n> > > > > update_deleted conflict type if it can be implemented reliably. That's\n> > > > > why we wanted to give it a try. But if we can't achieve predictable\n> > > > > results with it, I'm fine to drop this approach and conflict_type. We\n> > > > > can consider a better design in the future that doesn't depend on\n> > > > > non-vacuumed entries and provides a more robust method for identifying\n> > > > > deleted rows.\n> > > > >\n> > > >\n> > > > I agree having a separate update_deleted conflict would be beneficial,\n> > > > I'm not arguing against that - my point is actually that I think this\n> > > > conflict type is required, and that it needs to be detected reliably.\n> > > >\n> > >\n> > > When working with a distributed system, we must accept some form of\n> > > eventual consistency model. However, it's essential to design a\n> > > predictable and acceptable behavior. For example, if a change is a\n> > > result of a previous operation (such as an update on node B triggered\n> > > after observing an operation on node A), we can say that the operation\n> > > on node A happened before the operation on node B. Conversely, if\n> > > operations on nodes A and B are independent, we consider them\n> > > concurrent.\n> > >\n> > > In distributed systems, clock skew is a known issue. To establish a\n> > > consistency model, we need to ensure it guarantees the\n> > > \"happens-before\" relationship. Consider a scenario with three nodes:\n> > > NodeA, NodeB, and NodeC. If NodeA sends changes to NodeB, and\n> > > subsequently NodeB makes changes, and then both NodeA's and NodeB's\n> > > changes are sent to NodeC, the clock skew might make NodeB's changes\n> > > appear to have occurred before NodeA's changes. However, we should\n> > > maintain data that indicates NodeB's changes were triggered after\n> > > NodeA's changes arrived at NodeB. This implies that logically, NodeB's\n> > > changes happened after NodeA's changes, despite what the timestamps\n> > > suggest.\n> > >\n> > > A common method to handle such cases is using vector clocks for\n> > > conflict resolution.\n> > >\n> >\n> > I think the unbounded size of the vector could be a problem to store\n> > for each event. However, while researching previous discussions, it\n> > came to our notice that we have discussed this topic in the past as\n> > well in the context of standbys. For recovery_min_apply_delay, we\n> > decided the clock skew is not a problem as the settings of this\n> > parameter are much larger than typical time deviations between servers\n> > as mentioned in docs. Similarly for casual reads [1], there was a\n> > proposal to introduce max_clock_skew parameter and suggesting the user\n> > to make sure to have NTP set up correctly. 
We have tried to check\n> > other databases (like Ora and BDR) where CDR is implemented but didn't\n> > find anything specific to clock skew. So, I propose to go with a GUC\n> > like max_clock_skew such that if the difference of time between the\n> > incoming transaction's commit time and the local time is more than\n> > max_clock_skew then we raise an ERROR. It is not clear to me that\n> > putting bigger effort into clock skew is worth especially when other\n> > systems providing CDR feature (like Ora or BDR) for decades have not\n> > done anything like vector clocks. It is possible that this is less of\n> > a problem w.r.t CDR and just detecting the anomaly in clock skew is\n> > good enough.\n>\n> I believe that if we've accepted this solution elsewhere, then we can\n> also consider the same. Basically, we're allowing the application to\n> set its tolerance for clock skew. And, if the skew exceeds that\n> tolerance, it's the application's responsibility to synchronize;\n> otherwise, an error will occur. This approach seems reasonable.\n\nThis model can be further extended by making the apply worker wait if\nthe remote transaction's commit_ts is greater than the local\ntimestamp. This ensures that no local transactions occurring after the\nremote transaction appear to have happened earlier due to clock skew\ninstead we make them happen before the remote transaction by delaying\nthe remote transaction apply. Essentially, by having the remote\napplication wait until the local timestamp matches the remote\ntransaction's timestamp, we ensure that the remote transaction, which\nseems to occur after concurrent local transactions due to clock skew,\nis actually applied after those transactions.\n\nWith this model, there should be no ordering errors from the\napplication's perspective as well if synchronous commit is enabled.\nThe transaction initiated by the publisher cannot be completed until\nit is applied to the synchronous subscriber. This ensures that if the\nsubscriber's clock is lagging behind the publisher's clock, the\ntransaction will not be applied until the subscriber's local clock is\nin sync, preventing the transaction from being completed out of order.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Jun 2024 11:33:54 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jun 17, 2024 at 8:51 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Jun 17, 2024 at 1:42 AM Amit Kapila <[email protected]> wrote:\n> > The difference w.r.t the existing mechanisms for holding deleted data\n> > is that we don't know whether we need to hold off the vacuum from\n> > cleaning up the rows because we can't say with any certainty whether\n> > other nodes will perform any conflicting operations in the future.\n> > Using the example we discussed,\n> > Node A:\n> > T1: INSERT INTO t (id, value) VALUES (1,1);\n> > T2: DELETE FROM t WHERE id = 1;\n> >\n> > Node B:\n> > T3: UPDATE t SET value = 2 WHERE id = 1;\n> >\n> > Say the order of receiving the commands is T1-T2-T3. We can't predict\n> > whether we will ever get T-3, so on what basis shall we try to prevent\n> > vacuum from removing the deleted row?\n>\n> The problem arises because T2 and T3 might be applied out of order on\n> some nodes. 
Once either one of them has been applied on every node, no\n> further conflicts are possible.\n\nIf we decide to skip the update whether the row is missing or deleted,\nwe indeed reach the same end result regardless of the order of T2, T3,\nand Vacuum. Here's how it looks in each case:\n\nCase 1: T1, T2, Vacuum, T3 -> Skip the update for a non-existing row\n-> end result we do not have a row.\nCase 2: T1, T2, T3 -> Skip the update for a deleted row -> end result\nwe do not have a row.\nCase 3: T1, T3, T2 -> deleted the row -> end result we do not have a row.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Jun 2024 11:54:10 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 18, 2024 at 11:54 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Mon, Jun 17, 2024 at 8:51 PM Robert Haas <[email protected]> wrote:\n> >\n> > On Mon, Jun 17, 2024 at 1:42 AM Amit Kapila <[email protected]> wrote:\n> > > The difference w.r.t the existing mechanisms for holding deleted data\n> > > is that we don't know whether we need to hold off the vacuum from\n> > > cleaning up the rows because we can't say with any certainty whether\n> > > other nodes will perform any conflicting operations in the future.\n> > > Using the example we discussed,\n> > > Node A:\n> > > T1: INSERT INTO t (id, value) VALUES (1,1);\n> > > T2: DELETE FROM t WHERE id = 1;\n> > >\n> > > Node B:\n> > > T3: UPDATE t SET value = 2 WHERE id = 1;\n> > >\n> > > Say the order of receiving the commands is T1-T2-T3. We can't predict\n> > > whether we will ever get T-3, so on what basis shall we try to prevent\n> > > vacuum from removing the deleted row?\n> >\n> > The problem arises because T2 and T3 might be applied out of order on\n> > some nodes. Once either one of them has been applied on every node, no\n> > further conflicts are possible.\n>\n> If we decide to skip the update whether the row is missing or deleted,\n> we indeed reach the same end result regardless of the order of T2, T3,\n> and Vacuum. Here's how it looks in each case:\n>\n> Case 1: T1, T2, Vacuum, T3 -> Skip the update for a non-existing row\n> -> end result we do not have a row.\n> Case 2: T1, T2, T3 -> Skip the update for a deleted row -> end result\n> we do not have a row.\n> Case 3: T1, T3, T2 -> deleted the row -> end result we do not have a row.\n>\n\nIn case 3, how can deletion be successful? 
The row required to be\ndeleted has already been updated.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 Jun 2024 12:11:13 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 18, 2024 at 12:11 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jun 18, 2024 at 11:54 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Mon, Jun 17, 2024 at 8:51 PM Robert Haas <[email protected]> wrote:\n> > >\n> > > On Mon, Jun 17, 2024 at 1:42 AM Amit Kapila <[email protected]> wrote:\n> > > > The difference w.r.t the existing mechanisms for holding deleted data\n> > > > is that we don't know whether we need to hold off the vacuum from\n> > > > cleaning up the rows because we can't say with any certainty whether\n> > > > other nodes will perform any conflicting operations in the future.\n> > > > Using the example we discussed,\n> > > > Node A:\n> > > > T1: INSERT INTO t (id, value) VALUES (1,1);\n> > > > T2: DELETE FROM t WHERE id = 1;\n> > > >\n> > > > Node B:\n> > > > T3: UPDATE t SET value = 2 WHERE id = 1;\n> > > >\n> > > > Say the order of receiving the commands is T1-T2-T3. We can't predict\n> > > > whether we will ever get T-3, so on what basis shall we try to prevent\n> > > > vacuum from removing the deleted row?\n> > >\n> > > The problem arises because T2 and T3 might be applied out of order on\n> > > some nodes. Once either one of them has been applied on every node, no\n> > > further conflicts are possible.\n> >\n> > If we decide to skip the update whether the row is missing or deleted,\n> > we indeed reach the same end result regardless of the order of T2, T3,\n> > and Vacuum. Here's how it looks in each case:\n> >\n> > Case 1: T1, T2, Vacuum, T3 -> Skip the update for a non-existing row\n> > -> end result we do not have a row.\n> > Case 2: T1, T2, T3 -> Skip the update for a deleted row -> end result\n> > we do not have a row.\n> > Case 3: T1, T3, T2 -> deleted the row -> end result we do not have a row.\n> >\n>\n> In case 3, how can deletion be successful? The row required to be\n> deleted has already been updated.\n\nHmm, I was considering this case in the example given by you above[1],\nso we have updated some fields of the row with id=1, isn't this row\nstill detectable by the delete because delete will find this by id=1\nas we haven't updated the id? I was making the point w.r.t. 
the\nexample used above.\n\n[1]\n> > > > Node A:\n> > > > T1: INSERT INTO t (id, value) VALUES (1,1);\n> > > > T2: DELETE FROM t WHERE id = 1;\n> > > >\n> > > > Node B:\n> > > > T3: UPDATE t SET value = 2 WHERE id = 1;\n\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Jun 2024 13:18:37 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 18, 2024 at 1:18 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jun 18, 2024 at 12:11 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jun 18, 2024 at 11:54 AM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > On Mon, Jun 17, 2024 at 8:51 PM Robert Haas <[email protected]> wrote:\n> > > >\n> > > > On Mon, Jun 17, 2024 at 1:42 AM Amit Kapila <[email protected]> wrote:\n> > > > > The difference w.r.t the existing mechanisms for holding deleted data\n> > > > > is that we don't know whether we need to hold off the vacuum from\n> > > > > cleaning up the rows because we can't say with any certainty whether\n> > > > > other nodes will perform any conflicting operations in the future.\n> > > > > Using the example we discussed,\n> > > > > Node A:\n> > > > > T1: INSERT INTO t (id, value) VALUES (1,1);\n> > > > > T2: DELETE FROM t WHERE id = 1;\n> > > > >\n> > > > > Node B:\n> > > > > T3: UPDATE t SET value = 2 WHERE id = 1;\n> > > > >\n> > > > > Say the order of receiving the commands is T1-T2-T3. We can't predict\n> > > > > whether we will ever get T-3, so on what basis shall we try to prevent\n> > > > > vacuum from removing the deleted row?\n> > > >\n> > > > The problem arises because T2 and T3 might be applied out of order on\n> > > > some nodes. Once either one of them has been applied on every node, no\n> > > > further conflicts are possible.\n> > >\n> > > If we decide to skip the update whether the row is missing or deleted,\n> > > we indeed reach the same end result regardless of the order of T2, T3,\n> > > and Vacuum. Here's how it looks in each case:\n> > >\n> > > Case 1: T1, T2, Vacuum, T3 -> Skip the update for a non-existing row\n> > > -> end result we do not have a row.\n> > > Case 2: T1, T2, T3 -> Skip the update for a deleted row -> end result\n> > > we do not have a row.\n> > > Case 3: T1, T3, T2 -> deleted the row -> end result we do not have a row.\n> > >\n> >\n> > In case 3, how can deletion be successful? The row required to be\n> > deleted has already been updated.\n>\n> Hmm, I was considering this case in the example given by you above[1],\n> so we have updated some fields of the row with id=1, isn't this row\n> still detectable by the delete because delete will find this by id=1\n> as we haven't updated the id? I was making the point w.r.t. the\n> example used above.\n>\n\nYour point is correct w.r.t the example but I responded considering a\ngeneral update-delete ordering. BTW, it is not clear to me how\nupdate_delete conflict will be handled with what Robert and you are\nsaying. I'll try to say what I understood. If we assume that there are\ntwo nodes A & B as mentioned in the above example and DELETE has\napplied on both nodes, now say UPDATE has been performed on node B\nthen irrespective of whether we consider the conflict as update_delete\nor update_missing, the data will remain same on both nodes. So, in\nsuch a case, we don't need to bother differentiating between those two\ntypes of conflicts. 
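A quick mechanical check of that with the T1/T2/T3 example from
upthread (a throwaway Python simulation; it assumes 'skip' as the
resolution whenever the row to update or delete is not found, and is
obviously not how the server applies changes):

def apply_change(table, op):
    # table maps key -> value; a missing key means there is no live row.
    kind, key, *rest = op
    if kind == "insert":
        table[key] = rest[0]
    elif kind == "delete":
        table.pop(key, None)              # delete_missing -> skip
    elif kind == "update":
        if key in table:
            table[key] = rest[0]
        # update_missing / update_deleted -> skip, keep local state

T1 = ("insert", 1, 1)   # on node A
T2 = ("delete", 1)      # on node A
T3 = ("update", 1, 2)   # on node B

# T1 always replicates first; T2 and T3 can arrive in either order, and
# a VACUUM in between only changes whether the conflict is reported as
# update_deleted or update_missing, not the visible result.
for order in [(T1, T2, T3), (T1, T3, T2)]:
    table = {}
    for op in order:
        apply_change(table, op)
    print([op[0] for op in order], "->", table)
# Both orders leave the table empty, i.e. the same data on every node.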
Is that what we can interpret from above?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 Jun 2024 14:07:30 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 18, 2024 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jun 18, 2024 at 10:17 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Mon, Jun 17, 2024 at 3:23 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Jun 12, 2024 at 10:03 AM Dilip Kumar <[email protected]> wrote:\n> > > >\n> > > > On Tue, Jun 11, 2024 at 7:44 PM Tomas Vondra\n> > > > <[email protected]> wrote:\n> > > >\n> > > > > > Yes, that's correct. However, many cases could benefit from the\n> > > > > > update_deleted conflict type if it can be implemented reliably. That's\n> > > > > > why we wanted to give it a try. But if we can't achieve predictable\n> > > > > > results with it, I'm fine to drop this approach and conflict_type. We\n> > > > > > can consider a better design in the future that doesn't depend on\n> > > > > > non-vacuumed entries and provides a more robust method for identifying\n> > > > > > deleted rows.\n> > > > > >\n> > > > >\n> > > > > I agree having a separate update_deleted conflict would be beneficial,\n> > > > > I'm not arguing against that - my point is actually that I think this\n> > > > > conflict type is required, and that it needs to be detected reliably.\n> > > > >\n> > > >\n> > > > When working with a distributed system, we must accept some form of\n> > > > eventual consistency model. However, it's essential to design a\n> > > > predictable and acceptable behavior. For example, if a change is a\n> > > > result of a previous operation (such as an update on node B triggered\n> > > > after observing an operation on node A), we can say that the operation\n> > > > on node A happened before the operation on node B. Conversely, if\n> > > > operations on nodes A and B are independent, we consider them\n> > > > concurrent.\n> > > >\n> > > > In distributed systems, clock skew is a known issue. To establish a\n> > > > consistency model, we need to ensure it guarantees the\n> > > > \"happens-before\" relationship. Consider a scenario with three nodes:\n> > > > NodeA, NodeB, and NodeC. If NodeA sends changes to NodeB, and\n> > > > subsequently NodeB makes changes, and then both NodeA's and NodeB's\n> > > > changes are sent to NodeC, the clock skew might make NodeB's changes\n> > > > appear to have occurred before NodeA's changes. However, we should\n> > > > maintain data that indicates NodeB's changes were triggered after\n> > > > NodeA's changes arrived at NodeB. This implies that logically, NodeB's\n> > > > changes happened after NodeA's changes, despite what the timestamps\n> > > > suggest.\n> > > >\n> > > > A common method to handle such cases is using vector clocks for\n> > > > conflict resolution.\n> > > >\n> > >\n> > > I think the unbounded size of the vector could be a problem to store\n> > > for each event. However, while researching previous discussions, it\n> > > came to our notice that we have discussed this topic in the past as\n> > > well in the context of standbys. For recovery_min_apply_delay, we\n> > > decided the clock skew is not a problem as the settings of this\n> > > parameter are much larger than typical time deviations between servers\n> > > as mentioned in docs. 
Similarly for casual reads [1], there was a\n> > > proposal to introduce max_clock_skew parameter and suggesting the user\n> > > to make sure to have NTP set up correctly. We have tried to check\n> > > other databases (like Ora and BDR) where CDR is implemented but didn't\n> > > find anything specific to clock skew. So, I propose to go with a GUC\n> > > like max_clock_skew such that if the difference of time between the\n> > > incoming transaction's commit time and the local time is more than\n> > > max_clock_skew then we raise an ERROR. It is not clear to me that\n> > > putting bigger effort into clock skew is worth especially when other\n> > > systems providing CDR feature (like Ora or BDR) for decades have not\n> > > done anything like vector clocks. It is possible that this is less of\n> > > a problem w.r.t CDR and just detecting the anomaly in clock skew is\n> > > good enough.\n> >\n> > I believe that if we've accepted this solution elsewhere, then we can\n> > also consider the same. Basically, we're allowing the application to\n> > set its tolerance for clock skew. And, if the skew exceeds that\n> > tolerance, it's the application's responsibility to synchronize;\n> > otherwise, an error will occur. This approach seems reasonable.\n>\n> This model can be further extended by making the apply worker wait if\n> the remote transaction's commit_ts is greater than the local\n> timestamp. This ensures that no local transactions occurring after the\n> remote transaction appear to have happened earlier due to clock skew\n> instead we make them happen before the remote transaction by delaying\n> the remote transaction apply. Essentially, by having the remote\n> application wait until the local timestamp matches the remote\n> transaction's timestamp, we ensure that the remote transaction, which\n> seems to occur after concurrent local transactions due to clock skew,\n> is actually applied after those transactions.\n>\n> With this model, there should be no ordering errors from the\n> application's perspective as well if synchronous commit is enabled.\n> The transaction initiated by the publisher cannot be completed until\n> it is applied to the synchronous subscriber. This ensures that if the\n> subscriber's clock is lagging behind the publisher's clock, the\n> transaction will not be applied until the subscriber's local clock is\n> in sync, preventing the transaction from being completed out of order.\n\nI tried to work out a few scenarios with this, where the apply worker\nwill wait until its local clock hits 'remote_commit_tts - max_skew\npermitted'. Please have a look.\n\nLet's say, we have a GUC to configure max_clock_skew permitted.\nResolver is last_update_wins in both cases.\n\n----------------\n1) Case 1: max_clock_skew set to 0 i.e. no tolerance for clock skew.\n\nRemote Update with commit_timestamp = 10.20AM.\nLocal clock (which is say 5 min behind) shows = 10.15AM.\n\nWhen remote update arrives at local node, we see that skew is greater\nthan max_clock_skew and thus apply worker waits till local clock hits\n'remote's commit_tts - max_clock_skew' i.e. till 10.20 AM. Once the\nlocal clock hits 10.20 AM, the worker applies the remote change with\ncommit_tts of 10.20AM. 
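(The wait rule I am assuming in both cases, written as a small Python
helper just so we are looking at the same thing; every name here is
made up:)

from datetime import datetime, timedelta, timezone
import time

def wait_out_clock_skew(remote_commit_ts: datetime,
                        max_clock_skew: timedelta) -> None:
    # Sleep until the local clock reaches remote_commit_ts - max_clock_skew,
    # then apply the change keeping the remote commit timestamp as is.
    target = remote_commit_ts - max_clock_skew
    delay = (target - datetime.now(timezone.utc)).total_seconds()
    if delay > 0:
        time.sleep(delay)

# With max_clock_skew = 0 the worker waits the full 5 minutes (till 10.20);
# with max_clock_skew = 2 min it waits only till 10.18.
wait_out_clock_skew(datetime.now(timezone.utc) + timedelta(seconds=1),
                    timedelta(0))          # demo: sleeps about a second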
In the meantime (during wait period of apply\nworker)) if some local update on same row has happened at say 10.18am,\nthat will applied first, which will be later overwritten by above\nremote change of 10.20AM as remote-change's timestamp appear more\nlatest, even though it has happened earlier than local change.\n\n2) Case 2: max_clock_skew is set to 2min.\n\nRemote Update with commit_timestamp=10.20AM\nLocal clock (which is say 5 min behind) = 10.15AM.\n\nNow apply worker will notice skew greater than 2min and thus will wait\ntill local clock hits 'remote's commit_tts - max_clock_skew' i.e.\n10.18 and will apply the change with commit_tts of 10.20 ( as we\nalways save the origin's commit timestamp into local commit_tts, see\nRecordTransactionCommit->TransactionTreeSetCommitTsData). Now lets say\nanother local update is triggered at 10.19am, it will be applied\nlocally but it will be ignored on remote node. On the remote node ,\nthe existing change with a timestamp of 10.20 am will win resulting in\ndata divergence.\n----------\n\nIn case 1, the local change which was otherwise triggered later than\nthe remote change is overwritten by remote change. And in Case2, it\nresults in data divergence. Is this behaviour in both cases expected?\nOr am I getting the wait logic wrong? Thoughts?\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 18 Jun 2024 15:28:51 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 18, 2024 at 7:44 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Thursday, June 13, 2024 2:11 PM Masahiko Sawada <[email protected]> wrote:\n>\n> Hi,\n>\n> > On Wed, Jun 5, 2024 at 3:32 PM Zhijie Hou (Fujitsu) <[email protected]>\n> > wrote:\n> > >\n> > > This time at PGconf.dev[1], we had some discussions regarding this\n> > > project. The proposed approach is to split the work into two main\n> > > components. The first part focuses on conflict detection, which aims\n> > > to identify and report conflicts in logical replication. This feature\n> > > will enable users to monitor the unexpected conflicts that may occur.\n> > > The second part involves the actual conflict resolution. Here, we will\n> > > provide built-in resolutions for each conflict and allow user to\n> > > choose which resolution will be used for which conflict(as described\n> > > in the initial email of this thread).\n> >\n> > I agree with this direction that we focus on conflict detection (and\n> > logging) first and then develop conflict resolution on top of that.\n>\n> Thanks for your reply !\n>\n> >\n> > >\n> > > Of course, we are open to alternative ideas and suggestions, and the\n> > > strategy above can be changed based on ongoing discussions and\n> > > feedback received.\n> > >\n> > > Here is the patch of the first part work, which adds a new parameter\n> > > detect_conflict for CREATE and ALTER subscription commands. 
This new\n> > > parameter will decide if subscription will go for conflict detection.\n> > > By default, conflict detection will be off for a subscription.\n> > >\n> > > When conflict detection is enabled, additional logging is triggered in\n> > > the following conflict scenarios:\n> > >\n> > > * updating a row that was previously modified by another origin.\n> > > * The tuple to be updated is not found.\n> > > * The tuple to be deleted is not found.\n> > >\n> > > While there exist other conflict types in logical replication, such as\n> > > an incoming insert conflicting with an existing row due to a primary\n> > > key or unique index, these cases already result in constraint violation errors.\n> >\n> > What does detect_conflict being true actually mean to users? I understand that\n> > detect_conflict being true could introduce some overhead to detect conflicts.\n> > But in terms of conflict detection, even if detect_confict is false, we detect\n> > some conflicts such as concurrent inserts with the same key. Once we\n> > introduce the complete conflict detection feature, I'm not sure there is a case\n> > where a user wants to detect only some particular types of conflict.\n> >\n> > > Therefore, additional conflict detection for these cases is currently\n> > > omitted to minimize potential overhead. However, the pre-detection for\n> > > conflict in these error cases is still essential to support automatic\n> > > conflict resolution in the future.\n> >\n> > I feel that we should log all types of conflict in an uniform way. For example,\n> > with detect_conflict being true, the update_differ conflict is reported as\n> > \"conflict %s detected on relation \"%s\"\", whereas concurrent inserts with the\n> > same key is reported as \"duplicate key value violates unique constraint \"%s\"\",\n> > which could confuse users.\n>\n> Do you mean it's ok to add a pre-check before applying the INSERT, which will\n> verify if the remote tuple violates any unique constraints, and if it violates\n> then we log a conflict message ? I thought about this but was slightly\n> worried about the extra cost it would bring. OTOH, if we think it's acceptable,\n> we could do that since the cost is there only when detect_conflict is enabled.\n>\n> I also thought of logging such a conflict message in pg_catch(), but I think we\n> lack some necessary info(relation, index name, column name) at the catch block.\n>\n\nCan't we use/extend existing 'apply_error_callback_arg' for this purpose?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 Jun 2024 11:23:24 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 11, 2024 at 3:12 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Sat, Jun 8, 2024 at 3:52 PM Amit Kapila <[email protected]> wrote:\n>>\n>> On Fri, Jun 7, 2024 at 5:39 PM Ashutosh Bapat\n>> <[email protected]> wrote:\n>> >\n>> > On Thu, Jun 6, 2024 at 5:16 PM Nisha Moond <[email protected]> wrote:\n>> >>\n>> >> >\n>> >>\n>> >> Here are more use cases of the \"earliest_timestamp_wins\" resolution method:\n>> >> 1) Applications where the record of first occurrence of an event is\n>> >> important. 
For example, sensor based applications like earthquake\n>> >> detection systems, capturing the first seismic wave's time is crucial.\n>> >> 2) Scheduling systems, like appointment booking, prioritize the\n>> >> earliest request when handling concurrent ones.\n>> >> 3) In contexts where maintaining chronological order is important -\n>> >> a) Social media platforms display comments ensuring that the\n>> >> earliest ones are visible first.\n>> >> b) Finance transaction processing systems rely on timestamps to\n>> >> prioritize the processing of transactions, ensuring that the earliest\n>> >> transaction is handled first\n>> >\n>> >\n>> > Thanks for sharing examples. However, these scenarios would be handled by the application and not during replication. What we are discussing here is the timestamp when a row was updated/inserted/deleted (or rather when the transaction that updated row committed/became visible) and not a DML on column which is of type timestamp. Some implementations use a hidden timestamp column but that's different from a user column which captures timestamp of (say) an event. The conflict resolution will be based on the timestamp when that column's value was recorded in the database which may be different from the value of the column itself.\n>> >\n>>\n>> It depends on how these operations are performed. For example, the\n>> appointment booking system could be prioritized via a transaction\n>> updating a row with columns emp_name, emp_id, reserved, time_slot.\n>> Now, if two employees at different geographical locations try to book\n>> the calendar, the earlier transaction will win.\n>\n>\n> I doubt that it would be that simple. The application will have to intervene and tell one of the employees that their reservation has failed. It looks natural that the first one to reserve the room should get the reservation, but implementing that is more complex than resolving a conflict in the database. In fact, mostly it will be handled outside database.\n>\n\nSure, the application needs some handling but I have tried to explain\nwith a simple way that comes to my mind and how it can be realized\nwith db involved. This is a known conflict detection method but note\nthat I am not insisting to have \"earliest_timestamp_wins\". Even, if we\nwant this we can have a separate discussion on this and add it later.\n\n>>\n>>\n>> > If we use the transaction commit timestamp as basis for resolution, a transaction where multiple rows conflict may end up with different rows affected by that transaction being resolved differently. Say three transactions T1, T2 and T3 on separate origins with timestamps t1, t2, and t3 respectively changed rows r1, r2 and r2, r3 and r1, r4 respectively. Changes to r1 and r2 will conflict. Let's say T2 and T3 are applied first and then T1 is applied. If t2 < t1 < t3, r1 will end up with version of T3 and r2 will end up with version of T1 after applying all the three transactions.\n>> >\n>>\n>> Are you telling the results based on latest_timestamp_wins? If so,\n>> then it is correct. OTOH, if the user has configured\n>> \"earliest_timestamp_wins\" resolution method, then we should end up\n>> with a version of r1 from T1 because t1 < t3. Also, due to the same\n>> reason, we should have version r2 from T2.\n>>\n>> >\n>> Would that introduce an inconsistency between r1 and r2?\n>> >\n>>\n>> As per my understanding, this shouldn't be an inconsistency. 
Won't it\n>> be true even when the transactions are performed on a single node with\n>> the same timing?\n>>\n>\n> The inconsistency will arise irrespective of conflict resolution method. On a single system effects of whichever transaction runs last will be visible entirely. But in the example above the node where T1, T2, and T3 (from *different*) origins) are applied, we might end up with a situation where some changes from T1 are applied whereas some changes from T3 are applied.\n>\n\nI still think it will lead to the same result if all three T1, T2, T3\nhappen on the same node in the same order as you mentioned. Say, we\nhave a pre-existing table with rows r1, r2, r3, r4. Now, if we use the\norder of transactions to be applied on the same node based on t2 < t1\n< t3. First T2 will be applied, so for now, r1 is a pre-existing\nversion and r2 is from T2. Next, when T1 is performed, both r1 and r2\nare from T1. Lastly, when T3 is applied, r1 will be from T3 and r2\nwill be from T1. This is what you mentioned will happen after conflict\nresolution in the above example.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 Jun 2024 12:03:26 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 18, 2024 at 3:29 PM shveta malik <[email protected]> wrote:\n> On Tue, Jun 18, 2024 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n>\n> I tried to work out a few scenarios with this, where the apply worker\n> will wait until its local clock hits 'remote_commit_tts - max_skew\n> permitted'. Please have a look.\n>\n> Let's say, we have a GUC to configure max_clock_skew permitted.\n> Resolver is last_update_wins in both cases.\n> ----------------\n> 1) Case 1: max_clock_skew set to 0 i.e. no tolerance for clock skew.\n>\n> Remote Update with commit_timestamp = 10.20AM.\n> Local clock (which is say 5 min behind) shows = 10.15AM.\n>\n> When remote update arrives at local node, we see that skew is greater\n> than max_clock_skew and thus apply worker waits till local clock hits\n> 'remote's commit_tts - max_clock_skew' i.e. till 10.20 AM. Once the\n> local clock hits 10.20 AM, the worker applies the remote change with\n> commit_tts of 10.20AM. In the meantime (during wait period of apply\n> worker)) if some local update on same row has happened at say 10.18am,\n> that will applied first, which will be later overwritten by above\n> remote change of 10.20AM as remote-change's timestamp appear more\n> latest, even though it has happened earlier than local change.\n\nFor the sake of simplicity let's call the change that happened at\n10:20 AM change-1 and the change that happened at 10:15 as change-2\nand assume we are talking about the synchronous commit only.\n\nI think now from an application perspective the change-1 wouldn't have\ncaused the change-2 because we delayed applying change-2 on the local\nnode which would have delayed the confirmation of the change-1 to the\napplication that means we have got the change-2 on the local node\nwithout the confirmation of change-1 hence change-2 has no causal\ndependency on the change-1. So it's fine that we perform change-1\nbefore change-2 and the timestamp will also show the same at any other\nnode if they receive these 2 changes.\n\nThe goal is to ensure that if we define the order where change-2\nhappens before change-1, this same order should be visible on all\nother nodes. 
This will hold true because the commit timestamp of\nchange-2 is earlier than that of change-1.\n\n> 2) Case 2: max_clock_skew is set to 2min.\n>\n> Remote Update with commit_timestamp=10.20AM\n> Local clock (which is say 5 min behind) = 10.15AM.\n>\n> Now apply worker will notice skew greater than 2min and thus will wait\n> till local clock hits 'remote's commit_tts - max_clock_skew' i.e.\n> 10.18 and will apply the change with commit_tts of 10.20 ( as we\n> always save the origin's commit timestamp into local commit_tts, see\n> RecordTransactionCommit->TransactionTreeSetCommitTsData). Now lets say\n> another local update is triggered at 10.19am, it will be applied\n> locally but it will be ignored on remote node. On the remote node ,\n> the existing change with a timestamp of 10.20 am will win resulting in\n> data divergence.\n\nLet's call the 10:20 AM change as a change-1 and the change that\nhappened at 10:19 as change-2\n\nIIUC, although we apply the change-1 at 10:18 AM the commit_ts of that\ncommit_ts of that change is 10:20, and the same will be visible to all\nother nodes. So in conflict resolution still the change-1 happened\nafter the change-2 because change-2's commit_ts is 10:19 AM. Now\nthere could be a problem with the causal order because we applied the\nchange-1 at 10:18 AM so the application might have gotten confirmation\nat 10:18 AM and the change-2 of the local node may be triggered as a\nresult of confirmation of the change-1 that means now change-2 has a\ncausal dependency on the change-1 but commit_ts shows change-2\nhappened before the change-1 on all the nodes.\n\nSo, is this acceptable? I think yes because the user has configured a\nmaximum clock skew of 2 minutes, which means the detected order might\nnot always align with the causal order for transactions occurring\nwithin that time frame. Generally, the ideal configuration for\nmax_clock_skew should be in multiple of the network round trip time.\nAssuming this configuration, we wouldn’t encounter this problem\nbecause for change-2 to be caused by change-1, the client would need\nto get confirmation of change-1 and then trigger change-2, which would\ntake at least 2-3 network round trips.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jun 2024 13:52:16 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 19, 2024 at 1:52 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jun 18, 2024 at 3:29 PM shveta malik <[email protected]> wrote:\n> > On Tue, Jun 18, 2024 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > I tried to work out a few scenarios with this, where the apply worker\n> > will wait until its local clock hits 'remote_commit_tts - max_skew\n> > permitted'. Please have a look.\n> >\n> > Let's say, we have a GUC to configure max_clock_skew permitted.\n> > Resolver is last_update_wins in both cases.\n> > ----------------\n> > 1) Case 1: max_clock_skew set to 0 i.e. no tolerance for clock skew.\n> >\n> > Remote Update with commit_timestamp = 10.20AM.\n> > Local clock (which is say 5 min behind) shows = 10.15AM.\n> >\n> > When remote update arrives at local node, we see that skew is greater\n> > than max_clock_skew and thus apply worker waits till local clock hits\n> > 'remote's commit_tts - max_clock_skew' i.e. till 10.20 AM. 
Once the\n> > local clock hits 10.20 AM, the worker applies the remote change with\n> > commit_tts of 10.20AM. In the meantime (during wait period of apply\n> > worker)) if some local update on same row has happened at say 10.18am,\n> > that will applied first, which will be later overwritten by above\n> > remote change of 10.20AM as remote-change's timestamp appear more\n> > latest, even though it has happened earlier than local change.\n>\n> For the sake of simplicity let's call the change that happened at\n> 10:20 AM change-1 and the change that happened at 10:15 as change-2\n> and assume we are talking about the synchronous commit only.\n\nDo you mean \"the change that happened at 10:18 as change-2\"\n\n>\n> I think now from an application perspective the change-1 wouldn't have\n> caused the change-2 because we delayed applying change-2 on the local\n> node\n\nDo you mean \"we delayed applying change-1 on the local node.\"\n\n>which would have delayed the confirmation of the change-1 to the\n> application that means we have got the change-2 on the local node\n> without the confirmation of change-1 hence change-2 has no causal\n> dependency on the change-1. So it's fine that we perform change-1\n> before change-2\n\nDo you mean \"So it's fine that we perform change-2 before change-1\"\n\n>and the timestamp will also show the same at any other\n> node if they receive these 2 changes.\n>\n> The goal is to ensure that if we define the order where change-2\n> happens before change-1, this same order should be visible on all\n> other nodes. This will hold true because the commit timestamp of\n> change-2 is earlier than that of change-1.\n\nConsidering the above corrections as base, I agree with this.\n\n> > 2) Case 2: max_clock_skew is set to 2min.\n> >\n> > Remote Update with commit_timestamp=10.20AM\n> > Local clock (which is say 5 min behind) = 10.15AM.\n> >\n> > Now apply worker will notice skew greater than 2min and thus will wait\n> > till local clock hits 'remote's commit_tts - max_clock_skew' i.e.\n> > 10.18 and will apply the change with commit_tts of 10.20 ( as we\n> > always save the origin's commit timestamp into local commit_tts, see\n> > RecordTransactionCommit->TransactionTreeSetCommitTsData). Now lets say\n> > another local update is triggered at 10.19am, it will be applied\n> > locally but it will be ignored on remote node. On the remote node ,\n> > the existing change with a timestamp of 10.20 am will win resulting in\n> > data divergence.\n>\n> Let's call the 10:20 AM change as a change-1 and the change that\n> happened at 10:19 as change-2\n>\n> IIUC, although we apply the change-1 at 10:18 AM the commit_ts of that\n> commit_ts of that change is 10:20, and the same will be visible to all\n> other nodes. So in conflict resolution still the change-1 happened\n> after the change-2 because change-2's commit_ts is 10:19 AM. Now\n> there could be a problem with the causal order because we applied the\n> change-1 at 10:18 AM so the application might have gotten confirmation\n> at 10:18 AM and the change-2 of the local node may be triggered as a\n> result of confirmation of the change-1 that means now change-2 has a\n> causal dependency on the change-1 but commit_ts shows change-2\n> happened before the change-1 on all the nodes.\n>\n> So, is this acceptable? I think yes because the user has configured a\n> maximum clock skew of 2 minutes, which means the detected order might\n> not always align with the causal order for transactions occurring\n> within that time frame.\n\nAgree. 
I had the same thoughts, and wanted to confirm my understanding.\n\n>Generally, the ideal configuration for\n> max_clock_skew should be in multiple of the network round trip time.\n> Assuming this configuration, we wouldn’t encounter this problem\n> because for change-2 to be caused by change-1, the client would need\n> to get confirmation of change-1 and then trigger change-2, which would\n> take at least 2-3 network round trips.\n\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 19 Jun 2024 14:36:42 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 19, 2024 at 12:03 PM Amit Kapila <[email protected]>\nwrote:\n\n> > I doubt that it would be that simple. The application will have to\n> intervene and tell one of the employees that their reservation has failed.\n> It looks natural that the first one to reserve the room should get the\n> reservation, but implementing that is more complex than resolving a\n> conflict in the database. In fact, mostly it will be handled outside\n> database.\n> >\n>\n> Sure, the application needs some handling but I have tried to explain\n> with a simple way that comes to my mind and how it can be realized\n> with db involved. This is a known conflict detection method but note\n> that I am not insisting to have \"earliest_timestamp_wins\". Even, if we\n> want this we can have a separate discussion on this and add it later.\n>\n>\nIt will be good to add a minimal set of conflict resolution strategies to\nbegin with, while designing the feature for extensibility. I imagine the\nfirst version might just detect the conflict and throw error or do nothing.\nThat's already two simple conflict resolution strategies with minimal\nefforts. We can add more complicated ones incrementally.\n\n\n> >\n> > The inconsistency will arise irrespective of conflict resolution method.\n> On a single system effects of whichever transaction runs last will be\n> visible entirely. But in the example above the node where T1, T2, and T3\n> (from *different*) origins) are applied, we might end up with a situation\n> where some changes from T1 are applied whereas some changes from T3 are\n> applied.\n> >\n>\n> I still think it will lead to the same result if all three T1, T2, T3\n> happen on the same node in the same order as you mentioned. Say, we\n> have a pre-existing table with rows r1, r2, r3, r4. Now, if we use the\n> order of transactions to be applied on the same node based on t2 < t1\n> < t3. First T2 will be applied, so for now, r1 is a pre-existing\n> version and r2 is from T2. Next, when T1 is performed, both r1 and r2\n> are from T1. Lastly, when T3 is applied, r1 will be from T3 and r2\n> will be from T1. This is what you mentioned will happen after conflict\n> resolution in the above example.\n>\n>\nYou are right. It won't affect the consistency. The contents of transaction\non each node might vary after application depending upon the changes that\nconflict resolver makes; but the end result will be the same.\n\n--\nBest Wishes,\nAshutosh Bapat\n\nOn Wed, Jun 19, 2024 at 12:03 PM Amit Kapila <[email protected]> wrote:\n> I doubt that it would be that simple. The application will have to intervene and tell one of the employees that their reservation has failed. It looks natural that the first one to reserve the room should get the reservation, but implementing that is more complex than resolving a conflict in the database. 
In fact, mostly it will be handled outside database.\n>\n\nSure, the application needs some handling but I have tried to explain\nwith a simple way that comes to my mind and how it can be realized\nwith db involved. This is a known conflict detection method but note\nthat I am not insisting to have \"earliest_timestamp_wins\". Even, if we\nwant this we can have a separate discussion on this and add it later.\nIt will be good to add a minimal set of conflict resolution strategies to begin with, while designing the feature for extensibility. I imagine the first version might just detect the conflict and throw error or do nothing. That's already two simple conflict resolution strategies with minimal efforts. We can add more complicated ones incrementally. \n>\n> The inconsistency will arise irrespective of conflict resolution method. On a single system effects of whichever transaction runs last will be visible entirely. But in the example above the node where T1, T2, and T3 (from *different*) origins) are applied, we might end up with a situation where some changes from T1 are applied whereas some changes from T3 are applied.\n>\n\nI still think it will lead to the same result if all three T1, T2, T3\nhappen on the same node in the same order as you mentioned. Say, we\nhave a pre-existing table with rows r1, r2, r3, r4. Now, if we use the\norder of transactions to be applied on the same node based on t2 < t1\n< t3. First T2 will be applied, so for now, r1 is a pre-existing\nversion and r2 is from T2. Next, when T1 is performed, both r1 and r2\nare from T1. Lastly, when T3 is applied, r1 will be from T3 and r2\nwill be from T1. This is what you mentioned will happen after conflict\nresolution in the above example.\nYou are right. It won't affect the consistency. The contents of transaction on each node might vary after application depending upon the changes that conflict resolver makes; but the end result will be the same.--Best Wishes,Ashutosh Bapat", "msg_date": "Wed, 19 Jun 2024 14:51:12 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 19, 2024 at 2:36 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Jun 19, 2024 at 1:52 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Tue, Jun 18, 2024 at 3:29 PM shveta malik <[email protected]> wrote:\n> > > On Tue, Jun 18, 2024 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > I tried to work out a few scenarios with this, where the apply worker\n> > > will wait until its local clock hits 'remote_commit_tts - max_skew\n> > > permitted'. Please have a look.\n> > >\n> > > Let's say, we have a GUC to configure max_clock_skew permitted.\n> > > Resolver is last_update_wins in both cases.\n> > > ----------------\n> > > 1) Case 1: max_clock_skew set to 0 i.e. no tolerance for clock skew.\n> > >\n> > > Remote Update with commit_timestamp = 10.20AM.\n> > > Local clock (which is say 5 min behind) shows = 10.15AM.\n> > >\n> > > When remote update arrives at local node, we see that skew is greater\n> > > than max_clock_skew and thus apply worker waits till local clock hits\n> > > 'remote's commit_tts - max_clock_skew' i.e. till 10.20 AM. Once the\n> > > local clock hits 10.20 AM, the worker applies the remote change with\n> > > commit_tts of 10.20AM. 
In the meantime (during wait period of apply\n> > > worker)) if some local update on same row has happened at say 10.18am,\n> > > that will applied first, which will be later overwritten by above\n> > > remote change of 10.20AM as remote-change's timestamp appear more\n> > > latest, even though it has happened earlier than local change.\n\nOops lot of mistakes in the usage of change-1 and change-2, sorry about that.\n\n> > For the sake of simplicity let's call the change that happened at\n> > 10:20 AM change-1 and the change that happened at 10:15 as change-2\n> > and assume we are talking about the synchronous commit only.\n>\n> Do you mean \"the change that happened at 10:18 as change-2\"\n\nRight\n\n> >\n> > I think now from an application perspective the change-1 wouldn't have\n> > caused the change-2 because we delayed applying change-2 on the local\n> > node\n>\n> Do you mean \"we delayed applying change-1 on the local node.\"\n\nRight\n\n> >which would have delayed the confirmation of the change-1 to the\n> > application that means we have got the change-2 on the local node\n> > without the confirmation of change-1 hence change-2 has no causal\n> > dependency on the change-1. So it's fine that we perform change-1\n> > before change-2\n>\n> Do you mean \"So it's fine that we perform change-2 before change-1\"\n\nRight\n\n> >and the timestamp will also show the same at any other\n> > node if they receive these 2 changes.\n> >\n> > The goal is to ensure that if we define the order where change-2\n> > happens before change-1, this same order should be visible on all\n> > other nodes. This will hold true because the commit timestamp of\n> > change-2 is earlier than that of change-1.\n>\n> Considering the above corrections as base, I agree with this.\n\n+1\n\n> > > 2) Case 2: max_clock_skew is set to 2min.\n> > >\n> > > Remote Update with commit_timestamp=10.20AM\n> > > Local clock (which is say 5 min behind) = 10.15AM.\n> > >\n> > > Now apply worker will notice skew greater than 2min and thus will wait\n> > > till local clock hits 'remote's commit_tts - max_clock_skew' i.e.\n> > > 10.18 and will apply the change with commit_tts of 10.20 ( as we\n> > > always save the origin's commit timestamp into local commit_tts, see\n> > > RecordTransactionCommit->TransactionTreeSetCommitTsData). Now lets say\n> > > another local update is triggered at 10.19am, it will be applied\n> > > locally but it will be ignored on remote node. On the remote node ,\n> > > the existing change with a timestamp of 10.20 am will win resulting in\n> > > data divergence.\n> >\n> > Let's call the 10:20 AM change as a change-1 and the change that\n> > happened at 10:19 as change-2\n> >\n> > IIUC, although we apply the change-1 at 10:18 AM the commit_ts of that\n> > commit_ts of that change is 10:20, and the same will be visible to all\n> > other nodes. So in conflict resolution still the change-1 happened\n> > after the change-2 because change-2's commit_ts is 10:19 AM. Now\n> > there could be a problem with the causal order because we applied the\n> > change-1 at 10:18 AM so the application might have gotten confirmation\n> > at 10:18 AM and the change-2 of the local node may be triggered as a\n> > result of confirmation of the change-1 that means now change-2 has a\n> > causal dependency on the change-1 but commit_ts shows change-2\n> > happened before the change-1 on all the nodes.\n> >\n> > So, is this acceptable? 
I think yes because the user has configured a\n> > maximum clock skew of 2 minutes, which means the detected order might\n> > not always align with the causal order for transactions occurring\n> > within that time frame.\n>\n> Agree. I had the same thoughts, and wanted to confirm my understanding.\n\nOkay\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jun 2024 14:57:02 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 19, 2024 at 2:51 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Wed, Jun 19, 2024 at 12:03 PM Amit Kapila <[email protected]> wrote:\n>>\n>> > I doubt that it would be that simple. The application will have to intervene and tell one of the employees that their reservation has failed. It looks natural that the first one to reserve the room should get the reservation, but implementing that is more complex than resolving a conflict in the database. In fact, mostly it will be handled outside database.\n>> >\n>>\n>> Sure, the application needs some handling but I have tried to explain\n>> with a simple way that comes to my mind and how it can be realized\n>> with db involved. This is a known conflict detection method but note\n>> that I am not insisting to have \"earliest_timestamp_wins\". Even, if we\n>> want this we can have a separate discussion on this and add it later.\n>>\n>\n> It will be good to add a minimal set of conflict resolution strategies to begin with, while designing the feature for extensibility. I imagine the first version might just detect the conflict and throw error or do nothing. That's already two simple conflict resolution strategies with minimal efforts. We can add more complicated ones incrementally.\n>\n\nAgreed, splitting the work into multiple patches would help us to\nfinish the easier ones first.\n\nI have thought to divide it such that in the first patch, we detect\nconflicts like 'insert_exists', 'update_differ', 'update_missing', and\n'delete_missing' (the definition of each could be found in the initial\nemail [1]) and throw an ERROR or write them in LOG. Various people\nagreed to have this as a separate committable work [2]. This can help\nusers to detect and monitor the conflicts in a better way. I have\nintentionally skipped update_deleted as it would require more\ninfrastructure and it would be helpful even without that.\n\nIn the second patch, we can implement simple built-in resolution\nstrategies like apply and skip (which can be named as remote_apply and\nkeep_local, see [3][4] for details on these strategies) with ERROR or\nLOG being the default strategy. We can allow these strategies to be\nconfigured at the global and table level.\n\nIn the third patch, we can add monitoring capability for conflicts and\nresolutions as mentioned by Jonathan [5]. 
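To make the first (detection) patch concrete, a minimal two-node
sketch that should end up reporting an 'insert_exists' conflict could
look like the following; the table, publication, subscription names
and the connection string are made up for illustration:

-- node-1 (publisher)
CREATE TABLE conf_test (id int PRIMARY KEY, val text);
CREATE PUBLICATION pub_conf FOR TABLE conf_test;

-- node-2 (subscriber)
CREATE TABLE conf_test (id int PRIMARY KEY, val text);
CREATE SUBSCRIPTION sub_conf
    CONNECTION 'host=node1 dbname=postgres'
    PUBLICATION pub_conf;

-- node-2: local row that the next incoming change will collide with
INSERT INTO conf_test VALUES (1, 'local');

-- node-1: the same key flows to node-2, where the apply worker should
-- detect 'insert_exists' and ERROR or LOG it, per the first patch
INSERT INTO conf_test VALUES (1, 'remote');

As for the monitoring part: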
Here, we can have stats like\nhow many conflicts of a particular type have happened.\n\nIn the meantime, we can keep discussing and try to reach a consensus\non the timing-related resolution strategy like 'last_update_wins' and\nthe conflict strategy 'update_deleted'.\n\nIf we agree on the above, some of the work, especially the first one,\ncould even be discussed in a separate thread.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAJpy0uD0-DpYVMtsxK5R%3DzszXauZBayQMAYET9sWr_w0CNWXxQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAD21AoAa6JzqhXY02uNUPb-aTozu2RY9nMdD1%3DTUh%2BFpskkYtw%40mail.gmail.com\n[3] - https://www.postgresql.org/message-id/CAJpy0uD0-DpYVMtsxK5R%3DzszXauZBayQMAYET9sWr_w0CNWXxQ%40mail.gmail.com\n[4] - https://github.com/2ndquadrant/pglogical?tab=readme-ov-file#conflicts\n[5] - https://www.postgresql.org/message-id/1eb9242f-dcb6-45c3-871c-98ec324e03ef%40postgresql.org\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Jun 2024 15:21:29 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 18, 2024 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jun 18, 2024 at 10:17 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > >\n> > > I think the unbounded size of the vector could be a problem to store\n> > > for each event. However, while researching previous discussions, it\n> > > came to our notice that we have discussed this topic in the past as\n> > > well in the context of standbys. For recovery_min_apply_delay, we\n> > > decided the clock skew is not a problem as the settings of this\n> > > parameter are much larger than typical time deviations between servers\n> > > as mentioned in docs. Similarly for casual reads [1], there was a\n> > > proposal to introduce max_clock_skew parameter and suggesting the user\n> > > to make sure to have NTP set up correctly. We have tried to check\n> > > other databases (like Ora and BDR) where CDR is implemented but didn't\n> > > find anything specific to clock skew. So, I propose to go with a GUC\n> > > like max_clock_skew such that if the difference of time between the\n> > > incoming transaction's commit time and the local time is more than\n> > > max_clock_skew then we raise an ERROR. It is not clear to me that\n> > > putting bigger effort into clock skew is worth especially when other\n> > > systems providing CDR feature (like Ora or BDR) for decades have not\n> > > done anything like vector clocks. It is possible that this is less of\n> > > a problem w.r.t CDR and just detecting the anomaly in clock skew is\n> > > good enough.\n> >\n> > I believe that if we've accepted this solution elsewhere, then we can\n> > also consider the same. Basically, we're allowing the application to\n> > set its tolerance for clock skew. And, if the skew exceeds that\n> > tolerance, it's the application's responsibility to synchronize;\n> > otherwise, an error will occur. This approach seems reasonable.\n>\n> This model can be further extended by making the apply worker wait if\n> the remote transaction's commit_ts is greater than the local\n> timestamp. This ensures that no local transactions occurring after the\n> remote transaction appear to have happened earlier due to clock skew\n> instead we make them happen before the remote transaction by delaying\n> the remote transaction apply. 
Essentially, by having the remote\n> application wait until the local timestamp matches the remote\n> transaction's timestamp, we ensure that the remote transaction, which\n> seems to occur after concurrent local transactions due to clock skew,\n> is actually applied after those transactions.\n>\n> With this model, there should be no ordering errors from the\n> application's perspective as well if synchronous commit is enabled.\n> The transaction initiated by the publisher cannot be completed until\n> it is applied to the synchronous subscriber. This ensures that if the\n> subscriber's clock is lagging behind the publisher's clock, the\n> transaction will not be applied until the subscriber's local clock is\n> in sync, preventing the transaction from being completed out of order.\n>\n\nAs per the discussion, this idea will help us to resolve transaction\nordering issues due to clock skew. I was thinking of having two\nvariables max_clock_skew (indicates how much clock skew is\nacceptable), max_clock_skew_options: ERROR, LOG, WAIT (indicates the\naction we need to take once the clock skew is detected). There could\nbe multiple ways to provide these parameters, one is providing them as\nGUCs, and another at the subscription or the table level. I am\nthinking whether users would only like to care about a table or set of\ntables or they would like to set such variables at the system level.\nWe already have an SKIP option (that allows us to skip the\ntransactions till a particular LSN) at the subscription level, so I am\nwondering if there is a sense to provide these new parameters related\nto conflict resolution also at the same level?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Jun 2024 16:10:39 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 20, 2024 at 3:21 PM Amit Kapila <[email protected]> wrote:\n\n> On Wed, Jun 19, 2024 at 2:51 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Wed, Jun 19, 2024 at 12:03 PM Amit Kapila <[email protected]>\n> wrote:\n> >>\n> >> > I doubt that it would be that simple. The application will have to\n> intervene and tell one of the employees that their reservation has failed.\n> It looks natural that the first one to reserve the room should get the\n> reservation, but implementing that is more complex than resolving a\n> conflict in the database. In fact, mostly it will be handled outside\n> database.\n> >> >\n> >>\n> >> Sure, the application needs some handling but I have tried to explain\n> >> with a simple way that comes to my mind and how it can be realized\n> >> with db involved. This is a known conflict detection method but note\n> >> that I am not insisting to have \"earliest_timestamp_wins\". Even, if we\n> >> want this we can have a separate discussion on this and add it later.\n> >>\n> >\n> > It will be good to add a minimal set of conflict resolution strategies\n> to begin with, while designing the feature for extensibility. I imagine the\n> first version might just detect the conflict and throw error or do nothing.\n> That's already two simple conflict resolution strategies with minimal\n> efforts. 
We can add more complicated ones incrementally.\n> >\n>\n> Agreed, splitting the work into multiple patches would help us to\n> finish the easier ones first.\n>\n> I have thought to divide it such that in the first patch, we detect\n> conflicts like 'insert_exists', 'update_differ', 'update_missing', and\n> 'delete_missing' (the definition of each could be found in the initial\n> email [1]) and throw an ERROR or write them in LOG. Various people\n> agreed to have this as a separate committable work [2]. This can help\n> users to detect and monitor the conflicts in a better way. I have\n> intentionally skipped update_deleted as it would require more\n> infrastructure and it would be helpful even without that.\n>\n\nSince we are in the initial months of release, it will be good to take a\nstock of whether the receiver receives all the information needed for most\n(if not all) of the conflict detection and resolution strategies. If there\nare any missing pieces, we may want to add those in PG18 so that improved\nconflict detection and resolution on a higher version receiver can still\nwork.\n\n\n>\n> In the second patch, we can implement simple built-in resolution\n> strategies like apply and skip (which can be named as remote_apply and\n> keep_local, see [3][4] for details on these strategies) with ERROR or\n> LOG being the default strategy. We can allow these strategies to be\n> configured at the global and table level.\n>\n> In the third patch, we can add monitoring capability for conflicts and\n> resolutions as mentioned by Jonathan [5]. Here, we can have stats like\n> how many conflicts of a particular type have happened.\n>\n\nThat looks like a plan. Thanks for chalking it out.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Jun 20, 2024 at 3:21 PM Amit Kapila <[email protected]> wrote:On Wed, Jun 19, 2024 at 2:51 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Wed, Jun 19, 2024 at 12:03 PM Amit Kapila <[email protected]> wrote:\n>>\n>> > I doubt that it would be that simple. The application will have to intervene and tell one of the employees that their reservation has failed. It looks natural that the first one to reserve the room should get the reservation, but implementing that is more complex than resolving a conflict in the database. In fact, mostly it will be handled outside database.\n>> >\n>>\n>> Sure, the application needs some handling but I have tried to explain\n>> with a simple way that comes to my mind and how it can be realized\n>> with db involved. This is a known conflict detection method but note\n>> that I am not insisting to have \"earliest_timestamp_wins\". Even, if we\n>> want this we can have a separate discussion on this and add it later.\n>>\n>\n> It will be good to add a minimal set of conflict resolution strategies to begin with, while designing the feature for extensibility. I imagine the first version might just detect the conflict and throw error or do nothing. That's already two simple conflict resolution strategies with minimal efforts. We can add more complicated ones incrementally.\n>\n\nAgreed, splitting the work into multiple patches would help us to\nfinish the easier ones first.\n\nI have thought to divide it such that in the first patch, we detect\nconflicts like 'insert_exists', 'update_differ', 'update_missing', and\n'delete_missing' (the definition of each could be found in the initial\nemail [1]) and throw an ERROR or write them in LOG. Various people\nagreed to have this as a separate committable work [2]. 
This can help\nusers to detect and monitor the conflicts in a better way. I have\nintentionally skipped update_deleted as it would require more\ninfrastructure and it would be helpful even without that.Since we are in the initial months of release, it will be good to take a stock of whether the receiver receives all the information needed for most (if not all) of the conflict detection and resolution strategies. If there are any missing pieces, we may want to add those in PG18 so that improved conflict detection and resolution on a higher version receiver can still work. \n\nIn the second patch, we can implement simple built-in resolution\nstrategies like apply and skip (which can be named as remote_apply and\nkeep_local, see [3][4] for details on these strategies) with ERROR or\nLOG being the default strategy. We can allow these strategies to be\nconfigured at the global and table level.\n\nIn the third patch, we can add monitoring capability for conflicts and\nresolutions as mentioned by Jonathan [5]. Here, we can have stats like\nhow many conflicts of a particular type have happened.That looks like a plan. Thanks for chalking it out.-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 20 Jun 2024 17:05:51 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 20, 2024 at 5:06 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Jun 20, 2024 at 3:21 PM Amit Kapila <[email protected]> wrote:\n>>\n>> On Wed, Jun 19, 2024 at 2:51 PM Ashutosh Bapat\n>> <[email protected]> wrote:\n>> >\n>> > On Wed, Jun 19, 2024 at 12:03 PM Amit Kapila <[email protected]> wrote:\n>> >>\n>> >> > I doubt that it would be that simple. The application will have to intervene and tell one of the employees that their reservation has failed. It looks natural that the first one to reserve the room should get the reservation, but implementing that is more complex than resolving a conflict in the database. In fact, mostly it will be handled outside database.\n>> >> >\n>> >>\n>> >> Sure, the application needs some handling but I have tried to explain\n>> >> with a simple way that comes to my mind and how it can be realized\n>> >> with db involved. This is a known conflict detection method but note\n>> >> that I am not insisting to have \"earliest_timestamp_wins\". Even, if we\n>> >> want this we can have a separate discussion on this and add it later.\n>> >>\n>> >\n>> > It will be good to add a minimal set of conflict resolution strategies to begin with, while designing the feature for extensibility. I imagine the first version might just detect the conflict and throw error or do nothing. That's already two simple conflict resolution strategies with minimal efforts. We can add more complicated ones incrementally.\n>> >\n>>\n>> Agreed, splitting the work into multiple patches would help us to\n>> finish the easier ones first.\n>>\n>> I have thought to divide it such that in the first patch, we detect\n>> conflicts like 'insert_exists', 'update_differ', 'update_missing', and\n>> 'delete_missing' (the definition of each could be found in the initial\n>> email [1]) and throw an ERROR or write them in LOG. Various people\n>> agreed to have this as a separate committable work [2]. This can help\n>> users to detect and monitor the conflicts in a better way. 
I have\n>> intentionally skipped update_deleted as it would require more\n>> infrastructure and it would be helpful even without that.\n>\n>\n> Since we are in the initial months of release, it will be good to take a stock of whether the receiver receives all the information needed for most (if not all) of the conflict detection and resolution strategies. If there are any missing pieces, we may want to add those in PG18 so that improved conflict detection and resolution on a higher version receiver can still work.\n>\n\nGood point. This can help us to detect conflicts if required even when\nwe move to a higher version. As we continue to discuss/develop the\nfeatures, I hope we will be able to see any missing pieces.\n\n>>\n>>\n>> In the second patch, we can implement simple built-in resolution\n>> strategies like apply and skip (which can be named as remote_apply and\n>> keep_local, see [3][4] for details on these strategies) with ERROR or\n>> LOG being the default strategy. We can allow these strategies to be\n>> configured at the global and table level.\n>>\n>> In the third patch, we can add monitoring capability for conflicts and\n>> resolutions as mentioned by Jonathan [5]. Here, we can have stats like\n>> how many conflicts of a particular type have happened.\n>\n>\n> That looks like a plan. Thanks for chalking it out.\n>\n\nThanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Jun 2024 18:41:15 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 20, 2024 at 6:41 PM Amit Kapila <[email protected]> wrote:\n>\n> >> In the second patch, we can implement simple built-in resolution\n> >> strategies like apply and skip (which can be named as remote_apply and\n> >> keep_local, see [3][4] for details on these strategies) with ERROR or\n> >> LOG being the default strategy. We can allow these strategies to be\n> >> configured at the global and table level.\n\nBefore we implement resolvers, we need a way to configure them. Please\nfind the patch002 which attempts to implement Global Level Conflict\nResolvers Configuration. Note that patch002 is dependent upon\nConflict-Detection patch001 which is reviewed in another thread [1].\nI have attached patch001 here for convenience and to avoid CFBot\nfailures. But please use [1] if you have any comments on patch001.\n\nNew DDL commands in patch002 are:\n\nTo set global resolver for given conflcit_type:\nSET CONFLICT RESOLVER 'conflict_resolver' FOR 'conflict_type'\n\nTo reset to default resolver:\nRESET CONFLICT RESOLVER FOR 'conflict_type'\n\nTODO: Once we get initial consensus on DDL commands, I will add\nsupport for them in pg_dump/restore and will add doc.\n\n------------\n\nAs suggested in [2] and above, it seems logical to have table-specific\nresolvers configuration along with global one.\n\nHere is the proposal for table level resolvers:\n\n1) We can provide support for table level resolvers using ALTER TABLE:\n\nALTER TABLE <name> SET CONFLICT RESOLVER <resolver1> on <conflict_type1>,\n SET CONFLICT RESOLVER\n<resolver2> on <conflict_type2>, ...;\n\nReset can be done using:\nALTER TABLE <name> RESET CONFLICT RESOLVER on <conflict_type1>,\n RESET CONFLICT RESOLVER on\n<conflict_type2>, ...;\n\nAbove commands will save/remove configuration in/from the new system\ncatalog pg_conflict_rel.\n\n2) Table level configuration (if any) will be given preference over\nglobal ones. 
The tables not having table-specific resolvers will use\nglobal configured ones.\n\n3) If the table is a partition table, then resolvers created for the\nparent will be inherited by all child partition tables. Multiple\nresolver entries will be created, one for each child partition in the\nsystem catalog (similar to constraints).\n\n4) Users can also configure explicit resolvers for child partitions.\nIn such a case, child's resolvers will override inherited resolvers\n(if any).\n\n5) Any attempt to RESET (remove) inherited resolvers on the child\npartition table *alone* will result in error: \"cannot reset inherited\nresolvers\" (similar to constraints). But RESET of explicit created\nresolvers (non-inherited ones) will be permitted for child partitions.\nOn RESET, the resolver configuration will not fallback to the\ninherited resolver again. Users need to explicitly configure new\nresolvers for the child partition tables (after RESET) if needed.\n\n6) Removal/Reset of resolvers on parent will remove corresponding\n\"inherited\" resolvers on all the child partitions as well. If any\nchild has overridden inherited resolvers earlier, those will stay.\n\n7) For 'ALTER TABLE parent ATTACH PARTITION child'; if 'child' has its\nown resolvers set, those will not be overridden. But if it does not\nhave resolvers set, it will inherit from the parent table. This will\nmean, for say out of 5 conflict_types, if the child table has\nresolvers configured for any 2, 'attach' will retain those; for the\nrest 3, it will inherit from the parent (if any).\n\n8) Detach partition will not remove inherited resolvers, it will just\nmark them 'non inherited' (similar to constraints).\n\nThoughts?\n\n------------\n\n[1]: https://www.postgresql.org/message-id/OS0PR01MB57161006B8F2779F2C97318194D42%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n[2]: https://www.postgresql.org/message-id/4738d098-6378-494e-9f88-9e3a85a5de82%40enterprisedb.com\n\nthanks\nShveta", "msg_date": "Mon, 24 Jun 2024 13:47:02 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jun 24, 2024 at 1:47 PM shveta malik <[email protected]> wrote:\n>\n> On Thu, Jun 20, 2024 at 6:41 PM Amit Kapila <[email protected]> wrote:\n> >\n> > >> In the second patch, we can implement simple built-in resolution\n> > >> strategies like apply and skip (which can be named as remote_apply and\n> > >> keep_local, see [3][4] for details on these strategies) with ERROR or\n> > >> LOG being the default strategy. We can allow these strategies to be\n> > >> configured at the global and table level.\n>\n> Before we implement resolvers, we need a way to configure them. Please\n> find the patch002 which attempts to implement Global Level Conflict\n> Resolvers Configuration. Note that patch002 is dependent upon\n> Conflict-Detection patch001 which is reviewed in another thread [1].\n> I have attached patch001 here for convenience and to avoid CFBot\n> failures. But please use [1] if you have any comments on patch001.\n>\n> New DDL commands in patch002 are:\n>\n> To set global resolver for given conflcit_type:\n> SET CONFLICT RESOLVER 'conflict_resolver' FOR 'conflict_type'\n>\n> To reset to default resolver:\n> RESET CONFLICT RESOLVER FOR 'conflict_type'\n>\n\nDoes setting up resolvers have any meaning without subscriptions? I am\nwondering whether we should allow to set up the resolvers at the\nsubscription level. 
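Something along the lines of the sketch below, say; the syntax here is
purely illustrative (nothing is settled), and the subscription,
resolver and conflict-type names are just examples reused from the
earlier patches:

-- hypothetical subscription-level variant of the proposed DDL
ALTER SUBSCRIPTION sub_conf
    SET CONFLICT RESOLVER 'keep_local' FOR 'insert_exists';

-- and back to the default resolver for that conflict type
ALTER SUBSCRIPTION sub_conf
    RESET CONFLICT RESOLVER FOR 'insert_exists';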
One benefit is that users don't need to use a\ndifferent DDL to set up resolvers. The first patch gives a conflict\ndetection option at the subscription level, so it would be symmetrical\nto provide a resolver at the subscription level. Yet another benefit\ncould be that it provides users facility to configure different\nresolvers for a set of tables belonging to a particular\npublication/node.\n\n>\n> ------------\n>\n> As suggested in [2] and above, it seems logical to have table-specific\n> resolvers configuration along with global one.\n>\n> Here is the proposal for table level resolvers:\n>\n> 1) We can provide support for table level resolvers using ALTER TABLE:\n>\n> ALTER TABLE <name> SET CONFLICT RESOLVER <resolver1> on <conflict_type1>,\n> SET CONFLICT RESOLVER\n> <resolver2> on <conflict_type2>, ...;\n>\n> Reset can be done using:\n> ALTER TABLE <name> RESET CONFLICT RESOLVER on <conflict_type1>,\n> RESET CONFLICT RESOLVER on\n> <conflict_type2>, ...;\n>\n> Above commands will save/remove configuration in/from the new system\n> catalog pg_conflict_rel.\n>\n> 2) Table level configuration (if any) will be given preference over\n> global ones. The tables not having table-specific resolvers will use\n> global configured ones.\n>\n> 3) If the table is a partition table, then resolvers created for the\n> parent will be inherited by all child partition tables. Multiple\n> resolver entries will be created, one for each child partition in the\n> system catalog (similar to constraints).\n>\n> 4) Users can also configure explicit resolvers for child partitions.\n> In such a case, child's resolvers will override inherited resolvers\n> (if any).\n>\n> 5) Any attempt to RESET (remove) inherited resolvers on the child\n> partition table *alone* will result in error: \"cannot reset inherited\n> resolvers\" (similar to constraints). But RESET of explicit created\n> resolvers (non-inherited ones) will be permitted for child partitions.\n> On RESET, the resolver configuration will not fallback to the\n> inherited resolver again. Users need to explicitly configure new\n> resolvers for the child partition tables (after RESET) if needed.\n>\n\nWhy so? If we can allow the RESET command to fallback to the inherited\nresolver it would make the behavior consistent for the child table\nwhere we don't have performed SET.\n\n> 6) Removal/Reset of resolvers on parent will remove corresponding\n> \"inherited\" resolvers on all the child partitions as well. If any\n> child has overridden inherited resolvers earlier, those will stay.\n>\n> 7) For 'ALTER TABLE parent ATTACH PARTITION child'; if 'child' has its\n> own resolvers set, those will not be overridden. But if it does not\n> have resolvers set, it will inherit from the parent table. This will\n> mean, for say out of 5 conflict_types, if the child table has\n> resolvers configured for any 2, 'attach' will retain those; for the\n> rest 3, it will inherit from the parent (if any).\n>\n> 8) Detach partition will not remove inherited resolvers, it will just\n> mark them 'non inherited' (similar to constraints).\n>\n\nBTW, to keep the initial patch simple, can we prohibit setting\nresolvers at the child table level? 
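For concreteness, the partition cases being discussed would look
roughly like this under the proposed (still hypothetical) ALTER TABLE
syntax, with invented table and resolver names:

-- parent: the resolver is inherited by all partitions
ALTER TABLE orders
    SET CONFLICT RESOLVER 'remote_apply' ON 'insert_exists';

-- child override; this is the part that could be disallowed initially
ALTER TABLE orders_2024
    SET CONFLICT RESOLVER 'keep_local' ON 'insert_exists';

-- the open question: should this fall back to the parent's
-- 'remote_apply', or leave orders_2024 with no table-level resolver?
ALTER TABLE orders_2024
    RESET CONFLICT RESOLVER ON 'insert_exists';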
If we follow this, then we can\ngive an ERROR if the user tries to attach the table (with configured\nresolvers) to an existing partitioned table.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 25 Jun 2024 15:12:36 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 25, 2024 at 3:12 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jun 24, 2024 at 1:47 PM shveta malik <[email protected]> wrote:\n> >\n> > On Thu, Jun 20, 2024 at 6:41 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > >> In the second patch, we can implement simple built-in resolution\n> > > >> strategies like apply and skip (which can be named as remote_apply and\n> > > >> keep_local, see [3][4] for details on these strategies) with ERROR or\n> > > >> LOG being the default strategy. We can allow these strategies to be\n> > > >> configured at the global and table level.\n> >\n> > Before we implement resolvers, we need a way to configure them. Please\n> > find the patch002 which attempts to implement Global Level Conflict\n> > Resolvers Configuration. Note that patch002 is dependent upon\n> > Conflict-Detection patch001 which is reviewed in another thread [1].\n> > I have attached patch001 here for convenience and to avoid CFBot\n> > failures. But please use [1] if you have any comments on patch001.\n> >\n> > New DDL commands in patch002 are:\n> >\n> > To set global resolver for given conflcit_type:\n> > SET CONFLICT RESOLVER 'conflict_resolver' FOR 'conflict_type'\n> >\n> > To reset to default resolver:\n> > RESET CONFLICT RESOLVER FOR 'conflict_type'\n> >\n>\n> Does setting up resolvers have any meaning without subscriptions? I am\n> wondering whether we should allow to set up the resolvers at the\n> subscription level. One benefit is that users don't need to use a\n> different DDL to set up resolvers. The first patch gives a conflict\n> detection option at the subscription level, so it would be symmetrical\n> to provide a resolver at the subscription level. Yet another benefit\n> could be that it provides users facility to configure different\n> resolvers for a set of tables belonging to a particular\n> publication/node.\n\nThere can be multiple tables included in a publication with varying\nbusiness use-cases and thus may need different resolvers set, even\nthough they all are part of the same publication.\n\n> >\n> > ------------\n> >\n> > As suggested in [2] and above, it seems logical to have table-specific\n> > resolvers configuration along with global one.\n> >\n> > Here is the proposal for table level resolvers:\n> >\n> > 1) We can provide support for table level resolvers using ALTER TABLE:\n> >\n> > ALTER TABLE <name> SET CONFLICT RESOLVER <resolver1> on <conflict_type1>,\n> > SET CONFLICT RESOLVER\n> > <resolver2> on <conflict_type2>, ...;\n> >\n> > Reset can be done using:\n> > ALTER TABLE <name> RESET CONFLICT RESOLVER on <conflict_type1>,\n> > RESET CONFLICT RESOLVER on\n> > <conflict_type2>, ...;\n> >\n> > Above commands will save/remove configuration in/from the new system\n> > catalog pg_conflict_rel.\n> >\n> > 2) Table level configuration (if any) will be given preference over\n> > global ones. The tables not having table-specific resolvers will use\n> > global configured ones.\n> >\n> > 3) If the table is a partition table, then resolvers created for the\n> > parent will be inherited by all child partition tables. 
Multiple\n> > resolver entries will be created, one for each child partition in the\n> > system catalog (similar to constraints).\n> >\n> > 4) Users can also configure explicit resolvers for child partitions.\n> > In such a case, child's resolvers will override inherited resolvers\n> > (if any).\n> >\n> > 5) Any attempt to RESET (remove) inherited resolvers on the child\n> > partition table *alone* will result in error: \"cannot reset inherited\n> > resolvers\" (similar to constraints). But RESET of explicit created\n> > resolvers (non-inherited ones) will be permitted for child partitions.\n> > On RESET, the resolver configuration will not fallback to the\n> > inherited resolver again. Users need to explicitly configure new\n> > resolvers for the child partition tables (after RESET) if needed.\n> >\n>\n> Why so? If we can allow the RESET command to fallback to the inherited\n> resolver it would make the behavior consistent for the child table\n> where we don't have performed SET.\n\nThought behind not making it fallback is since the user has done\n'RESET', he may want to remove the resolver completely. We don't know\nif he really wants to go back to the previous one. If he does, it is\neasy to set it again. But if he does not, and we set the inherited\nresolver again during 'RESET', there is no way he can drop that\ninherited resolver alone on the child partition.\n\n> > 6) Removal/Reset of resolvers on parent will remove corresponding\n> > \"inherited\" resolvers on all the child partitions as well. If any\n> > child has overridden inherited resolvers earlier, those will stay.\n> >\n> > 7) For 'ALTER TABLE parent ATTACH PARTITION child'; if 'child' has its\n> > own resolvers set, those will not be overridden. But if it does not\n> > have resolvers set, it will inherit from the parent table. This will\n> > mean, for say out of 5 conflict_types, if the child table has\n> > resolvers configured for any 2, 'attach' will retain those; for the\n> > rest 3, it will inherit from the parent (if any).\n> >\n> > 8) Detach partition will not remove inherited resolvers, it will just\n> > mark them 'non inherited' (similar to constraints).\n> >\n>\n> BTW, to keep the initial patch simple, can we prohibit setting\n> resolvers at the child table level? If we follow this, then we can\n> give an ERROR if the user tries to attach the table (with configured\n> resolvers) to an existing partitioned table.\n\nOkay, I will think about this if the patch becomes too complex.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 25 Jun 2024 15:39:01 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jun 25, 2024 at 3:39 PM shveta malik <[email protected]> wrote:\n>\n> On Tue, Jun 25, 2024 at 3:12 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Jun 24, 2024 at 1:47 PM shveta malik <[email protected]> wrote:\n> > >\n> > > On Thu, Jun 20, 2024 at 6:41 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > >> In the second patch, we can implement simple built-in resolution\n> > > > >> strategies like apply and skip (which can be named as remote_apply and\n> > > > >> keep_local, see [3][4] for details on these strategies) with ERROR or\n> > > > >> LOG being the default strategy. We can allow these strategies to be\n> > > > >> configured at the global and table level.\n> > >\n> > > Before we implement resolvers, we need a way to configure them. 
Please\n> > > find the patch002 which attempts to implement Global Level Conflict\n> > > Resolvers Configuration. Note that patch002 is dependent upon\n> > > Conflict-Detection patch001 which is reviewed in another thread [1].\n> > > I have attached patch001 here for convenience and to avoid CFBot\n> > > failures. But please use [1] if you have any comments on patch001.\n> > >\n> > > New DDL commands in patch002 are:\n> > >\n> > > To set global resolver for given conflcit_type:\n> > > SET CONFLICT RESOLVER 'conflict_resolver' FOR 'conflict_type'\n> > >\n> > > To reset to default resolver:\n> > > RESET CONFLICT RESOLVER FOR 'conflict_type'\n> > >\n> >\n> > Does setting up resolvers have any meaning without subscriptions? I am\n> > wondering whether we should allow to set up the resolvers at the\n> > subscription level. One benefit is that users don't need to use a\n> > different DDL to set up resolvers. The first patch gives a conflict\n> > detection option at the subscription level, so it would be symmetrical\n> > to provide a resolver at the subscription level. Yet another benefit\n> > could be that it provides users facility to configure different\n> > resolvers for a set of tables belonging to a particular\n> > publication/node.\n>\n> There can be multiple tables included in a publication with varying\n> business use-cases and thus may need different resolvers set, even\n> though they all are part of the same publication.\n>\n\nAgreed but this is the reason we are planning to keep resolvers at the\ntable level. Here, I am asking to set resolvers at the subscription\nlevel rather than at the global level.\n\n> > >\n> > > ------------\n> > >\n> > > As suggested in [2] and above, it seems logical to have table-specific\n> > > resolvers configuration along with global one.\n> > >\n> > > Here is the proposal for table level resolvers:\n> > >\n> > > 1) We can provide support for table level resolvers using ALTER TABLE:\n> > >\n> > > ALTER TABLE <name> SET CONFLICT RESOLVER <resolver1> on <conflict_type1>,\n> > > SET CONFLICT RESOLVER\n> > > <resolver2> on <conflict_type2>, ...;\n> > >\n> > > Reset can be done using:\n> > > ALTER TABLE <name> RESET CONFLICT RESOLVER on <conflict_type1>,\n> > > RESET CONFLICT RESOLVER on\n> > > <conflict_type2>, ...;\n> > >\n> > > Above commands will save/remove configuration in/from the new system\n> > > catalog pg_conflict_rel.\n> > >\n> > > 2) Table level configuration (if any) will be given preference over\n> > > global ones. The tables not having table-specific resolvers will use\n> > > global configured ones.\n> > >\n> > > 3) If the table is a partition table, then resolvers created for the\n> > > parent will be inherited by all child partition tables. Multiple\n> > > resolver entries will be created, one for each child partition in the\n> > > system catalog (similar to constraints).\n> > >\n> > > 4) Users can also configure explicit resolvers for child partitions.\n> > > In such a case, child's resolvers will override inherited resolvers\n> > > (if any).\n> > >\n> > > 5) Any attempt to RESET (remove) inherited resolvers on the child\n> > > partition table *alone* will result in error: \"cannot reset inherited\n> > > resolvers\" (similar to constraints). But RESET of explicit created\n> > > resolvers (non-inherited ones) will be permitted for child partitions.\n> > > On RESET, the resolver configuration will not fallback to the\n> > > inherited resolver again. 
Users need to explicitly configure new\n> > > resolvers for the child partition tables (after RESET) if needed.\n> > >\n> >\n> > Why so? If we can allow the RESET command to fallback to the inherited\n> > resolver it would make the behavior consistent for the child table\n> > where we don't have performed SET.\n>\n> Thought behind not making it fallback is since the user has done\n> 'RESET', he may want to remove the resolver completely. We don't know\n> if he really wants to go back to the previous one. If he does, it is\n> easy to set it again. But if he does not, and we set the inherited\n> resolver again during 'RESET', there is no way he can drop that\n> inherited resolver alone on the child partition.\n>\n\nI see your point but normally RESET allows us to go back to the\ndefault which in this case would be the resolver inherited from the\nparent table.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 26 Jun 2024 14:32:48 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "Please find the attached 'patch0003', which implements conflict\nresolutions according to the global resolver settings.\n\nSummary of Conflict Resolutions Implemented in 'patch0003':\n\nINSERT Conflicts:\n------------------------\n1) Conflict Type: 'insert_exists'\n\nSupported Resolutions:\na) 'remote_apply': Convert the INSERT to an UPDATE and apply.\nb) 'keep_local': Ignore the incoming (conflicting) INSERT and retain\nthe local tuple.\nc) 'error': The apply worker will error out and restart.\n\nUPDATE Conflicts:\n------------------------\n1) Conflict Type: 'update_differ'\n\nSupported Resolutions:\na) 'remote_apply': Apply the remote update.\nb) 'keep_local': Skip the remote update and retain the local tuple.\nc) 'error': The apply worker will error out and restart.\n\n2) Conflict Type: 'update_missing'\n\nSupported Resolutions:\na) 'apply_or_skip': Try to convert the UPDATE to an INSERT; if\nunsuccessful, skip the remote update and continue.\nb) 'apply_or_error': Try to convert the UPDATE to an INSERT; if\nunsuccessful, error out.\nc) 'skip': Skip the remote update and continue.\nd) 'error': The apply worker will error out and restart.\n\nDELETE Conflicts:\n------------------------\n1) Conflict Type: 'delete_missing'\n\nSupported Resolutions:\na) 'skip': Skip the remote delete and continue.\nb) 'error': The apply worker will error out and restart.\n\nNOTE: With these basic resolution techniques, the patch does not aim\nto ensure consistency across nodes, so data divergence is expected.\n\n--\nThanks,\nNisha", "msg_date": "Thu, 27 Jun 2024 08:44:02 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 26, 2024 at 2:33 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jun 25, 2024 at 3:39 PM shveta malik <[email protected]> wrote:\n> >\n> > On Tue, Jun 25, 2024 at 3:12 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Jun 24, 2024 at 1:47 PM shveta malik <[email protected]> wrote:\n> > > >\n> > > > On Thu, Jun 20, 2024 at 6:41 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > >> In the second patch, we can implement simple built-in resolution\n> > > > > >> strategies like apply and skip (which can be named as remote_apply and\n> > > > > >> keep_local, see [3][4] for details on these strategies) with ERROR or\n> > > > > >> LOG being the default strategy. 
We can allow these strategies to be\n> > > > > >> configured at the global and table level.\n> > > >\n> > > > Before we implement resolvers, we need a way to configure them. Please\n> > > > find the patch002 which attempts to implement Global Level Conflict\n> > > > Resolvers Configuration. Note that patch002 is dependent upon\n> > > > Conflict-Detection patch001 which is reviewed in another thread [1].\n> > > > I have attached patch001 here for convenience and to avoid CFBot\n> > > > failures. But please use [1] if you have any comments on patch001.\n> > > >\n> > > > New DDL commands in patch002 are:\n> > > >\n> > > > To set global resolver for given conflcit_type:\n> > > > SET CONFLICT RESOLVER 'conflict_resolver' FOR 'conflict_type'\n> > > >\n> > > > To reset to default resolver:\n> > > > RESET CONFLICT RESOLVER FOR 'conflict_type'\n> > > >\n> > >\n> > > Does setting up resolvers have any meaning without subscriptions? I am\n> > > wondering whether we should allow to set up the resolvers at the\n> > > subscription level. One benefit is that users don't need to use a\n> > > different DDL to set up resolvers. The first patch gives a conflict\n> > > detection option at the subscription level, so it would be symmetrical\n> > > to provide a resolver at the subscription level. Yet another benefit\n> > > could be that it provides users facility to configure different\n> > > resolvers for a set of tables belonging to a particular\n> > > publication/node.\n> >\n> > There can be multiple tables included in a publication with varying\n> > business use-cases and thus may need different resolvers set, even\n> > though they all are part of the same publication.\n> >\n>\n> Agreed but this is the reason we are planning to keep resolvers at the\n> table level. Here, I am asking to set resolvers at the subscription\n> level rather than at the global level.\n\nOkay, got it. I misunderstood earlier that we want to replace table\nlevel resolvers with subscription ones.\nHaving global configuration has one benefit that if the user has no\nrequirement to set different resolvers for different subscriptions or\ntables, he may always set one global configuration and be done with\nit. OTOH, I also agree with benefits coming with subscription level\nconfiguration.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 27 Jun 2024 10:20:25 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 27, 2024 at 8:44 AM Nisha Moond <[email protected]> wrote:\n>\n> Please find the attached 'patch0003', which implements conflict\n> resolutions according to the global resolver settings.\n\nThanks for providing the resolver patch.\n\nPlease find new patches attached. Changes:\n\npatch002:\n--Fixed CFBot compilation failure where a header file was not included\nin meson.build\n--Also this is the correct version of patch. Previous email has\nattached an older version by mistake.\n\npatch004:\nThis is a WIP progress which attempts to implement Configuration of\ntable-level resolvers . It has below changes:\n--Alter table SET CONFLICT RESOLVER.\n--Alter table RESET CONFLICT RESOLVER. <Note that these 2 commands\nalso take care of resolvers inheritance for partition tables as\ndiscussed in [1]>.\n--Resolver inheritance support during 'Alter table ATTACH PARTITION'.\n--Resolver inheritance removal during 'Alter table DETACH PARTITION'.\n\nPending:\n--Resolver Inheritance support during 'CREATE TABLE .. 
PARTITION OF\n..'.\n--Using tabel-level resolver while resolving conflicts. (Resolver\npatch003 still relies on global resolvers).\n\nPlease refer [1] for the complete proposal for table-level resolvers.\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uAqegGDbuJk3Z-ku8wYFZyPv7C1KmHCkJ3885O%2Bj5enFg%40mail.gmail.com\n\nthanks\nShveta", "msg_date": "Thu, 27 Jun 2024 16:03:30 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 27, 2024 at 4:03 PM shveta malik <[email protected]> wrote:\n>\n> On Thu, Jun 27, 2024 at 8:44 AM Nisha Moond <[email protected]> wrote:\n> >\n> > Please find the attached 'patch0003', which implements conflict\n> > resolutions according to the global resolver settings.\n>\n> Thanks for providing the resolver patch.\n>\n> Please find new patches attached. Changes:\n>\n> patch002:\n> --Fixed CFBot compilation failure where a header file was not included\n> in meson.build\n> --Also this is the correct version of patch. Previous email has\n> attached an older version by mistake.\n>\n> patch004:\n> This is a WIP progress which attempts to implement Configuration of\n> table-level resolvers . It has below changes:\n> --Alter table SET CONFLICT RESOLVER.\n> --Alter table RESET CONFLICT RESOLVER. <Note that these 2 commands\n> also take care of resolvers inheritance for partition tables as\n> discussed in [1]>.\n> --Resolver inheritance support during 'Alter table ATTACH PARTITION'.\n> --Resolver inheritance removal during 'Alter table DETACH PARTITION'.\n>\n> Pending:\n> --Resolver Inheritance support during 'CREATE TABLE .. PARTITION OF\n> ..'.\n> --Using tabel-level resolver while resolving conflicts. (Resolver\n> patch003 still relies on global resolvers).\n>\n> Please refer [1] for the complete proposal for table-level resolvers.\n>\n\nPlease find v2 attached. Changes are in patch004 only, which are:\n\n--Resolver Inheritance support during 'CREATE TABLE .. PARTITION OF'.\n--SPLIT and MERGE partition review and testing (it was missed earlier).\n--Test Cases added for all above cases.\n\nthanks\nShveta", "msg_date": "Fri, 28 Jun 2024 15:15:19 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "Hi,\n\nOn Thu, May 23, 2024 at 3:37 PM shveta malik <[email protected]> wrote:\n>\n> DELETE\n> ================\n> Conflict Type:\n> ----------------\n> delete_missing: An incoming delete is trying to delete a row on a\n> target node which does not exist.\n\nIIUC the 'delete_missing' conflict doesn't cover the case where an\nincoming delete message is trying to delete a row that has already\nbeen updated locally or by another node. I think in update/delete\nconflict situations, we need to resolve the conflicts based on commit\ntimestamps like we do for update/update and insert/update conflicts.\n\nFor example, suppose there are two node-A and node-B and setup\nbi-directional replication, and suppose further that both have the row\nwith id = 1, consider the following sequences:\n\n09:00:00 DELETE ... WHERE id = 1 on node-A.\n09:00:05 UPDATE ... WHERE id = 1 on node-B.\n09:00:10 node-A received the update message from node-B.\n09:00:15 node-B received the delete message from node-A.\n\nAt 09:00:10 on node-A, an update_deleted conflict is generated since\nthe row on node-A is already deleted locally. 
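Spelled out as plain SQL (the table name and values are invented, and
bi-directional subscriptions between node-A and node-B are assumed to
be already in place):

-- both nodes start with the same row, replicated beforehand
INSERT INTO tbl VALUES (1, 'v0');

-- 09:00:00 on node-A
DELETE FROM tbl WHERE id = 1;

-- 09:00:05 on node-B (the delete has not arrived here yet)
UPDATE tbl SET val = 'v1' WHERE id = 1;

-- 09:00:10 node-A applies the incoming UPDATE -> 'update_deleted'
-- 09:00:15 node-B applies the incoming DELETE -> discussed below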
Suppose that we use\n'apply_or_skip' resolution for this conflict, we convert the update\nmessage into an insertion, so node-A now has the row with id = 1. At\n09:00:15 on node-B, the incoming delete message is applied and deletes\nthe row with id = 1, even though the row has already been modified\nlocally. The node-A and node-B are now inconsistent. This\ninconsistency can be avoided by using 'skip' resolution for the\n'update_deleted' conflict on node-A, and 'skip' resolution is the\ndefault method for that actually. However, if we handle it as\n'update_missing', the 'apply_or_skip' resolution is used by default.\n\nIIUC with the proposed architecture, DELETE always takes precedence\nover UPDATE since both 'update_deleted' and 'update_missing' don't use\ncommit timestamps to resolve the conflicts. As long as that is true, I\nthink there is no use case for 'apply_or_skip' and 'apply_or_error'\nresolutions in update/delete conflict cases. In short, I think we need\nsomething like 'delete_differ' conflict type as well. FYI PGD and\nOracle GoldenGate seem to have this conflict type[1][2].\n\nThe 'delete_differ' conflict type would have at least\n'latest_timestamp_wins' resolution. With the timestamp based\nresolution method, we would deal with update/delete conflicts as\nfollows:\n\n09:00:00: DELETE ... WHERE id = 1 on node-A.\n09:00:05: UPDATE ... WHERE id = 1 on node-B.\n - the updated row doesn't have the origin since it's a local change.\n09:00:10: node-A received the update message from node-B.\n - the incoming update message has the origin of node-B whereas the\nlocal row is already removed locally.\n - 'update_deleted' conflict is generated.\n - do the insert of the new row instead, because the commit\ntimestamp of UPDATE is newer than DELETE's one.\n09:00:15: node-B received the delete message from node-A.\n - the incoming delete message has the origin of node-B whereas the\n(updated) row doesn't have the origin.\n - 'update_differ' conflict is generated.\n - discard DELETE, because the commit timestamp of UPDATE is newer\nthan DELETE's one.\n\nAs a result, both nodes have the new version row.\n\nRegards,\n\n[1] https://www.enterprisedb.com/docs/pgd/latest/consistency/conflicts/#updatedelete-conflicts\n[2] https://docs.oracle.com/goldengate/c1230/gg-winux/GWUAD/configuring-conflict-detection-and-resolution.htm\n(see DELETEROWEXISTS conflict type)\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Jul 2024 15:16:52 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 27, 2024 at 1:14 PM Nisha Moond <[email protected]>\nwrote:\n\n> Please find the attached 'patch0003', which implements conflict\n> resolutions according to the global resolver settings.\n>\n> Summary of Conflict Resolutions Implemented in 'patch0003':\n>\n> INSERT Conflicts:\n> ------------------------\n> 1) Conflict Type: 'insert_exists'\n>\n> Supported Resolutions:\n> a) 'remote_apply': Convert the INSERT to an UPDATE and apply.\n> b) 'keep_local': Ignore the incoming (conflicting) INSERT and retain\n> the local tuple.\n> c) 'error': The apply worker will error out and restart.\n>\n>\nHi Nisha,\n\nWhile testing the patch, when conflict resolution is configured and\ninsert_exists is set to \"remote_apply\", I see this warning in the logs due\nto a resource not being
closed:\n\n2024-07-01 02:52:59.427 EDT [20304] LOG: conflict insert_exists detected\non relation \"public.test1\"\n2024-07-01 02:52:59.427 EDT [20304] DETAIL: Key already exists. Applying\nresolution method \"remote_apply\"\n2024-07-01 02:52:59.427 EDT [20304] CONTEXT: processing remote data for\nreplication origin \"pg_16417\" during message type \"INSERT\" for replication\ntarget relation \"public.test1\" in transaction 763, finished at 0/15E7F68\n2024-07-01 02:52:59.427 EDT [20304] WARNING: resource was not closed:\n[138] (rel=base/5/16413, blockNum=0, flags=0x93800000, refcount=1 1)\n2024-07-01 02:52:59.427 EDT [20304] CONTEXT: processing remote data for\nreplication origin \"pg_16417\" during message type \"COMMIT\" in transaction\n763, finished at 0/15E7F68\n2024-07-01 02:52:59.427 EDT [20304] WARNING: resource was not closed:\nTupleDesc 0x7f8c0439e448 (16402,-1)\n2024-07-01 02:52:59.427 EDT [20304] CONTEXT: processing remote data for\nreplication origin \"pg_16417\" during message type \"COMMIT\" in transaction\n763, finished at 0/15E7F68\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Thu, Jun 27, 2024 at 1:14 PM Nisha Moond <[email protected]> wrote:Please find the attached  'patch0003', which implements conflict\nresolutions according to the global resolver settings.\n\nSummary of Conflict Resolutions Implemented in 'patch0003':\n\nINSERT Conflicts:\n------------------------\n1) Conflict Type: 'insert_exists'\n\nSupported Resolutions:\na) 'remote_apply': Convert the INSERT to an UPDATE and apply.\nb) 'keep_local': Ignore the incoming (conflicting) INSERT and retain\nthe local tuple.\nc) 'error': The apply worker will error out and restart.\nHi Nisha,While testing the patch, when conflict resolution is configured and insert_exists is set to \"remote_apply\", I see this warning in the logs due to a resource not being closed:2024-07-01 02:52:59.427 EDT [20304] LOG:  conflict insert_exists detected on relation \"public.test1\"2024-07-01 02:52:59.427 EDT [20304] DETAIL:  Key already exists. 
Applying resolution method \"remote_apply\"2024-07-01 02:52:59.427 EDT [20304] CONTEXT:  processing remote data for replication origin \"pg_16417\" during message type \"INSERT\" for replication target relation \"public.test1\" in transaction 763, finished at 0/15E7F682024-07-01 02:52:59.427 EDT [20304] WARNING:  resource was not closed: [138] (rel=base/5/16413, blockNum=0, flags=0x93800000, refcount=1 1)2024-07-01 02:52:59.427 EDT [20304] CONTEXT:  processing remote data for replication origin \"pg_16417\" during message type \"COMMIT\" in transaction 763, finished at 0/15E7F682024-07-01 02:52:59.427 EDT [20304] WARNING:  resource was not closed: TupleDesc 0x7f8c0439e448 (16402,-1)2024-07-01 02:52:59.427 EDT [20304] CONTEXT:  processing remote data for replication origin \"pg_16417\" during message type \"COMMIT\" in transaction 763, finished at 0/15E7F68regards,Ajin CherianFujitsu Australia", "msg_date": "Mon, 1 Jul 2024 17:47:07 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jun 27, 2024 at 1:50 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Jun 26, 2024 at 2:33 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jun 25, 2024 at 3:39 PM shveta malik <[email protected]> wrote:\n> > >\n> > > On Tue, Jun 25, 2024 at 3:12 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Mon, Jun 24, 2024 at 1:47 PM shveta malik <[email protected]> wrote:\n> > > > >\n> > > > > On Thu, Jun 20, 2024 at 6:41 PM Amit Kapila <[email protected]> wrote:\n> > > > > >\n> > > > > > >> In the second patch, we can implement simple built-in resolution\n> > > > > > >> strategies like apply and skip (which can be named as remote_apply and\n> > > > > > >> keep_local, see [3][4] for details on these strategies) with ERROR or\n> > > > > > >> LOG being the default strategy. We can allow these strategies to be\n> > > > > > >> configured at the global and table level.\n> > > > >\n> > > > > Before we implement resolvers, we need a way to configure them. Please\n> > > > > find the patch002 which attempts to implement Global Level Conflict\n> > > > > Resolvers Configuration. Note that patch002 is dependent upon\n> > > > > Conflict-Detection patch001 which is reviewed in another thread [1].\n> > > > > I have attached patch001 here for convenience and to avoid CFBot\n> > > > > failures. But please use [1] if you have any comments on patch001.\n> > > > >\n> > > > > New DDL commands in patch002 are:\n> > > > >\n> > > > > To set global resolver for given conflcit_type:\n> > > > > SET CONFLICT RESOLVER 'conflict_resolver' FOR 'conflict_type'\n> > > > >\n> > > > > To reset to default resolver:\n> > > > > RESET CONFLICT RESOLVER FOR 'conflict_type'\n> > > > >\n> > > >\n> > > > Does setting up resolvers have any meaning without subscriptions? I am\n> > > > wondering whether we should allow to set up the resolvers at the\n> > > > subscription level. One benefit is that users don't need to use a\n> > > > different DDL to set up resolvers. The first patch gives a conflict\n> > > > detection option at the subscription level, so it would be symmetrical\n> > > > to provide a resolver at the subscription level. 
Yet another benefit\n> > > > could be that it provides users facility to configure different\n> > > > resolvers for a set of tables belonging to a particular\n> > > > publication/node.\n> > >\n> > > There can be multiple tables included in a publication with varying\n> > > business use-cases and thus may need different resolvers set, even\n> > > though they all are part of the same publication.\n> > >\n> >\n> > Agreed but this is the reason we are planning to keep resolvers at the\n> > table level. Here, I am asking to set resolvers at the subscription\n> > level rather than at the global level.\n>\n> Okay, got it. I misunderstood earlier that we want to replace table\n> level resolvers with subscription ones.\n> Having global configuration has one benefit that if the user has no\n> requirement to set different resolvers for different subscriptions or\n> tables, he may always set one global configuration and be done with\n> it. OTOH, I also agree with benefits coming with subscription level\n> configuration.\n\nSetting resolvers at table-level and subscription-level sounds good to\nme. DDLs for setting resolvers at subscription-level would need the\nsubscription name to be specified? And another question is: a\ntable-level resolver setting is precedent over all subscriber-level\nresolver settings in the database?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Jul 2024 17:05:14 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jul 1, 2024 at 11:47 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, May 23, 2024 at 3:37 PM shveta malik <[email protected]> wrote:\n> >\n> > DELETE\n> > ================\n> > Conflict Type:\n> > ----------------\n> > delete_missing: An incoming delete is trying to delete a row on a\n> > target node which does not exist.\n>\n> IIUC the 'delete_missing' conflict doesn't cover the case where an\n> incoming delete message is trying to delete a row that has already\n> been updated locally or by another node. I think in update/delete\n> conflict situations, we need to resolve the conflicts based on commit\n> timestamps like we do for update/update and insert/update conflicts.\n>\n> For example, suppose there are two node-A and node-B and setup\n> bi-directional replication, and suppose further that both have the row\n> with id = 1, consider the following sequences:\n>\n> 09:00:00 DELETE ... WHERE id = 1 on node-A.\n> 09:00:05 UPDATE ... WHERE id = 1 on node-B.\n> 09:00:10 node-A received the update message from node-B.\n> 09:00:15 node-B received the delete message from node-A.\n>\n> At 09:00:10 on node-A, an update_deleted conflict is generated since\n> the row on node-A is already deleted locally. Suppose that we use\n> 'apply_or_skip' resolution for this conflict, we convert the update\n> message into an insertion, so node-A now has the row with id = 1. At\n> 09:00:15 on node-B, the incoming delete message is applied and deletes\n> the row with id = 1, even though the row has already been modified\n> locally. The node-A and node-B are now inconsistent. This\n> inconsistency can be avoided by using 'skip' resolution for the\n> 'update_deleted' conflict on node-A, and 'skip' resolution is the\n> default method for that actually. 
However, if we handle it as\n> 'update_missing', the 'apply_or_skip' resolution is used by default.\n>\n> IIUC with the proposed architecture, DELETE always takes precedence\n> over UPDATE since both 'update_deleted' and 'update_missing' don't use\n> commit timestamps to resolve the conflicts. As long as that is true, I\n> think there is no use case for 'apply_or_skip' and 'apply_or_error'\n> resolutions in update/delete conflict cases. In short, I think we need\n> something like 'delete_differ' conflict type as well. FYI PGD and\n> Oracle GoldenGate seem to have this conflict type[1][2].\n>\n\nYour explanation makes sense to me and I agree that we should\nimplement 'delete_differ' conflict type.\n\n> The 'delete_differ' conflict type would have at least\n> 'latest_timestamp_wins' resolution. With the timestamp based\n> resolution method, we would deal with update/delete conflicts as\n> follows:\n>\n> 09:00:00: DELETE ... WHERE id = 1 on node-A.\n> 09:00:05: UPDATE ... WHERE id = 1 on node-B.\n> - the updated row doesn't have the origin since it's a local change.\n> 09:00:10: node-A received the update message from node-B.\n> - the incoming update message has the origin of node-B whereas the\n> local row is already removed locally.\n> - 'update_deleted' conflict is generated.\n>\n\nFYI, as of now, we don't have a reliable way to detect\n'update_deleted' type of conflicts but we had some discussion about\nthe same [1].\n\n> - do the insert of the new row instead, because the commit\n> timestamp of UPDATE is newer than DELETE's one.\n> 09:00:15: node-B received the delete message from node-A.\n> - the incoming delete message has the origin of node-B whereas the\n> (updated) row doesn't have the origin.\n> - 'update_differ' conflict is generated.\n> - discard DELETE, because the commit timestamp of UPDATE is newer\n> than DELETE's one.\n>\n> As a result, both nodes have the new version row.\n>\n\nRight, it seems to me that we should implement 'latest_timestamp_wins' if\nwe want consistency in such cases.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Lj-PWrP789KnKxZydisHajd38rSihWXO8MVBLDwxG1Kg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 1 Jul 2024 15:13:27 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jul 1, 2024 at 1:35 PM Masahiko Sawada <[email protected]> wrote:\n>\n> Setting resolvers at table-level and subscription-level sounds good to\n> me. DDLs for setting resolvers at subscription-level would need the\n> subscription name to be specified?\n>\n\nYes, it should be part of the ALTER/CREATE SUBSCRIPTION command. One\nidea could be to have syntax as follows:\n\nALTER SUBSCRIPTION name SET CONFLICT RESOLVER 'conflict_resolver' FOR\n'conflict_type';\nALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR 'conflict_type';\n\nCREATE SUBSCRIPTION subscription_name CONNECTION 'conninfo'\nPUBLICATION publication_name [, ...]
CONFLICT RESOLVER\n'conflict_resolver' FOR 'conflict_type';\n\n> And another question is: a\n> table-level resolver setting is precedent over all subscriber-level\n> resolver settings in the database?\n>\n\nYes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 1 Jul 2024 15:24:41 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jul 1, 2024 at 11:47 AM Masahiko Sawada <[email protected]> wrote:\n>\n> Hi,\n>\n> On Thu, May 23, 2024 at 3:37 PM shveta malik <[email protected]> wrote:\n> >\n> > DELETE\n> > ================\n> > Conflict Type:\n> > ----------------\n> > delete_missing: An incoming delete is trying to delete a row on a\n> > target node which does not exist.\n>\n> IIUC the 'delete_missing' conflict doesn't cover the case where an\n> incoming delete message is trying to delete a row that has already\n> been updated locally or by another node. I think in update/delete\n> conflict situations, we need to resolve the conflicts based on commit\n> timestamps like we do for update/update and insert/update conflicts.\n>\n> For example, suppose there are two node-A and node-B and setup\n> bi-directional replication, and suppose further that both have the row\n> with id = 1, consider the following sequences:\n>\n> 09:00:00 DELETE ... WHERE id = 1 on node-A.\n> 09:00:05 UPDATE ... WHERE id = 1 on node-B.\n> 09:00:10 node-A received the update message from node-B.\n> 09:00:15 node-B received the delete message from node-A.\n>\n> At 09:00:10 on node-A, an update_deleted conflict is generated since\n> the row on node-A is already deleted locally. Suppose that we use\n> 'apply_or_skip' resolution for this conflict, we convert the update\n> message into an insertion, so node-A now has the row with id = 1. At\n> 09:00:15 on node-B, the incoming delete message is applied and deletes\n> the row with id = 1, even though the row has already been modified\n> locally. The node-A and node-B are now inconsistent. This\n> inconsistency can be avoided by using 'skip' resolution for the\n> 'update_deleted' conflict on node-A, and 'skip' resolution is the\n> default method for that actually. However, if we handle it as\n> 'update_missing', the 'apply_or_skip' resolution is used by default.\n>\n> IIUC with the proposed architecture, DELETE always takes precedence\n> over UPDATE since both 'update_deleted' and 'update_missing' don't use\n> commit timestamps to resolve the conflicts. As long as that is true, I\n> think there is no use case for 'apply_or_skip' and 'apply_or_error'\n> resolutions in update/delete conflict cases. In short, I think we need\n> something like 'delete_differ' conflict type as well.\n\nThanks for the feedback. Sure, we can have 'delete_differ'.\n\n> FYI PGD and\n> Oracle GoldenGate seem to have this conflict type[1][2].\n>\n> The 'delete'_differ' conflict type would have at least\n> 'latest_timestamp_wins' resolution. With the timestamp based\n> resolution method, we would deal with update/delete conflicts as\n> follows:\n>\n> 09:00:00: DELETE ... WHERE id = 1 on node-A.\n> 09:00:05: UPDATE ... 
WHERE id = 1 on node-B.\n> - the updated row doesn't have the origin since it's a local change.\n> 09:00:10: node-A received the update message from node-B.\n> - the incoming update message has the origin of node-B whereas the\n> local row is already removed locally.\n> - 'update_deleted' conflict is generated.\n> - do the insert of the new row instead, because the commit\n> timestamp of UPDATE is newer than DELETE's one.\n\nSo, are you suggesting to support latest_tmestamp_wins for\n'update_deleted' case? And shall 'latest_tmestamp_wins' be default\nthen instead of 'skip'? In some cases, the complete row can not be\nconstructed, and then 'insertion' might not be possible even if the\ntimestamp of 'update' is latest. Then shall we skip or error out at\nlatest_tmestamp_wins config?\n\nEven if we support 'latest_timestamp_wins' as default, we can still\nhave 'apply_or_skip' and 'apply_or_error' as other options for\n'update_deleted' case. Or do you suggest getting rid of these options\ncompletely?\n\n> 09:00:15: node-B received the delete message from node-A.\n> - the incoming delete message has the origin of node-B whereas the\n> (updated) row doesn't have the origin.\n> - 'update_differ' conflict is generated.\n\nHere, do you mean 'delete_differ' conflict is generated?\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 2 Jul 2024 09:24:20 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jun 19, 2024 at 1:52 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jun 18, 2024 at 3:29 PM shveta malik <[email protected]> wrote:\n> > On Tue, Jun 18, 2024 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > I tried to work out a few scenarios with this, where the apply worker\n> > will wait until its local clock hits 'remote_commit_tts - max_skew\n> > permitted'. Please have a look.\n> >\n> > Let's say, we have a GUC to configure max_clock_skew permitted.\n> > Resolver is last_update_wins in both cases.\n> > ----------------\n> > 1) Case 1: max_clock_skew set to 0 i.e. no tolerance for clock skew.\n> >\n> > Remote Update with commit_timestamp = 10.20AM.\n> > Local clock (which is say 5 min behind) shows = 10.15AM.\n> >\n> > When remote update arrives at local node, we see that skew is greater\n> > than max_clock_skew and thus apply worker waits till local clock hits\n> > 'remote's commit_tts - max_clock_skew' i.e. till 10.20 AM. Once the\n> > local clock hits 10.20 AM, the worker applies the remote change with\n> > commit_tts of 10.20AM. In the meantime (during wait period of apply\n> > worker)) if some local update on same row has happened at say 10.18am,\n> > that will applied first, which will be later overwritten by above\n> > remote change of 10.20AM as remote-change's timestamp appear more\n> > latest, even though it has happened earlier than local change.\n>\n> For the sake of simplicity let's call the change that happened at\n> 10:20 AM change-1 and the change that happened at 10:15 as change-2\n> and assume we are talking about the synchronous commit only.\n>\n> I think now from an application perspective the change-1 wouldn't have\n> caused the change-2 because we delayed applying change-2 on the local\n> node which would have delayed the confirmation of the change-1 to the\n> application that means we have got the change-2 on the local node\n> without the confirmation of change-1 hence change-2 has no causal\n> dependency on the change-1. 
So it's fine that we perform change-1\n> before change-2 and the timestamp will also show the same at any other\n> node if they receive these 2 changes.\n>\n> The goal is to ensure that if we define the order where change-2\n> happens before change-1, this same order should be visible on all\n> other nodes. This will hold true because the commit timestamp of\n> change-2 is earlier than that of change-1.\n>\n> > 2) Case 2: max_clock_skew is set to 2min.\n> >\n> > Remote Update with commit_timestamp=10.20AM\n> > Local clock (which is say 5 min behind) = 10.15AM.\n> >\n> > Now apply worker will notice skew greater than 2min and thus will wait\n> > till local clock hits 'remote's commit_tts - max_clock_skew' i.e.\n> > 10.18 and will apply the change with commit_tts of 10.20 ( as we\n> > always save the origin's commit timestamp into local commit_tts, see\n> > RecordTransactionCommit->TransactionTreeSetCommitTsData). Now lets say\n> > another local update is triggered at 10.19am, it will be applied\n> > locally but it will be ignored on remote node. On the remote node ,\n> > the existing change with a timestamp of 10.20 am will win resulting in\n> > data divergence.\n>\n> Let's call the 10:20 AM change as a change-1 and the change that\n> happened at 10:19 as change-2\n>\n> IIUC, although we apply the change-1 at 10:18 AM the commit_ts of that\n> commit_ts of that change is 10:20, and the same will be visible to all\n> other nodes. So in conflict resolution still the change-1 happened\n> after the change-2 because change-2's commit_ts is 10:19 AM. Now\n> there could be a problem with the causal order because we applied the\n> change-1 at 10:18 AM so the application might have gotten confirmation\n> at 10:18 AM and the change-2 of the local node may be triggered as a\n> result of confirmation of the change-1 that means now change-2 has a\n> causal dependency on the change-1 but commit_ts shows change-2\n> happened before the change-1 on all the nodes.\n>\n> So, is this acceptable? I think yes because the user has configured a\n> maximum clock skew of 2 minutes, which means the detected order might\n> not always align with the causal order for transactions occurring\n> within that time frame. Generally, the ideal configuration for\n> max_clock_skew should be in multiple of the network round trip time.\n> Assuming this configuration, we wouldn’t encounter this problem\n> because for change-2 to be caused by change-1, the client would need\n> to get confirmation of change-1 and then trigger change-2, which would\n> take at least 2-3 network round trips.\n\nAs we agreed, the subscriber should wait before applying an operation\nif the commit timestamp of the currently replayed transaction is in\nthe future and the difference exceeds the maximum clock skew. This\nraises the question: should the subscriber wait only for insert,\nupdate, and delete operations when timestamp-based resolution methods\nare set, or should it wait regardless of the type of remote operation,\nthe presence or absence of conflicts, and the resolvers configured?\nI believe the latter approach is the way to go i.e. this should be\nindependent of CDR, though needed by CDR for better timestamp based\nresolutions. 
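\nTo make it concrete, here is a rough sketch of such a wait. Everything\nbelow is made up for illustration (the 'max_clock_skew_ms' GUC, the\nfunction name, and the borrowed wait event) and is not from any posted\npatch; it only assumes the existing timestamp and latch helpers:\n\n#include \"postgres.h\"\n#include \"miscadmin.h\"\n#include \"storage/latch.h\"\n#include \"utils/timestamp.h\"\n#include \"utils/wait_event.h\"\n\n/* hypothetical GUC: clock skew we tolerate, in milliseconds */\nstatic int max_clock_skew_ms = 0;\n\n/*\n * Delay applying a remote transaction whose commit timestamp is ahead of\n * the local clock by more than max_clock_skew_ms, i.e. sleep until the\n * local clock reaches (remote_commit_ts - max_clock_skew_ms).\n */\nstatic void\nmaybe_wait_for_clock_skew(TimestampTz remote_commit_ts)\n{\n    while (TimestampDifferenceExceeds(GetCurrentTimestamp(),\n                                      remote_commit_ts,\n                                      max_clock_skew_ms))\n    {\n        long        delay_ms;\n\n        delay_ms = TimestampDifferenceMilliseconds(GetCurrentTimestamp(),\n                                                   remote_commit_ts) - max_clock_skew_ms;\n        if (delay_ms <= 0)\n            break;\n\n        /* wake up on latch set or timeout so interrupts are serviced */\n        (void) WaitLatch(MyLatch,\n                         WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n                         delay_ms,\n                         WAIT_EVENT_RECOVERY_APPLY_DELAY);\n        ResetLatch(MyLatch);\n        CHECK_FOR_INTERRUPTS();\n    }\n}\n\nThe loop re-checks the clock after every wakeup, so a latch set (for\nexample at shutdown) does not leave it sleeping for the full duration.\n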
Thoughts?\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 2 Jul 2024 14:39:49 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jul 2, 2024 at 2:40 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Jun 19, 2024 at 1:52 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Tue, Jun 18, 2024 at 3:29 PM shveta malik <[email protected]> wrote:\n> > > On Tue, Jun 18, 2024 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > I tried to work out a few scenarios with this, where the apply worker\n> > > will wait until its local clock hits 'remote_commit_tts - max_skew\n> > > permitted'. Please have a look.\n> > >\n> > > Let's say, we have a GUC to configure max_clock_skew permitted.\n> > > Resolver is last_update_wins in both cases.\n> > > ----------------\n> > > 1) Case 1: max_clock_skew set to 0 i.e. no tolerance for clock skew.\n> > >\n> > > Remote Update with commit_timestamp = 10.20AM.\n> > > Local clock (which is say 5 min behind) shows = 10.15AM.\n> > >\n> > > When remote update arrives at local node, we see that skew is greater\n> > > than max_clock_skew and thus apply worker waits till local clock hits\n> > > 'remote's commit_tts - max_clock_skew' i.e. till 10.20 AM. Once the\n> > > local clock hits 10.20 AM, the worker applies the remote change with\n> > > commit_tts of 10.20AM. In the meantime (during wait period of apply\n> > > worker)) if some local update on same row has happened at say 10.18am,\n> > > that will applied first, which will be later overwritten by above\n> > > remote change of 10.20AM as remote-change's timestamp appear more\n> > > latest, even though it has happened earlier than local change.\n> >\n> > For the sake of simplicity let's call the change that happened at\n> > 10:20 AM change-1 and the change that happened at 10:15 as change-2\n> > and assume we are talking about the synchronous commit only.\n> >\n> > I think now from an application perspective the change-1 wouldn't have\n> > caused the change-2 because we delayed applying change-2 on the local\n> > node which would have delayed the confirmation of the change-1 to the\n> > application that means we have got the change-2 on the local node\n> > without the confirmation of change-1 hence change-2 has no causal\n> > dependency on the change-1. So it's fine that we perform change-1\n> > before change-2 and the timestamp will also show the same at any other\n> > node if they receive these 2 changes.\n> >\n> > The goal is to ensure that if we define the order where change-2\n> > happens before change-1, this same order should be visible on all\n> > other nodes. This will hold true because the commit timestamp of\n> > change-2 is earlier than that of change-1.\n> >\n> > > 2) Case 2: max_clock_skew is set to 2min.\n> > >\n> > > Remote Update with commit_timestamp=10.20AM\n> > > Local clock (which is say 5 min behind) = 10.15AM.\n> > >\n> > > Now apply worker will notice skew greater than 2min and thus will wait\n> > > till local clock hits 'remote's commit_tts - max_clock_skew' i.e.\n> > > 10.18 and will apply the change with commit_tts of 10.20 ( as we\n> > > always save the origin's commit timestamp into local commit_tts, see\n> > > RecordTransactionCommit->TransactionTreeSetCommitTsData). Now lets say\n> > > another local update is triggered at 10.19am, it will be applied\n> > > locally but it will be ignored on remote node. 
On the remote node ,\n> > > the existing change with a timestamp of 10.20 am will win resulting in\n> > > data divergence.\n> >\n> > Let's call the 10:20 AM change as a change-1 and the change that\n> > happened at 10:19 as change-2\n> >\n> > IIUC, although we apply the change-1 at 10:18 AM the commit_ts of that\n> > commit_ts of that change is 10:20, and the same will be visible to all\n> > other nodes. So in conflict resolution still the change-1 happened\n> > after the change-2 because change-2's commit_ts is 10:19 AM. Now\n> > there could be a problem with the causal order because we applied the\n> > change-1 at 10:18 AM so the application might have gotten confirmation\n> > at 10:18 AM and the change-2 of the local node may be triggered as a\n> > result of confirmation of the change-1 that means now change-2 has a\n> > causal dependency on the change-1 but commit_ts shows change-2\n> > happened before the change-1 on all the nodes.\n> >\n> > So, is this acceptable? I think yes because the user has configured a\n> > maximum clock skew of 2 minutes, which means the detected order might\n> > not always align with the causal order for transactions occurring\n> > within that time frame. Generally, the ideal configuration for\n> > max_clock_skew should be in multiple of the network round trip time.\n> > Assuming this configuration, we wouldn’t encounter this problem\n> > because for change-2 to be caused by change-1, the client would need\n> > to get confirmation of change-1 and then trigger change-2, which would\n> > take at least 2-3 network round trips.\n>\n> As we agreed, the subscriber should wait before applying an operation\n> if the commit timestamp of the currently replayed transaction is in\n> the future and the difference exceeds the maximum clock skew. This\n> raises the question: should the subscriber wait only for insert,\n> update, and delete operations when timestamp-based resolution methods\n> are set, or should it wait regardless of the type of remote operation,\n> the presence or absence of conflicts, and the resolvers configured?\n> I believe the latter approach is the way to go i.e. this should be\n> independent of CDR, though needed by CDR for better timestamp based\n> resolutions. Thoughts?\n\nYes, I also think it should be independent of CDR. IMHO, it should be\nbased on the user-configured maximum clock skew tolerance and can be\nindependent of CDR. IIUC we would make the remote apply wait just\nbefore committing if the remote commit timestamp is ahead of the local\nclock by more than the maximum clock skew tolerance, is that correct?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 10:46:58 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 10:47 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jul 2, 2024 at 2:40 PM shveta malik <[email protected]> wrote:\n> >\n> > On Wed, Jun 19, 2024 at 1:52 PM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > On Tue, Jun 18, 2024 at 3:29 PM shveta malik <[email protected]> wrote:\n> > > > On Tue, Jun 18, 2024 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n> > > >\n> > > > I tried to work out a few scenarios with this, where the apply worker\n> > > > will wait until its local clock hits 'remote_commit_tts - max_skew\n> > > > permitted'. 
Please have a look.\n> > > >\n> > > > Let's say, we have a GUC to configure max_clock_skew permitted.\n> > > > Resolver is last_update_wins in both cases.\n> > > > ----------------\n> > > > 1) Case 1: max_clock_skew set to 0 i.e. no tolerance for clock skew.\n> > > >\n> > > > Remote Update with commit_timestamp = 10.20AM.\n> > > > Local clock (which is say 5 min behind) shows = 10.15AM.\n> > > >\n> > > > When remote update arrives at local node, we see that skew is greater\n> > > > than max_clock_skew and thus apply worker waits till local clock hits\n> > > > 'remote's commit_tts - max_clock_skew' i.e. till 10.20 AM. Once the\n> > > > local clock hits 10.20 AM, the worker applies the remote change with\n> > > > commit_tts of 10.20AM. In the meantime (during wait period of apply\n> > > > worker)) if some local update on same row has happened at say 10.18am,\n> > > > that will applied first, which will be later overwritten by above\n> > > > remote change of 10.20AM as remote-change's timestamp appear more\n> > > > latest, even though it has happened earlier than local change.\n> > >\n> > > For the sake of simplicity let's call the change that happened at\n> > > 10:20 AM change-1 and the change that happened at 10:15 as change-2\n> > > and assume we are talking about the synchronous commit only.\n> > >\n> > > I think now from an application perspective the change-1 wouldn't have\n> > > caused the change-2 because we delayed applying change-2 on the local\n> > > node which would have delayed the confirmation of the change-1 to the\n> > > application that means we have got the change-2 on the local node\n> > > without the confirmation of change-1 hence change-2 has no causal\n> > > dependency on the change-1. So it's fine that we perform change-1\n> > > before change-2 and the timestamp will also show the same at any other\n> > > node if they receive these 2 changes.\n> > >\n> > > The goal is to ensure that if we define the order where change-2\n> > > happens before change-1, this same order should be visible on all\n> > > other nodes. This will hold true because the commit timestamp of\n> > > change-2 is earlier than that of change-1.\n> > >\n> > > > 2) Case 2: max_clock_skew is set to 2min.\n> > > >\n> > > > Remote Update with commit_timestamp=10.20AM\n> > > > Local clock (which is say 5 min behind) = 10.15AM.\n> > > >\n> > > > Now apply worker will notice skew greater than 2min and thus will wait\n> > > > till local clock hits 'remote's commit_tts - max_clock_skew' i.e.\n> > > > 10.18 and will apply the change with commit_tts of 10.20 ( as we\n> > > > always save the origin's commit timestamp into local commit_tts, see\n> > > > RecordTransactionCommit->TransactionTreeSetCommitTsData). Now lets say\n> > > > another local update is triggered at 10.19am, it will be applied\n> > > > locally but it will be ignored on remote node. On the remote node ,\n> > > > the existing change with a timestamp of 10.20 am will win resulting in\n> > > > data divergence.\n> > >\n> > > Let's call the 10:20 AM change as a change-1 and the change that\n> > > happened at 10:19 as change-2\n> > >\n> > > IIUC, although we apply the change-1 at 10:18 AM the commit_ts of that\n> > > commit_ts of that change is 10:20, and the same will be visible to all\n> > > other nodes. So in conflict resolution still the change-1 happened\n> > > after the change-2 because change-2's commit_ts is 10:19 AM. 
Now\n> > > there could be a problem with the causal order because we applied the\n> > > change-1 at 10:18 AM so the application might have gotten confirmation\n> > > at 10:18 AM and the change-2 of the local node may be triggered as a\n> > > result of confirmation of the change-1 that means now change-2 has a\n> > > causal dependency on the change-1 but commit_ts shows change-2\n> > > happened before the change-1 on all the nodes.\n> > >\n> > > So, is this acceptable? I think yes because the user has configured a\n> > > maximum clock skew of 2 minutes, which means the detected order might\n> > > not always align with the causal order for transactions occurring\n> > > within that time frame. Generally, the ideal configuration for\n> > > max_clock_skew should be in multiple of the network round trip time.\n> > > Assuming this configuration, we wouldn’t encounter this problem\n> > > because for change-2 to be caused by change-1, the client would need\n> > > to get confirmation of change-1 and then trigger change-2, which would\n> > > take at least 2-3 network round trips.\n> >\n> > As we agreed, the subscriber should wait before applying an operation\n> > if the commit timestamp of the currently replayed transaction is in\n> > the future and the difference exceeds the maximum clock skew. This\n> > raises the question: should the subscriber wait only for insert,\n> > update, and delete operations when timestamp-based resolution methods\n> > are set, or should it wait regardless of the type of remote operation,\n> > the presence or absence of conflicts, and the resolvers configured?\n> > I believe the latter approach is the way to go i.e. this should be\n> > independent of CDR, though needed by CDR for better timestamp based\n> > resolutions. Thoughts?\n>\n> Yes, I also think it should be independent of CDR. IMHO, it should be\n> based on the user-configured maximum clock skew tolerance and can be\n> independent of CDR.\n\n+1\n\n> IIUC we would make the remote apply wait just\n> before committing if the remote commit timestamp is ahead of the local\n> clock by more than the maximum clock skew tolerance, is that correct?\n\n+1 on condition to wait.\n\nBut I think we should make apply worker wait during begin\n(apply_handle_begin) instead of commit. It makes more sense to delay\nthe entire operation to manage clock-skew rather than the commit\nalone. And only then CDR's timestamp based resolution which are much\nprior to commit-stage can benefit from this. Thoughts?\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 3 Jul 2024 11:00:43 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 11:00 AM shveta malik <[email protected]> wrote:\n>\n> > Yes, I also think it should be independent of CDR. IMHO, it should be\n> > based on the user-configured maximum clock skew tolerance and can be\n> > independent of CDR.\n>\n> +1\n>\n> > IIUC we would make the remote apply wait just\n> > before committing if the remote commit timestamp is ahead of the local\n> > clock by more than the maximum clock skew tolerance, is that correct?\n>\n> +1 on condition to wait.\n>\n> But I think we should make apply worker wait during begin\n> (apply_handle_begin) instead of commit. It makes more sense to delay\n> the entire operation to manage clock-skew rather than the commit\n> alone. And only then CDR's timestamp based resolution which are much\n> prior to commit-stage can benefit from this. 
Thoughts?\n\nBut do we really need to wait at apply_handle_begin()? I mean if we\nalready know the commit_ts then we can perform the conflict resolution\nno? I mean we should wait before committing because we are\nconsidering this remote transaction to be in the future and we do not\nwant to confirm the commit of this transaction to the remote node\nbefore the local clock reaches the record commit_ts to preserve the\ncausal order. However, we can still perform conflict resolution\nbeforehand since we already know the commit_ts. The conflict\nresolution function will be something like \"out_version =\nCRF(version1_commit_ts, version2_commit_ts),\" so the result should be\nthe same regardless of when we apply it, correct? From a performance\nstandpoint, wouldn't it be beneficial to perform as much work as\npossible in advance? By the time we apply all the operations, the\nlocal clock might already be in sync with the commit_ts of the remote\ntransaction. Am I missing something?\n\nHowever, while thinking about this, I'm wondering about how we will\nhandle the streaming of in-progress transactions. If we start applying\nwith parallel workers, we might not know the commit_ts of those\ntransactions since they may not have been committed yet. One simple\noption could be to prevent parallel workers from applying in-progress\ntransactions when CDR is set up. Instead, we could let these\ntransactions spill to files and only apply them once we receive the\ncommit record.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 11:29:25 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 11:00 AM shveta malik <[email protected]> wrote:\n> >\n> > > Yes, I also think it should be independent of CDR. IMHO, it should be\n> > > based on the user-configured maximum clock skew tolerance and can be\n> > > independent of CDR.\n> >\n> > +1\n> >\n> > > IIUC we would make the remote apply wait just\n> > > before committing if the remote commit timestamp is ahead of the local\n> > > clock by more than the maximum clock skew tolerance, is that correct?\n> >\n> > +1 on condition to wait.\n> >\n> > But I think we should make apply worker wait during begin\n> > (apply_handle_begin) instead of commit. It makes more sense to delay\n> > the entire operation to manage clock-skew rather than the commit\n> > alone. And only then CDR's timestamp based resolution which are much\n> > prior to commit-stage can benefit from this. Thoughts?\n>\n> But do we really need to wait at apply_handle_begin()? I mean if we\n> already know the commit_ts then we can perform the conflict resolution\n> no? I mean we should wait before committing because we are\n> considering this remote transaction to be in the future and we do not\n> want to confirm the commit of this transaction to the remote node\n> before the local clock reaches the record commit_ts to preserve the\n> causal order. However, we can still perform conflict resolution\n> beforehand since we already know the commit_ts. The conflict\n> resolution function will be something like \"out_version =\n> CRF(version1_commit_ts, version2_commit_ts),\" so the result should be\n> the same regardless of when we apply it, correct? From a performance\n> standpoint, wouldn't it be beneficial to perform as much work as\n> possible in advance? 
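\nJust to picture the kind of CRF mentioned above, a trivial\nlast-update-wins style comparator could look like the sketch below (the\nenum and function names are invented for illustration, this is not from\nany posted patch):\n\n#include \"postgres.h\"\n#include \"utils/timestamp.h\"\n\ntypedef enum\n{\n    RESOLUTION_APPLY_REMOTE,\n    RESOLUTION_KEEP_LOCAL\n} ResolutionResult;\n\n/*\n * Pick the winner by commit timestamp. TimestampTz is microseconds since\n * the PostgreSQL epoch, so a plain comparison is sufficient.\n */\nstatic ResolutionResult\nresolve_last_update_wins(TimestampTz local_commit_ts,\n                         TimestampTz remote_commit_ts)\n{\n    if (remote_commit_ts > local_commit_ts)\n        return RESOLUTION_APPLY_REMOTE;\n    if (remote_commit_ts < local_commit_ts)\n        return RESOLUTION_KEEP_LOCAL;\n\n    /*\n     * Equal timestamps need some deterministic tie-breaker so that every\n     * node picks the same winner; returning remote here is only a\n     * placeholder for the sketch.\n     */\n    return RESOLUTION_APPLY_REMOTE;\n}\n\nSince it looks only at the two commit timestamps, its result is indeed\nthe same whenever it is evaluated.\n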
By the time we apply all the operations, the\n> local clock might already be in sync with the commit_ts of the remote\n> transaction. Am I missing something?\n>\n\nBut waiting after applying the operations and before applying the\ncommit would mean that we need to wait with the locks held. That could\nbe a recipe for deadlocks in the system. I see your point related to\nperformance but as we are not expecting clock skew in normal cases, we\nshouldn't be too much bothered on the performance due to this. If\nthere is clock skew, we expect users to fix it, this is just a\nworst-case aid for users.\n\n> However, while thinking about this, I'm wondering about how we will\n> handle the streaming of in-progress transactions. If we start applying\n> with parallel workers, we might not know the commit_ts of those\n> transactions since they may not have been committed yet. One simple\n> option could be to prevent parallel workers from applying in-progress\n> transactions when CDR is set up. Instead, we could let these\n> transactions spill to files and only apply them once we receive the\n> commit record.\n>\n\nAgreed, we should do it as you have suggested and document it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Jul 2024 12:30:15 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 12:30 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n\n> But waiting after applying the operations and before applying the\n> commit would mean that we need to wait with the locks held. That could\n> be a recipe for deadlocks in the system. I see your point related to\n> performance but as we are not expecting clock skew in normal cases, we\n> shouldn't be too much bothered on the performance due to this. If\n> there is clock skew, we expect users to fix it, this is just a\n> worst-case aid for users.\n\nBut if we make it wait at the very first operation that means we will\nnot suck more decoded data from the network and wouldn't that make the\nsender wait for the network buffer to get sucked in by the receiver?\nAlso, we already have a handling of parallel apply workers so if we do\nnot have an issue of deadlock there or if we can handle those issues\nthere we can do it here as well no?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 14:16:08 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 2:16 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 12:30 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n>\n> > But waiting after applying the operations and before applying the\n> > commit would mean that we need to wait with the locks held. That could\n> > be a recipe for deadlocks in the system. I see your point related to\n> > performance but as we are not expecting clock skew in normal cases, we\n> > shouldn't be too much bothered on the performance due to this. 
If\n> > there is clock skew, we expect users to fix it, this is just a\n> > worst-case aid for users.\n>\n> But if we make it wait at the very first operation that means we will\n> not suck more decoded data from the network and wouldn't that make the\n> sender wait for the network buffer to get sucked in by the receiver?\n>\n\nThat would be true even if we wait just before applying the commit\nrecord considering the transaction is small and the wait time is\nlarge.\n\n> Also, we already have a handling of parallel apply workers so if we do\n> not have an issue of deadlock there or if we can handle those issues\n> there we can do it here as well no?\n>\n\nParallel apply workers won't wait for a long time. There is some\nsimilarity and in both cases, deadlock will be detected but chances of\nsuch implementation-related deadlocks will be higher if we start\nwaiting for a random amount of times. The other possibility is that we\ncan keep a cap on the max clock skew time above which we will give\nERROR even if the user has configured wait. This is because anyway the\nsystem will be choked (walsender won't be able to send more data,\nvacuum on publisher won't be able to remove dead rows) if we wait for\nlonger times. But even with that, I am not sure if waiting after\nholding locks is a good idea or gives us the benefit that is worth the\nrisk of deadlocks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Jul 2024 15:35:28 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 11:00 AM shveta malik <[email protected]> wrote:\n> >\n> > > Yes, I also think it should be independent of CDR. IMHO, it should be\n> > > based on the user-configured maximum clock skew tolerance and can be\n> > > independent of CDR.\n> >\n> > +1\n> >\n> > > IIUC we would make the remote apply wait just\n> > > before committing if the remote commit timestamp is ahead of the local\n> > > clock by more than the maximum clock skew tolerance, is that correct?\n> >\n> > +1 on condition to wait.\n> >\n> > But I think we should make apply worker wait during begin\n> > (apply_handle_begin) instead of commit. It makes more sense to delay\n> > the entire operation to manage clock-skew rather than the commit\n> > alone. And only then CDR's timestamp based resolution which are much\n> > prior to commit-stage can benefit from this. Thoughts?\n>\n> But do we really need to wait at apply_handle_begin()? I mean if we\n> already know the commit_ts then we can perform the conflict resolution\n> no?\n\nI would like to highlight one point here that the resultant data may\nbe different depending upon at what stage (begin or commit) we\nconclude to wait. Example:\n\n--max_clock_skew set to 0 i.e. no tolerance for clock skew.\n--Remote Update with commit_timestamp = 10.20AM.\n--Local clock (which is say 5 min behind) shows = 10.15AM.\n\nCase 1: Wait during Begin:\nWhen remote update arrives at local node, apply worker waits till\nlocal clock hits 'remote's commit_tts - max_clock_skew' i.e. till\n10.20 AM. In the meantime (during the wait period of apply worker) if\nsome local update on the same row has happened at say 10.18am (local\nclock), that will be applied first. 
Now when apply worker's wait is\nover, it will detect 'update_differ' conflict and as per\n'last_update_wins', remote_tuple will win as 10.20 is later than\n10.18.\n\nCase 2: Wait during Commit:\nWhen remote update arrives at local node, it finds no conflict and\ngoes for commit. But before commit, it waits till the local clock hits\n10.20 AM. In the meantime (during wait period of apply worker) if\nsome local update is trying to update the same row say at 10.18, it\nhas to wait (due to locks taken by remote update on that row) and\nremote tuple will get committed first with commit timestamp of 10.20.\nThen local update will proceed and will overwrite remote tuple.\n\nSo in case1, remote tuple is the final change while in case2, local\ntuple is the final change.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 3 Jul 2024 16:02:05 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 3:35 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 2:16 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Wed, Jul 3, 2024 at 12:30 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > > But waiting after applying the operations and before applying the\n> > > commit would mean that we need to wait with the locks held. That could\n> > > be a recipe for deadlocks in the system. I see your point related to\n> > > performance but as we are not expecting clock skew in normal cases, we\n> > > shouldn't be too much bothered on the performance due to this.
If\n> > > there is clock skew, we expect users to fix it, this is just a\n> > > worst-case aid for users.\n> >\n> > But if we make it wait at the very first operation that means we will\n> > not suck more decoded data from the network and wouldn't that make the\n> > sender wait for the network buffer to get sucked in by the receiver?\n> >\n>\n> That would be true even if we wait just before applying the commit\n> record considering the transaction is small and the wait time is\n> large.\n>\n> > Also, we already have a handling of parallel apply workers so if we do\n> > not have an issue of deadlock there or if we can handle those issues\n> > there we can do it here as well no?\n> >\n>\n> Parallel apply workers won't wait for a long time. There is some\n> similarity and in both cases, deadlock will be detected but chances of\n> such implementation-related deadlocks will be higher if we start\n> waiting for a random amount of times. The other possibility is that we\n> can keep a cap on the max clock skew time above which we will give\n> ERROR even if the user has configured wait.\n\n+1. But I think cap has to be on wait-time. As an example, let's say\nthe user has configured 'clock skew tolerance' of 10sec while the\nactual clock skew between nodes is 5 min. It means, we will mostly\nhave to wait '5 min - 10sec' to bring the clock skew to a tolerable\nlimit, which is a huge waiting time. We can keep a max limit on this\nwait time.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 3 Jul 2024 16:09:21 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 4:02 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Wed, Jul 3, 2024 at 11:00 AM shveta malik <[email protected]> wrote:\n> > >\n> > > > Yes, I also think it should be independent of CDR. IMHO, it should be\n> > > > based on the user-configured maximum clock skew tolerance and can be\n> > > > independent of CDR.\n> > >\n> > > +1\n> > >\n> > > > IIUC we would make the remote apply wait just\n> > > > before committing if the remote commit timestamp is ahead of the local\n> > > > clock by more than the maximum clock skew tolerance, is that correct?\n> > >\n> > > +1 on condition to wait.\n> > >\n> > > But I think we should make apply worker wait during begin\n> > > (apply_handle_begin) instead of commit. It makes more sense to delay\n> > > the entire operation to manage clock-skew rather than the commit\n> > > alone. And only then CDR's timestamp based resolution which are much\n> > > prior to commit-stage can benefit from this. Thoughts?\n> >\n> > But do we really need to wait at apply_handle_begin()? I mean if we\n> > already know the commit_ts then we can perform the conflict resolution\n> > no?\n>\n> I would like to highlight one point here that the resultant data may\n> be different depending upon at what stage (begin or commit) we\n> conclude to wait. Example:\n>\n> --max_clock_skew set to 0 i.e. no tolerance for clock skew.\n> --Remote Update with commit_timestamp = 10.20AM.\n> --Local clock (which is say 5 min behind) shows = 10.15AM.\n>\n> Case 1: Wait during Begin:\n> When remote update arrives at local node, apply worker waits till\n> local clock hits 'remote's commit_tts - max_clock_skew' i.e. till\n> 10.20 AM. 
In the meantime (during the wait period of apply worker) if\n> some local update on the same row has happened at say 10.18am (local\n> clock), that will be applied first. Now when apply worker's wait is\n> over, it will detect 'update_diffe'r conflict and as per\n> 'last_update_win', remote_tuple will win as 10.20 is latest than\n> 10.18.\n>\n> Case 2: Wait during Commit:\n> When remote update arrives at local node, it finds no conflict and\n> goes for commit. But before commit, it waits till the local clock hits\n> 10.20 AM. In the meantime (during wait period of apply worker)) if\n> some local update is trying to update the same row say at 10.18, it\n> has to wait (due to locks taken by remote update on that row) and\n> remote tuple will get committed first with commit timestamp of 10.20.\n> Then local update will proceed and will overwrite remote tuple.\n>\n> So in case1, remote tuple is the final change while in case2, local\n> tuple is the final change.\n\nGot it, but which case is correct, I think both. Because in case-1\nlocal commit's commit_ts is 10:18 and the remote commit's commit_ts is\n10:20 so remote apply wins. And case 2, the remote commit's commit_ts\nis 10:20 whereas the local commit's commit_ts must be 10:20 + delta\n(because it waited for the remote transaction to get committed).\n\nNow say which is better, in case-1 we have to make the remote apply to\nwait at the beginning state without knowing what would be the local\nclock when it actually comes to commit, it may so happen that if we\nchoose case-2 by the time the remote transaction finish applying the\nlocal clock is beyond 10:20 and we do not even need to wait?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 16:12:12 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 4:04 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 3:35 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jul 3, 2024 at 2:16 PM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 3, 2024 at 12:30 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > > But waiting after applying the operations and before applying the\n> > > > commit would mean that we need to wait with the locks held. That could\n> > > > be a recipe for deadlocks in the system. I see your point related to\n> > > > performance but as we are not expecting clock skew in normal cases, we\n> > > > shouldn't be too much bothered on the performance due to this. If\n> > > > there is clock skew, we expect users to fix it, this is just a\n> > > > worst-case aid for users.\n> > >\n> > > But if we make it wait at the very first operation that means we will\n> > > not suck more decoded data from the network and wouldn't that make the\n> > > sender wait for the network buffer to get sucked in by the receiver?\n> > >\n> >\n> > That would be true even if we wait just before applying the commit\n> > record considering the transaction is small and the wait time is\n> > large.\n>\n> What I am saying is that if we are not applying the whole transaction,\n> it means we are not receiving it either unless we plan to spill it to\n> a file. If we don't spill it to a file, the network buffer will fill\n> up very quickly. 
This issue wouldn't occur if we waited right before\n> the commit because, by that time, we would have already received all\n> the data from the network.\n>\n\nWe would have received the transaction data but there could be other\ntransactions that need to wait because the apply worker is waiting\nbefore the commit. So, the situation will be the same. We can even\ndecide to spill the data to files if the decision is that we need to\nwait to avoid network buffer-fill situations. But note that the wait\nin apply worker has consequences that the subscriber won't be able to\nconfirm the flush position and publisher won't be able to vacuum the\ndead rows and we won't be remove WAL as well. Last time when we\ndiscussed the delay_apply feature, we decided not to proceed because\nof such issues. This is the reason I proposed a cap on wait time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Jul 2024 16:47:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 4:48 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 4:04 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Wed, Jul 3, 2024 at 3:35 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 3, 2024 at 2:16 PM Dilip Kumar <[email protected]> wrote:\n> > > >\n> > > > On Wed, Jul 3, 2024 at 12:30 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n> > > >\n> > > > > But waiting after applying the operations and before applying the\n> > > > > commit would mean that we need to wait with the locks held. That could\n> > > > > be a recipe for deadlocks in the system. I see your point related to\n> > > > > performance but as we are not expecting clock skew in normal cases, we\n> > > > > shouldn't be too much bothered on the performance due to this. If\n> > > > > there is clock skew, we expect users to fix it, this is just a\n> > > > > worst-case aid for users.\n> > > >\n> > > > But if we make it wait at the very first operation that means we will\n> > > > not suck more decoded data from the network and wouldn't that make the\n> > > > sender wait for the network buffer to get sucked in by the receiver?\n> > > >\n> > >\n> > > That would be true even if we wait just before applying the commit\n> > > record considering the transaction is small and the wait time is\n> > > large.\n> >\n> > What I am saying is that if we are not applying the whole transaction,\n> > it means we are not receiving it either unless we plan to spill it to\n> > a file. If we don't spill it to a file, the network buffer will fill\n> > up very quickly. This issue wouldn't occur if we waited right before\n> > the commit because, by that time, we would have already received all\n> > the data from the network.\n> >\n>\n> We would have received the transaction data but there could be other\n> transactions that need to wait because the apply worker is waiting\n> before the commit.\n\nYeah, that's a valid point, can parallel apply worker help here?\n\n So, the situation will be the same. We can even\n> decide to spill the data to files if the decision is that we need to\n> wait to avoid network buffer-fill situations. 
But note that the wait\n> in apply worker has consequences that the subscriber won't be able to\n> confirm the flush position and publisher won't be able to vacuum the\n> dead rows and we won't be remove WAL as well. Last time when we\n> discussed the delay_apply feature, we decided not to proceed because\n> of such issues. This is the reason I proposed a cap on wait time.\n\nYes, spilling to file or cap on the wait time should help, and as I\nsaid above maybe a parallel apply worker can also help.\n\nSo I agree that the problem with network buffers arises in both cases,\nwhether we wait before committing or before beginning. So keeping that\nin mind I don't have any strong objections against waiting at the\nbeginning if it simplifies the design compared to waiting at the\ncommit.\n\nHowever, one point to remember in favor of waiting before applying the\ncommit is that if we decide to wait before beginning the transaction,\nwe would end up waiting in many more cases compared to waiting before\ncommitting. Because in cases, when transactions are large and the\nclock skew is small, the local clock would have already passed the\nremote commit_ts by the time we reach the commit.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 17:06:05 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 4:12 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 4:02 PM shveta malik <[email protected]> wrote:\n> >\n> > On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 3, 2024 at 11:00 AM shveta malik <[email protected]> wrote:\n> > > >\n> > > > > Yes, I also think it should be independent of CDR. IMHO, it should be\n> > > > > based on the user-configured maximum clock skew tolerance and can be\n> > > > > independent of CDR.\n> > > >\n> > > > +1\n> > > >\n> > > > > IIUC we would make the remote apply wait just\n> > > > > before committing if the remote commit timestamp is ahead of the local\n> > > > > clock by more than the maximum clock skew tolerance, is that correct?\n> > > >\n> > > > +1 on condition to wait.\n> > > >\n> > > > But I think we should make apply worker wait during begin\n> > > > (apply_handle_begin) instead of commit. It makes more sense to delay\n> > > > the entire operation to manage clock-skew rather than the commit\n> > > > alone. And only then CDR's timestamp based resolution which are much\n> > > > prior to commit-stage can benefit from this. Thoughts?\n> > >\n> > > But do we really need to wait at apply_handle_begin()? I mean if we\n> > > already know the commit_ts then we can perform the conflict resolution\n> > > no?\n> >\n> > I would like to highlight one point here that the resultant data may\n> > be different depending upon at what stage (begin or commit) we\n> > conclude to wait. Example:\n> >\n> > --max_clock_skew set to 0 i.e. no tolerance for clock skew.\n> > --Remote Update with commit_timestamp = 10.20AM.\n> > --Local clock (which is say 5 min behind) shows = 10.15AM.\n> >\n> > Case 1: Wait during Begin:\n> > When remote update arrives at local node, apply worker waits till\n> > local clock hits 'remote's commit_tts - max_clock_skew' i.e. till\n> > 10.20 AM. In the meantime (during the wait period of apply worker) if\n> > some local update on the same row has happened at say 10.18am (local\n> > clock), that will be applied first. 
Now when apply worker's wait is\n> > over, it will detect 'update_diffe'r conflict and as per\n> > 'last_update_win', remote_tuple will win as 10.20 is latest than\n> > 10.18.\n> >\n> > Case 2: Wait during Commit:\n> > When remote update arrives at local node, it finds no conflict and\n> > goes for commit. But before commit, it waits till the local clock hits\n> > 10.20 AM. In the meantime (during wait period of apply worker)) if\n> > some local update is trying to update the same row say at 10.18, it\n> > has to wait (due to locks taken by remote update on that row) and\n> > remote tuple will get committed first with commit timestamp of 10.20.\n> > Then local update will proceed and will overwrite remote tuple.\n> >\n> > So in case1, remote tuple is the final change while in case2, local\n> > tuple is the final change.\n>\n> Got it, but which case is correct, I think both. Because in case-1\n> local commit's commit_ts is 10:18 and the remote commit's commit_ts is\n> 10:20 so remote apply wins. And case 2, the remote commit's commit_ts\n> is 10:20 whereas the local commit's commit_ts must be 10:20 + delta\n> (because it waited for the remote transaction to get committed).\n>\n> Now say which is better, in case-1 we have to make the remote apply to\n> wait at the beginning state without knowing what would be the local\n> clock when it actually comes to commit, it may so happen that if we\n> choose case-2 by the time the remote transaction finish applying the\n> local clock is beyond 10:20 and we do not even need to wait?\n\nyes, agree that wait time could be lesser to some extent in case 2.\nBut the wait during commit will make user operations on the same row\nwait, without user having any clue on concurrent blocking operations.\nI am not sure if it will be acceptable.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 3 Jul 2024 17:08:21 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 5:08 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 4:12 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Wed, Jul 3, 2024 at 4:02 PM shveta malik <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n> > > >\n> > > > On Wed, Jul 3, 2024 at 11:00 AM shveta malik <[email protected]> wrote:\n> > > > >\n> > > > > > Yes, I also think it should be independent of CDR. IMHO, it should be\n> > > > > > based on the user-configured maximum clock skew tolerance and can be\n> > > > > > independent of CDR.\n> > > > >\n> > > > > +1\n> > > > >\n> > > > > > IIUC we would make the remote apply wait just\n> > > > > > before committing if the remote commit timestamp is ahead of the local\n> > > > > > clock by more than the maximum clock skew tolerance, is that correct?\n> > > > >\n> > > > > +1 on condition to wait.\n> > > > >\n> > > > > But I think we should make apply worker wait during begin\n> > > > > (apply_handle_begin) instead of commit. It makes more sense to delay\n> > > > > the entire operation to manage clock-skew rather than the commit\n> > > > > alone. And only then CDR's timestamp based resolution which are much\n> > > > > prior to commit-stage can benefit from this. Thoughts?\n> > > >\n> > > > But do we really need to wait at apply_handle_begin()? 
I mean if we\n> > > > already know the commit_ts then we can perform the conflict resolution\n> > > > no?\n> > >\n> > > I would like to highlight one point here that the resultant data may\n> > > be different depending upon at what stage (begin or commit) we\n> > > conclude to wait. Example:\n> > >\n> > > --max_clock_skew set to 0 i.e. no tolerance for clock skew.\n> > > --Remote Update with commit_timestamp = 10.20AM.\n> > > --Local clock (which is say 5 min behind) shows = 10.15AM.\n> > >\n> > > Case 1: Wait during Begin:\n> > > When remote update arrives at local node, apply worker waits till\n> > > local clock hits 'remote's commit_tts - max_clock_skew' i.e. till\n> > > 10.20 AM. In the meantime (during the wait period of apply worker) if\n> > > some local update on the same row has happened at say 10.18am (local\n> > > clock), that will be applied first. Now when apply worker's wait is\n> > > over, it will detect 'update_diffe'r conflict and as per\n> > > 'last_update_win', remote_tuple will win as 10.20 is latest than\n> > > 10.18.\n> > >\n> > > Case 2: Wait during Commit:\n> > > When remote update arrives at local node, it finds no conflict and\n> > > goes for commit. But before commit, it waits till the local clock hits\n> > > 10.20 AM. In the meantime (during wait period of apply worker)) if\n> > > some local update is trying to update the same row say at 10.18, it\n> > > has to wait (due to locks taken by remote update on that row) and\n> > > remote tuple will get committed first with commit timestamp of 10.20.\n> > > Then local update will proceed and will overwrite remote tuple.\n> > >\n> > > So in case1, remote tuple is the final change while in case2, local\n> > > tuple is the final change.\n> >\n> > Got it, but which case is correct, I think both. Because in case-1\n> > local commit's commit_ts is 10:18 and the remote commit's commit_ts is\n> > 10:20 so remote apply wins. 
And case 2, the remote commit's commit_ts\n> > is 10:20 whereas the local commit's commit_ts must be 10:20 + delta\n> > (because it waited for the remote transaction to get committed).\n> >\n> > Now say which is better, in case-1 we have to make the remote apply to\n> > wait at the beginning state without knowing what would be the local\n> > clock when it actually comes to commit, it may so happen that if we\n> > choose case-2 by the time the remote transaction finish applying the\n> > local clock is beyond 10:20 and we do not even need to wait?\n>\n> yes, agree that wait time could be lesser to some extent in case 2.\n> But the wait during commit will make user operations on the same row\n> wait, without user having any clue on concurrent blocking operations.\n> I am not sure if it will be acceptable.\n\nI don't think there is any problem with the acceptance of user\nexperience because even while applying the remote transaction\n(irrespective of whether we implement this wait feature) the user\ntransaction might have to wait if updating the common rows right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 17:22:55 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 12:30 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 11:29 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Wed, Jul 3, 2024 at 11:00 AM shveta malik <[email protected]> wrote:\n> > >\n> > > > Yes, I also think it should be independent of CDR. IMHO, it should be\n> > > > based on the user-configured maximum clock skew tolerance and can be\n> > > > independent of CDR.\n> > >\n> > > +1\n> > >\n> > > > IIUC we would make the remote apply wait just\n> > > > before committing if the remote commit timestamp is ahead of the local\n> > > > clock by more than the maximum clock skew tolerance, is that correct?\n> > >\n> > > +1 on condition to wait.\n> > >\n> > > But I think we should make apply worker wait during begin\n> > > (apply_handle_begin) instead of commit. It makes more sense to delay\n> > > the entire operation to manage clock-skew rather than the commit\n> > > alone. And only then CDR's timestamp based resolution which are much\n> > > prior to commit-stage can benefit from this. Thoughts?\n> >\n> > But do we really need to wait at apply_handle_begin()? I mean if we\n> > already know the commit_ts then we can perform the conflict resolution\n> > no? I mean we should wait before committing because we are\n> > considering this remote transaction to be in the future and we do not\n> > want to confirm the commit of this transaction to the remote node\n> > before the local clock reaches the record commit_ts to preserve the\n> > causal order. However, we can still perform conflict resolution\n> > beforehand since we already know the commit_ts. The conflict\n> > resolution function will be something like \"out_version =\n> > CRF(version1_commit_ts, version2_commit_ts),\" so the result should be\n> > the same regardless of when we apply it, correct? From a performance\n> > standpoint, wouldn't it be beneficial to perform as much work as\n> > possible in advance? By the time we apply all the operations, the\n> > local clock might already be in sync with the commit_ts of the remote\n> > transaction. 
Am I missing something?\n> >\n>\n> But waiting after applying the operations and before applying the\n> commit would mean that we need to wait with the locks held. That could\n> be a recipe for deadlocks in the system. I see your point related to\n> performance but as we are not expecting clock skew in normal cases, we\n> shouldn't be too much bothered on the performance due to this. If\n> there is clock skew, we expect users to fix it, this is just a\n> worst-case aid for users.\n>\n\nPlease find the new patch set. patch004 is the new patch which\nattempts to implement:\n\n1) Either wait or error out on clock skew as configured. Please note\nthat currently wait is implemented during 'begin'. Once the ongoing\ndiscussion is concluded, it can be changed as needed.\n2) last_update_wins resolver. Thanks Nisha for providing the resolver\nrelated changes.\n\nNext to be done:\n1) parallel apply worker related changes as mentioned in [1]\n2) cap on wait time due to clock skew\n3) resolvers for delete_differ as conflict detection thread [2] has\nimplemented detection for that.\n\n[1]: https://www.postgresql.org/message-id/CAFiTN-sf23K%3DsRsnxw-BKNJqg5P6JXcqXBBkx%3DEULX8QGSQYaw%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/OS0PR01MB571686E464A325F26CEFCCEF94DD2%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nthanks\nShveta", "msg_date": "Thu, 4 Jul 2024 09:31:01 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jul 1, 2024 at 6:54 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jul 1, 2024 at 1:35 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > Setting resolvers at table-level and subscription-level sounds good to\n> > me. DDLs for setting resolvers at subscription-level would need the\n> > subscription name to be specified?\n> >\n>\n> Yes, it should be part of the ALTER/CREATE SUBSCRIPTION command. One\n> idea could be to have syntax as follows:\n>\n> ALTER SUBSCRIPTION name SET CONFLICT RESOLVER 'conflict_resolver' FOR\n> 'conflict_type';\n> ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR 'conflict_type';\n>\n> CREATE SUBSCRIPTION subscription_name CONNECTION 'conninfo'\n> PUBLICATION publication_name [, ...] CONFLICT RESOLVER\n> 'conflict_resolver' FOR 'conflict_type';\n\nLooks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Jul 2024 15:51:48 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 3, 2024 at 5:06 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 4:48 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jul 3, 2024 at 4:04 PM Dilip Kumar <[email protected]> wrote:\n> > >\n> > >\n> > > What I am saying is that if we are not applying the whole transaction,\n> > > it means we are not receiving it either unless we plan to spill it to\n> > > a file. If we don't spill it to a file, the network buffer will fill\n> > > up very quickly. 
This issue wouldn't occur if we waited right before\n> > > the commit because, by that time, we would have already received all\n> > > the data from the network.\n> > >\n> >\n> > We would have received the transaction data but there could be other\n> > transactions that need to wait because the apply worker is waiting\n> > before the commit.\n>\n> Yeah, that's a valid point, can parallel apply worker help here?\n>\n> So, the situation will be the same. We can even\n> > decide to spill the data to files if the decision is that we need to\n> > wait to avoid network buffer-fill situations. But note that the wait\n> > in apply worker has consequences that the subscriber won't be able to\n> > confirm the flush position and publisher won't be able to vacuum the\n> > dead rows and we won't be remove WAL as well. Last time when we\n> > discussed the delay_apply feature, we decided not to proceed because\n> > of such issues. This is the reason I proposed a cap on wait time.\n>\n> Yes, spilling to file or cap on the wait time should help, and as I\n> said above maybe a parallel apply worker can also help.\n>\n\nIt is not clear to me how a parallel apply worker can help in this\ncase. Can you elaborate on what you have in mind?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 4 Jul 2024 17:37:10 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Jul 4, 2024 at 5:37 PM Amit Kapila <[email protected]> wrote:\n> > So, the situation will be the same. We can even\n> > > decide to spill the data to files if the decision is that we need to\n> > > wait to avoid network buffer-fill situations. But note that the wait\n> > > in apply worker has consequences that the subscriber won't be able to\n> > > confirm the flush position and publisher won't be able to vacuum the\n> > > dead rows and we won't be remove WAL as well. Last time when we\n> > > discussed the delay_apply feature, we decided not to proceed because\n> > > of such issues. This is the reason I proposed a cap on wait time.\n> >\n> > Yes, spilling to file or cap on the wait time should help, and as I\n> > said above maybe a parallel apply worker can also help.\n> >\n>\n> It is not clear to me how a parallel apply worker can help in this\n> case. Can you elaborate on what you have in mind?\n\nIf we decide to wait at commit time, and before starting to apply if\nwe already see a remote commit_ts clock is ahead, then if we apply\nsuch transactions using the parallel worker, wouldn't it solve the\nissue of the network buffer congestion? Now the apply worker can move\nahead and fetch new transactions from the buffer as our waiting\ntransaction will not block it. I understand that if this transaction\nis going to wait at commit then any future transaction that we are\ngoing to fetch might also going to wait again because if the previous\ntransaction committed before is in the future then the subsequent\ntransaction committed after this must also be in future so eventually\nthat will also go to some another parallel worker and soon we end up\nconsuming all the parallel worker if the clock skew is large. So I\nwon't say this will resolve the problem and we would still have to\nfall back to the spilling to the disk but that's just in the worst\ncase when the clock skew is really huge. 
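\nTo make the comparison concrete (purely an illustration, not something\nthe patch adds): with track_commit_timestamp = on, the two values that\nthe proposed max_clock_skew check would compare can be looked at from\nSQL on the subscriber, assuming the most recent commit there came from\nthe apply worker (which records the origin's commit timestamp):\n\nSELECT \"timestamp\"                      AS last_origin_commit_ts,\n       clock_timestamp()                AS local_clock,\n       \"timestamp\" - clock_timestamp()  AS how_far_remote_is_ahead\n  FROM pg_last_committed_xact();\n\nA positive value in the last column is exactly the situation being\ndiscussed here, i.e. the remote commit timestamp is ahead of the local\nclock.\n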
In most cases which is due\nto slight clock drift by the time we apply the medium to large size\ntransaction, the local clock should be able to catch up the remote\ncommit_ts and we might not have to wait in most of the cases.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jul 2024 11:57:44 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "Please find the new patch set (v4). It implements the resolvers for\nconflict type : 'delete_differ'.\nSupported resolutions for ‘delete_differ’ are :\n - ‘last_update_wins': Apply the change with the latest timestamp (default)\n - 'remote_apply': Apply the remote delete.\n - 'keep_local': Skip the remote delete and continue.\n - 'error': The apply worker will error out and restart.\n\nThe changes made in the patches are as follows:\n - Updated the conflict detection patch (patch0001) to the latest\nversion from [1], which implements delete_differ conflict detection.\n - Patch0002 now supports resolver settings for delete_differ.\n - Patch0003 implements resolutions for delete_differ as well.\n - Patch0004 includes changes to support last_update_wins resolution\nfor delete_differ.\n\n[1] https://www.postgresql.org/message-id/OS0PR01MB571686E464A325F26CEFCCEF94DD2%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n--\nThanks,\nNisha", "msg_date": "Fri, 5 Jul 2024 12:22:29 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Jul 5, 2024 at 11:58 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Jul 4, 2024 at 5:37 PM Amit Kapila <[email protected]> wrote:\n> > > So, the situation will be the same. We can even\n> > > > decide to spill the data to files if the decision is that we need to\n> > > > wait to avoid network buffer-fill situations. But note that the wait\n> > > > in apply worker has consequences that the subscriber won't be able to\n> > > > confirm the flush position and publisher won't be able to vacuum the\n> > > > dead rows and we won't be remove WAL as well. Last time when we\n> > > > discussed the delay_apply feature, we decided not to proceed because\n> > > > of such issues. This is the reason I proposed a cap on wait time.\n> > >\n> > > Yes, spilling to file or cap on the wait time should help, and as I\n> > > said above maybe a parallel apply worker can also help.\n> > >\n> >\n> > It is not clear to me how a parallel apply worker can help in this\n> > case. Can you elaborate on what you have in mind?\n>\n> If we decide to wait at commit time, and before starting to apply if\n> we already see a remote commit_ts clock is ahead, then if we apply\n> such transactions using the parallel worker, wouldn't it solve the\n> issue of the network buffer congestion? Now the apply worker can move\n> ahead and fetch new transactions from the buffer as our waiting\n> transaction will not block it. I understand that if this transaction\n> is going to wait at commit then any future transaction that we are\n> going to fetch might also going to wait again because if the previous\n> transaction committed before is in the future then the subsequent\n> transaction committed after this must also be in future so eventually\n> that will also go to some another parallel worker and soon we end up\n> consuming all the parallel worker if the clock skew is large. 
So I\n> won't say this will resolve the problem and we would still have to\n> fall back to the spilling to the disk but that's just in the worst\n> case when the clock skew is really huge. In most cases which is due\n> to slight clock drift by the time we apply the medium to large size\n> transaction, the local clock should be able to catch up the remote\n> commit_ts and we might not have to wait in most of the cases.\n>\n\nYeah, this is possible but even if go with the spilling logic at first\nit should work for all cases. If we get some complaints then we can\nexplore executing such transactions by parallel apply workers.\nPersonally, I am of the opinion that clock synchronization should be\nhandled outside the database system via network time protocols like\nNTP. Still, we can have some simple solution to inform users about the\nclock_skew.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 5 Jul 2024 14:23:13 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Jul 5, 2024 at 2:23 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jul 5, 2024 at 11:58 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Thu, Jul 4, 2024 at 5:37 PM Amit Kapila <[email protected]> wrote:\n> > > > So, the situation will be the same. We can even\n> > > > > decide to spill the data to files if the decision is that we need to\n> > > > > wait to avoid network buffer-fill situations. But note that the wait\n> > > > > in apply worker has consequences that the subscriber won't be able to\n> > > > > confirm the flush position and publisher won't be able to vacuum the\n> > > > > dead rows and we won't be remove WAL as well. Last time when we\n> > > > > discussed the delay_apply feature, we decided not to proceed because\n> > > > > of such issues. This is the reason I proposed a cap on wait time.\n> > > >\n> > > > Yes, spilling to file or cap on the wait time should help, and as I\n> > > > said above maybe a parallel apply worker can also help.\n> > > >\n> > >\n> > > It is not clear to me how a parallel apply worker can help in this\n> > > case. Can you elaborate on what you have in mind?\n> >\n> > If we decide to wait at commit time, and before starting to apply if\n> > we already see a remote commit_ts clock is ahead, then if we apply\n> > such transactions using the parallel worker, wouldn't it solve the\n> > issue of the network buffer congestion? Now the apply worker can move\n> > ahead and fetch new transactions from the buffer as our waiting\n> > transaction will not block it. I understand that if this transaction\n> > is going to wait at commit then any future transaction that we are\n> > going to fetch might also going to wait again because if the previous\n> > transaction committed before is in the future then the subsequent\n> > transaction committed after this must also be in future so eventually\n> > that will also go to some another parallel worker and soon we end up\n> > consuming all the parallel worker if the clock skew is large. So I\n> > won't say this will resolve the problem and we would still have to\n> > fall back to the spilling to the disk but that's just in the worst\n> > case when the clock skew is really huge. 
In most cases which is due\n> > to slight clock drift by the time we apply the medium to large size\n> > transaction, the local clock should be able to catch up the remote\n> > commit_ts and we might not have to wait in most of the cases.\n> >\n>\n> Yeah, this is possible but even if go with the spilling logic at first\n> it should work for all cases. If we get some complaints then we can\n> explore executing such transactions by parallel apply workers.\n> Personally, I am of the opinion that clock synchronization should be\n> handled outside the database system via network time protocols like\n> NTP. Still, we can have some simple solution to inform users about the\n> clock_skew.\n\nYeah, that makes sense, in the first version we can have a simple\nsolution and we can further improvise it based on the feedback.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jul 2024 17:11:40 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Jul 1, 2024 at 1:17 PM Ajin Cherian <[email protected]> wrote:\n>\n>\n>\n> On Thu, Jun 27, 2024 at 1:14 PM Nisha Moond <[email protected]> wrote:\n>>\n>> Please find the attached 'patch0003', which implements conflict\n>> resolutions according to the global resolver settings.\n>>\n>> Summary of Conflict Resolutions Implemented in 'patch0003':\n>>\n>> INSERT Conflicts:\n>> ------------------------\n>> 1) Conflict Type: 'insert_exists'\n>>\n>> Supported Resolutions:\n>> a) 'remote_apply': Convert the INSERT to an UPDATE and apply.\n>> b) 'keep_local': Ignore the incoming (conflicting) INSERT and retain\n>> the local tuple.\n>> c) 'error': The apply worker will error out and restart.\n>>\n>\n> Hi Nisha,\n>\n> While testing the patch, when conflict resolution is configured and insert_exists is set to \"remote_apply\", I see this warning in the logs due to a resource not being closed:\n>\n> 2024-07-01 02:52:59.427 EDT [20304] LOG: conflict insert_exists detected on relation \"public.test1\"\n> 2024-07-01 02:52:59.427 EDT [20304] DETAIL: Key already exists. 
Applying resolution method \"remote_apply\"\n> 2024-07-01 02:52:59.427 EDT [20304] CONTEXT: processing remote data for replication origin \"pg_16417\" during message type \"INSERT\" for replication target relation \"public.test1\" in transaction 763, finished at 0/15E7F68\n> 2024-07-01 02:52:59.427 EDT [20304] WARNING: resource was not closed: [138] (rel=base/5/16413, blockNum=0, flags=0x93800000, refcount=1 1)\n> 2024-07-01 02:52:59.427 EDT [20304] CONTEXT: processing remote data for replication origin \"pg_16417\" during message type \"COMMIT\" in transaction 763, finished at 0/15E7F68\n> 2024-07-01 02:52:59.427 EDT [20304] WARNING: resource was not closed: TupleDesc 0x7f8c0439e448 (16402,-1)\n> 2024-07-01 02:52:59.427 EDT [20304] CONTEXT: processing remote data for replication origin \"pg_16417\" during message type \"COMMIT\" in transaction 763, finished at 0/15E7F68\n>\nThank you Ajin for reporting the issue, This is now fixed with the\nv4-0003 patch.\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Fri, 5 Jul 2024 17:12:13 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "Hi,\r\n\r\nI researched about how to detect the resolve update_deleted and thought\r\nabout one idea: which is to maintain the xmin in logical slot to preserve\r\nthe dead row and support latest_timestamp_xmin resolution for\r\nupdate_deleted to maintain data consistency.\r\n\r\nHere are details of the xmin idea and resolution of update_deleted:\r\n\r\n1. how to preserve the dead row so that we can detect update_delete\r\n conflict correctly. (In the following explanation, let's assume there is a\r\n a multimeter setup with node A, B).\r\n\r\nTo preserve the dead row on node A, I think we could maintain the \"xmin\"\r\nin the logical replication slot on Node A to prevent the VACCUM from\r\nremoving the dead row in user table. The walsender that acquires the slot\r\nis responsible to advance the xmin. (Node that I am trying to explore\r\nxmin idea as it could be more efficient than using commit_timestamp, and the\r\nlogic could be simpler as we are already maintaining catalog_xmin in\r\nlogical slot and xmin in physical slot)\r\n\r\n- Strategy for advancing xmin:\r\n\r\nThe xmin can be advanced if a) a transaction (xid:1000) has been flushed\r\nto the remote node (Node B in this case). *AND* b) On Node B, the local\r\ntransactions that happened before applying the remote\r\ntransaction(xid:1000) were also sent and flushed to the Node A.\r\n\r\n- The implementation:\r\n\r\ncondition a) can be achieved with existing codes, the walsender can\r\nadvance the xmin similar to the catalog_xmin.\r\n\r\nFor condition b), we can add a subscription option (say 'feedback_slot').\r\nThe feedback_slot indicates the replication slot that will send changes to\r\nthe origin (On Node B, the slot should be subBA). The apply worker will\r\ncheck the status(confirmed flush lsn) of the 'feedback slot' and send\r\nfeedback to the walsender about the WAL position that has been sent and\r\nflushed via the feedback_slot.\r\n\r\nFor example, on Node B, we specify the replication slot (subBA) that is\r\nsending changes to Node A. The apply worker on Node B will send\r\nfeedback(WAL position that has been sent to the Node A) to Node A\r\nregularly. Then the Node A can use the position to advance the xmin.\r\n(Similar to the hot_standby_feedback).\r\n\r\n2. 
The resolution for update_delete\r\n\r\nThe current design doesn't support 'last_timestamp_win'. But this could be\r\na problem if update_deleted is detected due to some very old dead row.\r\nAssume the update has the latest timestamp, and if we skip the update due\r\nto these very old dead rows, the data would be inconsistent because the\r\nlatest update data is missing.\r\n\r\nThe ideal resolution should compare the timestamp of the UPDATE and the\r\ntimestamp of the transaction that produced these dead rows. If the UPDATE\r\nis newer, the convert the UDPATE to INSERT, otherwise, skip the UPDATE.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Mon, 8 Jul 2024 04:32:03 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Conflict Detection and Resolution" }, { "msg_contents": "On Monday, July 8, 2024 12:32 PM Zhijie Hou (Fujitsu) <[email protected]> wrote:\r\n> \r\n> I researched about how to detect the resolve update_deleted and thought\r\n> about one idea: which is to maintain the xmin in logical slot to preserve\r\n> the dead row and support latest_timestamp_xmin resolution for\r\n> update_deleted to maintain data consistency.\r\n> \r\n> Here are details of the xmin idea and resolution of update_deleted:\r\n> \r\n> 1. how to preserve the dead row so that we can detect update_delete\r\n> conflict correctly. (In the following explanation, let's assume there is a\r\n> a multimeter setup with node A, B).\r\n> \r\n> To preserve the dead row on node A, I think we could maintain the \"xmin\"\r\n> in the logical replication slot on Node A to prevent the VACCUM from\r\n> removing the dead row in user table. The walsender that acquires the slot\r\n> is responsible to advance the xmin. (Node that I am trying to explore\r\n> xmin idea as it could be more efficient than using commit_timestamp, and the\r\n> logic could be simpler as we are already maintaining catalog_xmin in\r\n> logical slot and xmin in physical slot)\r\n> \r\n> - Strategy for advancing xmin:\r\n> \r\n> The xmin can be advanced if a) a transaction (xid:1000) has been flushed\r\n> to the remote node (Node B in this case). *AND* b) On Node B, the local\r\n> transactions that happened before applying the remote\r\n> transaction(xid:1000) were also sent and flushed to the Node A.\r\n> \r\n> - The implementation:\r\n> \r\n> condition a) can be achieved with existing codes, the walsender can\r\n> advance the xmin similar to the catalog_xmin.\r\n> \r\n> For condition b), we can add a subscription option (say 'feedback_slot').\r\n> The feedback_slot indicates the replication slot that will send changes to\r\n> the origin (On Node B, the slot should be subBA). The apply worker will\r\n> check the status(confirmed flush lsn) of the 'feedback slot' and send\r\n> feedback to the walsender about the WAL position that has been sent and\r\n> flushed via the feedback_slot.\r\n\r\nThe above are some initial thoughts of how to preserve the dead row for\r\nupdate_deleted conflict detection.\r\n\r\nAfter thinking more, I have identified a few additional cases that I\r\nmissed to analyze regarding the design. One aspect that needs more\r\nthoughts is the possibility of multiple slots on each node. In this\r\nscenario, the 'feedback_slot' subscription option would need to be\r\nstructured as a list. However, requiring users to specify all the slots\r\nmay not be user-friendly. 
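\r\nFor reference, in the simple two node case from the previous mail, the\r\noption would look roughly like below (only a sketch of the proposal,\r\nnothing here exists yet, and the subscription/publication/slot names\r\nare made up). On Node B, the subscription that applies the changes\r\ncoming from Node A names the slot through which Node B's own changes\r\nare sent back to Node A:\r\n\r\nCREATE SUBSCRIPTION sub_b_from_a\r\n    CONNECTION 'host=nodeA dbname=postgres'\r\n    PUBLICATION pub_a\r\n    WITH (feedback_slot = 'subBA'); -- slot on Node B used by Node A's subscription\r\n\r\nThe list form discussed above would just allow more than one such slot\r\nto be specified when multiple slots are sending changes to the origin.\r\n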
I will explore if this process can be\r\nautomated.\r\n\r\nIn addition, I will think more about the potential impact of re-using the\r\nexisting 'xmin' of the slot which may affect existing logic that relies on\r\n'xmin'.\r\n\r\nI will analyze more and reply about these points.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Mon, 8 Jul 2024 09:41:29 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Jul 5, 2024 at 5:12 PM Nisha Moond <[email protected]> wrote:\n>\n> Thank you Ajin for reporting the issue, This is now fixed with the\n> v4-0003 patch.\n\nPlease find v5 patch-set. Changes are:\n\n1) patch003:\n Added test cases for all resolvers (034_conflict_resolver.pl).\n\n2) Patch004:\na) Emit error while resolving conflict if conflict resolver is default\n'last_update_wins' but track_commit_timetsamp is not enabled.\nb) Emit Warning during create and alter subscription when\n'detect_conflict' is ON but 'track_commit_timetsamp' is not enabled.\nc) Restrict start of pa worker if either max-clock-skew is configured\nor conflict detection and resolution is enabled for a subscription.\nd) Implement clock-skew delay/error when changes are applied from a\nfile (apply_spooled_messages).\ne) Implement clock-skew delay while applying prepared changes (two\nphase txns). The prepare-timestamp to be considered as base for\nclock-skew handling as well as for last_update_win resolver.\n<TODO: This needs to be analyzed and tested further to see if there is\nany side effect of taking prepare-timestamp as base.>\n\nThanks Ajin fo working on 1.\nThanks Nisha for working on 2a,2b.\n\nthanks\nShveta", "msg_date": "Tue, 9 Jul 2024 15:09:35 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jul 9, 2024 at 3:09 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Jul 5, 2024 at 5:12 PM Nisha Moond <[email protected]> wrote:\n> >\n> > Thank you Ajin for reporting the issue, This is now fixed with the\n> > v4-0003 patch.\n>\n> Please find v5 patch-set. Changes are:\n>\n> 1) patch003:\n> Added test cases for all resolvers (034_conflict_resolver.pl).\n>\n> 2) Patch004:\n> a) Emit error while resolving conflict if conflict resolver is default\n> 'last_update_wins' but track_commit_timetsamp is not enabled.\n> b) Emit Warning during create and alter subscription when\n> 'detect_conflict' is ON but 'track_commit_timetsamp' is not enabled.\n> c) Restrict start of pa worker if either max-clock-skew is configured\n> or conflict detection and resolution is enabled for a subscription.\n> d) Implement clock-skew delay/error when changes are applied from a\n> file (apply_spooled_messages).\n> e) Implement clock-skew delay while applying prepared changes (two\n> phase txns). The prepare-timestamp to be considered as base for\n> clock-skew handling as well as for last_update_win resolver.\n> <TODO: This needs to be analyzed and tested further to see if there is\n> any side effect of taking prepare-timestamp as base.>\n>\n> Thanks Ajin fo working on 1.\n> Thanks Nisha for working on 2a,2b.\n>\n\nPlease find v6 patch-set. 
Changes are:\n\n1) patch003:\n1a) Improved log and restructured code around it.\n1b) added test case for delete_differ.\n\n2) patch004:\n2a) Local and remote timestamps were logged incorrectly due to a bug,\ncorrected that.\n2b) Added tests for last_update_wins.\n2c) Added a cap on wait time; introduced a new GUC for this. Apply\nworker will now error out without waiting if the computed wait exceeds\nthis GUC's value.\n2d) Restricted enabling two_phase and detect_conflict together for a\nsubscription. This is because the time based resolvers may result in\ndata divergence for two phase commit transactions if prepare-timestamp\nis used for comparison.\n\nThanks Nisha for working on 1a to 2b.\n\nthanks\nShveta", "msg_date": "Wed, 17 Jul 2024 11:31:12 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Jul 17, 2024 at 4:01 PM shveta malik <[email protected]> wrote:\n\n>\n> Please find v6 patch-set. Changes are:\n>\n>\n> Please find v7 patch-set, the changes are:\n\nPatch 0001 - Reflects v5 of Conflict Detection patch in [1].\n\nPatch 0002:\n\na) Removed global CONFLICT RESOLVER syntax and logic.\nb) Added new syntax for creating CONFLICT RESOLVERs at the subscription\nlevel.\n\nSyntax for CREATE SUBSCRIPTION:\n\nCREATE SUBSCRIPTION <subname> CONNECTION <conninfo> PUBLICATION <pubname>\nCONFLICT RESOLVER\n (conflict_type1 = resolver1, conflict_type2 = resolver2, conflict_type3 =\nresolver3, ...);\nSyntax for ALTER SUBSCRIPTION:\n\nALTER SUBSCRIPTION <subname> CONFLICT RESOLVER\n (conflict_type1 = resolver1, conflict_type2 = resolver2, conflict_type3 =\nresolver3, ...);\n\nPatch 0003 - Supports subscription-level resolvers for conflict resolution.\n\nPatch 0004 - Modified last_update_win related test cases to reflect the new\nsyntax.\n\nPatch 0005 - Dropped for the time being; will rebase and post in the next\nversion.\n\nThanks to Shveta for design discussions and thanks to Nisha for helping in\nrebasing the patch and helping in testing and stabilizing the patch by\nproviding comments off-list.\n\n[1] -\nhttps://www.postgresql.org/message-id/OS0PR01MB57166C2566E00676649CF48B94AC2@OS0PR01MB5716.jpnprd01.prod.outlook.com", "msg_date": "Fri, 26 Jul 2024 14:20:01 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Jul 26, 2024 at 9:50 AM Ajin Cherian <[email protected]> wrote:\n>>\n> Please find v7 patch-set, the changes are:\n>\n\nThanks Ajin for working on this. Please find few comments:\n\n1)\nparse_subscription_conflict_resolvers():\nHere we loop in this function to find the given conflict type in the\nsupported list and error out if conflict-type is not valid. Also we\ncall validate_conflict_type_and_resolver() which again validates\nconflict-type. I would recommend to loop 'stmtresolvers' in parse\nfunction and then read each type and resolver and pass that to\nvalidate_conflict_type_and_resolver(). Avoid double validation.\n\n2)\nSetSubConflictResolver():\nIt works well, but it does not look apt that the 'resolvers' passed to\nthis function by the caller is an array and this function knows the\narray range and traverse from CT_MIN to CT_MAX assuming this array\nmaps directly to ConflictType. I think it would be better to have it\npassed as a list and then SetSubConflictResolver() traverse the list\nwithout knowing the range of it. 
Similar to what we do in\nalter-sub-flow in and around UpdateSubConflictResolvers().\n\n3)\nWhen I execute 'alter subscription ..(detect_conflict=on)' for a\nsubscription which *already* has detect_conflict as ON, it tries to\nreset resolvers to default and ends up in error. It should actually be\nno-op in this particular situation and should not reset resolvers to\ndefault.\n\npostgres=# alter subscription sub1 set (detect_conflict=on);\nWARNING: Using default conflict resolvers\nERROR: duplicate key value violates unique constraint\n\"pg_subscription_conflict_sub_index\"\n\n4)\nDo we need SUBSCRIPTIONCONFLICTOID cache? We are not using it\nanywhere. Shall we remove this and the corresponding index?\n\n5)\nRemoveSubscriptionConflictBySubid().\n--We can remove extra blank line before table_open.\n--We can get rid of curly braces around CatalogTupleDelete() as it is\na single line in loop.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 30 Jul 2024 09:49:41 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Jul 26, 2024 at 9:50 AM Ajin Cherian <[email protected]> wrote:\n\nComment in 0002,\n\n1) I do not see any test case that set a proper conflict type and\nconflict resolver, all tests either give incorrect conflict\ntype/conflict resolver or the conflict resolver is ignored\n\n0003\n2) I was trying to think about this patch, so suppose we consider this\ncase conflict_type-> update_differ resolver->remote_apply, my\nquestion is to confirm whether my understanding is correct. So if\nthis is set and we have 2 nodes and set up a 2-way logical\nreplication, and if a conflict occurs node-1 will take the changes of\nnode-2 and node-2 will take the changes of node-1? Maybe so I think\nto avoid such cases user needs to set the resolver more thoughtfully,\non node-1 it may be set as \"skip\" and on node-1 as \"remote-apply\" so\nin such cases if conflict happens both nodes will have the value from\nnode-1. But maybe it would be more difficult to get a consistent\nvalue if we are setting up a mess replication topology right? Maybe\nthere I think a more advanced timestamp-based option would work better\nIMHO.\n\nI am doing code level review as well and will share my comments soon\non 0003 and 0004\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jul 2024 16:03:42 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jul 30, 2024 at 4:04 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Fri, Jul 26, 2024 at 9:50 AM Ajin Cherian <[email protected]> wrote:\n>\n> Comment in 0002,\n>\n> 1) I do not see any test case that set a proper conflict type and\n> conflict resolver, all tests either give incorrect conflict\n> type/conflict resolver or the conflict resolver is ignored\n>\n> 0003\n> 2) I was trying to think about this patch, so suppose we consider this\n> case conflict_type-> update_differ resolver->remote_apply, my\n> question is to confirm whether my understanding is correct. 
So if\n> this is set and we have 2 nodes and set up a 2-way logical\n> replication, and if a conflict occurs node-1 will take the changes of\n> node-2 and node-2 will take the changes of node-1?\n\nYes, that's right.\n\n> Maybe so I think\n> to avoid such cases user needs to set the resolver more thoughtfully,\n> on node-1 it may be set as \"skip\" and on node-1 as \"remote-apply\" so\n> in such cases if conflict happens both nodes will have the value from\n> node-1. But maybe it would be more difficult to get a consistent\n> value if we are setting up a mess replication topology right? Maybe\n> there I think a more advanced timestamp-based option would work better\n> IMHO.\n\nYes, that's correct. We can get data divergence with resolvers like\n'remote_apply', 'keep_local' etc. If you meant 'mesh' replication\ntopology, then yes, it is difficult to get consistent value there with\nresolvers other than timestamp based. And thus timestamp based\nresolvers are needed and should be the default when implemented.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 30 Jul 2024 16:56:21 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jul 30, 2024 at 4:56 PM shveta malik <[email protected]> wrote:\n>\n> On Tue, Jul 30, 2024 at 4:04 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Fri, Jul 26, 2024 at 9:50 AM Ajin Cherian <[email protected]> wrote:\n> >\n> > Comment in 0002,\n> >\n> > 1) I do not see any test case that set a proper conflict type and\n> > conflict resolver, all tests either give incorrect conflict\n> > type/conflict resolver or the conflict resolver is ignored\n> >\n> > 0003\n> > 2) I was trying to think about this patch, so suppose we consider this\n> > case conflict_type-> update_differ resolver->remote_apply, my\n> > question is to confirm whether my understanding is correct. So if\n> > this is set and we have 2 nodes and set up a 2-way logical\n> > replication, and if a conflict occurs node-1 will take the changes of\n> > node-2 and node-2 will take the changes of node-1?\n>\n> Yes, that's right.\n>\n> > Maybe so I think\n> > to avoid such cases user needs to set the resolver more thoughtfully,\n> > on node-1 it may be set as \"skip\" and on node-1 as \"remote-apply\" so\n> > in such cases if conflict happens both nodes will have the value from\n> > node-1. But maybe it would be more difficult to get a consistent\n> > value if we are setting up a mess replication topology right? Maybe\n> > there I think a more advanced timestamp-based option would work better\n> > IMHO.\n>\n> Yes, that's correct. We can get data divergence with resolvers like\n> 'remote_apply', 'keep_local' etc. If you meant 'mesh' replication\n> topology, then yes, it is difficult to get consistent value there with\n> resolvers other than timestamp based. And thus timestamp based\n> resolvers are needed and should be the default when implemented.\n>\n\nThanks for the clarification.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jul 2024 18:24:59 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Jul 30, 2024 at 2:19 PM shveta malik <[email protected]> wrote:\n\n> On Fri, Jul 26, 2024 at 9:50 AM Ajin Cherian <[email protected]> wrote:\n> >>\n> > Please find v7 patch-set, the changes are:\n> >\n>\n> Thanks Ajin for working on this. 
Please find few comments:\n>\n> 1)\n> parse_subscription_conflict_resolvers():\n> Here we loop in this function to find the given conflict type in the\n> supported list and error out if conflict-type is not valid. Also we\n> call validate_conflict_type_and_resolver() which again validates\n> conflict-type. I would recommend to loop 'stmtresolvers' in parse\n> function and then read each type and resolver and pass that to\n> validate_conflict_type_and_resolver(). Avoid double validation.\n>\n>\nI have modified this as per comment.\n\n\n> 2)\n> SetSubConflictResolver():\n> It works well, but it does not look apt that the 'resolvers' passed to\n> this function by the caller is an array and this function knows the\n> array range and traverse from CT_MIN to CT_MAX assuming this array\n> maps directly to ConflictType. I think it would be better to have it\n> passed as a list and then SetSubConflictResolver() traverse the list\n> without knowing the range of it. Similar to what we do in\n> alter-sub-flow in and around UpdateSubConflictResolvers().\n>\n>\nI have kept the array as it requires that all conflict resolvers be set, if\nnot provided by the user then default needs to be used. However, I have\nmodified SetSubConflictResolver such that it takes in the size of the array\nand does not assume it.\n\n3)\n> When I execute 'alter subscription ..(detect_conflict=on)' for a\n> subscription which *already* has detect_conflict as ON, it tries to\n> reset resolvers to default and ends up in error. It should actually be\n> no-op in this particular situation and should not reset resolvers to\n> default.\n>\n> postgres=# alter subscription sub1 set (detect_conflict=on);\n> WARNING: Using default conflict resolvers\n> ERROR: duplicate key value violates unique constraint\n> \"pg_subscription_conflict_sub_index\"\n>\n>\nfixed\n\n\n> 4)\n> Do we need SUBSCRIPTIONCONFLICTOID cache? We are not using it\n> anywhere. Shall we remove this and the corresponding index?\n>\n>\nWe are using the index but not the cache, so removing the cache.\n\n\n> 5)\n> RemoveSubscriptionConflictBySubid().\n> --We can remove extra blank line before table_open.\n> --We can get rid of curly braces around CatalogTupleDelete() as it is\n> a single line in loop.\n>\n>\nfixed.\n\nOn Tue, Jul 30, 2024 at 8:34 PM Dilip Kumar <[email protected]> wrote:\n\n> On Fri, Jul 26, 2024 at 9:50 AM Ajin Cherian <[email protected]> wrote:\n>\n> Comment in 0002,\n>\n> 1) I do not see any test case that set a proper conflict type and\n> conflict resolver, all tests either give incorrect conflict\n> type/conflict resolver or the conflict resolver is ignored\n>\n\nfixed.\n\nI've also fixed a cfbot error due to patch 0001. Rebase of table resolver\npatch is still pending, will try and target that in the next patch-set.\n\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Wed, 31 Jul 2024 21:24:06 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "The patches have been rebased on the latest pgHead following the merge\nof the conflict detection patch [1]. The detect_conflict option has\nbeen removed, and conflict detection is now enabled by default. This\nchange required the following updates in resolver patches:\npatch-0001:\n - Removed dependency on the detect_conflict option. 
Now, default\nconflict resolvers are set on CREATE SUBSCRIPTION if no values are\nprovided.\n - To keep the behavior unchanged, the default resolvers are now set as -\n insert_exists = error\n update_exists = error\n update_differ = apply_remote\n update_missing = skip\n delete_missing = skip\n delete_differ = apply_remote\n - Added documentation for conflict resolvers.\n\npatch-0002:\n- Removed dependency on the detect_conflict option.\n- Updated test cases in 034_conflict_resolver.pl to reflect new\ndefault resolvers and the removal of the detect_conflict option.\n\npatch-0003:\n- Implemented resolver for the update_exists conflict type. Supported\nresolvers are: apply_remote, keep_local, error.\n\n*The timestamp-based resolution patch is not yet rebased due to its\ndependency on the detect_conflict option for handling two-phase and\nparallel apply-worker workflows. The behavior needs to be reassessed.\n\nTo Do:\npatch-0001: Add support for pgdump.\npatch-0002 and 0003:\n - Optimize by avoiding the pre-scan for conflicts in insert_exists\nand update_exists when the resolver favors error or skips applying\nremote changes.\n - Improve the current recursive method used for multiple key conflict\nresolution in update_exists.\n\nThanks Ajin for working on the docs.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BKb4i64cRD7MCUGkABNzBgQkP8vr5t01N%2BL_8GtwPgcA%40mail.gmail.com\n\n--\nThanks,\nNisha", "msg_date": "Wed, 21 Aug 2024 16:08:00 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Aug 21, 2024 at 4:08 PM Nisha Moond <[email protected]> wrote:\n>\n> The patches have been rebased on the latest pgHead following the merge\n> of the conflict detection patch [1].\n\nThanks for working on patches.\n\nSummarizing the issues which need some suggestions/thoughts.\n\n1)\nFor subscription based resolvers, currently the syntax implemented is:\n\n1a)\nCREATE SUBSCRIPTION <subname>\nCONNECTION <conninfo> PUBLICATION <pubname>\nCONFLICT RESOLVER\n (conflict_type1 = resolver1, conflict_type2 = resolver2,\nconflict_type3 = resolver3,...);\n\n1b)\nALTER SUBSCRIPTION <subname> CONFLICT RESOLVER\n (conflict_type1 = resolver1, conflict_type2 = resolver2,\nconflict_type3 = resolver3,...);\n\nEarlier the syntax suggested in [1] was:\nCREATE SUBSCRIPTION <subname> CONNECTION <conninfo> PUBLICATION <pubname>\nCONFLICT RESOLVER 'conflict_resolver1' FOR 'conflict_type1',\nCONFLICT RESOLVER 'conflict_resolver2' FOR 'conflict_type2';\n\nI think the currently implemented syntax is good as it has less\nrepetition, unless others think otherwise.\n\n~~\n\n2)\nFor subscription based resolvers, do we need a RESET command to reset\nresolvers to default? Any one of below or both?\n\n2a) reset all at once:\n ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVERS\n\n2b) reset one at a time:\n ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVER for 'conflict_type';\n\nThe issue I see here is, to implement 1a and 1b, we have introduced\nthe 'RESOLVER' keyword. If we want to implement 2a, we will have to\nintroduce the 'RESOLVERS' keyword as well. But we can come up with\nsome alternative syntax if we plan to implement these. Thoughts?\n\n~~\n\n3) Regarding update_exists:\n\n3a)\nCurrently update_exists resolver patch is kept separate. The reason\nbeing, it performs resolution which will need deletion of multiple\nrows. It will be good to discuss if we want to target this in the\nfirst draft. 
Please see the example:\n\ncreate table tab (a int primary key, b int unique, c int unique);\n\nPub: insert into tab values (1,1,1);\nSub:\ninsert into tab values (2,20,30);\ninsert into tab values (3,40,50);\ninsert into tab values (4,60,70);\n\nPub: update tab set a=2,b=40,c=70 where a=1;\n\nThe above 'update' on pub will result in 'update_exists' on sub and if\nresolution is in favour of 'apply', then it will conflict with all the\nthree local rows of subscriber due to unique constraint present on all\nthree columns. Thus in order to resolve the conflict, it will have to\ndelete these 3 rows on sub:\n\n2,20,30\n3,40,50\n4,60,70\nand then update 1,1,1 to 2,40,70.\n\nJust need opinion on if we shall target this in the initial draft.\n\n3b)\nIf we plan to implement this, we need to work on optimal design where\nwe can find all the conflicting rows at once and delete those.\nCurrently the implementation has been done using recursion i.e. find\none conflicting row, then delete it and then next and so on i.e. we\ncall apply_handle_update_internal() recursively. On initial code\nreview, I feel it is doable to scan all indexes at once and get\nconflicting-tuple-ids in one go and get rid of recursion. It can be\nattempted once we decide on 3a.\n\n~~\n\n4)\nNow for insert_exists and update_exists, we are doing a pre-scan of\nall unique indexes to find conflict. Also there is post-scan to figure\nout if the conflicting row is inserted meanwhile. This needs to be\nreviewed for optimization. We need to avoid pre-scan wherever\npossible. I think the only case for which it can be avoided is\n'ERROR'. For the cases where resolver is in favor of remote-apply, we\nneed to check conflict beforehand to avoid rollback of already\ninserted data. And for the case where resolver is in favor of skipping\nthe change, then too we should know beforehand about the conflict to\navoid heap-insertion and rollback. Thoughts?\n\n~~\n\n5)\nCurrently we only capture update_missing conflict i.e. we are not\ndistinguishing between the missing row and the deleted row. We had\ndiscussed this in the past a couple of times. If we plan to target it\nin draft 1, I can dig up all old emails and resume discussion on this.\n\n~~\n\n6)\nTable-level resolves. There was a suggestion earlier to implement\ntable-level resolvers. The patch has been implemented to some extent,\nit can be completed and posted when we are done reviewing subscription\nlevel resolvers.\n\n~~\n\n[1]: https://www.postgresql.org/message-id/CAA4eK1LhD%3DC5UwDeKxC_5jK4_ADtM7g%2BMoFW9qhziSxHbVVfeQ%40mail.gmail.com\n\nFor clock-skew and timestamp based resolution, if needed, I will post\nanother email for the design items where suggestions are needed.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 22 Aug 2024 15:44:50 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Aug 22, 2024 at 3:44 PM shveta malik <[email protected]> wrote:\n>\n>\n> For clock-skew and timestamp based resolution, if needed, I will post\n> another email for the design items where suggestions are needed.\n>\n\nPlease find issues which need some thoughts and approval for\ntime-based resolution and clock-skew.\n\n1)\nTime based conflict resolution and two phase transactions:\n\nTime based conflict resolution (last_update_wins) is the one\nresolution which will not result in data-divergence considering\nclock-skew is taken care of. 
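\nAs a concrete reference, with the syntax in the current patch-set the\nconfiguration would look something like below (the subscription name is\njust an example, and track_commit_timestamp must be enabled on the\nsubscriber for the timestamp based resolver to work):\n\nALTER SUBSCRIPTION sub1 CONFLICT RESOLVER\n  (update_differ = 'last_update_wins', delete_differ = 'last_update_wins');\n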
But when it comes to two-phase\ntransactions, it might not be the case. For two-phase transaction, we\ndo not have commit timestamp when the changes are being applied. Thus\nfor time-based comparison, initially it was decided to user prepare\ntimestamp but it may result in data-divergence. Please see the\nexample at [1].\n\nExample at [1] is a tricky situation, and thus in the initial draft,\nwe decided to restrict usage of 2pc and CDR together. The plan is:\n\na) During Create subscription, if the user has given last_update_wins\nresolver for any conflict_type and 'two_phase' is also enabled, we\nERROR out.\nb) During Alter subscription, if the user tries to update resolver to\n'last_update_wins' but 'two_phase' is enabled, we error out.\n\nAnother solution could be to save both prepare_ts and commit_ts. And\nwhen any txn comes for conflict resolution, we first check if\nprepare_ts is available, use that else use commit_ts. Availability of\nprepare_ts would indicate it was a prepared txn and thus even if it is\ncommitted, we should use prepare_ts for comparison for consistency.\nThis will have some overhead of storing prepare_ts along with\ncommit_ts. But if the number of prepared txns are reasonably small,\nthis overhead should be less.\n\nWe currently plan to go with restricting 2pc and last_update_wins\ntogether, unless others have different opinions.\n\n~~\n\n2)\nparallel apply worker and conflict-resolution:\nAs discussed in [2] (see last paragraph in [2]), for streaming of\nin-progress transactions by parallel worker, we do not have\ncommit-timestamp with each change and thus it makes sense to disable\nparallel apply worker with CDR. The plan is to not start parallel\napply worker if 'last_update_wins' is configured for any\nconflict_type.\n\n ~~\n\n3)\nparallel apply worker and clock skew management:\nRegarding clock-skew management as discussed in [3], we will wait for\nthe local clock to come within tolerable range during 'begin' rather\nthan before 'commit'. And this wait needs commit-timestamp in the\nbeginning, thus we plan to restrict starting pa-worker even when\nclock-skew related GUCs are configured.\n\nEarlier we had restricted both 2pc and parallel worker worker start\nwhen detect_conflict was enabled, but now since detect_conflict\nparameter is removed, we will change the implementation to restrict\nall 3 above cases when last_update_wins is configured. When the\nchanges are done, we will post the patch.\n\n~~\n\n4)\n<not related to timestamp and clock skew>\nEarlier when 'detect_conflict' was enabled, we were giving WARNING if\n'track_commit_timestamp' was not enabled. This was during CREATE and\nALTER subscription. Now with this parameter removed, this WARNING has\nalso been removed. 
But I think we need to bring back this WARNING.\nCurrently default resolvers set may work without\n'track_commit_timestamp' but when user gives CONFLICT RESOLVER in\ncreate-sub or alter-sub explicitly making them configured to\nnon-default values (or say any values, does not matter if few are\ndefaults), we may still emit this warning to alert user:\n\n2024-07-26 09:14:03.152 IST [195415] WARNING: conflict detection\ncould be incomplete due to disabled track_commit_timestamp\n2024-07-26 09:14:03.152 IST [195415] DETAIL: Conflicts update_differ\nand delete_differ cannot be detected, and the origin and commit\ntimestamp for the local row will not be logged.\n\nThoughts?\n\nIf we emit this WARNING during each resolution, then it may flood our\nlog files, thus it seems better to emit it during create or alter\nsubscription instead of during resolution.\n\n\n~~\n\n[1]:\nExample of 2pc inconsistency:\n---------------------------------------------------------\nTwo nodes, A and B, are subscribed to each other and have identical\ndata. The last_update_wins strategy is configured.\n\nBoth contain the data: '1, x, node'.\n\nTimeline of Events:\n9:00 AM on Node A: A transaction (txn1) is prepared to update the row\nto '1, x, nodeAAA'. We'll refer to this as change1 on Node A.\n9:01 AM on Node B: An update occurs for the row, changing it to '1, x,\nnodeBBB'. This update is then sent to Node A. We'll call this change2\non Node B.\n\nAt 9:02 AM:\n\n--Node A: Still holds '1, x, node' because txn1 is not yet committed.\n--Node B: Holds '1, x, nodeBBB'.\n--Node B receives the prepared transaction from Node A at 9:02 AM and\nraises an update_differ conflict.\n--Since the local change occurred at 9:01 AM, which is later than the\n9:00 AM prepare-timestamp from Node A, Node B retains its local\nchange.\n\nAt 9:05 AM:\n--Node A commits the prepared txn1.\n--The apply worker on Node A has been waiting to apply the changes\nfrom Node B because the tuple was locked by txn1.\n--Once the commit occurs, the apply worker proceeds with the update from Node B.\n--When update_differ is triggered, since the 9:05 AM commit-timestamp\nfrom Node A is later than the 9:01 AM commit-timestamp from Node B,\nNode A’s update wins.\n\nFinal Data on Nodes:\nNode A: '1, x, nodeAAA'\nNode B: '1, x, nodeBBB'\n\nDespite the last_update_wins resolution, the nodes end up with different data.\n\nThe data divergence happened because on node B, we used change1's\nprepare_ts (9.00) for comparison; while on node A, we used change1's\ncommit_ts(9.05) for comparison.\n---------------------------------------------------------\n\n[2]: https://www.postgresql.org/message-id/CAFiTN-sf23K%3DsRsnxw-BKNJqg5P6JXcqXBBkx%3DEULX8QGSQYaw%40mail.gmail.com\n[3]: https://www.postgresql.org/message-id/CAA4eK1%2BhdMmwEEiMb4z6x7JgQbw1jU2XyP1U7dNObyUe4JQQWg%40mail.gmail.com\n\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 23 Aug 2024 10:38:51 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Aug 22, 2024 at 8:15 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Aug 21, 2024 at 4:08 PM Nisha Moond <[email protected]> wrote:\n> >\n> > The patches have been rebased on the latest pgHead following the merge\n> > of the conflict detection patch [1].\n>\n> Thanks for working on patches.\n>\n> Summarizing the issues which need some suggestions/thoughts.\n>\n> 1)\n> For subscription based resolvers, currently the syntax implemented is:\n>\n> 1a)\n> CREATE 
SUBSCRIPTION <subname>\n> CONNECTION <conninfo> PUBLICATION <pubname>\n> CONFLICT RESOLVER\n> (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> conflict_type3 = resolver3,...);\n>\n> 1b)\n> ALTER SUBSCRIPTION <subname> CONFLICT RESOLVER\n> (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> conflict_type3 = resolver3,...);\n>\n> Earlier the syntax suggested in [1] was:\n> CREATE SUBSCRIPTION <subname> CONNECTION <conninfo> PUBLICATION <pubname>\n> CONFLICT RESOLVER 'conflict_resolver1' FOR 'conflict_type1',\n> CONFLICT RESOLVER 'conflict_resolver2' FOR 'conflict_type2';\n>\n> I think the currently implemented syntax is good as it has less\n> repetition, unless others think otherwise.\n>\n> ~~\n>\n> 2)\n> For subscription based resolvers, do we need a RESET command to reset\n> resolvers to default? Any one of below or both?\n>\n> 2a) reset all at once:\n> ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVERS\n>\n> 2b) reset one at a time:\n> ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVER for 'conflict_type';\n>\n> The issue I see here is, to implement 1a and 1b, we have introduced\n> the 'RESOLVER' keyword. If we want to implement 2a, we will have to\n> introduce the 'RESOLVERS' keyword as well. But we can come up with\n> some alternative syntax if we plan to implement these. Thoughts?\n>\n\nHi Shveta,\n\nI felt it would be better to keep the syntax similar to the existing\nINSERT ... ON CONFLICT [1].\n\nI'd suggest a syntax like this:\n\n... ON CONFLICT ['conflict_type'] DO { 'conflict_action' | DEFAULT }\n\n~~~\n\ne.g.\n\nTo configure conflict resolvers for the SUBSCRIPTION:\n\nCREATE SUBSCRIPTION subname CONNECTION coninfo PUBLICATION pubname\nON CONFLICT 'conflict_type1' DO 'conflict_action1',\nON CONFLICT 'conflict_type2' DO 'conflict_action2';\n\nLikewise, for ALTER:\n\nALTER SUBSCRIPTION <subname>\nON CONFLICT 'conflict_type1' DO 'conflict_action1',\nON CONFLICT 'conflict_type2' DO 'conflict_action2';\n\nTo RESET all at once:\n\nALTER SUBSCRIPTION <subname>\nON CONFLICT DO DEFAULT;\n\nAnd, to RESET one at a time:\n\nALTER SUBSCRIPTION <subname>\nON CONFLICT 'conflict_type1' DO DEFAULT;\n\n~~~\n\nAlthough your list format \"('conflict_type1' = 'conflict_action1',\n'conflict_type2' = 'conflict_action2')\" is clear and without\nrepetition, I predict this terse style could end up being troublesome\nbecause it does not offer much flexibility for whatever the future\nmight hold for CDR.\n\ne.g. ability to handle the conflict with a user-defined resolver\ne.g. ability to handle the conflict conditionally (e.g. with a WHERE clause...)\ne.g. ability to handle all conflicts with a common resolver\netc.\n\n~~~~\n\nAdvantages of my suggestion:\n- Close to existing SQL syntax\n- No loss of clarity by removing the word \"RESOLVER\"\n- No requirement for new keyword/s\n- The commands now read more like English\n- Offers more flexibility for any unknown future requirements\n- The setup (via create subscription) and the alter/reset all look the same.\n\n======\n[1] https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 26 Aug 2024 11:58:04 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Aug 21, 2024 at 4:08 PM Nisha Moond <[email protected]> wrote:\n>\n> The patches have been rebased on the latest pgHead following the merge\n> of the conflict detection patch [1]. 
The detect_conflict option has\n> been removed, and conflict detection is now enabled by default. This\n> change required the following updates in resolver patches:\n> patch-0001:\n> - Removed dependency on the detect_conflict option. Now, default\n> conflict resolvers are set on CREATE SUBSCRIPTION if no values are\n> provided.\n> - To keep the behavior unchanged, the default resolvers are now set as -\n> insert_exists = error\n> update_exists = error\n> update_differ = apply_remote\n> update_missing = skip\n> delete_missing = skip\n> delete_differ = apply_remote\n> - Added documentation for conflict resolvers.\n>\n> patch-0002:\n> - Removed dependency on the detect_conflict option.\n> - Updated test cases in 034_conflict_resolver.pl to reflect new\n> default resolvers and the removal of the detect_conflict option.\n>\n> patch-0003:\n> - Implemented resolver for the update_exists conflict type. Supported\n> resolvers are: apply_remote, keep_local, error.\n>\n\nThanks Nisha for the patches, I was running some tests on\nupdate_exists and found this case wherein it misses to LOG one\nconflict out of 3.\n\ncreate table tab (a int primary key, b int unique, c int unique);\nPub: insert into tab values (1,1,1);\n\nSub:\ninsert into tab values (2,20,30);\ninsert into tab values (3,40,50);\ninsert into tab values (4,60,70);\n\nPub: update tab set a=2,b=40,c=70 where a=1;\n\nHere it logs update_exists conflict and the resolution for Key\n(b)=(40) and Key (c)=(70) but misses to LOG first one which is with\nKey (a)=(2).\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 26 Aug 2024 09:05:05 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Aug 26, 2024 at 7:28 AM Peter Smith <[email protected]> wrote:\n>\n> On Thu, Aug 22, 2024 at 8:15 PM shveta malik <[email protected]> wrote:\n> >\n> > On Wed, Aug 21, 2024 at 4:08 PM Nisha Moond <[email protected]> wrote:\n> > >\n> > > The patches have been rebased on the latest pgHead following the merge\n> > > of the conflict detection patch [1].\n> >\n> > Thanks for working on patches.\n> >\n> > Summarizing the issues which need some suggestions/thoughts.\n> >\n> > 1)\n> > For subscription based resolvers, currently the syntax implemented is:\n> >\n> > 1a)\n> > CREATE SUBSCRIPTION <subname>\n> > CONNECTION <conninfo> PUBLICATION <pubname>\n> > CONFLICT RESOLVER\n> > (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> > conflict_type3 = resolver3,...);\n> >\n> > 1b)\n> > ALTER SUBSCRIPTION <subname> CONFLICT RESOLVER\n> > (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> > conflict_type3 = resolver3,...);\n> >\n> > Earlier the syntax suggested in [1] was:\n> > CREATE SUBSCRIPTION <subname> CONNECTION <conninfo> PUBLICATION <pubname>\n> > CONFLICT RESOLVER 'conflict_resolver1' FOR 'conflict_type1',\n> > CONFLICT RESOLVER 'conflict_resolver2' FOR 'conflict_type2';\n> >\n> > I think the currently implemented syntax is good as it has less\n> > repetition, unless others think otherwise.\n> >\n> > ~~\n> >\n> > 2)\n> > For subscription based resolvers, do we need a RESET command to reset\n> > resolvers to default? Any one of below or both?\n> >\n> > 2a) reset all at once:\n> > ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVERS\n> >\n> > 2b) reset one at a time:\n> > ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVER for 'conflict_type';\n> >\n> > The issue I see here is, to implement 1a and 1b, we have introduced\n> > the 'RESOLVER' keyword. 
If we want to implement 2a, we will have to\n> > introduce the 'RESOLVERS' keyword as well. But we can come up with\n> > some alternative syntax if we plan to implement these. Thoughts?\n> >\n>\n> Hi Shveta,\n>\n> I felt it would be better to keep the syntax similar to the existing\n> INSERT ... ON CONFLICT [1].\n>\n> I'd suggest a syntax like this:\n>\n> ... ON CONFLICT ['conflict_type'] DO { 'conflict_action' | DEFAULT }\n>\n> ~~~\n>\n> e.g.\n>\n> To configure conflict resolvers for the SUBSCRIPTION:\n>\n> CREATE SUBSCRIPTION subname CONNECTION coninfo PUBLICATION pubname\n> ON CONFLICT 'conflict_type1' DO 'conflict_action1',\n> ON CONFLICT 'conflict_type2' DO 'conflict_action2';\n>\n> Likewise, for ALTER:\n>\n> ALTER SUBSCRIPTION <subname>\n> ON CONFLICT 'conflict_type1' DO 'conflict_action1',\n> ON CONFLICT 'conflict_type2' DO 'conflict_action2';\n>\n> To RESET all at once:\n>\n> ALTER SUBSCRIPTION <subname>\n> ON CONFLICT DO DEFAULT;\n>\n> And, to RESET one at a time:\n>\n> ALTER SUBSCRIPTION <subname>\n> ON CONFLICT 'conflict_type1' DO DEFAULT;\n>\n\nThanks for the suggestion. The idea looks good to me. But we need to\nonce check the complexity involved in its implementation in gram.y.\nInitial analysis says that it will need something like 'action' which\nwe have for ALTER TABLE command ([1]) to have these multiple\nsubcommands implemented. For INSERT case, it is a just a subclause but\nfor create/alter sub we hill have it multiple times under one command.\nLet us review.\n\nAlso I would like to know opinion of others on this.\n\n[1]: https://www.postgresql.org/docs/current/sql-altertable.html\n\n>\n> Although your list format \"('conflict_type1' = 'conflict_action1',\n> 'conflict_type2' = 'conflict_action2')\" is clear and without\n> repetition, I predict this terse style could end up being troublesome\n> because it does not offer much flexibility for whatever the future\n> might hold for CDR.\n>\n> e.g. ability to handle the conflict with a user-defined resolver\n> e.g. ability to handle the conflict conditionally (e.g. with a WHERE clause...)\n> e.g. 
ability to handle all conflicts with a common resolver\n> etc.\n>\n> ~~~~\n>\n> Advantages of my suggestion:\n> - Close to existing SQL syntax\n> - No loss of clarity by removing the word \"RESOLVER\"\n> - No requirement for new keyword/s\n> - The commands now read more like English\n> - Offers more flexibility for any unknown future requirements\n> - The setup (via create subscription) and the alter/reset all look the same.\n>\n> ======\n> [1] https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n\n\n", "msg_date": "Mon, 26 Aug 2024 09:42:10 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Aug 22, 2024 at 3:45 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Aug 21, 2024 at 4:08 PM Nisha Moond <[email protected]> wrote:\n> >\n> > The patches have been rebased on the latest pgHead following the merge\n> > of the conflict detection patch [1].\n>\n> Thanks for working on patches.\n>\n> Summarizing the issues which need some suggestions/thoughts.\n>\n> 1)\n> For subscription based resolvers, currently the syntax implemented is:\n>\n> 1a)\n> CREATE SUBSCRIPTION <subname>\n> CONNECTION <conninfo> PUBLICATION <pubname>\n> CONFLICT RESOLVER\n> (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> conflict_type3 = resolver3,...);\n>\n> 1b)\n> ALTER SUBSCRIPTION <subname> CONFLICT RESOLVER\n> (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> conflict_type3 = resolver3,...);\n>\n> Earlier the syntax suggested in [1] was:\n> CREATE SUBSCRIPTION <subname> CONNECTION <conninfo> PUBLICATION <pubname>\n> CONFLICT RESOLVER 'conflict_resolver1' FOR 'conflict_type1',\n> CONFLICT RESOLVER 'conflict_resolver2' FOR 'conflict_type2';\n>\n> I think the currently implemented syntax is good as it has less\n> repetition, unless others think otherwise.\n>\n> ~~\n>\n> 2)\n> For subscription based resolvers, do we need a RESET command to reset\n> resolvers to default? Any one of below or both?\n>\n> 2a) reset all at once:\n> ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVERS\n>\n> 2b) reset one at a time:\n> ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVER for 'conflict_type';\n>\n> The issue I see here is, to implement 1a and 1b, we have introduced\n> the 'RESOLVER' keyword. If we want to implement 2a, we will have to\n> introduce the 'RESOLVERS' keyword as well. But we can come up with\n> some alternative syntax if we plan to implement these. Thoughts?\n>\n> ~~\n>\n> 3) Regarding update_exists:\n>\n> 3a)\n> Currently update_exists resolver patch is kept separate. The reason\n> being, it performs resolution which will need deletion of multiple\n> rows. It will be good to discuss if we want to target this in the\n> first draft. Please see the example:\n>\n> create table tab (a int primary key, b int unique, c int unique);\n>\n> Pub: insert into tab values (1,1,1);\n> Sub:\n> insert into tab values (2,20,30);\n> insert into tab values (3,40,50);\n> insert into tab values (4,60,70);\n>\n> Pub: update tab set a=2,b=40,c=70 where a=1;\n>\n> The above 'update' on pub will result in 'update_exists' on sub and if\n> resolution is in favour of 'apply', then it will conflict with all the\n> three local rows of subscriber due to unique constraint present on all\n> three columns. 
Thus in order to resolve the conflict, it will have to\n> delete these 3 rows on sub:\n>\n> 2,20,30\n> 3,40,50\n> 4,60,70\n> and then update 1,1,1 to 2,40,70.\n>\n> Just need opinion on if we shall target this in the initial draft.\n>\n> 3b)\n> If we plan to implement this, we need to work on optimal design where\n> we can find all the conflicting rows at once and delete those.\n> Currently the implementation has been done using recursion i.e. find\n> one conflicting row, then delete it and then next and so on i.e. we\n> call apply_handle_update_internal() recursively. On initial code\n> review, I feel it is doable to scan all indexes at once and get\n> conflicting-tuple-ids in one go and get rid of recursion. It can be\n> attempted once we decide on 3a.\n>\n> ~~\n>\n> 4)\n> Now for insert_exists and update_exists, we are doing a pre-scan of\n> all unique indexes to find conflict. Also there is post-scan to figure\n> out if the conflicting row is inserted meanwhile. This needs to be\n> reviewed for optimization. We need to avoid pre-scan wherever\n> possible. I think the only case for which it can be avoided is\n> 'ERROR'. For the cases where resolver is in favor of remote-apply, we\n> need to check conflict beforehand to avoid rollback of already\n> inserted data. And for the case where resolver is in favor of skipping\n> the change, then too we should know beforehand about the conflict to\n> avoid heap-insertion and rollback. Thoughts?\n>\n+1 to the idea of optimization, but it seems that when the resolver is\nset to ERROR, skipping the pre-scan only optimizes the case where no\nconflict exists.\nIf a conflict is found, the apply-worker will error out during the\npre-scan, and no post-scan occurs, so there's no opportunity for\noptimization.\nHowever, if no conflict is present, we currently do both pre-scan and\npost-scan. Skipping the pre-scan in this scenario could be a\nworthwhile optimization, even if it only benefits the no-conflict\ncase.\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Mon, 26 Aug 2024 10:23:00 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Aug 22, 2024 at 3:45 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Aug 21, 2024 at 4:08 PM Nisha Moond <[email protected]> wrote:\n> >\n> > The patches have been rebased on the latest pgHead following the merge\n> > of the conflict detection patch [1].\n>\n> Thanks for working on patches.\n>\n> Summarizing the issues which need some suggestions/thoughts.\n>\n> 1)\n> For subscription based resolvers, currently the syntax implemented is:\n>\n> 1a)\n> CREATE SUBSCRIPTION <subname>\n> CONNECTION <conninfo> PUBLICATION <pubname>\n> CONFLICT RESOLVER\n> (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> conflict_type3 = resolver3,...);\n>\n> 1b)\n> ALTER SUBSCRIPTION <subname> CONFLICT RESOLVER\n> (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> conflict_type3 = resolver3,...);\n>\n> Earlier the syntax suggested in [1] was:\n> CREATE SUBSCRIPTION <subname> CONNECTION <conninfo> PUBLICATION <pubname>\n> CONFLICT RESOLVER 'conflict_resolver1' FOR 'conflict_type1',\n> CONFLICT RESOLVER 'conflict_resolver2' FOR 'conflict_type2';\n>\n> I think the currently implemented syntax is good as it has less\n> repetition, unless others think otherwise.\n>\n> ~~\n>\n> 2)\n> For subscription based resolvers, do we need a RESET command to reset\n> resolvers to default? 
Any one of below or both?\n>\n> 2a) reset all at once:\n> ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVERS\n>\n> 2b) reset one at a time:\n> ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVER for 'conflict_type';\n>\n> The issue I see here is, to implement 1a and 1b, we have introduced\n> the 'RESOLVER' keyword. If we want to implement 2a, we will have to\n> introduce the 'RESOLVERS' keyword as well. But we can come up with\n> some alternative syntax if we plan to implement these. Thoughts?\n>\n\nIt makes sense to have a RESET on the lines of (a) and (b). At this\nstage, we should do minimal in extending the syntax. How about RESET\nCONFLICT RESOLVER ALL for (a)?\n\n> ~~\n>\n> 3) Regarding update_exists:\n>\n> 3a)\n> Currently update_exists resolver patch is kept separate. The reason\n> being, it performs resolution which will need deletion of multiple\n> rows. It will be good to discuss if we want to target this in the\n> first draft. Please see the example:\n>\n> create table tab (a int primary key, b int unique, c int unique);\n>\n> Pub: insert into tab values (1,1,1);\n> Sub:\n> insert into tab values (2,20,30);\n> insert into tab values (3,40,50);\n> insert into tab values (4,60,70);\n>\n> Pub: update tab set a=2,b=40,c=70 where a=1;\n>\n> The above 'update' on pub will result in 'update_exists' on sub and if\n> resolution is in favour of 'apply', then it will conflict with all the\n> three local rows of subscriber due to unique constraint present on all\n> three columns. Thus in order to resolve the conflict, it will have to\n> delete these 3 rows on sub:\n>\n> 2,20,30\n> 3,40,50\n> 4,60,70\n> and then update 1,1,1 to 2,40,70.\n>\n> Just need opinion on if we shall target this in the initial draft.\n>\n\nThis case looks a bit complicated. It seems there is no other\nalternative than to delete the multiple rows. It is better to create a\nseparate top-up patch for this and we can discuss in detail about this\nonce the basic patch is in better shape.\n\n> 3b)\n> If we plan to implement this, we need to work on optimal design where\n> we can find all the conflicting rows at once and delete those.\n> Currently the implementation has been done using recursion i.e. find\n> one conflicting row, then delete it and then next and so on i.e. we\n> call apply_handle_update_internal() recursively. On initial code\n> review, I feel it is doable to scan all indexes at once and get\n> conflicting-tuple-ids in one go and get rid of recursion. It can be\n> attempted once we decide on 3a.\n>\n\nI suggest following the simplest strategy (even if that means calling\nthe update function recursively) by adding comments on the optimal\nstrategy. We can optimize it later as well.\n\n> ~~\n>\n> 4)\n> Now for insert_exists and update_exists, we are doing a pre-scan of\n> all unique indexes to find conflict. Also there is post-scan to figure\n> out if the conflicting row is inserted meanwhile. This needs to be\n> reviewed for optimization. We need to avoid pre-scan wherever\n> possible. I think the only case for which it can be avoided is\n> 'ERROR'. For the cases where resolver is in favor of remote-apply, we\n> need to check conflict beforehand to avoid rollback of already\n> inserted data. And for the case where resolver is in favor of skipping\n> the change, then too we should know beforehand about the conflict to\n> avoid heap-insertion and rollback. Thoughts?\n>\n\nIt makes sense to skip the pre-scan wherever possible. 
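In other words, the gate could be as simple as the sketch below (purely illustrative, I am inventing the enum and function names here, this is not the patch code):\n\n/* sketch only: decide whether the unique-index pre-scan is required */\nstatic bool\nneed_conflict_prescan(ConflictResolver resolver)\n{\n    /*\n     * Only ERROR can rely on the insert itself (plus the existing\n     * post-scan) to surface the conflict.  For resolvers that apply the\n     * remote change or skip it, we must locate the conflicting tuple\n     * before touching the heap, to avoid an insert followed by a\n     * rollback.\n     */\n    return resolver != CR_ERROR;\n}\n\nThe caller would run the pre-scan of the unique indexes only when this returns true; with ERROR configured the extra scan is then saved at least in the common no-conflict case.\n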
Your analysis\nsounds reasonable to me.\n\n> ~~\n>\n> 5)\n> Currently we only capture update_missing conflict i.e. we are not\n> distinguishing between the missing row and the deleted row. We had\n> discussed this in the past a couple of times. If we plan to target it\n> in draft 1, I can dig up all old emails and resume discussion on this.\n>\n\nThis is a separate conflict detection project in itself. I am thinking\nabout the solution to this problem. We will talk about this in a\nseparate thread.\n\n> ~~\n>\n> 6)\n> Table-level resolves. There was a suggestion earlier to implement\n> table-level resolvers. The patch has been implemented to some extent,\n> it can be completed and posted when we are done reviewing subscription\n> level resolvers.\n>\n\nYeah, it makes sense to do it after the subscription-level resolution\npatch is ready.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Aug 2024 14:23:12 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Aug 26, 2024 at 7:28 AM Peter Smith <[email protected]> wrote:\n>\n> On Thu, Aug 22, 2024 at 8:15 PM shveta malik <[email protected]> wrote:\n> >\n>\n> Hi Shveta,\n>\n> I felt it would be better to keep the syntax similar to the existing\n> INSERT ... ON CONFLICT [1].\n>\n> I'd suggest a syntax like this:\n>\n> ... ON CONFLICT ['conflict_type'] DO { 'conflict_action' | DEFAULT }\n>\n> ~~~\n>\n> e.g.\n>\n> To configure conflict resolvers for the SUBSCRIPTION:\n>\n> CREATE SUBSCRIPTION subname CONNECTION coninfo PUBLICATION pubname\n> ON CONFLICT 'conflict_type1' DO 'conflict_action1',\n> ON CONFLICT 'conflict_type2' DO 'conflict_action2';\n>\n\nOne thing that looks odd to me about this is the resolution part of\nit. For example, ON CONFLICT 'insert_exists' DO 'keep_local'. The\naction part doesn't go well without being explicit that it is a\nresolution method. 
Another variant could be ON CONFLICT\n'insert_exists' USE RESOLUTION [METHOD] 'keep_local'.\n\nI think we can keep all these syntax alternatives either in the form\nof comments or in the commit message and discuss more on these once we\nagree on the solutions to the key design issues pointed out by Shveta.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Aug 2024 14:44:32 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Aug 26, 2024 at 2:23 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 22, 2024 at 3:45 PM shveta malik <[email protected]> wrote:\n> >\n> > On Wed, Aug 21, 2024 at 4:08 PM Nisha Moond <[email protected]> wrote:\n> > >\n> > > The patches have been rebased on the latest pgHead following the merge\n> > > of the conflict detection patch [1].\n> >\n> > Thanks for working on patches.\n> >\n> > Summarizing the issues which need some suggestions/thoughts.\n> >\n> > 1)\n> > For subscription based resolvers, currently the syntax implemented is:\n> >\n> > 1a)\n> > CREATE SUBSCRIPTION <subname>\n> > CONNECTION <conninfo> PUBLICATION <pubname>\n> > CONFLICT RESOLVER\n> > (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> > conflict_type3 = resolver3,...);\n> >\n> > 1b)\n> > ALTER SUBSCRIPTION <subname> CONFLICT RESOLVER\n> > (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> > conflict_type3 = resolver3,...);\n> >\n> > Earlier the syntax suggested in [1] was:\n> > CREATE SUBSCRIPTION <subname> CONNECTION <conninfo> PUBLICATION <pubname>\n> > CONFLICT RESOLVER 'conflict_resolver1' FOR 'conflict_type1',\n> > CONFLICT RESOLVER 'conflict_resolver2' FOR 'conflict_type2';\n> >\n> > I think the currently implemented syntax is good as it has less\n> > repetition, unless others think otherwise.\n> >\n> > ~~\n> >\n> > 2)\n> > For subscription based resolvers, do we need a RESET command to reset\n> > resolvers to default? Any one of below or both?\n> >\n> > 2a) reset all at once:\n> > ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVERS\n> >\n> > 2b) reset one at a time:\n> > ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVER for 'conflict_type';\n> >\n> > The issue I see here is, to implement 1a and 1b, we have introduced\n> > the 'RESOLVER' keyword. If we want to implement 2a, we will have to\n> > introduce the 'RESOLVERS' keyword as well. But we can come up with\n> > some alternative syntax if we plan to implement these. Thoughts?\n> >\n>\n> It makes sense to have a RESET on the lines of (a) and (b). At this\n> stage, we should do minimal in extending the syntax. How about RESET\n> CONFLICT RESOLVER ALL for (a)?\n\nYes, the syntax looks good.\n\n> > ~~\n> >\n> > 3) Regarding update_exists:\n> >\n> > 3a)\n> > Currently update_exists resolver patch is kept separate. The reason\n> > being, it performs resolution which will need deletion of multiple\n> > rows. It will be good to discuss if we want to target this in the\n> > first draft. 
Please see the example:\n> >\n> > create table tab (a int primary key, b int unique, c int unique);\n> >\n> > Pub: insert into tab values (1,1,1);\n> > Sub:\n> > insert into tab values (2,20,30);\n> > insert into tab values (3,40,50);\n> > insert into tab values (4,60,70);\n> >\n> > Pub: update tab set a=2,b=40,c=70 where a=1;\n> >\n> > The above 'update' on pub will result in 'update_exists' on sub and if\n> > resolution is in favour of 'apply', then it will conflict with all the\n> > three local rows of subscriber due to unique constraint present on all\n> > three columns. Thus in order to resolve the conflict, it will have to\n> > delete these 3 rows on sub:\n> >\n> > 2,20,30\n> > 3,40,50\n> > 4,60,70\n> > and then update 1,1,1 to 2,40,70.\n> >\n> > Just need opinion on if we shall target this in the initial draft.\n> >\n>\n> This case looks a bit complicated. It seems there is no other\n> alternative than to delete the multiple rows. It is better to create a\n> separate top-up patch for this and we can discuss in detail about this\n> once the basic patch is in better shape.\n\nAgreed.\n\n>\n> > 3b)\n> > If we plan to implement this, we need to work on optimal design where\n> > we can find all the conflicting rows at once and delete those.\n> > Currently the implementation has been done using recursion i.e. find\n> > one conflicting row, then delete it and then next and so on i.e. we\n> > call apply_handle_update_internal() recursively. On initial code\n> > review, I feel it is doable to scan all indexes at once and get\n> > conflicting-tuple-ids in one go and get rid of recursion. It can be\n> > attempted once we decide on 3a.\n> >\n>\n> I suggest following the simplest strategy (even if that means calling\n> the update function recursively) by adding comments on the optimal\n> strategy. We can optimize it later as well.\n\nSure.\n\n>\n> > ~~\n> >\n> > 4)\n> > Now for insert_exists and update_exists, we are doing a pre-scan of\n> > all unique indexes to find conflict. Also there is post-scan to figure\n> > out if the conflicting row is inserted meanwhile. This needs to be\n> > reviewed for optimization. We need to avoid pre-scan wherever\n> > possible. I think the only case for which it can be avoided is\n> > 'ERROR'. For the cases where resolver is in favor of remote-apply, we\n> > need to check conflict beforehand to avoid rollback of already\n> > inserted data. And for the case where resolver is in favor of skipping\n> > the change, then too we should know beforehand about the conflict to\n> > avoid heap-insertion and rollback. Thoughts?\n> >\n>\n> It makes sense to skip the pre-scan wherever possible. Your analysis\n> sounds reasonable to me.\n>\n> > ~~\n> >\n> > 5)\n> > Currently we only capture update_missing conflict i.e. we are not\n> > distinguishing between the missing row and the deleted row. We had\n> > discussed this in the past a couple of times. If we plan to target it\n> > in draft 1, I can dig up all old emails and resume discussion on this.\n> >\n>\n> This is a separate conflict detection project in itself. I am thinking\n> about the solution to this problem. We will talk about this in a\n> separate thread.\n>\n> > ~~\n> >\n> > 6)\n> > Table-level resolves. There was a suggestion earlier to implement\n> > table-level resolvers. 
The patch has been implemented to some extent,\n> > it can be completed and posted when we are done reviewing subscription\n> > level resolvers.\n> >\n>\n> Yeah, it makes sense to do it after the subscription-level resolution\n> patch is ready.\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n", "msg_date": "Mon, 26 Aug 2024 15:16:04 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "Please find v10 patch-set. Changes are:\n\n1) patch-001:\n - Corrected a patch application warning.\n - Added support for pg_dump.\n - As suggested in pt.4 of [1]: added a warning during CREATE and\nALTER subscription when track_commit_timestamp is OFF.\n\n2) patch-002 & patch-003:\n - Reduced code duplication in execReplication.c\n - As suggested in pt.4 of [2]: Optimized the pre-scan for\ninsert_exists and update_exists cases when resolver is set to ERROR.\n - Fixed a bug reported by Shveta in [3]\n\nThank You Ajin for working on pg_dump support changes.\n\n[1] https://www.postgresql.org/message-id/CAJpy0uA0J8kz2DKU0xbUkUT%3Drtt%3DCenpzmUMgYcwms9%2BzgCuvA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAJpy0uBrXZE6LLofX5tc8WOm5F%2BFNgnQjRLQerOY8cOqqvtrNg%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAJpy0uCtYweJHwuYdgGWR7iSDUKjqDtA7yoLe%2BXMWqWnmQhj8g%40mail.gmail.com\n\nThanks,\nNisha", "msg_date": "Tue, 27 Aug 2024 13:51:42 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Aug 26, 2024 at 9:05 AM shveta malik <[email protected]> wrote:\n>\n> On Wed, Aug 21, 2024 at 4:08 PM Nisha Moond <[email protected]> wrote:\n> >\n> > The patches have been rebased on the latest pgHead following the merge\n> > of the conflict detection patch [1]. The detect_conflict option has\n> > been removed, and conflict detection is now enabled by default. This\n> > change required the following updates in resolver patches:\n> > patch-0001:\n> > - Removed dependency on the detect_conflict option. Now, default\n> > conflict resolvers are set on CREATE SUBSCRIPTION if no values are\n> > provided.\n> > - To keep the behavior unchanged, the default resolvers are now set as -\n> > insert_exists = error\n> > update_exists = error\n> > update_differ = apply_remote\n> > update_missing = skip\n> > delete_missing = skip\n> > delete_differ = apply_remote\n> > - Added documentation for conflict resolvers.\n> >\n> > patch-0002:\n> > - Removed dependency on the detect_conflict option.\n> > - Updated test cases in 034_conflict_resolver.pl to reflect new\n> > default resolvers and the removal of the detect_conflict option.\n> >\n> > patch-0003:\n> > - Implemented resolver for the update_exists conflict type. 
Supported\n> > resolvers are: apply_remote, keep_local, error.\n> >\n>\n> Thanks Nisha for the patches, I was running some tests on\n> update_exists and found this case wherein it misses to LOG one\n> conflict out of 3.\n>\n> create table tab (a int primary key, b int unique, c int unique);\n> Pub: insert into tab values (1,1,1);\n>\n> Sub:\n> insert into tab values (2,20,30);\n> insert into tab values (3,40,50);\n> insert into tab values (4,60,70);\n>\n> Pub: update tab set a=2,b=40,c=70 where a=1;\n>\n> Here it logs update_exists conflict and the resolution for Key\n> (b)=(40) and Key (c)=(70) but misses to LOG first one which is with\n> Key (a)=(2).\n>\n\nFixed.\n\nThanks,\nNisha\n\n\n", "msg_date": "Tue, 27 Aug 2024 13:56:53 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Aug 26, 2024 at 2:23 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 22, 2024 at 3:45 PM shveta malik <[email protected]> wrote:\n> >\n> > On Wed, Aug 21, 2024 at 4:08 PM Nisha Moond <[email protected]> wrote:\n> > >\n> > > The patches have been rebased on the latest pgHead following the merge\n> > > of the conflict detection patch [1].\n> >\n> > Thanks for working on patches.\n> >\n> > Summarizing the issues which need some suggestions/thoughts.\n> >\n> > 1)\n> > For subscription based resolvers, currently the syntax implemented is:\n> >\n> > 1a)\n> > CREATE SUBSCRIPTION <subname>\n> > CONNECTION <conninfo> PUBLICATION <pubname>\n> > CONFLICT RESOLVER\n> > (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> > conflict_type3 = resolver3,...);\n> >\n> > 1b)\n> > ALTER SUBSCRIPTION <subname> CONFLICT RESOLVER\n> > (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> > conflict_type3 = resolver3,...);\n> >\n> > Earlier the syntax suggested in [1] was:\n> > CREATE SUBSCRIPTION <subname> CONNECTION <conninfo> PUBLICATION <pubname>\n> > CONFLICT RESOLVER 'conflict_resolver1' FOR 'conflict_type1',\n> > CONFLICT RESOLVER 'conflict_resolver2' FOR 'conflict_type2';\n> >\n> > I think the currently implemented syntax is good as it has less\n> > repetition, unless others think otherwise.\n> >\n> > ~~\n> >\n> > 2)\n> > For subscription based resolvers, do we need a RESET command to reset\n> > resolvers to default? Any one of below or both?\n> >\n> > 2a) reset all at once:\n> > ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVERS\n> >\n> > 2b) reset one at a time:\n> > ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVER for 'conflict_type';\n> >\n> > The issue I see here is, to implement 1a and 1b, we have introduced\n> > the 'RESOLVER' keyword. If we want to implement 2a, we will have to\n> > introduce the 'RESOLVERS' keyword as well. But we can come up with\n> > some alternative syntax if we plan to implement these. Thoughts?\n> >\n>\n> It makes sense to have a RESET on the lines of (a) and (b). At this\n> stage, we should do minimal in extending the syntax. How about RESET\n> CONFLICT RESOLVER ALL for (a)?\n>\n> > ~~\n> >\n> > 3) Regarding update_exists:\n> >\n> > 3a)\n> > Currently update_exists resolver patch is kept separate. The reason\n> > being, it performs resolution which will need deletion of multiple\n> > rows. It will be good to discuss if we want to target this in the\n> > first draft. 
Please see the example:\n> >\n> > create table tab (a int primary key, b int unique, c int unique);\n> >\n> > Pub: insert into tab values (1,1,1);\n> > Sub:\n> > insert into tab values (2,20,30);\n> > insert into tab values (3,40,50);\n> > insert into tab values (4,60,70);\n> >\n> > Pub: update tab set a=2,b=40,c=70 where a=1;\n> >\n> > The above 'update' on pub will result in 'update_exists' on sub and if\n> > resolution is in favour of 'apply', then it will conflict with all the\n> > three local rows of subscriber due to unique constraint present on all\n> > three columns. Thus in order to resolve the conflict, it will have to\n> > delete these 3 rows on sub:\n> >\n> > 2,20,30\n> > 3,40,50\n> > 4,60,70\n> > and then update 1,1,1 to 2,40,70.\n> >\n> > Just need opinion on if we shall target this in the initial draft.\n> >\n>\n> This case looks a bit complicated. It seems there is no other\n> alternative than to delete the multiple rows. It is better to create a\n> separate top-up patch for this and we can discuss in detail about this\n> once the basic patch is in better shape.\n\nv9 onwards the patch-0003 is a separate top-up patch implementing update_exists.\n\n> > 3b)\n> > If we plan to implement this, we need to work on optimal design where\n> > we can find all the conflicting rows at once and delete those.\n> > Currently the implementation has been done using recursion i.e. find\n> > one conflicting row, then delete it and then next and so on i.e. we\n> > call apply_handle_update_internal() recursively. On initial code\n> > review, I feel it is doable to scan all indexes at once and get\n> > conflicting-tuple-ids in one go and get rid of recursion. It can be\n> > attempted once we decide on 3a.\n> >\n>\n> I suggest following the simplest strategy (even if that means calling\n> the update function recursively) by adding comments on the optimal\n> strategy. We can optimize it later as well.\n>\n> > ~~\n> >\n> > 4)\n> > Now for insert_exists and update_exists, we are doing a pre-scan of\n> > all unique indexes to find conflict. Also there is post-scan to figure\n> > out if the conflicting row is inserted meanwhile. This needs to be\n> > reviewed for optimization. We need to avoid pre-scan wherever\n> > possible. I think the only case for which it can be avoided is\n> > 'ERROR'. For the cases where resolver is in favor of remote-apply, we\n> > need to check conflict beforehand to avoid rollback of already\n> > inserted data. And for the case where resolver is in favor of skipping\n> > the change, then too we should know beforehand about the conflict to\n> > avoid heap-insertion and rollback. Thoughts?\n> >\n>\n> It makes sense to skip the pre-scan wherever possible. 
Your analysis\n> sounds reasonable to me.\n>\n\nDone.\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Tue, 27 Aug 2024 14:02:08 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Aug 23, 2024 at 10:39 AM shveta malik <[email protected]> wrote:\n>\n> On Thu, Aug 22, 2024 at 3:44 PM shveta malik <[email protected]> wrote:\n> >\n> >\n> > For clock-skew and timestamp based resolution, if needed, I will post\n> > another email for the design items where suggestions are needed.\n> >\n>\n> Please find issues which need some thoughts and approval for\n> time-based resolution and clock-skew.\n>\n> 1)\n> Time based conflict resolution and two phase transactions:\n>\n> Time based conflict resolution (last_update_wins) is the one\n> resolution which will not result in data-divergence considering\n> clock-skew is taken care of. But when it comes to two-phase\n> transactions, it might not be the case. For two-phase transaction, we\n> do not have commit timestamp when the changes are being applied. Thus\n> for time-based comparison, initially it was decided to user prepare\n> timestamp but it may result in data-divergence. Please see the\n> example at [1].\n>\n> Example at [1] is a tricky situation, and thus in the initial draft,\n> we decided to restrict usage of 2pc and CDR together. The plan is:\n>\n> a) During Create subscription, if the user has given last_update_wins\n> resolver for any conflict_type and 'two_phase' is also enabled, we\n> ERROR out.\n> b) During Alter subscription, if the user tries to update resolver to\n> 'last_update_wins' but 'two_phase' is enabled, we error out.\n>\n> Another solution could be to save both prepare_ts and commit_ts. And\n> when any txn comes for conflict resolution, we first check if\n> prepare_ts is available, use that else use commit_ts. Availability of\n> prepare_ts would indicate it was a prepared txn and thus even if it is\n> committed, we should use prepare_ts for comparison for consistency.\n> This will have some overhead of storing prepare_ts along with\n> commit_ts. But if the number of prepared txns are reasonably small,\n> this overhead should be less.\n>\n> We currently plan to go with restricting 2pc and last_update_wins\n> together, unless others have different opinions.\n>\n> ~~\n>\n> 2)\n> parallel apply worker and conflict-resolution:\n> As discussed in [2] (see last paragraph in [2]), for streaming of\n> in-progress transactions by parallel worker, we do not have\n> commit-timestamp with each change and thus it makes sense to disable\n> parallel apply worker with CDR. The plan is to not start parallel\n> apply worker if 'last_update_wins' is configured for any\n> conflict_type.\n>\n> ~~\n>\n> 3)\n> parallel apply worker and clock skew management:\n> Regarding clock-skew management as discussed in [3], we will wait for\n> the local clock to come within tolerable range during 'begin' rather\n> than before 'commit'. And this wait needs commit-timestamp in the\n> beginning, thus we plan to restrict starting pa-worker even when\n> clock-skew related GUCs are configured.\n>\n> Earlier we had restricted both 2pc and parallel worker worker start\n> when detect_conflict was enabled, but now since detect_conflict\n> parameter is removed, we will change the implementation to restrict\n> all 3 above cases when last_update_wins is configured. 
When the\n> changes are done, we will post the patch.\n>\n> ~~\n>\n> 4)\n> <not related to timestamp and clock skew>\n> Earlier when 'detect_conflict' was enabled, we were giving WARNING if\n> 'track_commit_timestamp' was not enabled. This was during CREATE and\n> ALTER subscription. Now with this parameter removed, this WARNING has\n> also been removed.
Would like to know\nwhat others think on this.\n\n3)\nWhy in 002_pg_dump.pl we have default resolvers set explicitly?\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 28 Aug 2024 09:57:07 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Aug 28, 2024 at 2:27 PM shveta malik <[email protected]> wrote:\n\n>\n> 2)\n> Currently pg_dump is dumping even the default resolvers configuration.\n> As an example if I have not changed default configuration for say\n> sub1, it still dumps all:\n>\n> CREATE SUBSCRIPTION sub1 CONNECTION '..' PUBLICATION pub1 WITH (....)\n> CONFLICT RESOLVER (insert_exists = 'error', update_differ =\n> 'apply_remote', update_exists = 'error', update_missing = 'skip',\n> delete_differ = 'apply_remote', delete_missing = 'skip');\n>\n> I am not sure if we need to dump default resolvers. Would like to know\n> what others think on this.\n>\n> 3)\n> Why in 002_pg_dump.pl we have default resolvers set explicitly?\n>\n> In 003_pg_dump.pl, default resolvers are not set explicitly, that is the\nregexp to check the pg_dump generated command for creating subscriptions.\nThis is again connected to your 2nd question.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Wed, Aug 28, 2024 at 2:27 PM shveta malik <[email protected]> wrote:\n2)\nCurrently pg_dump is dumping even the default resolvers configuration.\nAs an example if I have not changed default configuration for say\nsub1, it still dumps all:\n\nCREATE SUBSCRIPTION sub1 CONNECTION '..' PUBLICATION pub1 WITH (....)\nCONFLICT RESOLVER (insert_exists = 'error', update_differ =\n'apply_remote', update_exists = 'error', update_missing = 'skip',\ndelete_differ = 'apply_remote', delete_missing = 'skip');\n\nI am not sure if we need to dump default resolvers. Would like to know\nwhat others think on this.\n\n3)\nWhy in 002_pg_dump.pl we have default resolvers set explicitly?\nIn 003_pg_dump.pl, default resolvers are not set explicitly, that is the regexp to check the pg_dump generated command for creating subscriptions. This is again connected to your 2nd question.regards,Ajin CherianFujitsu Australia", "msg_date": "Wed, 28 Aug 2024 15:00:08 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Aug 28, 2024 at 10:30 AM Ajin Cherian <[email protected]> wrote:\n>\n>> 2)\n>> Currently pg_dump is dumping even the default resolvers configuration.\n>> As an example if I have not changed default configuration for say\n>> sub1, it still dumps all:\n>>\n>> CREATE SUBSCRIPTION sub1 CONNECTION '..' PUBLICATION pub1 WITH (....)\n>> CONFLICT RESOLVER (insert_exists = 'error', update_differ =\n>> 'apply_remote', update_exists = 'error', update_missing = 'skip',\n>> delete_differ = 'apply_remote', delete_missing = 'skip');\n>>\n>> I am not sure if we need to dump default resolvers. Would like to know\n>> what others think on this.\n>>\n>> 3)\n>> Why in 002_pg_dump.pl we have default resolvers set explicitly?\n>>\n> In 003_pg_dump.pl, default resolvers are not set explicitly, that is the regexp to check the pg_dump generated command for creating subscriptions. This is again connected to your 2nd question.\n\nOkay so we may not need this change if we plan to *not *dump defaults\nin pg_dump.\n\nAnother point about 'defaults' is regarding insertion into the\npg_subscription_conflict table. 
We currently do insert default\nresolvers into 'pg_subscription_conflict' even if the user has not\nexplicitly configured them. I think it is okay to insert defaults\nthere as the user will be able to know which resolver is picked for\nany conflict type. But again, I would like to know the thoughts of\nothers on this.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 28 Aug 2024 10:58:22 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "> On Wed, Aug 28, 2024 at 10:30 AM Ajin Cherian <[email protected]> wrote:\n> >\n\nThe review is WIP. Please find a few comments on patch001.\n\n1)\nlogical-repliction.sgmlL\n\n+ Additional logging is triggered for specific conflict_resolvers.\nUsers can also configure conflict_types while creating the\nsubscription. Refer to section CONFLICT RESOLVERS for details on\nconflict_types and conflict_resolvers.\n\nCan we please change it to:\n\nAdditional logging is triggered in various conflict scenarios, each\nidentified as a conflict type. Users have the option to configure a\nconflict resolver for each conflict type when creating a subscription.\nFor more information on the conflict types detected and the supported\nconflict resolvers, refer to the section <CONFLICT RESOLVERS>\n\n2)\nSetSubConflictResolver\n\n+ for (type = 0; type < resolvers_cnt; type++)\n\n'type' does not look like the correct name here. The variable does not\nstate conflict_type, it is instead a resolver-array-index, so please\nrename accordingly. Maybe idx or res_idx?\n\n 3)\nCreateSubscription():\n\n+ if (stmt->resolvers)\n+ check_conflict_detection();\n\n3a) We can have a comment saying warn users if prerequisites are not met.\n\n3b) Also, I do not find the name 'check_conflict_detection'\nappropriate. One suggestion could be\n'conf_detection_check_prerequisites' (similar to\nreplorigin_check_prerequisites)\n\n3c) We can move the below comment after check_conflict_detection() as\nit makes more sense there.\n /*\n * Parse and check conflict resolvers. Initialize with default values\n */\n\n4)\nShould we allow repetition/duplicates of 'conflict_type=..' in CREATE\nand ALTER SUB? As an example:\nALTER SUBSCRIPTION sub1 CONFLICT RESOLVER (insert_exists =\n'apply_remote', insert_exists = 'error');\n\nSuch a repetition works for Create-Sub but gives some internal error\nfor alter-sub. (ERROR: tuple already updated by self). Behaviour\nshould be the same for both. And if we give an error, it should be\nsome user understandable one. But I would like to know the opinions of\nothers. Shall it give an error or the last one should be accepted as\nvalid configuration in case of repetition?\n\n5)\nGetAndValidateSubsConflictResolverList():\n+ ConflictTypeResolver *CTR = NULL;\n\nWe can change the name to a more appropriate one similar to other\nvariables. It need not be in all capital.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 28 Aug 2024 16:07:37 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Aug 28, 2024 at 4:07 PM shveta malik <[email protected]> wrote:\n>\n> > On Wed, Aug 28, 2024 at 10:30 AM Ajin Cherian <[email protected]> wrote:\n> > >\n>\n> The review is WIP. Please find a few comments on patch001.\n>\n\nMore comments on ptach001 in continuation of previous comments:\n\n6)\nSetDefaultResolvers() can be called from\nparse_subscription_conflict_resolvers() itself. 
This will be similar\nto how parse_subscription_options() sets defaults internally.\n\n7)\nparse_subscription_conflict_resolvers():\n+ if (!stmtresolvers)\n+ return;\n\nI think we do not need the above, 'foreach' will take care of it.\nSince we do not have any logic after foreach, we should be good\nwithout the above check explicitly added.\n\n8)\nI think SetSubConflictResolver() should be moved before\nreplorigin_create(). We can insert resolver entries immediately after\nwe insert subscription entries.\n\n9)\ncheck_conflict_detection/conf_detection_check_prerequisites shall be\nmoved to conflict.c file.\n\n10)\nvalidate_conflict_type_and_resolver():\nPlease mention in header that:\n\nIt returns an enum ConflictType corresponding to the conflict type\nstring passed by the caller.\n\n11)\nUpdateSubConflictResolvers():\n11a) Rename CTR similar to other variables.\n11b) Please correct the header as we deal with multiple conflict-types\nin it instead of 1.\nSuggestion: Update the subscription's conflict resolvers in\npg_subscription_conflict system catalog for the given conflict types.\n\n12)\nSetSubConflictResolver():\n12a) I think we do not need 'replaces' during INSERT and thus this is\nnot needed:\n+ memset(replaces, false, sizeof(replaces));\n\n12b)\nShouldn't below be outside of loop:\n+ memset(nulls, false, sizeof(nulls));\n\n13)\nShall we rename RemoveSubscriptionConflictBySubid with\nRemoveSubscriptionConflictResolvers()? 'BySubid' is not needed as we\nhave Subscription in the name and we do not have any other variation\nof removal.\n\n14)\nWe shall rename pg_subscription_conflict_sub_index to\npg_subscription_conflict_confsubid_confrtype_index to give more\nclarity that it is any index on subid and conftype\n\nAnd SubscriptionConflictSubIndexId to SubscriptionConflictSubidTypeIndexId\nAnd SUBSCRIPTIONCONFLICTSUBOID to SUBSCRIPTIONCONFLMAP\n\n15)\nconflict.h:\n+ See ConflictTypeResolverMap in conflcit.c to find out which all\n\nconflcit.c --> conflict.c\n\n16)\nsubscription.sql:\n16a) add one more test case for 'fail' scenario where both conflict\ntype and resolver are valid but resolver is not for that particular\nconflict type.\n\n16b)\n--try setting resolvers for few types\nChange to below (similar to other comments)\n-- ok - valid conflict types and resolvers\n\n16c)\n-- ok - valid conflict type and resolver\nmaybe change to: -- ok - valid conflict types and resolvers\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 29 Aug 2024 10:19:50 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Aug 23, 2024 at 10:39 AM shveta malik <[email protected]> wrote:\n>\n> Please find issues which need some thoughts and approval for\n> time-based resolution and clock-skew.\n>\n> 1)\n> Time based conflict resolution and two phase transactions:\n>\n> Time based conflict resolution (last_update_wins) is the one\n> resolution which will not result in data-divergence considering\n> clock-skew is taken care of. But when it comes to two-phase\n> transactions, it might not be the case. For two-phase transaction, we\n> do not have commit timestamp when the changes are being applied. Thus\n> for time-based comparison, initially it was decided to user prepare\n> timestamp but it may result in data-divergence. Please see the\n> example at [1].\n>\n> Example at [1] is a tricky situation, and thus in the initial draft,\n> we decided to restrict usage of 2pc and CDR together. 
The plan is:\n>\n> a) During Create subscription, if the user has given last_update_wins\n> resolver for any conflict_type and 'two_phase' is also enabled, we\n> ERROR out.\n> b) During Alter subscription, if the user tries to update resolver to\n> 'last_update_wins' but 'two_phase' is enabled, we error out.\n>\n> Another solution could be to save both prepare_ts and commit_ts. And\n> when any txn comes for conflict resolution, we first check if\n> prepare_ts is available, use that else use commit_ts. Availability of\n> prepare_ts would indicate it was a prepared txn and thus even if it is\n> committed, we should use prepare_ts for comparison for consistency.\n> This will have some overhead of storing prepare_ts along with\n> commit_ts. But if the number of prepared txns are reasonably small,\n> this overhead should be less.\n>\n\nYet another idea is that if the conflict is detected and the\nresolution strategy is last_update_wins then from that point we start\nwriting all the changes to the file similar to what we do for\nstreaming mode and only once commit_prepared arrives, we will read and\napply changes. That will solve this problem.\n\n> We currently plan to go with restricting 2pc and last_update_wins\n> together, unless others have different opinions.\n>\n\nSounds reasonable but we should add comments on the possible solution\nlike the one I have mentioned so that we can extend it afterwards.\n\n> ~~\n>\n> 2)\n> parallel apply worker and conflict-resolution:\n> As discussed in [2] (see last paragraph in [2]), for streaming of\n> in-progress transactions by parallel worker, we do not have\n> commit-timestamp with each change and thus it makes sense to disable\n> parallel apply worker with CDR. The plan is to not start parallel\n> apply worker if 'last_update_wins' is configured for any\n> conflict_type.\n>\n\nThe other idea is that we can let the changes written to file if any\nconflict is detected and then at commit time let the remaining changes\nbe applied by apply worker. This can introduce some complexity, so\nsimilar to two_pc we can extend this functionality later.\n\n> ~~\n>\n> 3)\n> parallel apply worker and clock skew management:\n> Regarding clock-skew management as discussed in [3], we will wait for\n> the local clock to come within tolerable range during 'begin' rather\n> than before 'commit'. And this wait needs commit-timestamp in the\n> beginning, thus we plan to restrict starting pa-worker even when\n> clock-skew related GUCs are configured.\n>\n> Earlier we had restricted both 2pc and parallel worker worker start\n> when detect_conflict was enabled, but now since detect_conflict\n> parameter is removed, we will change the implementation to restrict\n> all 3 above cases when last_update_wins is configured. When the\n> changes are done, we will post the patch.\n>\n\nAt this stage, we are not sure how we want to deal with clock skew.\nThere is an argument that clock-skew should be handled outside the\ndatabase, so we can probably have the clock-skew-related stuff in a\nseparate patch.\n\n> ~~\n>\n> 4)\n> <not related to timestamp and clock skew>\n> Earlier when 'detect_conflict' was enabled, we were giving WARNING if\n> 'track_commit_timestamp' was not enabled. This was during CREATE and\n> ALTER subscription. Now with this parameter removed, this WARNING has\n> also been removed. 
But I think we need to bring back this WARNING.\n> Currently default resolvers set may work without\n> 'track_commit_timestamp' but when user gives CONFLICT RESOLVER in\n> create-sub or alter-sub explicitly making them configured to\n> non-default values (or say any values, does not matter if few are\n> defaults), we may still emit this warning to alert user:\n>\n> 2024-07-26 09:14:03.152 IST [195415] WARNING: conflict detection\n> could be incomplete due to disabled track_commit_timestamp\n> 2024-07-26 09:14:03.152 IST [195415] DETAIL: Conflicts update_differ\n> and delete_differ cannot be detected, and the origin and commit\n> timestamp for the local row will not be logged.\n>\n> Thoughts?\n>\n> If we emit this WARNING during each resolution, then it may flood our\n> log files, thus it seems better to emit it during create or alter\n> subscription instead of during resolution.\n>\n\nSounds reasonable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Aug 2024 16:42:50 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "Here is the v11 patch-set. Changes are:\n1) Updated conflict type names in accordance with the recent commit[1] as -\n update_differ --> update_origin_differs\n delete_differ --> delete_origin_differs\n\n2) patch-001:\n - Implemented the RESET command to restore the default resolvers as\nsuggested in pt.2a & 2b in [2]\n\n3) patch-004:\n - Rebased the patch which implements last_update_wins resolver and\nclock-skew management.\n - Restricts the setting of two_phase and last_update_wins together\n - Prevents the start of parallel apply-worker if 'last_update_wins'\nis configured for any conflict_type.\n - Added test cases for last_update_wins in 034_conflict_resolver.pl\n\nThanks, Shveta for your help in patch-004 solutions and thank you Ajin\nfor providing RESET command changes(patch-001).\n\n [1]: https://www.postgresql.org/message-id/CAA4eK1%2BpoV1dDqK%3Dhdv-Zh2m2kBdB%3Ds1TBk8MwscgdzULBonbw%40mail.gmail.com\n [2]: https://www.postgresql.org/message-id/CAJpy0uBrXZE6LLofX5tc8WOm5F%2BFNgnQjRLQerOY8cOqqvtrNg%40mail.gmail.com\n\n--\nThanks,\nNisha", "msg_date": "Fri, 30 Aug 2024 11:00:46 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Aug 23, 2024 at 10:39 AM shveta malik <[email protected]> wrote:\n>\n> On Thu, Aug 22, 2024 at 3:44 PM shveta malik <[email protected]> wrote:\n> >\n> >\n> > For clock-skew and timestamp based resolution, if needed, I will post\n> > another email for the design items where suggestions are needed.\n> >\n>\n> Please find issues which need some thoughts and approval for\n> time-based resolution and clock-skew.\n>\n> 1)\n> Time based conflict resolution and two phase transactions:\n>\n> Time based conflict resolution (last_update_wins) is the one\n> resolution which will not result in data-divergence considering\n> clock-skew is taken care of. But when it comes to two-phase\n> transactions, it might not be the case. For two-phase transaction, we\n> do not have commit timestamp when the changes are being applied. Thus\n> for time-based comparison, initially it was decided to user prepare\n> timestamp but it may result in data-divergence. Please see the\n> example at [1].\n>\n> Example at [1] is a tricky situation, and thus in the initial draft,\n> we decided to restrict usage of 2pc and CDR together. 
The plan is:\n>\n> a) During Create subscription, if the user has given last_update_wins\n> resolver for any conflict_type and 'two_phase' is also enabled, we\n> ERROR out.\n> b) During Alter subscription, if the user tries to update resolver to\n> 'last_update_wins' but 'two_phase' is enabled, we error out.\n>\n> Another solution could be to save both prepare_ts and commit_ts. And\n> when any txn comes for conflict resolution, we first check if\n> prepare_ts is available, use that else use commit_ts. Availability of\n> prepare_ts would indicate it was a prepared txn and thus even if it is\n> committed, we should use prepare_ts for comparison for consistency.\n> This will have some overhead of storing prepare_ts along with\n> commit_ts. But if the number of prepared txns are reasonably small,\n> this overhead should be less.\n>\n> We currently plan to go with restricting 2pc and last_update_wins\n> together, unless others have different opinions.\n>\n\nDone. v11-004 implements the idea of restricting 2pc and\nlast_update_wins together.\n\n> ~~\n>\n> 2)\n> parallel apply worker and conflict-resolution:\n> As discussed in [2] (see last paragraph in [2]), for streaming of\n> in-progress transactions by parallel worker, we do not have\n> commit-timestamp with each change and thus it makes sense to disable\n> parallel apply worker with CDR. The plan is to not start parallel\n> apply worker if 'last_update_wins' is configured for any\n> conflict_type.\n>\n\nDone.\n\n> ~~\n>\n> 3)\n> parallel apply worker and clock skew management:\n> Regarding clock-skew management as discussed in [3], we will wait for\n> the local clock to come within tolerable range during 'begin' rather\n> than before 'commit'. And this wait needs commit-timestamp in the\n> beginning, thus we plan to restrict starting pa-worker even when\n> clock-skew related GUCs are configured.\n>\n\nDone. v11 implements it.\n\n> Earlier we had restricted both 2pc and parallel worker worker start\n> when detect_conflict was enabled, but now since detect_conflict\n> parameter is removed, we will change the implementation to restrict\n> all 3 above cases when last_update_wins is configured. 
When the\n> changes are done, we will post the patch.\n>\n> ~~\n>\n--\nThanks,\nNisha\n\n\n", "msg_date": "Fri, 30 Aug 2024 12:04:51 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Aug 26, 2024 at 2:23 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 22, 2024 at 3:45 PM shveta malik <[email protected]> wrote:\n> >\n> > On Wed, Aug 21, 2024 at 4:08 PM Nisha Moond <[email protected]> wrote:\n> > >\n> > > The patches have been rebased on the latest pgHead following the merge\n> > > of the conflict detection patch [1].\n> >\n> > Thanks for working on patches.\n> >\n> > Summarizing the issues which need some suggestions/thoughts.\n> >\n> > 1)\n> > For subscription based resolvers, currently the syntax implemented is:\n> >\n> > 1a)\n> > CREATE SUBSCRIPTION <subname>\n> > CONNECTION <conninfo> PUBLICATION <pubname>\n> > CONFLICT RESOLVER\n> > (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> > conflict_type3 = resolver3,...);\n> >\n> > 1b)\n> > ALTER SUBSCRIPTION <subname> CONFLICT RESOLVER\n> > (conflict_type1 = resolver1, conflict_type2 = resolver2,\n> > conflict_type3 = resolver3,...);\n> >\n> > Earlier the syntax suggested in [1] was:\n> > CREATE SUBSCRIPTION <subname> CONNECTION <conninfo> PUBLICATION <pubname>\n> > CONFLICT RESOLVER 'conflict_resolver1' FOR 'conflict_type1',\n> > CONFLICT RESOLVER 'conflict_resolver2' FOR 'conflict_type2';\n> >\n> > I think the currently implemented syntax is good as it has less\n> > repetition, unless others think otherwise.\n> >\n> > ~~\n> >\n> > 2)\n> > For subscription based resolvers, do we need a RESET command to reset\n> > resolvers to default? Any one of below or both?\n> >\n> > 2a) reset all at once:\n> > ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVERS\n> >\n> > 2b) reset one at a time:\n> > ALTER SUBSCRIPTION <name> RESET CONFLICT RESOLVER for 'conflict_type';\n> >\n> > The issue I see here is, to implement 1a and 1b, we have introduced\n> > the 'RESOLVER' keyword. If we want to implement 2a, we will have to\n> > introduce the 'RESOLVERS' keyword as well. But we can come up with\n> > some alternative syntax if we plan to implement these. Thoughts?\n> >\n>\n> It makes sense to have a RESET on the lines of (a) and (b). At this\n> stage, we should do minimal in extending the syntax. How about RESET\n> CONFLICT RESOLVER ALL for (a)?\n>\n\nDone, v11 implements the suggested RESET command.\n\n> > ~~\n> >\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Fri, 30 Aug 2024 12:04:57 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Aug 28, 2024 at 10:58 AM shveta malik <[email protected]> wrote:\n>\n> On Wed, Aug 28, 2024 at 10:30 AM Ajin Cherian <[email protected]> wrote:\n> >\n> >> 2)\n> >> Currently pg_dump is dumping even the default resolvers configuration.\n> >> As an example if I have not changed default configuration for say\n> >> sub1, it still dumps all:\n> >>\n> >> CREATE SUBSCRIPTION sub1 CONNECTION '..' PUBLICATION pub1 WITH (....)\n> >> CONFLICT RESOLVER (insert_exists = 'error', update_differ =\n> >> 'apply_remote', update_exists = 'error', update_missing = 'skip',\n> >> delete_differ = 'apply_remote', delete_missing = 'skip');\n> >>\n> >> I am not sure if we need to dump default resolvers. Would like to know\n> >> what others think on this.\n> >>\n\nNormally, we don't add defaults in the dumped command. 
For example,\ndumpSubscription won't dump the options where the default is\nunchanged. We shouldn't do it unless we have a reason for dumping\ndefaults.\n\n> >> 3)\n> >> Why in 002_pg_dump.pl we have default resolvers set explicitly?\n> >>\n> > In 003_pg_dump.pl, default resolvers are not set explicitly, that is the regexp to check the pg_dump generated command for creating subscriptions. This is again connected to your 2nd question.\n>\n> Okay so we may not need this change if we plan to *not *dump defaults\n> in pg_dump.\n>\n> Another point about 'defaults' is regarding insertion into the\n> pg_subscription_conflict table. We currently do insert default\n> resolvers into 'pg_subscription_conflict' even if the user has not\n> explicitly configured them.\n>\n\nI don't see any problem with it. BTW, if we don't do it, I think\nwherever we are referring the resolvers for a conflict, we need some\nspecial handling for default and non-default. Am I missing something?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 30 Aug 2024 12:12:56 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Aug 30, 2024 at 12:13 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Aug 28, 2024 at 10:58 AM shveta malik <[email protected]> wrote:\n> >\n> > On Wed, Aug 28, 2024 at 10:30 AM Ajin Cherian <[email protected]> wrote:\n> > >\n> > >> 2)\n> > >> Currently pg_dump is dumping even the default resolvers configuration.\n> > >> As an example if I have not changed default configuration for say\n> > >> sub1, it still dumps all:\n> > >>\n> > >> CREATE SUBSCRIPTION sub1 CONNECTION '..' PUBLICATION pub1 WITH (....)\n> > >> CONFLICT RESOLVER (insert_exists = 'error', update_differ =\n> > >> 'apply_remote', update_exists = 'error', update_missing = 'skip',\n> > >> delete_differ = 'apply_remote', delete_missing = 'skip');\n> > >>\n> > >> I am not sure if we need to dump default resolvers. Would like to know\n> > >> what others think on this.\n> > >>\n>\n> Normally, we don't add defaults in the dumped command. For example,\n> dumpSubscription won't dump the options where the default is\n> unchanged. We shouldn't do it unless we have a reason for dumping\n> defaults.\n\nAgreed, we should not dump defaults. I had the same opinion.\n\n>\n> > >> 3)\n> > >> Why in 002_pg_dump.pl we have default resolvers set explicitly?\n> > >>\n> > > In 003_pg_dump.pl, default resolvers are not set explicitly, that is the regexp to check the pg_dump generated command for creating subscriptions. This is again connected to your 2nd question.\n> >\n> > Okay so we may not need this change if we plan to *not *dump defaults\n> > in pg_dump.\n> >\n> > Another point about 'defaults' is regarding insertion into the\n> > pg_subscription_conflict table. We currently do insert default\n> > resolvers into 'pg_subscription_conflict' even if the user has not\n> > explicitly configured them.\n> >\n>\n> I don't see any problem with it.\n\nYes, no problem\n\n> BTW, if we don't do it, I think\n> wherever we are referring the resolvers for a conflict, we need some\n> special handling for default and non-default.\n\nYes, we will need special handling in such a case. 
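To illustrate the benefit of storing defaults (a sketch only, reusing the
catalog and column names from the patch under discussion): with default
resolvers present in the catalog, finding the effective resolver for any
conflict type of a subscription stays a plain lookup, with no default vs.
non-default branching needed, e.g.:

SELECT c.confrtype, c.confrres
  FROM pg_catalog.pg_subscription_conflict c
  JOIN pg_catalog.pg_subscription s ON s.oid = c.confsubid
 WHERE s.subname = 'sub1';
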
Thus we shall go\nwith inserting defaults.\n\n> Am I missing something?\n\nNo, I just wanted to know others' opinions, so I asked.\n\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 30 Aug 2024 12:18:35 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Aug 28, 2024 at 4:07 PM shveta malik <[email protected]> wrote:\n>\n> > On Wed, Aug 28, 2024 at 10:30 AM Ajin Cherian <[email protected]> wrote:\n> > >\n>\n> The review is WIP. Please find a few comments on patch001.\n>\n> 1)\n> logical-repliction.sgmlL\n>\n> + Additional logging is triggered for specific conflict_resolvers.\n> Users can also configure conflict_types while creating the\n> subscription. Refer to section CONFLICT RESOLVERS for details on\n> conflict_types and conflict_resolvers.\n>\n> Can we please change it to:\n>\n> Additional logging is triggered in various conflict scenarios, each\n> identified as a conflict type. Users have the option to configure a\n> conflict resolver for each conflict type when creating a subscription.\n> For more information on the conflict types detected and the supported\n> conflict resolvers, refer to the section <CONFLICT RESOLVERS>\n>\n> 2)\n> SetSubConflictResolver\n>\n> + for (type = 0; type < resolvers_cnt; type++)\n>\n> 'type' does not look like the correct name here. The variable does not\n> state conflict_type, it is instead a resolver-array-index, so please\n> rename accordingly. Maybe idx or res_idx?\n>\n> 3)\n> CreateSubscription():\n>\n> + if (stmt->resolvers)\n> + check_conflict_detection();\n>\n> 3a) We can have a comment saying warn users if prerequisites are not met.\n>\n> 3b) Also, I do not find the name 'check_conflict_detection'\n> appropriate. One suggestion could be\n> 'conf_detection_check_prerequisites' (similar to\n> replorigin_check_prerequisites)\n>\n> 3c) We can move the below comment after check_conflict_detection() as\n> it makes more sense there.\n> /*\n> * Parse and check conflict resolvers. Initialize with default values\n> */\n>\n> 4)\n> Should we allow repetition/duplicates of 'conflict_type=..' in CREATE\n> and ALTER SUB? As an example:\n> ALTER SUBSCRIPTION sub1 CONFLICT RESOLVER (insert_exists =\n> 'apply_remote', insert_exists = 'error');\n>\n> Such a repetition works for Create-Sub but gives some internal error\n> for alter-sub. (ERROR: tuple already updated by self). Behaviour\n> should be the same for both. And if we give an error, it should be\n> some user understandable one. But I would like to know the opinions of\n> others. Shall it give an error or the last one should be accepted as\n> valid configuration in case of repetition?\n>\n\nI have tried the below statement to check existing behavior:\ncreate subscription sub1 connection 'dbname=postgres' publication pub1\nwith (streaming = on, streaming=off);\nERROR: conflicting or redundant options\nLINE 1: ...=postgres' publication pub1 with (streaming = on, streaming=...\n\nSo duplicate options are not allowed. 
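Following that precedent, a repeated conflict_type in the resolver list
would be rejected the same way. A sketch of the expected outcome (this is
the behavior being proposed, not what the current patch does; per the
report above, CREATE currently accepts the duplicate while ALTER fails
with an internal error):

ALTER SUBSCRIPTION sub1 CONFLICT RESOLVER
    (insert_exists = 'apply_remote', insert_exists = 'error');
ERROR:  conflicting or redundant options
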
If we see any challenges to\nfollow same for resolvers then we can discuss but it seems better to\nfollow the existing behavior of other subscription options.\n\nAlso, the behavior for CREATE/ALTER should be the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 30 Aug 2024 12:27:57 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, 30 Aug 2024 at 11:01, Nisha Moond <[email protected]> wrote:\n>\n> Here is the v11 patch-set. Changes are:\n\n1) This command crashes:\nALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR NULL;\n#0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n#1 0x000055c67270600a in ResetConflictResolver (subid=16404,\nconflict_type=0x0) at conflict.c:744\n#2 0x000055c67247e0c3 in AlterSubscription (pstate=0x55c6748ff9d0,\nstmt=0x55c67497dfe0, isTopLevel=true) at subscriptioncmds.c:1664\n\n+ | ALTER SUBSCRIPTION name RESET CONFLICT\nRESOLVER FOR conflict_type\n+ {\n+ AlterSubscriptionStmt *n =\n+ makeNode(AlterSubscriptionStmt);\n+\n+ n->kind =\nALTER_SUBSCRIPTION_RESET_CONFLICT_RESOLVER;\n+ n->subname = $3;\n+ n->conflict_type = $8;\n+ $$ = (Node *) n;\n+ }\n+ ;\n+conflict_type:\n+ Sconst\n { $$ = $1; }\n+ | NULL_P\n { $$ = NULL; }\n ;\n\nMay be conflict_type should be changed to:\n+conflict_type:\n+ Sconst\n { $$ = $1; }\n ;\n2) Conflict resolver is not shown in describe command:\npostgres=# \\dRs+\n\n List of subscriptions\n Name | Owner | Enabled | Publication | Binary | Streaming |\nTwo-phase commit | Disable on error | Origin | Password required | Run\nas owner? | Failover | Synchronous commit | Conninfo\n | Skip LSN\n------+---------+---------+-------------+--------+-----------+------------------+------------------+--------+-------------------+---------------+----------+--------------------+----------------------------------\n--------+----------\n sub1 | vignesh | t | {pub1} | f | off | d\n | f | any | t | f\n | f | off | dbname=postgres host=localhost po\nrt=5432 | 0/0\n sub2 | vignesh | t | {pub1} | f | off | d\n | f | any | t | f\n | f | off | dbname=postgres host=localhost po\nrt=5432 | 0/0\n(2 rows)\n\n3) Tab completion is not handled to include Conflict resolver:\npostgres=# alter subscription sub1\nADD PUBLICATION CONNECTION DISABLE DROP\nPUBLICATION ENABLE OWNER TO REFRESH\nPUBLICATION RENAME TO SET SKIP (\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 3 Sep 2024 15:11:45 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, 30 Aug 2024 at 11:01, Nisha Moond <[email protected]> wrote:\n>\n> Here is the v11 patch-set. 
Changes are:\n> 1) Updated conflict type names in accordance with the recent commit[1] as -\n> update_differ --> update_origin_differs\n> delete_differ --> delete_origin_differs\n>\n> 2) patch-001:\n> - Implemented the RESET command to restore the default resolvers as\n> suggested in pt.2a & 2b in [2]\n\nFew comments on 0001 patch:\n1) Currently create subscription has WITH option before conflict\nresolver, I felt WITH option can be after CONNECTION, PUBLICATION and\nCONFLICT RESOLVER option and WITH option at the end:\n CreateSubscriptionStmt:\n- CREATE SUBSCRIPTION name CONNECTION Sconst\nPUBLICATION name_list opt_definition\n+ CREATE SUBSCRIPTION name CONNECTION Sconst PUBLICATION\nname_list opt_definition opt_resolver_definition\n {\n CreateSubscriptionStmt *n =\n\nmakeNode(CreateSubscriptionStmt);\n@@ -10696,6 +10702,7 @@ CreateSubscriptionStmt:\n n->conninfo = $5;\n n->publication = $7;\n n->options = $8;\n+ n->resolvers = $9;\n $$ = (Node *) n;\n\n2) Case sensitive:\n2.a) Should conflict type be case insensitive:\nCREATE SUBSCRIPTION sub3 CONNECTION 'dbname=postgres host=localhost\nport=5432' PUBLICATION pub1 with (copy_data= true) CONFLICT RESOLVER\n(\"INSERT_EXISTS\" = 'error');\nERROR: INSERT_EXISTS is not a valid conflict type\n\nIn few other places it is not case sensitive:\ncreate publication pub1 with ( PUBLISH= 'INSERT,UPDATE,delete');\nset log_min_MESSAGES TO warning ;\n\n2.b) Similarly in case of conflict resolver too:\nCREATE SUBSCRIPTION sub3 CONNECTION 'dbname=postgres host=localhost\nport=5432' PUBLICATION pub1 with (copy_data= true) CONFLICT RESOLVER\n(\"insert_exists\" = 'erroR');\nERROR: erroR is not a valid conflict resolver\n\n3) Since there is only one key used to search, we can remove nkeys\nvariable and directly specify as 1:\n+RemoveSubscriptionConflictBySubid(Oid subid)\n+{\n+ Relation rel;\n+ HeapTuple tup;\n+ TableScanDesc scan;\n+ ScanKeyData skey[1];\n+ int nkeys = 0;\n+\n+ rel = table_open(SubscriptionConflictId, RowExclusiveLock);\n+\n+ /*\n+ * Search using the subid, this should return all conflict resolvers for\n+ * this sub\n+ */\n+ ScanKeyInit(&skey[nkeys++],\n+ Anum_pg_subscription_conflict_confsubid,\n+ BTEqualStrategyNumber,\n+ F_OIDEQ,\n+ ObjectIdGetDatum(subid));\n+\n+ scan = table_beginscan_catalog(rel, nkeys, skey);\n\n4) Currently we are including CONFLICT RESOLVER even if a subscription\nwith default CONFLICT RESOLVER is created, we can add the CONFLICT\nRESOLVER option only for non-default subscription option:\n+ /* add conflict resolvers, if any */\n+ if (fout->remoteVersion >= 180000)\n+ {\n+ PQExpBuffer InQry = createPQExpBuffer();\n+ PGresult *res;\n+ int i_confrtype;\n+ int i_confrres;\n+\n+ /* get the conflict types and their resolvers from the\ncatalog */\n+ appendPQExpBuffer(InQry,\n+ \"SELECT confrtype, confrres \"\n+ \"FROM\npg_catalog.pg_subscription_conflict\"\n+ \" WHERE confsubid =\n%u;\\n\", subinfo->dobj.catId.oid);\n+ res = ExecuteSqlQuery(fout, InQry->data, PGRES_TUPLES_OK);\n+\n+ i_confrtype = PQfnumber(res, \"confrtype\");\n+ i_confrres = PQfnumber(res, \"confrres\");\n+\n+ if (PQntuples(res) > 0)\n+ {\n+ appendPQExpBufferStr(query, \") CONFLICT RESOLVER (\");\n\n5) Should remote_apply be apply_remote here as this is what is\nspecified in code:\n+ <varlistentry\nid=\"sql-createsubscription-params-with-conflict_resolver-remote-apply\">\n+ <term><literal>remote_apply</literal> (<type>enum</type>)</term>\n+ <listitem>\n+ <para>\n\n6) I think this should be \"It is the default resolver for update_origin_differs\"\n6.a)\n+ 
<varlistentry\nid=\"sql-createsubscription-params-with-conflict_resolver-remote-apply\">\n+ <term><literal>remote_apply</literal> (<type>enum</type>)</term>\n+ <listitem>\n+ <para>\n+ This resolver applies the remote change. It can be used for\n+ <literal>insert_exists</literal>, <literal>update_exists</literal>,\n+ <literal>update_differ</literal> and\n<literal>delete_differ</literal>.\n+ It is the default resolver for <literal>insert_exists</literal> and\n+ <literal>update_exists</literal>.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\n6.b)\n+ <varlistentry\nid=\"sql-createsubscription-params-with-conflict_type-update-differ\">\n+ <term><literal>update_differ</literal> (<type>enum</type>)</term>\n+ <listitem>\n+ <para>\n+ This conflict occurs when updating a row that was previously\n\n6.c)\n+ <varlistentry\nid=\"sql-createsubscription-params-with-conflict_resolver-remote-apply\">\n+ <term><literal>remote_apply</literal> (<type>enum</type>)</term>\n+ <listitem>\n+ <para>\n+ This resolver applies the remote change. It can be used for\n+ <literal>insert_exists</literal>, <literal>update_exists</literal>,\n+ <literal>update_differ</literal> and\n<literal>delete_differ</literal>.\n+ It is the default resolver for <literal>insert_exists</literal> and\n+ <literal>update_exists</literal>.\n\n6.d)\n+ <varlistentry\nid=\"sql-createsubscription-params-with-conflict_resolver-remote-apply\">\n+ <term><literal>remote_apply</literal> (<type>enum</type>)</term>\n+ <listitem>\n+ <para>\n+ This resolver applies the remote change. It can be used for\n+ <literal>insert_exists</literal>, <literal>update_exists</literal>,\n+ <literal>update_differ</literal> and\n<literal>delete_differ</literal>.\n\nSimilarly this change should be done in other places too.\n\n7)\n7.a) Should delete_differ be changed to delete_origin_differs as that\nis what is specified in the subscription commands:\n+check_conflict_detection(void)\n+{\n+ if (!track_commit_timestamp)\n+ ereport(WARNING,\n+\nerrcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"conflict detection and\nresolution could be incomplete due to disabled\ntrack_commit_timestamp\"),\n+ errdetail(\"Conflicts update_differ and\ndelete_differ cannot be detected, \"\n+ \"and the origin and\ncommit timestamp for the local row will not be logged.\"));\n\n7.b) similarly here too:\n+ <varlistentry\nid=\"sql-createsubscription-params-with-conflict_type-delete-differ\">\n+ <term><literal>delete_differ</literal> (<type>enum</type>)</term>\n+ <listitem>\n+ <para>\n+ This conflict occurs when deleting a row that was\npreviously modified\n+ by another origin. Note that this conflict can only be detected when\n+ <link\nlinkend=\"guc-track-commit-timestamp\"><varname>track_commit_timestamp</varname></link>\n+ is enabled on the subscriber. 
Currently, the delete is\nalways applied\n+ regardless of the origin of the local row.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nSimilarly this change should be done in other places too.\n\n8) ConflictTypeResolver should be added to typedefs.list to resolve\nthe pgindent issues:\n8.a)\n+static void\n+parse_subscription_conflict_resolvers(List *stmtresolvers,\n+\n ConflictTypeResolver * resolvers)\n\n8.b) Similarly FormData_pg_subscription_conflict should also be added:\n} FormData_pg_subscription_conflict;\n\n/* ----------------\n * Form_pg_subscription_conflict corresponds to a pointer to a row with\n * the format of pg_subscription_conflict relation.\n * ----------------\n */\ntypedef FormData_pg_subscription_conflict * Form_pg_subscription_conflict;\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 3 Sep 2024 16:03:34 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Aug 29, 2024 at 2:50 PM shveta malik <[email protected]> wrote:\n\n> On Wed, Aug 28, 2024 at 4:07 PM shveta malik <[email protected]>\n> wrote:\n> >\n> > > On Wed, Aug 28, 2024 at 10:30 AM Ajin Cherian <[email protected]>\n> wrote:\n> > > >\n> >\n> > The review is WIP. Please find a few comments on patch001.\n> >\n>\n> More comments on ptach001 in continuation of previous comments:\n>\n>\nThank you for your feedback, Shveta. I've addressed both sets of comments\nyou provided. Additionally, I've revised the logic for how `pg_dump`\nretrieves conflict resolver information and implemented a smarter approach\nto avoid including default resolvers in the `CREATE SUBSCRIPTION` command.\nTo achieve this, I had to duplicate some of the conflict resolver\ndefinitions in the `pg_dump.h` header, as including `conflict.h` directly\nintroduced too many dependencies with other headers.\nThanks to Nisha for separating 004 patch into two - 004(last_update_wins)\nand 005(clock-skew).\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Fri, 6 Sep 2024 18:35:14 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Aug 29, 2024 at 4:43 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Aug 23, 2024 at 10:39 AM shveta malik <[email protected]> wrote:\n> >\n> > Please find issues which need some thoughts and approval for\n> > time-based resolution and clock-skew.\n> >\n> > 1)\n> > Time based conflict resolution and two phase transactions:\n> >\n> > Time based conflict resolution (last_update_wins) is the one\n> > resolution which will not result in data-divergence considering\n> > clock-skew is taken care of. But when it comes to two-phase\n> > transactions, it might not be the case. For two-phase transaction, we\n> > do not have commit timestamp when the changes are being applied. Thus\n> > for time-based comparison, initially it was decided to user prepare\n> > timestamp but it may result in data-divergence. Please see the\n> > example at [1].\n> >\n> > Example at [1] is a tricky situation, and thus in the initial draft,\n> > we decided to restrict usage of 2pc and CDR together. 
The plan is:\n> >\n> > a) During Create subscription, if the user has given last_update_wins\n> > resolver for any conflict_type and 'two_phase' is also enabled, we\n> > ERROR out.\n> > b) During Alter subscription, if the user tries to update resolver to\n> > 'last_update_wins' but 'two_phase' is enabled, we error out.\n> >\n> > Another solution could be to save both prepare_ts and commit_ts. And\n> > when any txn comes for conflict resolution, we first check if\n> > prepare_ts is available, use that else use commit_ts. Availability of\n> > prepare_ts would indicate it was a prepared txn and thus even if it is\n> > committed, we should use prepare_ts for comparison for consistency.\n> > This will have some overhead of storing prepare_ts along with\n> > commit_ts. But if the number of prepared txns are reasonably small,\n> > this overhead should be less.\n> >\n>\n> Yet another idea is that if the conflict is detected and the\n> resolution strategy is last_update_wins then from that point we start\n> writing all the changes to the file similar to what we do for\n> streaming mode and only once commit_prepared arrives, we will read and\n> apply changes. That will solve this problem.\n>\n> > We currently plan to go with restricting 2pc and last_update_wins\n> > together, unless others have different opinions.\n> >\n>\n> Sounds reasonable but we should add comments on the possible solution\n> like the one I have mentioned so that we can extend it afterwards.\n>\n\nDone, v12-004 patch has the comments for the possible solution.\n\n> > ~~\n> >\n> > 2)\n> > parallel apply worker and conflict-resolution:\n> > As discussed in [2] (see last paragraph in [2]), for streaming of\n> > in-progress transactions by parallel worker, we do not have\n> > commit-timestamp with each change and thus it makes sense to disable\n> > parallel apply worker with CDR. The plan is to not start parallel\n> > apply worker if 'last_update_wins' is configured for any\n> > conflict_type.\n> >\n>\n> The other idea is that we can let the changes written to file if any\n> conflict is detected and then at commit time let the remaining changes\n> be applied by apply worker. This can introduce some complexity, so\n> similar to two_pc we can extend this functionality later.\n>\n\nv12-004 patch has the comments to extend it later.\n\n> > ~~\n> >\n> > 3)\n> > parallel apply worker and clock skew management:\n> > Regarding clock-skew management as discussed in [3], we will wait for\n> > the local clock to come within tolerable range during 'begin' rather\n> > than before 'commit'. And this wait needs commit-timestamp in the\n> > beginning, thus we plan to restrict starting pa-worker even when\n> > clock-skew related GUCs are configured.\n> >\n> > Earlier we had restricted both 2pc and parallel worker worker start\n> > when detect_conflict was enabled, but now since detect_conflict\n> > parameter is removed, we will change the implementation to restrict\n> > all 3 above cases when last_update_wins is configured. 
When the\n> > changes are done, we will post the patch.\n> >\n>\n> At this stage, we are not sure how we want to deal with clock skew.\n> There is an argument that clock-skew should be handled outside the\n> database, so we can probably have the clock-skew-related stuff in a\n> separate patch.\n>\n\nSeparated the clock-skew related code in v12-005 patch.\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Fri, 6 Sep 2024 17:10:21 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Sep 6, 2024 at 2:05 PM Ajin Cherian <[email protected]> wrote:\n>\n>\n> Thank you for your feedback, Shveta. I've addressed both sets of comments you provided.\n\nThanks for the patches. I am reviewing v12-patch001, it is WIP. But\nplease find first set of comments:\n\n1)\nsrc/sgml/logical-replication.sgml:\n+ Users have the option to configure a conflict_resolver\n\nFull stop for previous line is missing.\n\n2)\n+ For more information on the conflict_types detected and the\nsupported conflict_resolvers, refer to section CONFLICT RESOLVERS.\n\nWe may change to :\n For more information on the supported conflict_types and\nconflict_resolvers, refer to section CONFLICT RESOLVERS.\n\n\n 3)\n src/backend/commands/subscriptioncmds.c:\nLine removed. This change is not needed.\n\n static void CheckAlterSubOption(Subscription *sub, const char *option,\n bool slot_needs_update, bool isTopLevel);\n-\n\n4)\n\nLet's stick to the same comments format as the rest of the file i.e.\nfirst letter in caps.\n\n+ /* first initialise the resolvers with default values */\n\nfirst --> First\ninitialise --> initialize\n\nSame for below comments:\n+ /* validate the conflict type and resolver */\n+ /* update the corresponding resolver for the given conflict type */\n\nPlease verify the rest of the file for the same.\n\n5)\nPlease add below in header of parse_subscription_conflict_resolvers\n(similar to parse_subscription_options):\n\n * This function will report an error if mutually exclusive options\nare specified.\n\n6)\n+ * Warn users if prerequisites are not met.\n+ * Initialize with default values.\n+ */\n+ if (stmt->resolvers)\n+ conf_detection_check_prerequisites();\n+\n\nWould it be better to move the above call inside\nparse_subscription_conflict_resolvers(), then we will have all\nresolver related stuff at one place?\nIrrespective of whether we move it or not, please remove 'Initialize\nwith default values.' from above as that is now not done here.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 9 Sep 2024 14:58:32 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Sep 6, 2024 at 2:05 PM Ajin Cherian <[email protected]> wrote:\n>\n>\n>\n> On Thu, Aug 29, 2024 at 2:50 PM shveta malik <[email protected]> wrote:\n>>\n>> On Wed, Aug 28, 2024 at 4:07 PM shveta malik <[email protected]> wrote:\n>> >\n>> > > On Wed, Aug 28, 2024 at 10:30 AM Ajin Cherian <[email protected]> wrote:\n>> > > >\n>> >\n>> > The review is WIP. Please find a few comments on patch001.\n>> >\n>>\n>> More comments on ptach001 in continuation of previous comments:\n>>\n>\n> Thank you for your feedback, Shveta. I've addressed both sets of comments you provided.\n\nThanks for the patches. 
I tested the v12-0001 patch, and here are my comments:\n\n1) An unexpected error occurs when attempting to alter the resolver\nfor multiple conflict_type(s) in ALTER SUB...CONFLICT RESOLVER\ncommand. See below examples :\n\npostgres=# alter subscription sub2 CONFLICT RESOLVER\n(update_exists=keep_local, delete_missing=error,\nupdate_origin_differs=error);\nERROR: unrecognized node type: 1633972341\n\npostgres=# alter subscription sub2 CONFLICT RESOLVER (\nupdate_origin_differs=error, update_exists=error);\nERROR: unrecognized node type: 1633972341\n\npostgres=# alter subscription sub2 CONFLICT RESOLVER (\ndelete_origin_differs=error, delete_missing=error);\nERROR: unrecognized node type: 1701602660\n\npostgres=# alter subscription sub2 CONFLICT RESOLVER\n(update_exists=keep_local, delete_missing=error);\nALTER SUBSCRIPTION\n\n-- It appears that the error occurs only when at least two conflict\ntypes belong to the same category, either UPDATE or DELETE.\n\n2) Given the above issue, it would be beneficial to add a test in\nsubscription.sql to cover cases where all valid conflict types are set\nwith appropriate resolvers in both the ALTER and CREATE commands.\n\nThanks,\nNisha\n\n\n", "msg_date": "Mon, 9 Sep 2024 15:15:22 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Sep 9, 2024 at 2:58 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Sep 6, 2024 at 2:05 PM Ajin Cherian <[email protected]> wrote:\n> >\n> >\n> > Thank you for your feedback, Shveta. I've addressed both sets of comments you provided.\n>\n> Thanks for the patches. I am reviewing v12-patch001, it is WIP. But\n> please find first set of comments:\n>\n\nIt will be good if we can use parse_subscription_conflict_resolvers()\nfrom both CREATE and ALTER flow instead of writing different functions\nfor both the flows. Please review once to see this feasibility.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 10 Sep 2024 10:21:03 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Tue, Sep 3, 2024 at 7:42 PM vignesh C <[email protected]> wrote:\n\n On Fri, 30 Aug 2024 at 11:01, Nisha Moond <[email protected]>\nwrote:\n >\n > Here is the v11 patch-set. Changes are:\n\n 1) This command crashes:\n ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR NULL;\n #0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n #1 0x000055c67270600a in ResetConflictResolver (subid=16404,\n conflict_type=0x0) at conflict.c:744\n #2 0x000055c67247e0c3 in AlterSubscription (pstate=0x55c6748ff9d0,\n stmt=0x55c67497dfe0, isTopLevel=true) at subscriptioncmds.c:1664\n\n + | ALTER SUBSCRIPTION name RESET CONFLICT\n RESOLVER FOR conflict_type\n + {\n + AlterSubscriptionStmt *n =\n +\nmakeNode(AlterSubscriptionStmt);\n +\n + n->kind =\n ALTER_SUBSCRIPTION_RESET_CONFLICT_RESOLVER;\n + n->subname = $3;\n + n->conflict_type = $8;\n + $$ = (Node *) n;\n + }\n + ;\n +conflict_type:\n + Sconst\n { $$ = $1; }\n + | NULL_P\n { $$ = NULL; }\n ;\n\n May be conflict_type should be changed to:\n +conflict_type:\n + Sconst\n { $$ = $1; }\n ;\n\n\nFixed.\n\n\n 2) Conflict resolver is not shown in describe command:\n postgres=# \\dRs+\n\n List of subscriptions\n Name | Owner | Enabled | Publication | Binary | Streaming |\n Two-phase commit | Disable on error | Origin | Password required | Run\n as owner? 
| Failover | Synchronous commit | Conninfo\n | Skip LSN\n\n------+---------+---------+-------------+--------+-----------+------------------+------------------+--------+-------------------+---------------+----------+--------------------+----------------------------------\n --------+----------\n sub1 | vignesh | t | {pub1} | f | off | d\n | f | any | t | f\n | f | off | dbname=postgres host=localhost po\n rt=5432 | 0/0\n sub2 | vignesh | t | {pub1} | f | off | d\n | f | any | t | f\n | f | off | dbname=postgres host=localhost po\n rt=5432 | 0/0\n (2 rows)\n\n\nFixed.\n\n\n 3) Tab completion is not handled to include Conflict resolver:\n postgres=# alter subscription sub1\n ADD PUBLICATION CONNECTION DISABLE DROP\n PUBLICATION ENABLE OWNER TO REFRESH\n PUBLICATION RENAME TO SET SKIP (\n\n\nFixed\n\nOn Tue, Sep 3, 2024 at 8:33 PM vignesh C <[email protected]> wrote:\n\n On Fri, 30 Aug 2024 at 11:01, Nisha Moond <[email protected]>\nwrote:\n >\n > Here is the v11 patch-set. Changes are:\n > 1) Updated conflict type names in accordance with the recent\ncommit[1] as -\n > update_differ --> update_origin_differs\n > delete_differ --> delete_origin_differs\n >\n > 2) patch-001:\n > - Implemented the RESET command to restore the default resolvers as\n > suggested in pt.2a & 2b in [2]\n\n Few comments on 0001 patch:\n 1) Currently create subscription has WITH option before conflict\n resolver, I felt WITH option can be after CONNECTION, PUBLICATION and\n CONFLICT RESOLVER option and WITH option at the end:\n CreateSubscriptionStmt:\n - CREATE SUBSCRIPTION name CONNECTION Sconst\n PUBLICATION name_list opt_definition\n + CREATE SUBSCRIPTION name CONNECTION Sconst PUBLICATION\n name_list opt_definition opt_resolver_definition\n {\n CreateSubscriptionStmt *n =\n\n makeNode(CreateSubscriptionStmt);\n @@ -10696,6 +10702,7 @@ CreateSubscriptionStmt:\n n->conninfo = $5;\n n->publication = $7;\n n->options = $8;\n + n->resolvers = $9;\n $$ = (Node *) n;\n\n\nChanged as suggested.\n\n\n 2) Case sensitive:\n 2.a) Should conflict type be case insensitive:\n CREATE SUBSCRIPTION sub3 CONNECTION 'dbname=postgres host=localhost\n port=5432' PUBLICATION pub1 with (copy_data= true) CONFLICT RESOLVER\n (\"INSERT_EXISTS\" = 'error');\n ERROR: INSERT_EXISTS is not a valid conflict type\n\n In few other places it is not case sensitive:\n create publication pub1 with ( PUBLISH= 'INSERT,UPDATE,delete');\n set log_min_MESSAGES TO warning ;\n\n 2.b) Similarly in case of conflict resolver too:\n CREATE SUBSCRIPTION sub3 CONNECTION 'dbname=postgres host=localhost\n port=5432' PUBLICATION pub1 with (copy_data= true) CONFLICT RESOLVER\n (\"insert_exists\" = 'erroR');\n ERROR: erroR is not a valid conflict resolver\n\n\nFixed.\n\n\n 3) Since there is only one key used to search, we can remove nkeys\n variable and directly specify as 1:\n +RemoveSubscriptionConflictBySubid(Oid subid)\n +{\n + Relation rel;\n + HeapTuple tup;\n + TableScanDesc scan;\n + ScanKeyData skey[1];\n + int nkeys = 0;\n +\n + rel = table_open(SubscriptionConflictId, RowExclusiveLock);\n +\n + /*\n + * Search using the subid, this should return all conflict\nresolvers for\n + * this sub\n + */\n + ScanKeyInit(&skey[nkeys++],\n + Anum_pg_subscription_conflict_confsubid,\n + BTEqualStrategyNumber,\n + F_OIDEQ,\n + ObjectIdGetDatum(subid));\n +\n + scan = table_beginscan_catalog(rel, nkeys, skey);\n\n\nFixed.\n\n\n 4) Currently we are including CONFLICT RESOLVER even if a subscription\n with default CONFLICT RESOLVER is created, we can add the CONFLICT\n 
RESOLVER option only for non-default subscription option:\n + /* add conflict resolvers, if any */\n + if (fout->remoteVersion >= 180000)\n + {\n + PQExpBuffer InQry = createPQExpBuffer();\n + PGresult *res;\n + int i_confrtype;\n + int i_confrres;\n +\n + /* get the conflict types and their resolvers from the\n catalog */\n + appendPQExpBuffer(InQry,\n + \"SELECT confrtype,\nconfrres \"\n + \"FROM\n pg_catalog.pg_subscription_conflict\"\n + \" WHERE confsubid =\n %u;\\n\", subinfo->dobj.catId.oid);\n + res = ExecuteSqlQuery(fout, InQry->data,\nPGRES_TUPLES_OK);\n +\n + i_confrtype = PQfnumber(res, \"confrtype\");\n + i_confrres = PQfnumber(res, \"confrres\");\n +\n + if (PQntuples(res) > 0)\n + {\n + appendPQExpBufferStr(query, \") CONFLICT\nRESOLVER (\");\n\n\nFixed.\n\n\n 5) Should remote_apply be apply_remote here as this is what is\n specified in code:\n + <varlistentry\n id=\"sql-createsubscription-params-with-conflict_resolver-remote-apply\">\n + <term><literal>remote_apply</literal>\n(<type>enum</type>)</term>\n + <listitem>\n + <para>\n\n 6) I think this should be \"It is the default resolver for\nupdate_origin_differs\"\n 6.a)\n + <varlistentry\n id=\"sql-createsubscription-params-with-conflict_resolver-remote-apply\">\n + <term><literal>remote_apply</literal>\n(<type>enum</type>)</term>\n + <listitem>\n + <para>\n + This resolver applies the remote change. It can be used for\n + <literal>insert_exists</literal>,\n<literal>update_exists</literal>,\n + <literal>update_differ</literal> and\n <literal>delete_differ</literal>.\n + It is the default resolver for\n<literal>insert_exists</literal> and\n + <literal>update_exists</literal>.\n + </para>\n + </listitem>\n + </varlistentry>\n\n 6.b)\n + <varlistentry\n id=\"sql-createsubscription-params-with-conflict_type-update-differ\">\n + <term><literal>update_differ</literal>\n(<type>enum</type>)</term>\n + <listitem>\n + <para>\n + This conflict occurs when updating a row that was previously\n\n 6.c)\n + <varlistentry\n id=\"sql-createsubscription-params-with-conflict_resolver-remote-apply\">\n + <term><literal>remote_apply</literal>\n(<type>enum</type>)</term>\n + <listitem>\n + <para>\n + This resolver applies the remote change. It can be used for\n + <literal>insert_exists</literal>,\n<literal>update_exists</literal>,\n + <literal>update_differ</literal> and\n <literal>delete_differ</literal>.\n + It is the default resolver for\n<literal>insert_exists</literal> and\n + <literal>update_exists</literal>.\n\n 6.d)\n + <varlistentry\n id=\"sql-createsubscription-params-with-conflict_resolver-remote-apply\">\n + <term><literal>remote_apply</literal>\n(<type>enum</type>)</term>\n + <listitem>\n + <para>\n + This resolver applies the remote change. 
It can be used for\n + <literal>insert_exists</literal>,\n<literal>update_exists</literal>,\n + <literal>update_differ</literal> and\n <literal>delete_differ</literal>.\n\n Similarly this change should be done in other places too.\n\n\nFixed.\n\n\n\n 7)\n 7.a) Should delete_differ be changed to delete_origin_differs as that\n is what is specified in the subscription commands:\n +check_conflict_detection(void)\n +{\n + if (!track_commit_timestamp)\n + ereport(WARNING,\n +\n errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n + errmsg(\"conflict detection and\n resolution could be incomplete due to disabled\n track_commit_timestamp\"),\n + errdetail(\"Conflicts update_differ and\n delete_differ cannot be detected, \"\n + \"and the origin and\n commit timestamp for the local row will not be logged.\"));\n\n 7.b) similarly here too:\n + <varlistentry\n id=\"sql-createsubscription-params-with-conflict_type-delete-differ\">\n + <term><literal>delete_differ</literal>\n(<type>enum</type>)</term>\n + <listitem>\n + <para>\n + This conflict occurs when deleting a row that was\n previously modified\n + by another origin. Note that this conflict can only be\ndetected when\n + <link\n\nlinkend=\"guc-track-commit-timestamp\"><varname>track_commit_timestamp</varname></link>\n + is enabled on the subscriber. Currently, the delete is\n always applied\n + regardless of the origin of the local row.\n + </para>\n + </listitem>\n + </varlistentry>\n\n Similarly this change should be done in other places too.\n\n\nFixed.\n\n\n\n 8) ConflictTypeResolver should be added to typedefs.list to resolve\n the pgindent issues:\n 8.a)\n +static void\n +parse_subscription_conflict_resolvers(List *stmtresolvers,\n +\n ConflictTypeResolver * resolvers)\n\n\nFixed.\n\n\n 8.b) Similarly FormData_pg_subscription_conflict should also be added:\n } FormData_pg_subscription_conflict;\n\n /* ----------------\n * Form_pg_subscription_conflict corresponds to a pointer to a row with\n * the format of pg_subscription_conflict relation.\n * ----------------\n */\n typedef FormData_pg_subscription_conflict *\nForm_pg_subscription_conflict;\n\n\nFixed.\n\nOn Mon, Sep 9, 2024 at 7:28 PM shveta malik <[email protected]> wrote:\n\n> On Fri, Sep 6, 2024 at 2:05 PM Ajin Cherian <[email protected]> wrote:\n> >\n> >\n> > Thank you for your feedback, Shveta. I've addressed both sets of\n> comments you provided.\n>\n> Thanks for the patches. I am reviewing v12-patch001, it is WIP. But\n> please find first set of comments:\n>\n> 1)\n> src/sgml/logical-replication.sgml:\n> + Users have the option to configure a conflict_resolver\n>\n> Full stop for previous line is missing.\n>\n\nFixed\n\n\n>\n> 2)\n> + For more information on the conflict_types detected and the\n> supported conflict_resolvers, refer to section CONFLICT RESOLVERS.\n>\n> We may change to :\n> For more information on the supported conflict_types and\n> conflict_resolvers, refer to section CONFLICT RESOLVERS.\n>\n>\n>\nFixed.\n\n\n> 3)\n> src/backend/commands/subscriptioncmds.c:\n> Line removed. 
This change is not needed.\n>\n> static void CheckAlterSubOption(Subscription *sub, const char *option,\n> bool slot_needs_update, bool isTopLevel);\n> -\n>\n>\nFixed.\n\n\n> 4)\n>\n> Let's stick to the same comments format as the rest of the file i.e.\n> first letter in caps.\n>\n> + /* first initialise the resolvers with default values */\n>\n> first --> First\n> initialise --> initialize\n>\n> Same for below comments:\n> + /* validate the conflict type and resolver */\n> + /* update the corresponding resolver for the given conflict type */\n>\n> Please verify the rest of the file for the same.\n>\n>\nFixed.\n\n\n> 5)\n> Please add below in header of parse_subscription_conflict_resolvers\n> (similar to parse_subscription_options):\n>\n> * This function will report an error if mutually exclusive options\n> are specified.\n>\n\nFixed.\n\n\n>\n> 6)\n> + * Warn users if prerequisites are not met.\n> + * Initialize with default values.\n> + */\n> + if (stmt->resolvers)\n> + conf_detection_check_prerequisites();\n> +\n>\n> Would it be better to move the above call inside\n> parse_subscription_conflict_resolvers(), then we will have all\n> resolver related stuff at one place?\n> Irrespective of whether we move it or not, please remove 'Initialize\n> with default values.' from above as that is now not done here.\n>\n>\nFixed.\n\nOn Mon, Sep 9, 2024 at 7:45 PM Nisha Moond <[email protected]> wrote:\n\n>\n> Thanks for the patches. I tested the v12-0001 patch, and here are my\n> comments:\n>\n> 1) An unexpected error occurs when attempting to alter the resolver\n> for multiple conflict_type(s) in ALTER SUB...CONFLICT RESOLVER\n> command. See below examples :\n>\n> postgres=# alter subscription sub2 CONFLICT RESOLVER\n> (update_exists=keep_local, delete_missing=error,\n> update_origin_differs=error);\n> ERROR: unrecognized node type: 1633972341\n>\n> postgres=# alter subscription sub2 CONFLICT RESOLVER (\n> update_origin_differs=error, update_exists=error);\n> ERROR: unrecognized node type: 1633972341\n>\n> postgres=# alter subscription sub2 CONFLICT RESOLVER (\n> delete_origin_differs=error, delete_missing=error);\n> ERROR: unrecognized node type: 1701602660\n>\n> postgres=# alter subscription sub2 CONFLICT RESOLVER\n> (update_exists=keep_local, delete_missing=error);\n> ALTER SUBSCRIPTION\n>\n> -- It appears that the error occurs only when at least two conflict\n> types belong to the same category, either UPDATE or DELETE.\n>\n>\nFixed this\n\n\n> 2) Given the above issue, it would be beneficial to add a test in\n> subscription.sql to cover cases where all valid conflict types are set\n> with appropriate resolvers in both the ALTER and CREATE commands.\n>\n>\nI've added a few more cases but I feel adding too many tests into \"make\ncheck\" will make it too long.\nI plan to write an alternate script to test this.\n\nNote: As part of this patch the syntax has been changed, now the CONFLICT\nRESOLVER comes before the WITH options as suggested by Vignesh.\n\n\nCREATE SUBSCRIPTION *subscription_name*\n CONNECTION '*conninfo*'\n PUBLICATION *publication_name* [, ...]\n [ CONFLICT RESOLVER ( *conflict_type* [= *conflict_resolver*] [, ...] ) ]\n [ WITH ( *subscription_parameter* [= *value*] [, ... 
] ) ]\n\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Thu, 12 Sep 2024 18:33:11 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, 12 Sept 2024 at 14:03, Ajin Cherian <[email protected]> wrote:\n>\n> On Tue, Sep 3, 2024 at 7:42 PM vignesh C <[email protected]> wrote:\n>\n> On Fri, 30 Aug 2024 at 11:01, Nisha Moond <[email protected]> wrote:\n> >\n> > Here is the v11 patch-set. Changes are:\n>\n> 1) This command crashes:\n> ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR NULL;\n> #0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n> #1 0x000055c67270600a in ResetConflictResolver (subid=16404,\n> conflict_type=0x0) at conflict.c:744\n> #2 0x000055c67247e0c3 in AlterSubscription (pstate=0x55c6748ff9d0,\n> stmt=0x55c67497dfe0, isTopLevel=true) at subscriptioncmds.c:1664\n>\n> + | ALTER SUBSCRIPTION name RESET CONFLICT\n> RESOLVER FOR conflict_type\n> + {\n> + AlterSubscriptionStmt *n =\n> + makeNode(AlterSubscriptionStmt);\n> +\n> + n->kind =\n> ALTER_SUBSCRIPTION_RESET_CONFLICT_RESOLVER;\n> + n->subname = $3;\n> + n->conflict_type = $8;\n> + $$ = (Node *) n;\n> + }\n> + ;\n> +conflict_type:\n> + Sconst\n> { $$ = $1; }\n> + | NULL_P\n> { $$ = NULL; }\n> ;\n>\n> May be conflict_type should be changed to:\n> +conflict_type:\n> + Sconst\n> { $$ = $1; }\n> ;\n>\n>\n> Fixed.\n\nFew comments:\n1) Tab completion missing for:\na) ALTER SUBSCRIPTION name CONFLICT RESOLVER\nb) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER ALL\nc) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR\n\n2) Documentation missing for:\na) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER ALL\nb) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR\n\n3) This reset is not required here, if valid was false it would have\nthrown an error and exited:\na)\n+ if (!valid)\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"%s is not a valid conflict\ntype\", conflict_type));\n+\n+ /* Reset */\n+ valid = false;\n\nb)\nSimilarly here too:\n+ if (!valid)\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"%s is not a valid conflict\nresolver\", conflict_resolver));\n+\n+ /* Reset */\n+ valid = false;\n\n4) How about adding CT_MAX inside the enum itself as the last enum value:\ntypedef enum\n{\n/* The row to be inserted violates unique constraint */\nCT_INSERT_EXISTS,\n\n/* The row to be updated was modified by a different origin */\nCT_UPDATE_ORIGIN_DIFFERS,\n\n/* The updated row value violates unique constraint */\nCT_UPDATE_EXISTS,\n\n/* The row to be updated is missing */\nCT_UPDATE_MISSING,\n\n/* The row to be deleted was modified by a different origin */\nCT_DELETE_ORIGIN_DIFFERS,\n\n/* The row to be deleted is missing */\nCT_DELETE_MISSING,\n\n/*\n* Other conflicts, such as exclusion constraint violations, involve more\n* complex rules than simple equality checks. These conflicts are left for\n* future improvements.\n*/\n} ConflictType;\n\n#define CONFLICT_NUM_TYPES (CT_DELETE_MISSING + 1)\n\n/* Min and max conflict type */\n#define CT_MIN CT_INSERT_EXISTS\n#define CT_MAX CT_DELETE_MISSING\n\nand the for loop can be changed to:\nfor (type = 0; type < CT_MAX; type++)\n\nThis way CT_MIN can be removed and CT_MAX need not be changed every\ntime a new enum is added.\n\nAlso the following +1 can be removed from the variables:\nConflictTypeResolver conflictResolvers[CT_MAX + 1];\n\n5) Similar thing can be done with ConflictResolver enum too. 
i.e\nremove CR_MIN and add CR_MAX as the last element of enum\ntypedef enum ConflictResolver\n{\n/* Apply the remote change */\nCR_APPLY_REMOTE = 1,\n\n/* Keep the local change */\nCR_KEEP_LOCAL,\n\n/* Apply the remote change; skip if it can not be applied */\nCR_APPLY_OR_SKIP,\n\n/* Apply the remote change; emit error if it can not be applied */\nCR_APPLY_OR_ERROR,\n\n/* Skip applying the change */\nCR_SKIP,\n\n/* Error out */\nCR_ERROR,\n} ConflictResolver;\n\n/* Min and max conflict resolver */\n#define CR_MIN CR_APPLY_REMOTE\n#define CR_MAX CR_ERROR\n\n6) Except scansup.h inclusion, other inclusions added are not required\nin subscriptioncmds.c file.\n\n7)The inclusions \"access/heaptoast.h\", \"access/table.h\",\n\"access/tableam.h\", \"catalog/dependency.h\",\n\"catalog/pg_subscription.h\", \"catalog/pg_subscription_conflict.h\" and\n\"catalog/pg_inherits.h\" are not required in conflict.c file.\n\n8) Can we change this to use the new foreach_ptr implementations added:\n+ foreach(lc, stmtresolvers)\n+ {\n+ DefElem *defel = (DefElem *) lfirst(lc);\n+ ConflictType type;\n+ char *resolver;\n\nto use foreach_ptr like:\nforeach_ptr(DefElem, defel, stmtresolvers)\n{\n+ ConflictType type;\n+ char *resolver;\n....\n}\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 13 Sep 2024 17:50:21 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, 12 Sept 2024 at 14:03, Ajin Cherian <[email protected]> wrote:\n>\n> On Tue, Sep 3, 2024 at 7:42 PM vignesh C <[email protected]> wrote:\n>\n> On Fri, 30 Aug 2024 at 11:01, Nisha Moond <[email protected]> wrote:\n> >\n> > Here is the v11 patch-set. Changes are:\n>\n> 1) This command crashes:\n> ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR NULL;\n> #0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n> #1 0x000055c67270600a in ResetConflictResolver (subid=16404,\n> conflict_type=0x0) at conflict.c:744\n> #2 0x000055c67247e0c3 in AlterSubscription (pstate=0x55c6748ff9d0,\n> stmt=0x55c67497dfe0, isTopLevel=true) at subscriptioncmds.c:1664\n>\n> + | ALTER SUBSCRIPTION name RESET CONFLICT\n> RESOLVER FOR conflict_type\n> + {\n> + AlterSubscriptionStmt *n =\n> + makeNode(AlterSubscriptionStmt);\n> +\n> + n->kind =\n> ALTER_SUBSCRIPTION_RESET_CONFLICT_RESOLVER;\n> + n->subname = $3;\n> + n->conflict_type = $8;\n> + $$ = (Node *) n;\n> + }\n> + ;\n> +conflict_type:\n> + Sconst\n> { $$ = $1; }\n> + | NULL_P\n> { $$ = NULL; }\n> ;\n>\n> May be conflict_type should be changed to:\n> +conflict_type:\n> + Sconst\n> { $$ = $1; }\n> ;\n>\n>\n> Fixed.\n>\n\nFew comments:\n1) This should be in (fout->remoteVersion >= 180000) check to support\ndumping backward compatible server objects, else dump with older\nversion will fail:\n+ /* Populate conflict type fields using the new query */\n+ confQuery = createPQExpBuffer();\n+ appendPQExpBuffer(confQuery,\n+ \"SELECT confrtype,\nconfrres FROM pg_catalog.pg_subscription_conflict \"\n+ \"WHERE confsubid =\n%u;\", subinfo[i].dobj.catId.oid);\n+ confRes = ExecuteSqlQuery(fout, confQuery->data,\nPGRES_TUPLES_OK);\n+\n+ ntuples = PQntuples(confRes);\n+ for (j = 0; j < ntuples; j++)\n\n2) Can we check and throw an error before the warning is logged in\nthis case as it seems strange to throw a warning first and then an\nerror for the same track_commit_timestamp configuration:\npostgres=# create subscription sub1 connection ... 
publication pub1\nconflict resolver (insert_exists = 'last_update_wins');\nWARNING: conflict detection and resolution could be incomplete due to\ndisabled track_commit_timestamp\nDETAIL: Conflicts update_origin_differs and delete_origin_differs\ncannot be detected, and the origin and commit timestamp for the local\nrow will not be logged.\nERROR: resolver last_update_wins requires \"track_commit_timestamp\" to\nbe enabled\nHINT: Make sure the configuration parameter \"track_commit_timestamp\" is set.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 18 Sep 2024 10:45:56 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, 12 Sept 2024 at 14:03, Ajin Cherian <[email protected]> wrote:\n>\n> On Tue, Sep 3, 2024 at 7:42 PM vignesh C <[email protected]> wrote:\n>\n> On Fri, 30 Aug 2024 at 11:01, Nisha Moond <[email protected]> wrote:\n> >\n> > Here is the v11 patch-set. Changes are:\n>\n\nI was reviewing the CONFLICT RESOLVER (insert_exists='apply_remote')\nand found that one conflict remains unresolved in the following\nscenario:\nPub:\nCREATE TABLE circles(c1 CIRCLE, c2 text, EXCLUDE USING gist (c1 WITH &&));\nCREATE PUBLICATION pub1 for table circles;\n\nSub:\nCREATE TABLE circles(c1 CIRCLE, c2 text, EXCLUDE USING gist (c1 WITH &&))\ninsert into circles values('<(0,0), 5>', 'sub');\nCREATE SUBSCRIPTION ... PUBLICATION pub1 CONFLICT RESOLVER\n(insert_exists='apply_remote');\n\nThe following conflict is not detected and resolved with remote tuple data:\nPub:\nINSERT INTO circles VALUES('<(0,0), 5>', 'pub');\n\n 2024-09-19 17:32:36.637 IST [31463] 31463 LOG: conflict detected on\nrelation \"public.t1\": conflict=insert_exists, Resolution=apply_remote.\n 2024-09-19 17:32:36.637 IST [31463] 31463 DETAIL: Key already\nexists in unique index \"t1_pkey\", modified in transaction 742,\napplying the remote changes.\nKey (c1)=(1); existing local tuple (1, sub); remote tuple (1, pub).\n 2024-09-19 17:32:36.637 IST [31463] 31463 CONTEXT: processing\nremote data for replication origin \"pg_16398\" during message type\n\"INSERT\" for replication target relation \"public.t1\" in transaction\n744, finished at 0/1528E88\n........\n 2024-09-19 17:32:44.653 IST [31463] 31463 ERROR: conflicting key\nvalue violates exclusion constraint \"circles_c1_excl\"\n 2024-09-19 17:32:44.653 IST [31463] 31463 DETAIL: Key\n(c1)=(<(0,0),5>) conflicts with existing key (c1)=(<(0,0),5>).\n........\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 19 Sep 2024 17:43:09 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "Hello!\n\nSorry for being noisy, just for the case, want to notice that [1] needs to\nbe addressed before any real usage of conflict resolution.\n\n[1]:\nhttps://www.postgresql.org/message-id/flat/OS0PR01MB5716E30952F542E256DD72E294802%40OS0PR01MB5716.jpnprd01.prod.outlook.com#8aa2083efa76e6a65f51b8a7fd579a23\n\nHello!Sorry for being noisy, just for the case, want to notice that [1] needs to be addressed before any real usage of conflict resolution.[1]: https://www.postgresql.org/message-id/flat/OS0PR01MB5716E30952F542E256DD72E294802%40OS0PR01MB5716.jpnprd01.prod.outlook.com#8aa2083efa76e6a65f51b8a7fd579a23", "msg_date": "Thu, 19 Sep 2024 17:31:44 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Wed, Sep 
18, 2024 at 10:46 AM vignesh C <[email protected]> wrote:\n>\n> On Thu, 12 Sept 2024 at 14:03, Ajin Cherian <[email protected]> wrote:\n> >\n> > On Tue, Sep 3, 2024 at 7:42 PM vignesh C <[email protected]> wrote:\n> >\n> > On Fri, 30 Aug 2024 at 11:01, Nisha Moond <[email protected]> wrote:\n> > >\n> > > Here is the v11 patch-set. Changes are:\n> >\n> > 1) This command crashes:\n> > ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR NULL;\n> > #0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n> > #1 0x000055c67270600a in ResetConflictResolver (subid=16404,\n> > conflict_type=0x0) at conflict.c:744\n> > #2 0x000055c67247e0c3 in AlterSubscription (pstate=0x55c6748ff9d0,\n> > stmt=0x55c67497dfe0, isTopLevel=true) at subscriptioncmds.c:1664\n> >\n> > + | ALTER SUBSCRIPTION name RESET CONFLICT\n> > RESOLVER FOR conflict_type\n> > + {\n> > + AlterSubscriptionStmt *n =\n> > + makeNode(AlterSubscriptionStmt);\n> > +\n> > + n->kind =\n> > ALTER_SUBSCRIPTION_RESET_CONFLICT_RESOLVER;\n> > + n->subname = $3;\n> > + n->conflict_type = $8;\n> > + $$ = (Node *) n;\n> > + }\n> > + ;\n> > +conflict_type:\n> > + Sconst\n> > { $$ = $1; }\n> > + | NULL_P\n> > { $$ = NULL; }\n> > ;\n> >\n> > May be conflict_type should be changed to:\n> > +conflict_type:\n> > + Sconst\n> > { $$ = $1; }\n> > ;\n> >\n> >\n> > Fixed.\n> >\n>\n> Few comments:\n> 1) This should be in (fout->remoteVersion >= 180000) check to support\n> dumping backward compatible server objects, else dump with older\n> version will fail:\n> + /* Populate conflict type fields using the new query */\n> + confQuery = createPQExpBuffer();\n> + appendPQExpBuffer(confQuery,\n> + \"SELECT confrtype,\n> confrres FROM pg_catalog.pg_subscription_conflict \"\n> + \"WHERE confsubid =\n> %u;\", subinfo[i].dobj.catId.oid);\n> + confRes = ExecuteSqlQuery(fout, confQuery->data,\n> PGRES_TUPLES_OK);\n> +\n> + ntuples = PQntuples(confRes);\n> + for (j = 0; j < ntuples; j++)\n>\n> 2) Can we check and throw an error before the warning is logged in\n> this case as it seems strange to throw a warning first and then an\n> error for the same track_commit_timestamp configuration:\n> postgres=# create subscription sub1 connection ... 
publication pub1\n> conflict resolver (insert_exists = 'last_update_wins');\n> WARNING: conflict detection and resolution could be incomplete due to\n> disabled track_commit_timestamp\n> DETAIL: Conflicts update_origin_differs and delete_origin_differs\n> cannot be detected, and the origin and commit timestamp for the local\n> row will not be logged.\n> ERROR: resolver last_update_wins requires \"track_commit_timestamp\" to\n> be enabled\n> HINT: Make sure the configuration parameter \"track_commit_timestamp\" is set.\n>\n\nThanks for the review.\nHere is the v14 patch-set fixing review comments in [1] and [2].\n\nNew in patches:\n1) Added partition table tests in 034_conflict_resolver.pl in 002 and\n003 patches.\n2) 003 has a bug fix for update_exists conflict resolution on\npartitioned tables.\n\n[1]: https://www.postgresql.org/message-id/CALDaNm3es1JqU8Qcv5Yw%3D7Ts2dOvaV8a_boxPSdofB%2BDTx1oFg%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CALDaNm18HuAcNsEC47J6qLRC7rMD2Q9_wT_hFtcc4UWqsfkgjA%40mail.gmail.com\n\nThanks,\nNisha", "msg_date": "Fri, 20 Sep 2024 08:40:39 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Sep 19, 2024 at 5:43 PM vignesh C <[email protected]> wrote:\n>\n> >\n>\n> I was reviewing the CONFLICT RESOLVER (insert_exists='apply_remote')\n> and found that one conflict remains unresolved in the following\n> scenario:\n\nThanks for the review and testing.\n\n> Pub:\n> CREATE TABLE circles(c1 CIRCLE, c2 text, EXCLUDE USING gist (c1 WITH &&));\n> CREATE PUBLICATION pub1 for table circles;\n>\n> Sub:\n> CREATE TABLE circles(c1 CIRCLE, c2 text, EXCLUDE USING gist (c1 WITH &&))\n> insert into circles values('<(0,0), 5>', 'sub');\n> CREATE SUBSCRIPTION ... PUBLICATION pub1 CONFLICT RESOLVER\n> (insert_exists='apply_remote');\n>\n> The following conflict is not detected and resolved with remote tuple data:\n> Pub:\n> INSERT INTO circles VALUES('<(0,0), 5>', 'pub');\n>\n> 2024-09-19 17:32:36.637 IST [31463] 31463 LOG: conflict detected on\n> relation \"public.t1\": conflict=insert_exists, Resolution=apply_remote.\n> 2024-09-19 17:32:36.637 IST [31463] 31463 DETAIL: Key already\n> exists in unique index \"t1_pkey\", modified in transaction 742,\n> applying the remote changes.\n> Key (c1)=(1); existing local tuple (1, sub); remote tuple (1, pub).\n> 2024-09-19 17:32:36.637 IST [31463] 31463 CONTEXT: processing\n> remote data for replication origin \"pg_16398\" during message type\n> \"INSERT\" for replication target relation \"public.t1\" in transaction\n> 744, finished at 0/1528E88\n> ........\n> 2024-09-19 17:32:44.653 IST [31463] 31463 ERROR: conflicting key\n> value violates exclusion constraint \"circles_c1_excl\"\n> 2024-09-19 17:32:44.653 IST [31463] 31463 DETAIL: Key\n> (c1)=(<(0,0),5>) conflicts with existing key (c1)=(<(0,0),5>).\n> ........\n\nWe don't support conflict detection for exclusion constraints yet.\nPlease see the similar issue raised in the conflict-detection thread\nand the responses at [1] and [2]. 
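To make the scope concrete (a hypothetical sketch -- the table and
subscription names are made up and the conninfo is elided, this is not
taken from the patch's tests): detection and resolution only kick in for
unique index / primary key violations, e.g.

CREATE TABLE t1 (c1 int PRIMARY KEY, c2 text);
CREATE SUBSCRIPTION sub1 CONNECTION '...' PUBLICATION pub1
    CONFLICT RESOLVER (insert_exists = 'apply_remote');

Here a duplicate key on t1_pkey is reported as insert_exists and resolved,
while the exclusion constraint on the circles table above is never
considered at all.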
Also see the docs at [3].\n\n[1]: https://www.postgresql.org/message-id/TYAPR01MB569224262F44875973FAF344F5B22%40TYAPR01MB5692.jpnprd01.prod.outlook.com\n[2]: https://www.postgresql.org/message-id/CAA4eK1KwqAUGDV3trUZf4hkrUYO3yzwjmBqYtoyFAPMFXpHy3g%40mail.gmail.com\n[3]: https://www.postgresql.org/docs/devel/logical-replication-conflicts.html\n<See this in doc: Note that there are other conflict scenarios, such\nas exclusion constraint violations. Currently, we do not provide\nadditional details for them in the log.>\n\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 20 Sep 2024 08:46:46 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Sep 20, 2024 at 8:40 AM Nisha Moond <[email protected]> wrote:\n>\n> On Wed, Sep 18, 2024 at 10:46 AM vignesh C <[email protected]> wrote:\n> >\n> > On Thu, 12 Sept 2024 at 14:03, Ajin Cherian <[email protected]> wrote:\n> > >\n> > > On Tue, Sep 3, 2024 at 7:42 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Fri, 30 Aug 2024 at 11:01, Nisha Moond <[email protected]> wrote:\n> > > >\n> > > > Here is the v11 patch-set. Changes are:\n> > >\n> > > 1) This command crashes:\n> > > ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR NULL;\n> > > #0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n> > > #1 0x000055c67270600a in ResetConflictResolver (subid=16404,\n> > > conflict_type=0x0) at conflict.c:744\n> > > #2 0x000055c67247e0c3 in AlterSubscription (pstate=0x55c6748ff9d0,\n> > > stmt=0x55c67497dfe0, isTopLevel=true) at subscriptioncmds.c:1664\n> > >\n> > > + | ALTER SUBSCRIPTION name RESET CONFLICT\n> > > RESOLVER FOR conflict_type\n> > > + {\n> > > + AlterSubscriptionStmt *n =\n> > > + makeNode(AlterSubscriptionStmt);\n> > > +\n> > > + n->kind =\n> > > ALTER_SUBSCRIPTION_RESET_CONFLICT_RESOLVER;\n> > > + n->subname = $3;\n> > > + n->conflict_type = $8;\n> > > + $$ = (Node *) n;\n> > > + }\n> > > + ;\n> > > +conflict_type:\n> > > + Sconst\n> > > { $$ = $1; }\n> > > + | NULL_P\n> > > { $$ = NULL; }\n> > > ;\n> > >\n> > > May be conflict_type should be changed to:\n> > > +conflict_type:\n> > > + Sconst\n> > > { $$ = $1; }\n> > > ;\n> > >\n> > >\n> > > Fixed.\n> > >\n> >\n> > Few comments:\n> > 1) This should be in (fout->remoteVersion >= 180000) check to support\n> > dumping backward compatible server objects, else dump with older\n> > version will fail:\n> > + /* Populate conflict type fields using the new query */\n> > + confQuery = createPQExpBuffer();\n> > + appendPQExpBuffer(confQuery,\n> > + \"SELECT confrtype,\n> > confrres FROM pg_catalog.pg_subscription_conflict \"\n> > + \"WHERE confsubid =\n> > %u;\", subinfo[i].dobj.catId.oid);\n> > + confRes = ExecuteSqlQuery(fout, confQuery->data,\n> > PGRES_TUPLES_OK);\n> > +\n> > + ntuples = PQntuples(confRes);\n> > + for (j = 0; j < ntuples; j++)\n> >\n> > 2) Can we check and throw an error before the warning is logged in\n> > this case as it seems strange to throw a warning first and then an\n> > error for the same track_commit_timestamp configuration:\n> > postgres=# create subscription sub1 connection ... 
publication pub1\n> > conflict resolver (insert_exists = 'last_update_wins');\n> > WARNING: conflict detection and resolution could be incomplete due to\n> > disabled track_commit_timestamp\n> > DETAIL: Conflicts update_origin_differs and delete_origin_differs\n> > cannot be detected, and the origin and commit timestamp for the local\n> > row will not be logged.\n> > ERROR: resolver last_update_wins requires \"track_commit_timestamp\" to\n> > be enabled\n> > HINT: Make sure the configuration parameter \"track_commit_timestamp\" is set.\n> >\n>\n> Thanks for the review.\n> Here is the v14 patch-set fixing review comments in [1] and [2].\n\nClarification:\nThe fixes for mentioned comments from Vignesh - [1] & [2] are fixed in\npatch-001. Thank you Ajin for providing the changes.\n\n> New in patches:\n> 1) Added partition table tests in 034_conflict_resolver.pl in 002 and\n> 003 patches.\n> 2) 003 has a bug fix for update_exists conflict resolution on\n> partitioned tables.\n>\n> [1]: https://www.postgresql.org/message-id/CALDaNm3es1JqU8Qcv5Yw%3D7Ts2dOvaV8a_boxPSdofB%2BDTx1oFg%40mail.gmail.com\n> [2]: https://www.postgresql.org/message-id/CALDaNm18HuAcNsEC47J6qLRC7rMD2Q9_wT_hFtcc4UWqsfkgjA%40mail.gmail.com\n>\n> Thanks,\n> Nisha\n\n\n", "msg_date": "Fri, 20 Sep 2024 09:33:56 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Sep 13, 2024 at 10:20 PM vignesh C <[email protected]> wrote:\n\n>\n> Few comments:\n> 1) Tab completion missing for:\n> a) ALTER SUBSCRIPTION name CONFLICT RESOLVER\n> b) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER ALL\n> c) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR\n>\n>\nAdded.\n\n\n> 2) Documentation missing for:\n> a) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER ALL\n> b) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR\n>\n>\nAdded.\n\n\n> 3) This reset is not required here, if valid was false it would have\n> thrown an error and exited:\n> a)\n> + if (!valid)\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"%s is not a valid conflict\n> type\", conflict_type));\n> +\n> + /* Reset */\n> + valid = false;\n>\n> b)\n> Similarly here too:\n> + if (!valid)\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"%s is not a valid conflict\n> resolver\", conflict_resolver));\n> +\n> + /* Reset */\n> + valid = false;\n>\n>\nActually, the reset is for when valid becomes true. I think it it is\nrequired here.\n\n\n> 4) How about adding CT_MAX inside the enum itself as the last enum value:\n> typedef enum\n> {\n> /* The row to be inserted violates unique constraint */\n> CT_INSERT_EXISTS,\n>\n> /* The row to be updated was modified by a different origin */\n> CT_UPDATE_ORIGIN_DIFFERS,\n>\n> /* The updated row value violates unique constraint */\n> CT_UPDATE_EXISTS,\n>\n> /* The row to be updated is missing */\n> CT_UPDATE_MISSING,\n>\n> /* The row to be deleted was modified by a different origin */\n> CT_DELETE_ORIGIN_DIFFERS,\n>\n> /* The row to be deleted is missing */\n> CT_DELETE_MISSING,\n>\n> /*\n> * Other conflicts, such as exclusion constraint violations, involve more\n> * complex rules than simple equality checks. 
These conflicts are left for\n> * future improvements.\n> */\n> } ConflictType;\n>\n> #define CONFLICT_NUM_TYPES (CT_DELETE_MISSING + 1)\n>\n> /* Min and max conflict type */\n> #define CT_MIN CT_INSERT_EXISTS\n> #define CT_MAX CT_DELETE_MISSING\n>\n> and the for loop can be changed to:\n> for (type = 0; type < CT_MAX; type++)\n>\n> This way CT_MIN can be removed and CT_MAX need not be changed every\n> time a new enum is added.\n>\n> Also the following +1 can be removed from the variables:\n> ConflictTypeResolver conflictResolvers[CT_MAX + 1];\n>\n>\nI tried changing this, but the enums are used in swicth cases and this\nthrows a compiler warning that CT_MAX is not checked in the switch case.\nHowever, I have changed the use of (CT_MAX +1) and instead used\nCONFLICT_NUM_TYPES in those places.\n\n5) Similar thing can be done with ConflictResolver enum too. i.e\n> remove CR_MIN and add CR_MAX as the last element of enum\n> typedef enum ConflictResolver\n> {\n> /* Apply the remote change */\n> CR_APPLY_REMOTE = 1,\n>\n> /* Keep the local change */\n> CR_KEEP_LOCAL,\n>\n> /* Apply the remote change; skip if it can not be applied */\n> CR_APPLY_OR_SKIP,\n>\n> /* Apply the remote change; emit error if it can not be applied */\n> CR_APPLY_OR_ERROR,\n>\n> /* Skip applying the change */\n> CR_SKIP,\n>\n> /* Error out */\n> CR_ERROR,\n> } ConflictResolver;\n>\n> /* Min and max conflict resolver */\n> #define CR_MIN CR_APPLY_REMOTE\n> #define CR_MAX CR_ERROR\n>\n>\n same as previous comment.\n\n\n> 6) Except scansup.h inclusion, other inclusions added are not required\n> in subscriptioncmds.c file.\n>\n> 7)The inclusions \"access/heaptoast.h\", \"access/table.h\",\n> \"access/tableam.h\", \"catalog/dependency.h\",\n> \"catalog/pg_subscription.h\", \"catalog/pg_subscription_conflict.h\" and\n> \"catalog/pg_inherits.h\" are not required in conflict.c file.\n>\n>\nRemoved.\n\n\n> 8) Can we change this to use the new foreach_ptr implementations added:\n> + foreach(lc, stmtresolvers)\n> + {\n> + DefElem *defel = (DefElem *) lfirst(lc);\n> + ConflictType type;\n> + char *resolver;\n>\n> to use foreach_ptr like:\n> foreach_ptr(DefElem, defel, stmtresolvers)\n> {\n> + ConflictType type;\n> + char *resolver;\n> ....\n> }\n>\n\nChanged accordingly.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Fri, Sep 13, 2024 at 10:20 PM vignesh C <[email protected]> wrote:\nFew comments:\n1) Tab completion missing for:\na) ALTER SUBSCRIPTION name CONFLICT RESOLVER\nb) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER ALL\nc) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR\nAdded. \n2) Documentation missing for:\na) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER ALL\nb) ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR\nAdded. \n3) This reset is not required here, if valid was false it would have\nthrown an error and exited:\na)\n+       if (!valid)\n+               ereport(ERROR,\n+                               errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+                               errmsg(\"%s is not a valid conflict\ntype\", conflict_type));\n+\n+       /* Reset */\n+       valid = false;\n\nb)\nSimilarly here too:\n+       if (!valid)\n+               ereport(ERROR,\n+                               errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+                               errmsg(\"%s is not a valid conflict\nresolver\", conflict_resolver));\n+\n+       /* Reset */\n+       valid = false;\nActually, the reset is for when valid becomes true. I think it it is required here. 
\n4) How about adding CT_MAX inside the enum itself as the last enum value:\ntypedef enum\n{\n/* The row to be inserted violates unique constraint */\nCT_INSERT_EXISTS,\n\n/* The row to be updated was modified by a different origin */\nCT_UPDATE_ORIGIN_DIFFERS,\n\n/* The updated row value violates unique constraint */\nCT_UPDATE_EXISTS,\n\n/* The row to be updated is missing */\nCT_UPDATE_MISSING,\n\n/* The row to be deleted was modified by a different origin */\nCT_DELETE_ORIGIN_DIFFERS,\n\n/* The row to be deleted is missing */\nCT_DELETE_MISSING,\n\n/*\n* Other conflicts, such as exclusion constraint violations, involve more\n* complex rules than simple equality checks. These conflicts are left for\n* future improvements.\n*/\n} ConflictType;\n\n#define CONFLICT_NUM_TYPES (CT_DELETE_MISSING + 1)\n\n/* Min and max conflict type */\n#define CT_MIN CT_INSERT_EXISTS\n#define CT_MAX CT_DELETE_MISSING\n\nand the for loop can be changed to:\nfor (type = 0; type < CT_MAX; type++)\n\nThis way CT_MIN can be removed and CT_MAX need not be changed every\ntime a new enum is added.\n\nAlso the following +1 can be removed from the variables:\nConflictTypeResolver conflictResolvers[CT_MAX + 1];\nI tried changing this, but the enums are used in swicth cases and this throws a compiler warning that CT_MAX is not checked in the switch case. However, I have changed the use of (CT_MAX +1)  and instead used \nCONFLICT_NUM_TYPES in those places.\n5) Similar thing can be done with ConflictResolver enum too. i.e\nremove  CR_MIN and add CR_MAX as the last element of enum\ntypedef enum ConflictResolver\n{\n/* Apply the remote change */\nCR_APPLY_REMOTE = 1,\n\n/* Keep the local change */\nCR_KEEP_LOCAL,\n\n/* Apply the remote change; skip if it can not be applied */\nCR_APPLY_OR_SKIP,\n\n/* Apply the remote change; emit error if it can not be applied */\nCR_APPLY_OR_ERROR,\n\n/* Skip applying the change */\nCR_SKIP,\n\n/* Error out */\nCR_ERROR,\n} ConflictResolver;\n\n/* Min and max conflict resolver */\n#define CR_MIN CR_APPLY_REMOTE\n#define CR_MAX CR_ERROR\n same as previous comment. \n6) Except scansup.h inclusion, other inclusions added are not required\nin subscriptioncmds.c file.\n\n7)The inclusions \"access/heaptoast.h\",  \"access/table.h\",\n\"access/tableam.h\", \"catalog/dependency.h\",\n\"catalog/pg_subscription.h\",  \"catalog/pg_subscription_conflict.h\" and\n\"catalog/pg_inherits.h\" are not required in conflict.c file.\nRemoved. 
\n8) Can we change this to  use the new foreach_ptr implementations added:\n+       foreach(lc, stmtresolvers)\n+       {\n+               DefElem    *defel = (DefElem *) lfirst(lc);\n+               ConflictType type;\n+               char       *resolver;\n\nto use foreach_ptr like:\nforeach_ptr(DefElem, defel, stmtresolvers)\n{\n+               ConflictType type;\n+               char       *resolver;\n....\n}Changed accordingly.regards,Ajin CherianFujitsu Australia", "msg_date": "Fri, 20 Sep 2024 14:34:11 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Sep 20, 2024 at 8:40 AM Nisha Moond <[email protected]> wrote:\n>\n> On Wed, Sep 18, 2024 at 10:46 AM vignesh C <[email protected]> wrote:\n> >\n> > On Thu, 12 Sept 2024 at 14:03, Ajin Cherian <[email protected]> wrote:\n> > >\n> > > On Tue, Sep 3, 2024 at 7:42 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Fri, 30 Aug 2024 at 11:01, Nisha Moond <[email protected]> wrote:\n> > > >\n> > > > Here is the v11 patch-set. Changes are:\n> > >\n> > > 1) This command crashes:\n> > > ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR NULL;\n> > > #0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n> > > #1 0x000055c67270600a in ResetConflictResolver (subid=16404,\n> > > conflict_type=0x0) at conflict.c:744\n> > > #2 0x000055c67247e0c3 in AlterSubscription (pstate=0x55c6748ff9d0,\n> > > stmt=0x55c67497dfe0, isTopLevel=true) at subscriptioncmds.c:1664\n> > >\n> > > + | ALTER SUBSCRIPTION name RESET CONFLICT\n> > > RESOLVER FOR conflict_type\n> > > + {\n> > > + AlterSubscriptionStmt *n =\n> > > + makeNode(AlterSubscriptionStmt);\n> > > +\n> > > + n->kind =\n> > > ALTER_SUBSCRIPTION_RESET_CONFLICT_RESOLVER;\n> > > + n->subname = $3;\n> > > + n->conflict_type = $8;\n> > > + $$ = (Node *) n;\n> > > + }\n> > > + ;\n> > > +conflict_type:\n> > > + Sconst\n> > > { $$ = $1; }\n> > > + | NULL_P\n> > > { $$ = NULL; }\n> > > ;\n> > >\n> > > May be conflict_type should be changed to:\n> > > +conflict_type:\n> > > + Sconst\n> > > { $$ = $1; }\n> > > ;\n> > >\n> > >\n> > > Fixed.\n> > >\n> >\n> > Few comments:\n> > 1) This should be in (fout->remoteVersion >= 180000) check to support\n> > dumping backward compatible server objects, else dump with older\n> > version will fail:\n> > + /* Populate conflict type fields using the new query */\n> > + confQuery = createPQExpBuffer();\n> > + appendPQExpBuffer(confQuery,\n> > + \"SELECT confrtype,\n> > confrres FROM pg_catalog.pg_subscription_conflict \"\n> > + \"WHERE confsubid =\n> > %u;\", subinfo[i].dobj.catId.oid);\n> > + confRes = ExecuteSqlQuery(fout, confQuery->data,\n> > PGRES_TUPLES_OK);\n> > +\n> > + ntuples = PQntuples(confRes);\n> > + for (j = 0; j < ntuples; j++)\n> >\n> > 2) Can we check and throw an error before the warning is logged in\n> > this case as it seems strange to throw a warning first and then an\n> > error for the same track_commit_timestamp configuration:\n> > postgres=# create subscription sub1 connection ... 
publication pub1\n> > conflict resolver (insert_exists = 'last_update_wins');\n> > WARNING: conflict detection and resolution could be incomplete due to\n> > disabled track_commit_timestamp\n> > DETAIL: Conflicts update_origin_differs and delete_origin_differs\n> > cannot be detected, and the origin and commit timestamp for the local\n> > row will not be logged.\n> > ERROR: resolver last_update_wins requires \"track_commit_timestamp\" to\n> > be enabled\n> > HINT: Make sure the configuration parameter \"track_commit_timestamp\" is set.\n> >\n>\n> Thanks for the review.\n> Here is the v14 patch-set fixing review comments in [1] and [2].\n\nJust noticed that there are failures for the new\n034_conflict_resolver.pl test on CFbot. From the initial review it\nseems to be a test issue and not a bug.\nWe will fix these along with the next version of patch-sets.\n\nThanks,\nNisha\n\n\n", "msg_date": "Fri, 20 Sep 2024 11:16:03 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Sep 20, 2024 at 8:40 AM Nisha Moond <[email protected]> wrote:\n>\n> Thanks for the review.\n> Here is the v14 patch-set fixing review comments in [1] and [2].\n>\n\nThanks for the patches. I am reviewing patch001, it is WIP, but please\nfind initial set of comments:\n\n1)\nPlease see these 2 errors:\n\npostgres=# create subscription sub2 connection '....' publication pub1\nCONFLICT RESOLVER(insert_exists = 'error') WITH (two_phase=true,\nstreaming=ON, streaming=OFF);\nERROR: conflicting or redundant options\nLINE 1: ...ists='error') WITH (two_phase=true, streaming=ON, streaming=...\n\n ^\npostgres=# create subscription sub2 connection '....' publication pub1\nCONFLICT RESOLVER(insert_exists = 'error', insert_exists = 'error')\nWITH (two_phase=true);\nERROR: duplicate conflict type \"insert_exists\" found\n\nWhen we give duplicate options in 'WITH', we get an error as\n'conflicting or redundant options' with 'position' pointed out, while\nin case of CONFLICT RESOLVER, it is different. Can we review to see if\nwe can have similar error in CONFLICT RESOLVER as that of WITH?\nPerhaps we need to call 'errorConflictingDefElem' from resolver flow.\n\n2)\n+static void\n+parse_subscription_conflict_resolvers(List *stmtresolvers,\n+ ConflictTypeResolver *resolvers)\n+{\n+ ListCell *lc;\n+ List *SeenTypes = NIL;\n+\n+\n\nRemove redundant blank line\n\n3)\nparse_subscription_conflict_resolvers():\n\n+ if (stmtresolvers)\n+ conf_detection_check_prerequisites();\n+\n+}\n\nRemove redundant blank line\n\n4)\nparse_subscription_conflict_resolvers():\n+ resolver = defGetString(defel);\n+ type = validate_conflict_type_and_resolver(defel->defname,\n+ defGetString(defel));\n\nShall we use 'resolver' as arg to validate function instead of doing\ndefGetStringagain?\n\n5)\nparse_subscription_conflict_resolvers():\n\n+ /* Update the corresponding resolver for the given conflict type. */\n+ resolvers[type].resolver = downcase_truncate_identifier(resolver,\nstrlen(resolver), false);\n\nShouldn't we do this before validate_conflict_type_and_resolver()\nitself like we do it in GetAndValidateSubsConflictResolverList()? And\ndo we need downcase_truncate_identifier on defel->defname as well\nbefore we do validate_conflict_type_and_resolver()?\n\n6)\nGetAndValidateSubsConflictResolverList() and\nparse_subscription_conflict_resolvers() are similar but yet have so\nmany differences which I pointed out above. 
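(For instance, both commands accept exactly the same clause shape --
illustrative statements only, the names here are made up:

CREATE SUBSCRIPTION sub1 CONNECTION '...' PUBLICATION pub1
    CONFLICT RESOLVER (insert_exists = 'error');
ALTER SUBSCRIPTION sub1 CONFLICT RESOLVER (insert_exists = 'error');

so the conflict type/resolver validation done in the two code paths is
essentially the same.)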
Not a good idea to\nmaintain 2 such functions. We should have a common parsing function\nfor both Create and Alter Sub. Can you please review the possibility\nof that?\n\n~~\n\nconflict.c:\n\n7)\n+\n+\n+/*\n+ * Set default values for CONFLICT RESOLVERS for each conflict type\n+ */\n+void\n+SetDefaultResolvers(ConflictTypeResolver * conflictResolvers)\n\nRemove redundant blank line\n\n8)\n * Set default values for CONFLICT RESOLVERS for each conflict type\n\nIs it better to change to: Set default resolver for each conflict type\n\n9)\n validate_conflict_type_and_resolver(): Since it is called from other\nfile as well, shall we rename to ValidateConflictTypeAndResolver()\n\n10)\n+ return type;\n+\n+}\nRemove redundant blank line after 'return'\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 26 Sep 2024 14:57:05 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Thu, Sep 26, 2024 at 2:57 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Sep 20, 2024 at 8:40 AM Nisha Moond <[email protected]> wrote:\n> >\n> > Thanks for the review.\n> > Here is the v14 patch-set fixing review comments in [1] and [2].\n> >\n>\n> Thanks for the patches. I am reviewing patch001, it is WIP, but please\n> find initial set of comments:\n>\n\nPlease find next set of comments on patch001:\n\n\n11)\nconflict.c\n#include \"access/tableam.h\" (existing)\n#include \"replication/logicalproto.h\" (added by patch002)\n\nAbove 2 are not needed. The code compiles without these. I think the\nfirst one has become redundant due to inclusion of other header files\nwhich indirectly include this.\n\n\n12)\ncreate_subscription.sgml:\n+ apply_remote (enum)\n+ This resolver applies the remote change. It can be used for\ninsert_exists, update_exists, update_origin_differs and\ndelete_origin_differs. It is the default resolver for insert_exists\nand update_exists.\n\nWrong info, it is default for update_origin_differs and delete_origin_differs\n\n\n13)\nalter_subscription.sgml:\nSynopsis:\n+ ALTER SUBSCRIPTION name RESET CONFLICT RESOLVER FOR (conflict_type)\n\nwe don't support parenthesis in the syntax. So please correct the doc.\npostgres=# ALTER SUBSCRIPTION sub1 RESET CONFLICT RESOLVER FOR\n('insert_exists');\nERROR: syntax error at or near \"(\"\n\n\n14)\nalter_subscription.sgml:\n+ CONFLICT RESOLVER ( conflict_type [= conflict_resolver] [, ... ] )\n+ This clause alters either the default conflict resolvers or those\nset by CREATE SUBSCRIPTION. Refer to section CONFLICT RESOLVERS for\nthe details on supported conflict_types and conflict_resolvers.\n\n+ conflict_type\n+ The conflict type being reset to its default resolver setting. For\ndetails on conflict types and their default resolvers, refer to\nsection CONFLICT RESOLVERS\n\na) These details seem problematic. Shouldn't we have RESET as heading\nsimilar to SKIP and then try explaining both ALL and conflict_type\nunder that. 
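(For example, a single RESET entry could show both forms together --
illustrative commands following this patch's grammar, subscription name
made up:

ALTER SUBSCRIPTION sub1 RESET CONFLICT RESOLVER ALL;
ALTER SUBSCRIPTION sub1 RESET CONFLICT RESOLVER FOR 'insert_exists';
)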
Above seems we are trying to explain conflict_type of\n'CONFLICT RESOLVER ( conflict_type [= conflict_resolver]' subcommand\nwhile\ngiving details of RESET subcommand.\n\nb) OTOH, 'CONFLICT RESOLVER ( conflict_type [= conflict_resolver]'\nshould have its own explanation of conflict_type and conflict_resolver\nparameters.\n\n\n15)\nlogical-replication.sgml:\n\nExisting:\n+ Additional logging is triggered in various conflict scenarios, each\nidentified as a conflict type, and the conflict statistics are\ncollected (displayed in the pg_stat_subscription_stats view). Users\nhave the option to configure a conflict_resolver for each\nconflict_type when creating a subscription. For more information on\nthe supported conflict_types detected and conflict_resolvers, refer to\nsection CONFLICT RESOLVERS.\n\nSuggestion:\nAdditional logging is triggered for various conflict scenarios, each\ncategorized by a specific conflict type, with conflict statistics\nbeing gathered and displayed in the pg_stat_subscription_stats view.\nUsers can configure a conflict_resolver for each conflict_type when\ncreating a subscription.\nFor more details on the supported conflict types and corresponding\nconflict resolvers, refer to the section on <CONFLICT RESOLVERS>.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 27 Sep 2024 10:44:37 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "Here are some review comments for v14-0001.\n\nThis is a WIP, but here are my comments for all the SGML parts.\n\n(There will be some overlap here with comments already posted by Shveta)\n\n======\n1. file modes after applying the patch\n\n mode change 100644 => 100755 doc/src/sgml/ref/alter_subscription.sgml\n mode change 100644 => 100755 doc/src/sgml/ref/create_subscription.sgml\n\nWhat's going on here? Why are those SGMLs changed to executable?\n\n======\nCommit message\n\n2.\nnit - a missing period in the first sentence\nnit - typo /reseting/resetting/\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n3.\n- <title>Conflicts</title>\n+ <title>Conflicts and conflict resolution</title>\n\nnit - change the capitalisation to \"and Conflict Resolution\" to match\nother titles.\n\n~~~\n\n4.\n+ Additional logging is triggered in various conflict scenarios,\neach identified as a\n+ conflict type, and the conflict statistics are collected (displayed in the\n+ <link linkend=\"monitoring-pg-stat-subscription-stats\"><structname>pg_stat_subscription_stats</structname></link>\nview).\n+ Users have the option to configure a\n<literal>conflict_resolver</literal> for each\n+ <literal>conflict_type</literal> when creating a subscription.\n+ For more information on the supported\n<literal>conflict_types</literal> detected and\n+ <literal>conflict_resolvers</literal>, refer to section\n+ <link linkend=\"sql-createsubscription-params-with-conflict-resolver\"><literal>CONFLICT\nRESOLVERS</literal></link>.\n+\n\nnit - \"Additional logging is triggered\" sounds strange. I reworded\nthis in the nits attachment. Please see if you approve.\nnit - The \"conflict_type\" and \"conflict_resolver\" you are referring to\nhere are syntax elements of the CREATE SUBSCRIPTION, so here I think\nthey should just be called (without the underscores) \"conflict type\"\nand \"conflict resolver\".\nnit - IMO this would be better split into multiple paragraphs.\nnit - There is no such section called \"CONFLICT RESOLVERS\". 
I reworded\nthis link text.\n\n======\ndoc/src/sgml/monitoring.sgml\n\n5.\nThe changes here all render with the link including the type \"(enum)\"\ndisplayed, which I thought it unnecessary/strange.\n\nFor example:\nSee insert_exists (enum) for details about this conflict.\n\nIIUC there is no problem here, but maybe the other end of the link\nneeded to define xreflabels. I have made the necessary modifications\nin the create_subscription.sgml.\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n6.\n+ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable>\nCONFLICT RESOLVER ( <replaceable\nclass=\"parameter\">conflict_type</replaceable> [= <replaceable\nclass=\"parameter\">conflict_resolver</replaceable>] [, ...] )\n\nThis syntax seems wrong to me.\n\nCurrently, it says:\nALTER SUBSCRIPTION name CONFLICT RESOLVER ( conflict_type [=\nconflict_resolver] [, ...] )\n\nBut, shouldn't that say:\nALTER SUBSCRIPTION name CONFLICT RESOLVER ( conflict_type =\nconflict_resolver [, ...] )\n\n~~~\n7.\n+ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable>\nRESET CONFLICT RESOLVER FOR (<replaceable\nclass=\"parameter\">conflict_type</replaceable>)\n\nI can see that this matches the implementation, but I was wondering\nwhy don't you permit resetting multiple conflict_types at the same\ntime. e.g. what if I want to reset some but not ALL?\n\n~~~\n\nnit - there are some minor whitespace indent problems in the SGML\n\n~~~\n\n8.\n+ <varlistentry id=\"sql-altersubscription-params-conflict-resolver\">\n+ <term><literal>CONFLICT RESOLVER ( <replaceable\nclass=\"parameter\">conflict_type</replaceable> [= <replaceable\nclass=\"parameter\">conflict_resolver</replaceable>] [, ... ]\n)</literal></term>\n+ <listitem>\n+ <para>\n+ This clause alters either the default conflict resolvers or\nthose set by <xref linkend=\"sql-createsubscription\"/>.\n+ Refer to section <link\nlinkend=\"sql-createsubscription-params-with-conflict-resolver\"><literal>CONFLICT\nRESOLVERS</literal></link>\n+ for the details on supported <literal>conflict_types</literal>\nand <literal>conflict_resolvers</literal>.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n+ <varlistentry id=\"sql-altersubscription-params-conflict-type\">\n+ <term><replaceable class=\"parameter\">conflict_type</replaceable></term>\n+ <listitem>\n+ <para>\n+ The conflict type being reset to its default resolver setting.\n+ For details on conflict types and their default resolvers, refer\nto section <link\nlinkend=\"sql-createsubscription-params-with-conflict-resolver\"><literal>CONFLICT\nRESOLVERS</literal></link>\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ </variablelist>\n\nThis section seems problematic:\ne.g the syntax seems wrong same as before.\n\n~\nThere are other nits.\n(I've given a rough fix in the nits attachment. Please see it and make\nit better).\n\nnit - why do you care if it is \"either the default conflict resolvers\nor those set...\". Why not just say \"current resolver\"\nnit - it does not mention 'conflict_resolver' type in the normal way\nnit - there is no actual section called \"CONFLICT RESOLVERS\"\nnit - the part that says \"The conflict type being reset to its default\nresolver setting.\" is bogus for this form of the ALTER statement.\n\n~~~\n\n9.\nThere is no description for the \"RESET CONFLICT RESOLVER ALL\"\n\n~~~\n\n10.\nThere is no description for the \"RESET CONFLICT RESOLVER FOR (conflict_type)\"\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n11. 
General - Order\n\n+ <varlistentry id=\"sql-createsubscription-params-with-conflict-resolver\">\n+ <term><literal>CONFLICT RESOLVER ( <replaceable\nclass=\"parameter\">conflict_type</replaceable> = <replaceable\n\nnit - IMO this entire new entry about \"CONFLICT RESOLVER\" should\nappear on the page *above* the \"WITH\" section, because that is the\norder that it is defined in the CREATE SUBSCRIPTION syntax.\n\n~~~\n\n12. General - whitespace\n\nnit - Much of this new section seems to have a slightly wrong\nindentation in the SGML. Mostly it is out by 1 or 2 spaces.\n\n~~~\n\n13. General - ordering of conflict_type.\n\nnit - Instead of just some apparent random order, let's put each\ninsert/update/delete conflict type in alphabetical order, so at least\nusers can find them where they would expect to find them.\n\n~~~\n\n14.\n99. General - ordering of conflict_resolver\n\nnit - ditto. Let's name these in alphabetical order. IMO it makes more\nsense than the current random ordering.\n\n~~~\n\n15.\n+ <para>\n+ This optional clause specifies options for conflict resolvers\nfor different conflict_types.\n+ </para>\n\nnit - IMO we don't need the words \"options for\" here.\n\n~~~\n\n16.\n+ <para>\n+ The <replaceable class=\"parameter\">conflict_type</replaceable>\nand their default behaviour are listed below.\n\nnit - sounded strange to me. reworded it slightly.\n\n~~~\n\n17.\n+ <varlistentry\nid=\"sql-createsubscription-params-with-conflict_type-insert-exists\">\n\nnit - Here, and for all other conflict types, add \"xreflabel\". See my\nreview comment #5 for the reason why.\n\n~~~\n\n18.\n+ <para>\n+ The <replaceable\nclass=\"parameter\">conflict_resolver</replaceable> and their behaviour\n+ are listed below. Users can use any of the following resolvers\nfor automatic conflict\n+ resolution.\n+ <variablelist>\n\nnit - reworded this too, to be like the previous review comment.\n\n~~~\n\n19. General - readability.\n\n19a.\nIMO the information about what are the default resolvers for each\nconflict type, and what resolvers are allowed for each conflict type\nshould ideally be documented in a tabular form.\n\nMaybe all information is already present in the current document, but\nit is certainly hard to easily see it.\n\nAs an example, I have added a table in this section. Maybe it is the\nbest placement for this table, but I gave it mostly how you can\npresent the same information so it is easier to read.\n\n~\n19b.\nBug. In doing this exercise I discovered there are 2 resolvers\n(\"error\" and \"apply_remote\") that both claim to be defaults for the\nsame conflict types.\n\nThey both say:\n\n+ It is the default resolver for <literal>insert_exists</literal> and\n+ <literal>update_exists</literal>.\n\nAnyway, this demonstrates that the current information was hard to read.\n\nI can tell from the code implementation what the document was supposed\nto say, but I will leave it to the patch authors to fix this one.\n(e.g. \"apply_remote\" says the wrong defaults)\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 27 Sep 2024 17:30:03 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Sep 27, 2024 at 10:44 AM shveta malik <[email protected]> wrote:\n>\n> > >\n> > > Thanks for the review.\n> > > Here is the v14 patch-set fixing review comments in [1] and [2].\n> > >\n> >\n> > Thanks for the patches. 
I am reviewing patch001, it is WIP, but please\n> > find initial set of comments:\n> >\n>\n\nPlease find the next set of comments.\n\n16)\nIn pg_dump.h, there is a lot of duplication of structures from\nconflict.h, we can avoid that by making below changes:\n--In SubscriptionInfo(), we can have a list of ConflictTypeResolver\nstructure and fill the elements of the list in getSubscriptions()\nsimply by output of pg_subscription_conflict.\n--Then in dumpSubscription() we can traverse the list to verify if the\nresolver is the default one, if so, skip the dump. We can create a new\nfunction to return whether the resolver is default or not.\n--We can get rid of enum ConflictType, enum ConflictResolver,\nConflictResolverNames, ConflictTypeDefaultResolvers from pg_dump.h\n\n17)\nIn describe.c, we can have an 'order by' in the query so that order is\nnot changed everytime we update a resolver. Please see this:\n\nFor sub1, \\dRs was showing below as output for Conflict Resolvers:\ninsert_exists = error, update_origin_differs = apply_remote,\nupdate_exists = error, update_missing = skip, delete_origin_differs =\napply_remote, delete_missing = skip\n\nOnce I update resolver, the order gets changed:\npostgres=# ALTER SUBSCRIPTION sub1 CONFLICT RESOLVER\n(insert_exists='apply_remote');\nALTER SUBSCRIPTION\n\n\\dRs:\nupdate_origin_differs = apply_remote, update_exists = error,\nupdate_missing = skip, delete_origin_differs = apply_remote,\ndelete_missing = skip, insert_exists = apply_remote\n\n18)\nSimilarly after making change 16, for pg_dump too, it will be good if\nwe maintain the order and thus can have order-by in pg_dump's query as\nwell.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 27 Sep 2024 14:33:44 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Sep 27, 2024 at 1:00 PM Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for v14-0001.\n>\n> This is a WIP, but here are my comments for all the SGML parts.\n>\n> (There will be some overlap here with comments already posted by Shveta)\n>\n> ======\n> 1. file modes after applying the patch\n>\n> mode change 100644 => 100755 doc/src/sgml/ref/alter_subscription.sgml\n> mode change 100644 => 100755 doc/src/sgml/ref/create_subscription.sgml\n>\n> What's going on here? 
Why are those SGMLs changed to executable?\n>\n> ======\n> Commit message\n>\n> 2.\n> nit - a missing period in the first sentence\n> nit - typo /reseting/resetting/\n>\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> 3.\n> - <title>Conflicts</title>\n> + <title>Conflicts and conflict resolution</title>\n>\n> nit - change the capitalisation to \"and Conflict Resolution\" to match\n> other titles.\n>\n> ~~~\n>\n> 4.\n> + Additional logging is triggered in various conflict scenarios,\n> each identified as a\n> + conflict type, and the conflict statistics are collected (displayed in the\n> + <link linkend=\"monitoring-pg-stat-subscription-stats\"><structname>pg_stat_subscription_stats</structname></link>\n> view).\n> + Users have the option to configure a\n> <literal>conflict_resolver</literal> for each\n> + <literal>conflict_type</literal> when creating a subscription.\n> + For more information on the supported\n> <literal>conflict_types</literal> detected and\n> + <literal>conflict_resolvers</literal>, refer to section\n> + <link linkend=\"sql-createsubscription-params-with-conflict-resolver\"><literal>CONFLICT\n> RESOLVERS</literal></link>.\n> +\n>\n> nit - \"Additional logging is triggered\" sounds strange. I reworded\n> this in the nits attachment. Please see if you approve.\n> nit - The \"conflict_type\" and \"conflict_resolver\" you are referring to\n> here are syntax elements of the CREATE SUBSCRIPTION, so here I think\n> they should just be called (without the underscores) \"conflict type\"\n> and \"conflict resolver\".\n> nit - IMO this would be better split into multiple paragraphs.\n> nit - There is no such section called \"CONFLICT RESOLVERS\". I reworded\n> this link text.\n>\n> ======\n> doc/src/sgml/monitoring.sgml\n>\n> 5.\n> The changes here all render with the link including the type \"(enum)\"\n> displayed, which I thought it unnecessary/strange.\n>\n> For example:\n> See insert_exists (enum) for details about this conflict.\n>\n> IIUC there is no problem here, but maybe the other end of the link\n> needed to define xreflabels. I have made the necessary modifications\n> in the create_subscription.sgml.\n>\n> ======\n> doc/src/sgml/ref/alter_subscription.sgml\n>\n> 6.\n> +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable>\n> CONFLICT RESOLVER ( <replaceable\n> class=\"parameter\">conflict_type</replaceable> [= <replaceable\n> class=\"parameter\">conflict_resolver</replaceable>] [, ...] )\n>\n> This syntax seems wrong to me.\n>\n> Currently, it says:\n> ALTER SUBSCRIPTION name CONFLICT RESOLVER ( conflict_type [=\n> conflict_resolver] [, ...] )\n>\n> But, shouldn't that say:\n> ALTER SUBSCRIPTION name CONFLICT RESOLVER ( conflict_type =\n> conflict_resolver [, ...] )\n>\n> ~~~\n> 7.\n> +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable>\n> RESET CONFLICT RESOLVER FOR (<replaceable\n> class=\"parameter\">conflict_type</replaceable>)\n>\n> I can see that this matches the implementation, but I was wondering\n> why don't you permit resetting multiple conflict_types at the same\n> time. e.g. what if I want to reset some but not ALL?\n>\n> ~~~\n>\n> nit - there are some minor whitespace indent problems in the SGML\n>\n> ~~~\n>\n> 8.\n> + <varlistentry id=\"sql-altersubscription-params-conflict-resolver\">\n> + <term><literal>CONFLICT RESOLVER ( <replaceable\n> class=\"parameter\">conflict_type</replaceable> [= <replaceable\n> class=\"parameter\">conflict_resolver</replaceable>] [, ... 
]\n> )</literal></term>\n> + <listitem>\n> + <para>\n> + This clause alters either the default conflict resolvers or\n> those set by <xref linkend=\"sql-createsubscription\"/>.\n> + Refer to section <link\n> linkend=\"sql-createsubscription-params-with-conflict-resolver\"><literal>CONFLICT\n> RESOLVERS</literal></link>\n> + for the details on supported <literal>conflict_types</literal>\n> and <literal>conflict_resolvers</literal>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry id=\"sql-altersubscription-params-conflict-type\">\n> + <term><replaceable class=\"parameter\">conflict_type</replaceable></term>\n> + <listitem>\n> + <para>\n> + The conflict type being reset to its default resolver setting.\n> + For details on conflict types and their default resolvers, refer\n> to section <link\n> linkend=\"sql-createsubscription-params-with-conflict-resolver\"><literal>CONFLICT\n> RESOLVERS</literal></link>\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + </variablelist>\n>\n> This section seems problematic:\n> e.g the syntax seems wrong same as before.\n>\n> ~\n> There are other nits.\n> (I've given a rough fix in the nits attachment. Please see it and make\n> it better).\n>\n> nit - why do you care if it is \"either the default conflict resolvers\n> or those set...\". Why not just say \"current resolver\"\n> nit - it does not mention 'conflict_resolver' type in the normal way\n> nit - there is no actual section called \"CONFLICT RESOLVERS\"\n> nit - the part that says \"The conflict type being reset to its default\n> resolver setting.\" is bogus for this form of the ALTER statement.\n>\n> ~~~\n>\n> 9.\n> There is no description for the \"RESET CONFLICT RESOLVER ALL\"\n>\n> ~~~\n>\n> 10.\n> There is no description for the \"RESET CONFLICT RESOLVER FOR (conflict_type)\"\n>\n> ======\n> doc/src/sgml/ref/create_subscription.sgml\n>\n> 11. General - Order\n>\n> + <varlistentry id=\"sql-createsubscription-params-with-conflict-resolver\">\n> + <term><literal>CONFLICT RESOLVER ( <replaceable\n> class=\"parameter\">conflict_type</replaceable> = <replaceable\n>\n> nit - IMO this entire new entry about \"CONFLICT RESOLVER\" should\n> appear on the page *above* the \"WITH\" section, because that is the\n> order that it is defined in the CREATE SUBSCRIPTION syntax.\n>\n> ~~~\n>\n> 12. General - whitespace\n>\n> nit - Much of this new section seems to have a slightly wrong\n> indentation in the SGML. Mostly it is out by 1 or 2 spaces.\n>\n> ~~~\n>\n> 13. General - ordering of conflict_type.\n>\n> nit - Instead of just some apparent random order, let's put each\n> insert/update/delete conflict type in alphabetical order, so at least\n> users can find them where they would expect to find them.\n\nThis ordering was decided while implementing the 'conflict-detection\nand logging' patch and thus perhaps should be maintained as same. The\nordering is insert, update and delete (different variants of these).\nPlease see a comment on it in [1] (comment #2).\n\n[1]:https://www.postgresql.org/message-id/TYAPR01MB569224262F44875973FAF344F5B22%40TYAPR01MB5692.jpnprd01.prod.outlook.com\n\n\n> ~~~\n>\n> 14.\n> 99. General - ordering of conflict_resolver\n>\n> nit - ditto. Let's name these in alphabetical order. IMO it makes more\n> sense than the current random ordering.\n>\n\n I feel ordering of resolvers should be same as that of conflict\ntypes, i.e. resolvers of insert variants first, then update variants,\nthen delete variants. 
But would like to know what others think on\nthis.\n\n> ~~~\n>\n> 15.\n> + <para>\n> + This optional clause specifies options for conflict resolvers\n> for different conflict_types.\n> + </para>\n>\n> nit - IMO we don't need the words \"options for\" here.\n>\n> ~~~\n>\n> 16.\n> + <para>\n> + The <replaceable class=\"parameter\">conflict_type</replaceable>\n> and their default behaviour are listed below.\n>\n> nit - sounded strange to me. reworded it slightly.\n>\n> ~~~\n>\n> 17.\n> + <varlistentry\n> id=\"sql-createsubscription-params-with-conflict_type-insert-exists\">\n>\n> nit - Here, and for all other conflict types, add \"xreflabel\". See my\n> review comment #5 for the reason why.\n>\n> ~~~\n>\n> 18.\n> + <para>\n> + The <replaceable\n> class=\"parameter\">conflict_resolver</replaceable> and their behaviour\n> + are listed below. Users can use any of the following resolvers\n> for automatic conflict\n> + resolution.\n> + <variablelist>\n>\n> nit - reworded this too, to be like the previous review comment.\n>\n> ~~~\n>\n> 19. General - readability.\n>\n> 19a.\n> IMO the information about what are the default resolvers for each\n> conflict type, and what resolvers are allowed for each conflict type\n> should ideally be documented in a tabular form.\n>\n> Maybe all information is already present in the current document, but\n> it is certainly hard to easily see it.\n>\n> As an example, I have added a table in this section. Maybe it is the\n> best placement for this table, but I gave it mostly how you can\n> present the same information so it is easier to read.\n>\n> ~\n> 19b.\n> Bug. In doing this exercise I discovered there are 2 resolvers\n> (\"error\" and \"apply_remote\") that both claim to be defaults for the\n> same conflict types.\n>\n> They both say:\n>\n> + It is the default resolver for <literal>insert_exists</literal> and\n> + <literal>update_exists</literal>.\n>\n> Anyway, this demonstrates that the current information was hard to read.\n>\n> I can tell from the code implementation what the document was supposed\n> to say, but I will leave it to the patch authors to fix this one.\n> (e.g. \"apply_remote\" says the wrong defaults)\n>\n> ======\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n\n\n", "msg_date": "Mon, 30 Sep 2024 09:56:58 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Sep 30, 2024 at 2:27 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Sep 27, 2024 at 1:00 PM Peter Smith <[email protected]> wrote:\n...\n> >\n> > 13. General - ordering of conflict_type.\n> >\n> > nit - Instead of just some apparent random order, let's put each\n> > insert/update/delete conflict type in alphabetical order, so at least\n> > users can find them where they would expect to find them.\n>\n> This ordering was decided while implementing the 'conflict-detection\n> and logging' patch and thus perhaps should be maintained as same. The\n> ordering is insert, update and delete (different variants of these).\n> Please see a comment on it in [1] (comment #2).\n>\n> [1]:https://www.postgresql.org/message-id/TYAPR01MB569224262F44875973FAF344F5B22%40TYAPR01MB5692.jpnprd01.prod.outlook.com\n>\n\n+1 for order insert/update/delete.\n\nMy issue was only about the order *within* each of those variants.\ne.g. 
I think it should be alphabetical:\n\nCURRENT\ninsert_exists\nupdate_origin_differs\nupdate_exists\nupdate_missing\ndelete_origin_differs\ndelete_missing\n\nSUGGESTED\ninsert_exists\nupdate_exists\nupdate_missing\nupdate_origin_differs\ndelete_missing\ndelete_origin_differs\n\n>\n> > ~~~\n> >\n> > 14.\n> > 99. General - ordering of conflict_resolver\n> >\n> > nit - ditto. Let's name these in alphabetical order. IMO it makes more\n> > sense than the current random ordering.\n> >\n>\n> I feel ordering of resolvers should be same as that of conflict\n> types, i.e. resolvers of insert variants first, then update variants,\n> then delete variants. But would like to know what others think on\n> this.\n>\n\nResolvers in v14 were documented in this random order:\nerror\nskip\napply_remote\nkeep_local\napply_or_skip\napply_or_error\n\nSome of these are resolvers for different conflicts. How can you order\nthese as \"resolvers for insert\" followed by \"resolvers for update\"\nfollowed by \"resolvers for delete\" without it all still appearing in\nrandom order?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 30 Sep 2024 15:34:16 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Sep 30, 2024 at 11:04 AM Peter Smith <[email protected]> wrote:\n>\n> On Mon, Sep 30, 2024 at 2:27 PM shveta malik <[email protected]> wrote:\n> >\n> > On Fri, Sep 27, 2024 at 1:00 PM Peter Smith <[email protected]> wrote:\n> ...\n> > >\n> > > 13. General - ordering of conflict_type.\n> > >\n> > > nit - Instead of just some apparent random order, let's put each\n> > > insert/update/delete conflict type in alphabetical order, so at least\n> > > users can find them where they would expect to find them.\n> >\n> > This ordering was decided while implementing the 'conflict-detection\n> > and logging' patch and thus perhaps should be maintained as same. The\n> > ordering is insert, update and delete (different variants of these).\n> > Please see a comment on it in [1] (comment #2).\n> >\n> > [1]:https://www.postgresql.org/message-id/TYAPR01MB569224262F44875973FAF344F5B22%40TYAPR01MB5692.jpnprd01.prod.outlook.com\n> >\n>\n> +1 for order insert/update/delete.\n>\n> My issue was only about the order *within* each of those variants.\n> e.g. I think it should be alphabetical:\n>\n> CURRENT\n> insert_exists\n> update_origin_differs\n> update_exists\n> update_missing\n> delete_origin_differs\n> delete_missing\n>\n> SUGGESTED\n> insert_exists\n> update_exists\n> update_missing\n> update_origin_differs\n> delete_missing\n> delete_origin_differs\n>\n\nOkay, got it now. I have no strong opinion here. I am okay with both.\nBut since it was originally added by other thread, so it will be good\nto know the respective author's opinion as well.\n\n> >\n> > > ~~~\n> > >\n> > > 14.\n> > > 99. General - ordering of conflict_resolver\n> > >\n> > > nit - ditto. Let's name these in alphabetical order. IMO it makes more\n> > > sense than the current random ordering.\n> > >\n> >\n> > I feel ordering of resolvers should be same as that of conflict\n> > types, i.e. resolvers of insert variants first, then update variants,\n> > then delete variants. 
But would like to know what others think on\n> > this.\n> >\n>\n> Resolvers in v14 were documented in this random order:\n> error\n> skip\n> apply_remote\n> keep_local\n> apply_or_skip\n> apply_or_error\n>\n\nYes, these should be changed.\n\n> Some of these are resolvers for different conflicts. How can you order\n> these as \"resolvers for insert\" followed by \"resolvers for update\"\n> followed by \"resolvers for delete\" without it all still appearing in\n> random order?\n\nI was thinking of ordering them like this:\n\napply_remote: applicable to insert_exists, update_exists,\nupdate_origin_differ, delete_origin_differ\nkeep_local: applicable to insert_exists,\nupdate_exists, update_origin_differ, delete_origin_differ\napply_or_skip: applicable to update_missing\napply_or_error : applicable to update_missing\nskip: applicable to update_missing and\ndelete_missing\nerror: applicable to all.\n\ni.e. in order of how they are applicable to conflict_types starting\nfrom insert_exists till delete_origin_differ (i.e. reading\nConflictTypeResolverMap, from left to right and then top to bottom).\nExcept I have kept 'error' at the end instead of keeping it after\n'keep_local' as the former makes more sense there.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 30 Sep 2024 11:59:04 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Fri, Sep 27, 2024 at 2:33 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Sep 27, 2024 at 10:44 AM shveta malik <[email protected]> wrote:\n> >\n> > > >\n> > > > Thanks for the review.\n> > > > Here is the v14 patch-set fixing review comments in [1] and [2].\n> > > >\n> > >\n> > > Thanks for the patches. I am reviewing patch001, it is WIP, but please\n> > > find initial set of comments:\n> > >\n> >\n\nPlease find next set of comments:\n\n1)\nparse_subscription_conflict_resolvers()\n\nShall we free 'SeenTypes' list at the end?\n\n2)\nGeneral logic comment:\nI think SetSubConflictResolver should also accept a list similar to\nUpdateSubConflictResolvers() instead of array. Then we can even try\nmerging these 2 functions later (once we do this change, it will be\nmore clear). For SetSubConflictResolver to accept a list,\nSetDefaultResolvers should give a list as output instead of an array\ncurrently.\n\n3)\nExisting logic:\ncase ALTER_SUBSCRIPTION_RESET_ALL_CONFLICT_RESOLVERS:\n{\nConflictTypeResolver conflictResolvers[CONFLICT_NUM_TYPES];\n\n/* Remove the existing conflict resolvers. */\nRemoveSubscriptionConflictResolvers(subid);\n\n/*\n* Create list of conflict resolvers and set them in the\n* catalog.\n*/\nSetDefaultResolvers(conflictResolvers);\nSetSubConflictResolver(subid, conflictResolvers, CONFLICT_NUM_TYPES);\n}\n\nSuggestion:\nIf we fix comment #2 and make SetSubConflictResolver and\nSetDefaultResolvers to deal with list, then here we can get rid of\nRemoveSubscriptionConflictResolvers(), we can simply make a default\nlist using SetDefaultResolvers and call UpdateSubConflictResolvers().\nNo need for 2 separate calls for delete and insert/set.\n\n4)\nShall ResetConflictResolver() function also call\nUpdateSubConflictResolvers internally? It will get rid of a lot code\nduplication.\n\nResetConflictResolver()'s new approach could be:\na) validate conflict type and get enum value. 
To do this job, make a\nsub-function validate_conflict_type() which will be called both from\nhere and from validate_conflict_type_and_resolver().\nb) get default resolver for given conflict-type enum and then get\nresolver string for that to help step c.\nc) create a list of single element of ConflictTypeResolver and call\nUpdateSubConflictResolvers.\n\n5)\ntypedefs.list\nConflictResolver is missed?\n\n6)\nsubscriptioncmds.c\n\n/* Get the list of conflict types and resolvers and validate them. */\nconflict_resolvers = GetAndValidateSubsConflictResolverList(stmt->resolvers);\n\nNo full stop needed in one line comment. But since it is >80 chars,\nit is good to split it to multiple lines and then full stop can be\nretained.\n\n\n7)\nShall we move the call to conf_detection_check_prerequisites() to\nGetAndValidateSubsConflictResolverList() similar to how we do it for\nparse_subscription_conflict_resolvers()? (I still prefer that\nGetAndValidateSubsConflictResolverList and\nparse_subscription_conflict_resolvers should be merged in the first\nplace. Array to list conversion as suggested in comment #2 will make\nthese two functions more similar, and then we can review to merge\nthem.)\n\n\n8)\nShall parse_subscription_conflict_resolvers() be moved to conflict.c\nas well? Or since it is subscription options' parsing, is it more\nsuited in the current file? Thoughts?\n\n9)\nExisting:\n/*\n * Parsing function for conflict resolvers in CREATE SUBSCRIPTION command.\n * This function will report an error if mutually exclusive or duplicate\n * options are specified.\n */\n\nSuggestion:\n/*\n * Parsing function for conflict resolvers in CREATE SUBSCRIPTION command.\n *\n * In addition to parsing and validating the resolvers' configuration,\nthis function\n * also reports an error if mutually exclusive options are specified.\n */\n\n\n10) Test comments (subscription.sql):\n\n------\na)\n-- fail - invalid conflict resolvers\nCREATE SUBSCRIPTION regress_testsub CONNECTION\n'dbname=regress_doesnotexist' PUBLICATION testpub CONFLICT RESOLVER\n(insert_exists = foo) WITH (connect = false);\n\n-- fail - invalid conflict types\nCREATE SUBSCRIPTION regress_testsub CONNECTION\n'dbname=regress_doesnotexist' PUBLICATION testpub CONFLICT RESOLVER\n(foo = 'keep_local') WITH (connect = false);\n\nWe should swap the order of these 2 tests. Make it similar to ALTER tests.\n\nb)\n-- fail - invalid conflict resolvers\nresolvers-->resolver\n\n-- fail - invalid conflict types\ntypes-->type\n\n-- fail - duplicate conflict types\ntypes->type\n\nc)\n-- creating subscription should create default conflict resolvers\n\nSuggestion:\n-- creating subscription with no explicit conflict resolvers should\nconfigure default conflict resolvers\n\nd)\n-- ok - valid conflict type and resolvers\ntype-->types\n\ne)\n-- fail - altering with duplicate conflict types\ntypes --> type\n------\n\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 30 Sep 2024 12:35:49 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Conflict Detection and Resolution" }, { "msg_contents": "On Mon, Sep 30, 2024 at 4:29 PM shveta malik <[email protected]> wrote:\n>\n> On Mon, Sep 30, 2024 at 11:04 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Mon, Sep 30, 2024 at 2:27 PM shveta malik <[email protected]> wrote:\n> > >\n> > > On Fri, Sep 27, 2024 at 1:00 PM Peter Smith <[email protected]> wrote:\n> > >\n> > > > ~~~\n> > > >\n> > > > 14.\n> > > > 99. General - ordering of conflict_resolver\n> > > >\n> > > > nit - ditto. 
Let's name these in alphabetical order. IMO it makes more\n> > > > sense than the current random ordering.\n> > > >\n> > >\n> > > I feel ordering of resolvers should be same as that of conflict\n> > > types, i.e. resolvers of insert variants first, then update variants,\n> > > then delete variants. But would like to know what others think on\n> > > this.\n> > >\n> >\n> > Resolvers in v14 were documented in this random order:\n> > error\n> > skip\n> > apply_remote\n> > keep_local\n> > apply_or_skip\n> > apply_or_error\n> >\n>\n> Yes, these should be changed.\n>\n> > Some of these are resolvers for different conflicts. How can you order\n> > these as \"resolvers for insert\" followed by \"resolvers for update\"\n> > followed by \"resolvers for delete\" without it all still appearing in\n> > random order?\n>\n> I was thinking of ordering them like this:\n>\n> apply_remote: applicable to insert_exists, update_exists,\n> update_origin_differ, delete_origin_differ\n> keep_local: applicable to insert_exists,\n> update_exists, update_origin_differ, delete_origin_differ\n> apply_or_skip: applicable to update_missing\n> apply_or_error : applicable to update_missing\n> skip: applicable to update_missing and\n> delete_missing\n> error: applicable to all.\n>\n> i.e. in order of how they are applicable to conflict_types starting\n> from insert_exists till delete_origin_differ (i.e. reading\n> ConflictTypeResolverMap, from left to right and then top to bottom).\n> Except I have kept 'error' at the end instead of keeping it after\n> 'keep_local' as the former makes more sense there.\n>\n\nThis proves my point because, without your complicated explanation to\naccompany it, the final order (below) just looks random to me:\napply_remote\nkeep_local\napply_or_skip\napply_or_error\nskip\nerror\n\nUnless there is some compelling reason to do it differently, I still\nprefer A-Z (the KISS principle).\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 30 Sep 2024 19:25:00 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict Detection and Resolution" } ]
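As a point of reference for the ordering discussion above, here is a minimal, purely hypothetical sketch of the CONFLICT RESOLVER clause being reviewed. The syntax comes from the in-progress patch quoted in the review (it is not part of any released PostgreSQL version, and the final grammar may differ); the conflict types are grouped insert/update/delete and listed alphabetically within each group, and every resolver value is taken from the applicability list given earlier in the thread.

-- Hypothetical example only: syntax from the patch under review, not a
-- released feature; conflict-type and resolver names as discussed above.
CREATE SUBSCRIPTION regress_testsub
    CONNECTION 'dbname=regress_doesnotexist'
    PUBLICATION testpub
    CONFLICT RESOLVER (
        insert_exists         = 'keep_local',
        update_exists         = 'keep_local',
        update_missing        = 'apply_or_skip',
        update_origin_differs = 'apply_remote',
        delete_missing        = 'skip',
        delete_origin_differs = 'error'
    )
    WITH (connect = false);

Per the patch, the corresponding ALTER SUBSCRIPTION ... RESET CONFLICT RESOLVER ALL (or ... FOR (conflict_type)) forms mentioned in review comments 9 and 10 would put the default resolvers back.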
[ { "msg_contents": "Hello, I have a question about cross-compiling. I get an error when doing initdb for postgresql for arm64 architecture devices.\r\nThe error information is Error relocating /data/postgresql/postgresql-16.3-arm64-v8a-build/tmp_install/usr/postgresql/arm64-v8a/lib/dict_snowball.so: palloc0: symbol not found.\r\nIn fact, the library exists in this directory, and the palloc symbol exists but is not defined.\r\nAny tips to go around this issue?\r\nThanks!\nHello, I have a question about cross-compiling. I get an error when doing initdb for postgresql for arm64 architecture devices.The error information is Error relocating /data/postgresql/postgresql-16.3-arm64-v8a-build/tmp_install/usr/postgresql/arm64-v8a/lib/dict_snowball.so: palloc0: symbol not found.In fact, the library exists in this directory, and the palloc symbol exists but is not defined.Any tips to go around this issue?Thanks!", "msg_date": "Thu, 23 May 2024 17:08:05 +0800", "msg_from": "\"=?utf-8?B?6ZmI5Lqa5p2w?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "about cross-compiling issue" }, { "msg_contents": "\"=?utf-8?B?6ZmI5Lqa5p2w?=\" <[email protected]> writes:\n> Hello, I have a question about cross-compiling. I get an error when doing initdb for postgresql for arm64 architecture devices.\n> The error information is Error relocating /data/postgresql/postgresql-16.3-arm64-v8a-build/tmp_install/usr/postgresql/arm64-v8a/lib/dict_snowball.so: palloc0: symbol not found.\n\nWe don't really support cross-compiling, because there are too many\nthings that the configure script can't check for if it can't run a\ntest program. In this particular case I think what is biting you\nis that configure won't add -Wl,--export-dynamic to the backend\nlink switches.\n\nYou might think that that shouldn't require a test program to\nverify, but c-compiler.m4 says differently:\n\n# Given a string, check if the compiler supports the string as a\n# command-line option. If it does, add to the given variable.\n# For reasons you'd really rather not know about, this checks whether\n# you can link to a particular function, not just whether you can link.\n# In fact, we must actually check that the resulting program runs :-(\n\nThis check dates to 2008, and maybe it's no longer necessary on\nany modern system, but I'm unexcited about trying to find out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 May 2024 11:10:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: about cross-compiling issue" }, { "msg_contents": "Hi Chen Yajie,\n\n Your provided information is fuzzy, so I can only give some simple suggestions:\n1. Use `file dict_snowball.so` to see the detail info of dict_snowball.so, maybe you can get some useful hint.\n2. Use gdb to debug the initdb processing, then you can get more detail error info. That will help you figuring out the reason that initdb not working.\n\n\n\n\n\nAt 2024-05-23 16:08:05, \"陈亚杰\" <[email protected]> wrote:\n\n\n\n\n\nHello, I have a question about cross-compiling. 
I get an error when doing initdb for postgresql for arm64 architecture devices.\nThe error information is Error relocating /data/postgresql/postgresql-16.3-arm64-v8a-build/tmp_install/usr/postgresql/arm64-v8a/lib/dict_snowball.so: palloc0: symbol not found.\nIn fact, the library exists in this directory, and the palloc symbol exists but is not defined.\nAny tips to go around this issue?\nThanks!\nHi Chen Yajie, Your provided information is fuzzy, so I can only give some simple suggestions:1. Use `file dict_snowball.so` to see the detail info of dict_snowball.so, maybe you can get some useful hint.2. Use gdb to debug the initdb processing, then you can get more detail error info. That will help you figuring out the reason that initdb not working.At 2024-05-23 16:08:05, \"陈亚杰\" <[email protected]> wrote:Hello, I have a question about cross-compiling. I get an error when doing initdb for postgresql for arm64 architecture devices.The error information is Error relocating /data/postgresql/postgresql-16.3-arm64-v8a-build/tmp_install/usr/postgresql/arm64-v8a/lib/dict_snowball.so: palloc0: symbol not found.In fact, the library exists in this directory, and the palloc symbol exists but is not defined.Any tips to go around this issue?Thanks!", "msg_date": "Fri, 24 May 2024 14:02:52 +0800 (CST)", "msg_from": "\"Long Song\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re:about cross-compiling issue" } ]
[ { "msg_contents": "Hello hackers,\n\nI'd like to discuss ways to improve the buildfarm experience for anyone who\nare interested in using information which buildfarm gives to us.\n\nUnless I'm missing something, as of now there are no means to determine\nwhether some concrete failure is known/investigated or fixed, how\nfrequently it occurs and so on... From my experience, it's not that\nunbelievable that some failure occurred two years ago and lost in time was\nan indication of e. g. a race condition still existing in the code/tests\nand thus worth fixing. But without classifying/marking failures it's hard\nto find such or other interesting failure among many others...\n\nThe first way to improve things I can imagine is to add two fields to the\nbuildfarm database: a link to the failure discussion (set when the failure\nis investigated/reproduced and reported in -bugs or -hackers) and a commit\nid/link (set when the failure is fixed). I understand that it requires\nmodifying the buildfarm code, and adding some UI to update these fields,\nbut it allows to add filters to see only unknown/non-investigated failures\nin the buildfarm web interface later.\n\nThe second way is to create a wiki page, similar to \"PostgreSQL 17 Open\nItems\", say, \"Known buildfarm test failures\" and fill it like below:\n<url to failure1>\n<url to failure2>\n...\nUseful info from the failure logs for reference\n...\n<link to -hackers thread>\n---\nThis way is less invasive, but it would work well only if most of\ninterested people know of it/use it.\n(I could start with the second approach, if you don't mind, and we'll see\nhow it works.)\n\nBest regards,\nAlexande|r|\n\n\n\n\n\n Hello hackers,\n\n I'd like to discuss ways to improve the buildfarm experience for\n anyone who\n are interested in using information which buildfarm gives to us.\n\n Unless I'm missing something, as of now there are no means to\n determine\n whether some concrete failure is known/investigated or fixed, how\n frequently it occurs and so on... From my experience, it's not that\n unbelievable that some failure occurred two years ago and lost in\n time was\n an indication of e. g. a race condition still existing in the\n code/tests\n and thus worth fixing. But without classifying/marking failures it's\n hard\n to find such or other interesting failure among many others...\n\n The first way to improve things I can imagine is to add two fields\n to the\n buildfarm database: a link to the failure discussion (set when the\n failure\n is investigated/reproduced and reported in -bugs or -hackers) and a\n commit\n id/link (set when the failure is fixed). 
I understand that it\n requires\n modifying the buildfarm code, and adding some UI to update these\n fields,\n but it allows to add filters to see only unknown/non-investigated\n failures\n in the buildfarm web interface later.\n\n The second way is to create a wiki page, similar to \"PostgreSQL 17\n Open\n Items\", say, \"Known buildfarm test failures\" and fill it like below:\n <url to failure1>\n <url to failure2>\n ...\n Useful info from the failure logs for reference\n ...\n <link to -hackers thread>\n ---\n This way is less invasive, but it would work well only if most of\n interested people know of it/use it.\n (I could start with the second approach, if you don't mind, and\n we'll see\n how it works.)\n\n Best regards,\n Alexander", "msg_date": "Thu, 23 May 2024 14:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Improving tracking/processing of buildfarm test failures" }, { "msg_contents": "On Thu, May 23, 2024 at 4:30 PM Alexander Lakhin <[email protected]> wrote:\n>\n> I'd like to discuss ways to improve the buildfarm experience for anyone who\n> are interested in using information which buildfarm gives to us.\n>\n> Unless I'm missing something, as of now there are no means to determine\n> whether some concrete failure is known/investigated or fixed, how\n> frequently it occurs and so on... From my experience, it's not that\n> unbelievable that some failure occurred two years ago and lost in time was\n> an indication of e. g. a race condition still existing in the code/tests\n> and thus worth fixing. But without classifying/marking failures it's hard\n> to find such or other interesting failure among many others...\n>\n> The first way to improve things I can imagine is to add two fields to the\n> buildfarm database: a link to the failure discussion (set when the failure\n> is investigated/reproduced and reported in -bugs or -hackers) and a commit\n> id/link (set when the failure is fixed). I understand that it requires\n> modifying the buildfarm code, and adding some UI to update these fields,\n> but it allows to add filters to see only unknown/non-investigated failures\n> in the buildfarm web interface later.\n>\n> The second way is to create a wiki page, similar to \"PostgreSQL 17 Open\n> Items\", say, \"Known buildfarm test failures\" and fill it like below:\n> <url to failure1>\n> <url to failure2>\n> ...\n> Useful info from the failure logs for reference\n> ...\n> <link to -hackers thread>\n> ---\n> This way is less invasive, but it would work well only if most of\n> interested people know of it/use it.\n> (I could start with the second approach, if you don't mind, and we'll see\n> how it works.)\n>\n\nI feel it is a good idea to do something about this. It makes sense to\nstart with something simple and see how it works. 
I think this can\nalso help us whether we need to chase a particular BF failure\nimmediately after committing.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 24 May 2024 16:45:07 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving tracking/processing of buildfarm test failures" }, { "msg_contents": "On Thu, May 23, 2024 at 02:00:00PM +0300, Alexander Lakhin wrote:\n> I'd like to discuss ways to improve the buildfarm experience for anyone who\n> are interested in using information which buildfarm gives to us.\n> \n> Unless I'm missing something, as of now there are no means to determine\n> whether some concrete failure is known/investigated or fixed, how\n> frequently it occurs and so on... From my experience, it's not that\n> unbelievable that some failure occurred two years ago and lost in time was\n> an indication of e. g. a race condition still existing in the code/tests\n> and thus worth fixing. But without classifying/marking failures it's hard\n> to find such or other interesting failure among many others...\n\nI agree this is an area of difficulty consuming buildfarm results. I have an\ninefficient template for studying a failure, which your proposals would help:\n\n**** grep recent -hackers for animal name\n**** search the log for ~10 strings (e.g. \"was terminated\") to find the real indicator of where it failed\n**** search mailing lists for that indicator\n**** search buildfarm database for that indicator\n\n> The first way to improve things I can imagine is to add two fields to the\n> buildfarm database: a link to the failure discussion (set when the failure\n> is investigated/reproduced and reported in -bugs or -hackers) and a commit\n> id/link (set when the failure is fixed). I understand that it requires\n\nI bet the hard part is getting data submissions, so I'd err on the side of\nmaking this as easy as possible for submitters. For example, accept free-form\ntext for quick notes, not only URLs and commit IDs.\n\n> modifying the buildfarm code, and adding some UI to update these fields,\n> but it allows to add filters to see only unknown/non-investigated failures\n> in the buildfarm web interface later.\n> \n> The second way is to create a wiki page, similar to \"PostgreSQL 17 Open\n> Items\", say, \"Known buildfarm test failures\" and fill it like below:\n> <url to failure1>\n> <url to failure2>\n> ...\n> Useful info from the failure logs for reference\n> ...\n> <link to -hackers thread>\n> ---\n> This way is less invasive, but it would work well only if most of\n> interested people know of it/use it.\n> (I could start with the second approach, if you don't mind, and we'll see\n> how it works.)\n\nCertainly you doing (2) can only help, though it may help less than (1).\n\n\nI recommend considering what the buildfarm server could discover and publish\non its own. Examples:\n\n- N members failed at the same step, in a related commit range. Those members\n are now mostly green. Defect probably got fixed quickly.\n\n- Log contains the following lines that are highly correlated with failure.\n The following other reports, if any, also contained them.\n\n\n", "msg_date": "Fri, 24 May 2024 13:00:35 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving tracking/processing of buildfarm test failures" }, { "msg_contents": "Hello Amit and Noah,\n\n24.05.2024 14:15, Amit Kapila wrote:\n> I feel it is a good idea to do something about this. 
It makes sense to\n> start with something simple and see how it works. I think this can\n> also help us whether we need to chase a particular BF failure\n> immediately after committing.\n\n24.05.2024 23:00, Noah Misch wrote:\n\n>\n>> (I could start with the second approach, if you don't mind, and we'll see\n>> how it works.)\n> Certainly you doing (2) can only help, though it may help less than (1).\n\nThank you for paying attention to this!\n\nI've created such page to accumulate information on test failures:\nhttps://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures\n\nI've deliberately added a trivial issue with partition_split, which is\ndoomed to be fixed soon, to test the information workflow, and I'm going\nto add a few other items in the coming days.\n\nPlease share your comments and suggestions, if any.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 25 May 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving tracking/processing of buildfarm test failures" }, { "msg_contents": "\nOn 2024-05-24 Fr 16:00, Noah Misch wrote:\n> On Thu, May 23, 2024 at 02:00:00PM +0300, Alexander Lakhin wrote:\n>> I'd like to discuss ways to improve the buildfarm experience for anyone who\n>> are interested in using information which buildfarm gives to us.\n>>\n>> Unless I'm missing something, as of now there are no means to determine\n>> whether some concrete failure is known/investigated or fixed, how\n>> frequently it occurs and so on... From my experience, it's not that\n>> unbelievable that some failure occurred two years ago and lost in time was\n>> an indication of e. g. a race condition still existing in the code/tests\n>> and thus worth fixing. But without classifying/marking failures it's hard\n>> to find such or other interesting failure among many others...\n> I agree this is an area of difficulty consuming buildfarm results. I have an\n> inefficient template for studying a failure, which your proposals would help:\n>\n> **** grep recent -hackers for animal name\n> **** search the log for ~10 strings (e.g. \"was terminated\") to find the real indicator of where it failed\n> **** search mailing lists for that indicator\n> **** search buildfarm database for that indicator\n>\n>> The first way to improve things I can imagine is to add two fields to the\n>> buildfarm database: a link to the failure discussion (set when the failure\n>> is investigated/reproduced and reported in -bugs or -hackers) and a commit\n>> id/link (set when the failure is fixed). I understand that it requires\n> I bet the hard part is getting data submissions, so I'd err on the side of\n> making this as easy as possible for submitters. 
For example, accept free-form\n> text for quick notes, not only URLs and commit IDs.\n>\n>> modifying the buildfarm code, and adding some UI to update these fields,\n>> but it allows to add filters to see only unknown/non-investigated failures\n>> in the buildfarm web interface later.\n>>\n>> The second way is to create a wiki page, similar to \"PostgreSQL 17 Open\n>> Items\", say, \"Known buildfarm test failures\" and fill it like below:\n>> <url to failure1>\n>> <url to failure2>\n>> ...\n>> Useful info from the failure logs for reference\n>> ...\n>> <link to -hackers thread>\n>> ---\n>> This way is less invasive, but it would work well only if most of\n>> interested people know of it/use it.\n>> (I could start with the second approach, if you don't mind, and we'll see\n>> how it works.)\n> Certainly you doing (2) can only help, though it may help less than (1).\n>\n>\n> I recommend considering what the buildfarm server could discover and publish\n> on its own. Examples:\n>\n> - N members failed at the same step, in a related commit range. Those members\n> are now mostly green. Defect probably got fixed quickly.\n>\n> - Log contains the following lines that are highly correlated with failure.\n> The following other reports, if any, also contained them.\n>\n>\n\n\nI'm prepared to help, but also bear in mind that currently the only \npeople who can submit notes are animal owners who can attach notes to \ntheir own animals. I'm not keen to allow general public submission of \nnotes to the database. We already get lots of spam requests that we turn \naway.\n\nIf you have queries that you want canned we can look at that. Ditto \nextra database fields. Currently we don't have any processing that \ncorrelates different failures, but that's not inconceivable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 26 May 2024 07:44:11 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving tracking/processing of buildfarm test failures" }, { "msg_contents": "Hello hackers,\n\n25.05.2024 15:00, I wrote:\n> I've created such page to accumulate information on test failures:\n> https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures\n>\n\nOne month later,  I'd like to summarize failures that I've investigated\nand classified during June, 2024 on the aforementioned wiki page.\n(Maybe it would make sense to issue a monthly report with such information\nin the future.)\n\nImagining a hypothetical table, we could get such statistics:\n# SELECT br, count(*) FROM failures WHERE dt >= '2024-06-01' AND\n  dt < '2024-07-01' GROUP BY br;\nREL_12_STABLE: 6\nREL_13_STABLE: 14\nREL_14_STABLE: 13\nREL_15_STABLE: 10\nREL_16_STABLE: 4\nHEAD: 47\n-- Total: 94\n(Counting test failures only, excluding indent-check, Configure, Build\nerrors.)\n\n# SELECT COUNT(*) FROM (SELECT DISTINCT issue_link FROM failures WHERE\n  dt >= '2024-06-01' AND dt < '2024-07-01');\n21\n\n\n# SELECT issue_link, count(*) FROM failures WHERE dt >= '2024-06-01' AND\n  dt < '2024-07-01' GROUP BY issue_link ORDER BY 2 DESC LIMIT 7;\nhttps://www.postgresql.org/message-id/[email protected]: 13\n-- \nhttps://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#inplace-inval.spec_fails_on_prion_and_trilobite_on_checking_relhasindex\n-- Fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 10\n-- \nhttps://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#008_fsm_truncation_failing_on_dodo_in_v14-_due_to_slow_fsync\n-- 
An environmental issue\n\nhttps://www.postgresql.org/message-id/[email protected]: 9\n-- https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#regress-running.2Fregress_fails_on_skink_due_to_timeout\n-- An environmental issue\n\nhttps://www.postgresql.org/message-id/[email protected]: 9\n-- \nhttps://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#ssl_tests_.28001_ssltests.pl.2C_002_scram.pl.2C_003_sslinfo.pl.29_fail_due_to_TCP_port_conflict\n-- A fix proposed, commit pending\n\nhttps://www.postgresql.org/message-id/[email protected]: 9\n-- \nhttps://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#plperl.sql_failing_in_v15-_on_caiman_with_a_newer_Perl_version\n-- Fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 7\n-- \nhttps://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#040_pg_createsubscriber.pl_fails_on_Windows_due_to_unterminated_quoted_string\n-- Fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 6\n-- \nhttps://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#Isolation_tests_fail_on_hamerkop_with_.22too_many_clients.22_errors\n-- A fix proposed, commit pending\n\n\n# SELECT fix_link, count(*) FROM failures WHERE dt >= '2024-06-01' AND\n  dt < '2024-07-01' AND fix_link IS NOT NULL GROUP BY fix_link ORDER BY 2 DESC;\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=458fada72: 13\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f853e23bf: 10\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a1333ec04: 7\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b96391382: 3\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e656657f2: 1\n-- Total: 5\n\n\n# SELECT log_link FROM failures WHERE dt >= '2024-06-01' AND\n  dt < '2024-07-01' AND issue_link IS NULL; -- Not investigated/classified failures\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-17%2004%3A21%3A42\ninitdb: error: invalid locale settings; check LANG and LC_* environment variables\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-06-27%2015%3A38%3A27\nStopDb-C:4\npg_ctl: server does not shut down\n-- The most mysterious issue to me, more information needed\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-06-13%2017%3A58%3A28\nStopDb-C:4\npg_ctl: server does not shut down\n-- The most mysterious issue to me, more information needed\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-06-28%2001%3A06%3A00\n# Running: pg_ctl -D \nC:\\\\prog\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/recovery/002_archiving\\\\data/t_002_archiving_standby_data/pgdata \n-l C:\\\\prog\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/recovery/002_archiving\\\\log/002_archiving_standby.log promote\nwaiting for server to \npromote........................................................................................................................... 
\nstopped waiting\npg_ctl: server did not promote in time\n-- Most probably the machine's performance issue, an issue report is pending.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-06-04%2022%3A45%3A00\nconnection error: 'psql: error: connection to server on socket \"/tmp/9IDPzZm7Pp/.s.PGSQL.63572\" failed: FATAL:  role \n\"bf\" does not exist'\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-06-17%2016%3A02%3A03&stg=xversion-upgrade-REL_16_STABLE-HEAD\nprogram \"postgres\" is needed by pg_ctl but was not found in the same directory as \n\"/home/andrew/bf/root/saves.crake/REL_16_STABLE/bin/pg_ctl\"\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-06-17%2017%3A07%3A03&stg=xversion-upgrade-REL_16_STABLE-HEAD\nprogram \"postgres\" is needed by pg_ctl but was not found in the same directory as \n\"/home/andrew/bf/root/saves.crake/REL_16_STABLE/bin/pg_ctl\"\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-07-01%2003%3A13%3A26\n+ERROR:  could not access file \"/repos/build-farm-17/HEAD/inst/lib/postgresql/plpgsql.so\": No such file or directory\n\n-- Total: 8\n\nAll the queries above are imaginary and some numbers could be inaccurate,\nbut I think it still represents the current state of affairs.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 2 Jul 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving tracking/processing of buildfarm test failures" }, { "msg_contents": "02.07.2024 15:00, Alexander Lakhin wrote:\n>\n> One month later,  I'd like to summarize failures that I've investigated\n> and classified during June, 2024 on the aforementioned wiki page.\n> (Maybe it would make sense to issue a monthly report with such information\n> in the future.)\n\nPlease take a look at July report on the buildfarm failures:\n# SELECT br, count(*) FROM failures WHERE dt >= '2024-07-01' AND\n  dt < '2024-08-01' GROUP BY br;\n\nREL_12_STABLE: 11\nREL_13_STABLE: 9\nREL_14_STABLE: 7\nREL_15_STABLE: 10\nREL_16_STABLE: 9\nREL_17_STABLE: 68\nHEAD: 106\n-- Total: 220\n(Counting test failures only, excluding indent-check, Configure, Build\nerrors.)\n\n# SELECT COUNT(*) FROM (SELECT DISTINCT issue_link FROM failures WHERE\n  dt >= '2024-07-01' AND dt < '2024-08-01');\n40\n\n# SELECT issue_link, count(*) FROM failures WHERE dt >= '2024-07-01' AND\n  dt < '2024-08-01' GROUP BY issue_link ORDER BY 2 DESC LIMIT 9;\n\nhttps://www.postgresql.org/message-id/[email protected]: 29\n-- An environmental issue\n\nhttps://www.postgresql.org/message-id/[email protected]: 20\n-- Probably fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 11\n-- Fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 9\n\nhttps://www.postgresql.org/message-id/[email protected]: 8\n-- An environmental issue; probably fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 8\n\nhttps://www.postgresql.org/message-id/[email protected]: 8\n-- Fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 8\n-- Fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 8\n-- Fixed\n\n# SELECT count(*) FROM failures WHERE dt >= '2024-07-01' AND\n  dt < '2024-08-01' AND issue_link IS NULL; -- Unsorted/unhelpful failures\n17\n\nAnd one more metric, that might be useful, but it requires also time\nanalysis — short-lived (eliminated immediately) failures: 83\n\nI also wrote a simple script (see attached) to check for unknown 
buildfarm\nfailures using \"HTML API\", to make sure no failures missed. Surely, it\ncould be improved in many ways, but I find it rather useful as-is.\n\nBest regards,\nAlexander", "msg_date": "Thu, 1 Aug 2024 12:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving tracking/processing of buildfarm test failures" }, { "msg_contents": "\nOn 2024-08-01 Th 5:00 AM, Alexander Lakhin wrote:\n>\n>\n> I also wrote a simple script (see attached) to check for unknown \n> buildfarm\n> failures using \"HTML API\", to make sure no failures missed. Surely, it\n> could be improved in many ways, but I find it rather useful as-is.\n>\n>\n\nI think we can improve on that. Scraping HTML is not a terribly \nefficient way of doing it. I'd very much like to improve the reporting \nside of the server.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 1 Aug 2024 17:19:43 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving tracking/processing of buildfarm test failures" }, { "msg_contents": "Hello hackers,\n\nPlease take a look at the August report on buildfarm failures:\n# SELECT br, count(*) FROM failures WHERE dt >= '2024-08-01' AND\n  dt < '2024-09-01' GROUP BY br;\nREL_12_STABLE: 2\nREL_13_STABLE: 2\nREL_14_STABLE: 12\nREL_15_STABLE: 3\nREL_16_STABLE: 5\nREL_17_STABLE: 17\nHEAD: 38\n-- Total: 79\n(Counting test failures only, excluding indent-check, Configure, Build\nerrors.)\n\n# SELECT COUNT(*) FROM (SELECT DISTINCT issue_link FROM failures WHERE\n  dt >= '2024-08-01' AND dt < '2024-09-01');\n21\n\n# SELECT issue_link, count(*) FROM failures WHERE dt >= '2024-08-01' AND\n  dt < '2024-09-01' GROUP BY issue_link ORDER BY 2 DESC LIMIT 6;\nhttps://www.postgresql.org/message-id/8ce8261a-bf3a-25e6-b473-4808f50a6ea7%40gmail.com: 13\n-- An environmental issue; fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 9\n-- An environmental issue?; probably fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 7\n-- Fixed\n\nhttps://www.postgresql.org/message-id/[email protected]: 6\n-- Expected to be fixed with Release 18 of the buildfarm client\n\nhttps://www.postgresql.org/message-id/[email protected]: 5\n\nhttps://www.postgresql.org/message-id/[email protected]: 4\n-- Fixed\n\n# SELECT count(*) FROM failures WHERE dt >= '2024-08-01' AND\n  dt < '2024-09-01' AND issue_link IS NULL; -- Unsorted/unhelpful failures\n13\n\nShort-lived failures: 21\n\nThere were also two mysterious never-before-seen failures, both occurred on\nPOWER animals:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-08-19%2019%3A17%3A59 - REL_17_STABLE\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=iguana&dt=2024-08-29%2013%3A57%3A57 - REL_15_STABLE\n\n(I'm not sure yet, whether they should be considered \"unhelpful\". I'll\nwait for more information from these animals/buildfarm in general to\ndetermine what to do with these failures.)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 1 Sep 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving tracking/processing of buildfarm test failures" }, { "msg_contents": "Hello everyone\n\nI am a developer interested in this project. Had a little involvement with\nMariaDB and now I like to work on Postgres. Never worked with mailing lists\nso I am not sure if this is the way I should interact. 
Liked to be pointed\nto some tasks and documents to get started.\n\nHello everyoneI am a developer interested in this project. Had a little involvement with MariaDB and now I like to work on Postgres. Never worked with mailing lists so I am not sure if this is the way I should interact. Liked to be pointed to some tasks and documents to get started.", "msg_date": "Sun, 1 Sep 2024 22:16:41 +0330", "msg_from": "sia kc <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving tracking/processing of buildfarm test failures" }, { "msg_contents": "On 2024-09-01 Su 2:46 PM, sia kc wrote:\n> Hello everyone\n>\n> I am a developer interested in this project. Had a little involvement \n> with MariaDB and now I like to work on Postgres. Never worked with \n> mailing lists so I am not sure if this is the way I should interact. \n> Liked to be pointed to some tasks and documents to get started.\n\n\nDo you mean you want to be involved with $subject, or that you just want \nto be involved in Postgres development generally? If the latter, then \nreplying to a specific email thread is not the way to go, and the first \nthing to do is look at this wiki page \n<https://wiki.postgresql.org/wiki/Developer_FAQ>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-09-01 Su 2:46 PM, sia kc wrote:\n\n\n\n\n\nHello everyone\n\n\nI am a developer interested in this project. Had\n a little involvement with MariaDB and now I like to work\n on Postgres. Never worked with mailing lists so I am not\n sure if this is the way I should interact. Liked to be\n pointed to some tasks and documents to get started.\n\n\n\n\n\n\nDo you mean you want to be involved with $subject,\n or that you just want to be involved in Postgres development\n generally? If the latter, then replying to a specific email\n thread is not the way to go, and the first thing to do is look\n at this wiki page\n <https://wiki.postgresql.org/wiki/Developer_FAQ>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 2 Sep 2024 17:45:53 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving tracking/processing of buildfarm test failures" } ]
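The monthly summaries above are written against an imaginary "failures" table. Just to make the shape of that data concrete, here is a minimal sketch, an assumption for illustration only and not anything that exists in the buildfarm database, of a table that could back those queries. It carries the br/dt/log_link/issue_link/fix_link columns the reports use, plus the free-form notes field suggested earlier for easy submissions; the animal and stage columns are extra assumptions.

-- Hypothetical sketch only; no such table exists in the buildfarm database.
CREATE TABLE failures (
    id         serial PRIMARY KEY,
    br         text NOT NULL,          -- branch: 'HEAD', 'REL_17_STABLE', ...
    dt         timestamptz NOT NULL,   -- when the failure happened
    animal     text NOT NULL,          -- buildfarm member name (assumed column)
    stage      text,                   -- failed step (assumed column)
    log_link   text NOT NULL,          -- URL of the buildfarm log
    issue_link text,                   -- discussion thread, once investigated
    fix_link   text,                   -- commit URL, once fixed
    notes      text                    -- free-form quick notes
);

-- With that in place, the per-branch counts used in the reports are simply:
SELECT br, count(*)
FROM failures
WHERE dt >= '2024-08-01' AND dt < '2024-09-01'
GROUP BY br;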
[ { "msg_contents": "Hi ,\n\nI need to grant select on privilege in pg_catalog to user so I can connect via Toad Data point ,\n\nI tried by\ngrant select on all tables in schema pg_catalog to group sys;\nwhile connecting as sys.\n\nBut it throws me error\n\ngrant select on all tables in schema pg_catalog to sys ;\n\nERROR: permission denied for table pg_statistic SQL state: 42501\n\n\nCan I get some help please\n\n\nThank you\n\nSanjay karki\n\n\n\nSanjay Karki\n\nDatabase Administrator III\n\nITG\n\n[cid:[email protected]]\n\nO: 816-783-8718\nM: 816-394-4383\nW: www.naic.org<https://www.naic.org>\n\nFollow the NAIC on\n[cid:[email protected]]<https://www.facebook.com/NAIC.News> [cid:[email protected]] <https://twitter.com/NAIC> [cid:[email protected]] <https://www.youtube.com/user/NAICCommunications> [cid:[email protected]] <https://www.linkedin.com/company/naic/>\n\n\n\n--------------------------------------------------\n\nCONFIDENTIALITY NOTICE\n\n--------------------------------------------------\n\nThis message and any attachments are from the NAIC and are intended only for the addressee. Information contained herein is confidential, and may be privileged or exempt from disclosure pursuant to applicable federal or state law. This message is not intended as a waiver of the confidential, privileged or exempted status of the information transmitted. Unauthorized forwarding, printing, copying, distribution or use of such information is strictly prohibited and may be unlawful. If you are not the addressee, please promptly delete this message and notify the sender of the delivery error by e-mail or by forwarding it to the NAIC Service Desk at [email protected].", "msg_date": "Thu, 23 May 2024 22:01:41 +0000", "msg_from": "\"Karki, Sanjay\" <[email protected]>", "msg_from_op": true, "msg_subject": "PG catalog" }, { "msg_contents": "On Thursday, May 23, 2024, Karki, Sanjay <[email protected]> wrote:\n>\n> I need to grant select on privilege in pg_catalog to user so I can connect\n> via Toad Data point ,\n>\n> Users can already select from the tables in pg_catalog, grant able\nprivileges not required or allowed. Of course, some specific data is\nrestricted from non-superusers.\n\nDavid J.\n\nOn Thursday, May 23, 2024, Karki, Sanjay <[email protected]> wrote:\nI need to grant select on privilege in pg_catalog to user so I can connect via Toad Data point ,Users can already select from the tables in pg_catalog, grant able privileges not required or allowed.  Of course, some specific data is restricted from non-superusers.David J.", "msg_date": "Fri, 24 May 2024 04:07:51 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG catalog" }, { "msg_contents": "\"Karki, Sanjay\" <[email protected]> writes:\n> I need to grant select on privilege in pg_catalog to user so I can connect via Toad Data point ,\n\nWhy do you think you need to do that? Most catalogs have public\nselect privilege already, and for the ones that don't, there are\nvery good reasons why not. 
I don't know what \"Toad Data point\"\nis, but if it thinks it needs more privilege than is normally\ngranted, you should be asking very pointed questions about why\nand why that shouldn't be considered a security breach.\n\n(Usually we get complaints that the default permissions on the\ncatalogs are too loose, not too tight.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 09:57:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG catalog" } ]
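To see concretely why no extra GRANT is needed for ordinary catalog access, a query along the following lines can be run as the affected role. It is only a sketch, but it uses nothing beyond standard pg_class columns and has_table_privilege(), and it shows that most pg_catalog tables are already readable while a few (such as pg_statistic, which triggered the error above) are intentionally restricted to superusers.

-- Run as the non-superuser role to list which catalog tables it can read.
SELECT c.relname,
       has_table_privilege(c.oid, 'SELECT') AS can_select
FROM pg_class c
WHERE c.relnamespace = 'pg_catalog'::regnamespace
  AND c.relkind = 'r'
ORDER BY can_select, c.relname;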
[ { "msg_contents": "hi\n\ntypedef struct RelOptInfo\n{\n....\n/*\n* information about a base rel (not set for join rels!)\n*/\nIndex relid;\n...\n}\n\nimho, the above comment is not very helpful.\nwe should say more about what kind of information relid says about a base rel?\n\nI don't know much about RelOptInfo, that's why I ask.\n\n\n", "msg_date": "Fri, 24 May 2024 10:57:57 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "struct RelOptInfo member relid comments" }, { "msg_contents": "jian he <[email protected]> 于2024年5月24日周五 10:58写道:\n\n> hi\n>\n> typedef struct RelOptInfo\n> {\n> ....\n> /*\n> * information about a base rel (not set for join rels!)\n> */\n> Index relid;\n> ...\n> }\n>\n> imho, the above comment is not very helpful.\n> we should say more about what kind of information relid says about a base\n> rel?\n>\n> I don't know much about RelOptInfo, that's why I ask.\n>\n>\n> The fields in struct RelOptInfo between comment \" information about a base\nrel \" and\ncommnet \"Information about foreign tables and foreign joins\" are all about\na base rel.\n\nEvery field has a comment. I think that's already helpful for understanding\nwhat information\nwe need to optimize a base rel.\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\njian he <[email protected]> 于2024年5月24日周五 10:58写道:hi\n\ntypedef struct RelOptInfo\n{\n....\n/*\n* information about a base rel (not set for join rels!)\n*/\nIndex relid;\n...\n}\n\nimho, the above comment is not very helpful.\nwe should say more about what kind of information relid says about a base rel?\n\nI don't know much about RelOptInfo, that's why I ask.\n\n\nThe fields in struct RelOptInfo between comment \" information about a base rel \" andcommnet \"Information about foreign tables and foreign joins\" are all about a base rel.Every field has a comment. 
I think that's already helpful for understanding what informationwe need to optimize a base rel.-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Fri, 24 May 2024 11:13:45 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: struct RelOptInfo member relid comments" }, { "msg_contents": "jian he <[email protected]> writes:\n> imho, the above comment is not very helpful.\n> we should say more about what kind of information relid says about a base rel?\n\n\"Relid\" is defined at the very top of the file:\n\n/*\n * Relids\n *\t\tSet of relation identifiers (indexes into the rangetable).\n */\ntypedef Bitmapset *Relids;\n\nRepeating that everyplace the term \"relid\" appears would not be\ntremendously helpful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 May 2024 23:14:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: struct RelOptInfo member relid comments" }, { "msg_contents": "On Fri, May 24, 2024 at 11:14 AM Tom Lane <[email protected]> wrote:\n>\n> jian he <[email protected]> writes:\n> > imho, the above comment is not very helpful.\n> > we should say more about what kind of information relid says about a base rel?\n>\n> \"Relid\" is defined at the very top of the file:\n>\n> /*\n> * Relids\n> * Set of relation identifiers (indexes into the rangetable).\n> */\n> typedef Bitmapset *Relids;\n>\n> Repeating that everyplace the term \"relid\" appears would not be\n> tremendously helpful.\n>\n\n`Index relid;`\nis a relation identifier, a base rel's rangetable index.\n\nDoes the above description make sense?\n\n\nBTW, I found several occurrences of \"base+OJ\", but I cannot find the\nexplanation of \"OJ\" or the \"OJ\" Acronyms.\n\n\n", "msg_date": "Fri, 24 May 2024 11:39:37 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: struct RelOptInfo member relid comments" }, { "msg_contents": "On Fri, May 24, 2024 at 9:09 AM jian he <[email protected]> wrote:\n\n> On Fri, May 24, 2024 at 11:14 AM Tom Lane <[email protected]> wrote:\n> >\n> > jian he <[email protected]> writes:\n> > > imho, the above comment is not very helpful.\n> > > we should say more about what kind of information relid says about a\n> base rel?\n> >\n> > \"Relid\" is defined at the very top of the file:\n> >\n> > /*\n> > * Relids\n> > * Set of relation identifiers (indexes into the\n> rangetable).\n> > */\n> > typedef Bitmapset *Relids;\n> >\n> > Repeating that everyplace the term \"relid\" appears would not be\n> > tremendously helpful.\n> >\n>\n> `Index relid;`\n> is a relation identifier, a base rel's rangetable index.\n>\n> Does the above description make sense?\n>\n>\n> BTW, I found several occurrences of \"base+OJ\", but I cannot find the\n> explanation of \"OJ\" or the \"OJ\" Acronyms.\n>\n>\n> OJ is an outer join, AFAIU. OJ's have their own relids. 
If you are\nwondering why not all joins - I think inner joins need not be tracked as\nseparated relations in parse tree, but OJ's need to be.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, May 24, 2024 at 9:09 AM jian he <[email protected]> wrote:On Fri, May 24, 2024 at 11:14 AM Tom Lane <[email protected]> wrote:\n>\n> jian he <[email protected]> writes:\n> > imho, the above comment is not very helpful.\n> > we should say more about what kind of information relid says about a base rel?\n>\n> \"Relid\" is defined at the very top of the file:\n>\n> /*\n>  * Relids\n>  *              Set of relation identifiers (indexes into the rangetable).\n>  */\n> typedef Bitmapset *Relids;\n>\n> Repeating that everyplace the term \"relid\" appears would not be\n> tremendously helpful.\n>\n\n`Index relid;`\nis a relation identifier, a base rel's rangetable index.\n\nDoes the above description make sense?\n\n\nBTW, I found several occurrences of \"base+OJ\", but I cannot find the\nexplanation of \"OJ\" or the \"OJ\" Acronyms.\n\n\nOJ is an outer join, AFAIU. OJ's have their own relids. If you are wondering why not all joins - I think inner joins need not be tracked as separated relations in parse tree, but OJ's need to be.-- Best Wishes,Ashutosh Bapat", "msg_date": "Fri, 24 May 2024 10:53:18 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: struct RelOptInfo member relid comments" }, { "msg_contents": "Ashutosh Bapat <[email protected]> writes:\n> OJ is an outer join, AFAIU. OJ's have their own relids. If you are\n> wondering why not all joins - I think inner joins need not be tracked as\n> separated relations in parse tree, but OJ's need to be.\n\nAn outer join is necessarily associated with explicit JOIN syntax\nin the FROM clause, and each such JOIN has its own rangetable entry\nand hence a relid. Inner joins might arise from comma syntax\n(that is, \"SELECT ... FROM tab1, tab2\"). For perhaps-historical\nreasons that syntax doesn't give rise to an explicit RTE_JOIN\nrangetable entry, so the implied join has no relid.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 01:28:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: struct RelOptInfo member relid comments" } ]
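To make the relid/rangetable relationship concrete, here is a small illustration using three throwaway tables. The tables and the query are assumptions for illustration only; the comments describe what the parser and planner do with the rangetable, not anything the query itself prints.

-- Illustration only, with three hypothetical tables.
CREATE TABLE tab1 (id int);
CREATE TABLE tab2 (id int);
CREATE TABLE tab3 (id int);

-- tab1, tab2 and tab3 each get their own rangetable entry, and a base
-- relation's RelOptInfo.relid is simply the index of its entry in that
-- rangetable. The explicit LEFT JOIN below also gets its own RTE_JOIN
-- entry (and hence its own relid), while the comma between tab1 and tab2,
-- an implicit inner join, adds no join rangetable entry at all; that is
-- why only base rels and outer joins ("base+OJ") carry relids.
SELECT *
FROM tab1,
     tab2 LEFT JOIN tab3 ON tab2.id = tab3.id;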
[ { "msg_contents": "Hi All,\n\nWe all know that installing an extension typically requires superuser\nprivileges, which means the database objects it creates are owned by the\nsuperuser.\n\nIf the extension creates any SECURITY DEFINER functions, it can introduce\nsecurity vulnerabilities. For example, consider an extension that creates\nthe following functions, outer_func and inner_func, in the schema s1 when\ninstalled:\n\nCREATE OR REPLACE FUNCTION s1.inner_func(data text)\nRETURNS void AS $$\nBEGIN\n INSERT INTO tab1(data_column) VALUES (data);\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION s1.outer_func(data text)\nRETURNS void AS $$\nBEGIN\n PERFORM inner_func(data);\nEND;\n$$ LANGUAGE plpgsql SECURITY DEFINER;\n\nIf a regular user creates another function with name \"inner_func\" with the\nsame signature in the public schema and sets the search path to public, s1,\nthe function created by the regular user in the public schema takes\nprecedence when outer_func is called. Since outer_func is marked as\nSECURITY DEFINER, the inner_func created by the user in the public schema\nis executed with superuser privileges. This allows the execution of any\nstatements within the function block, leading to potential security issues.\n\nTo address this problem, one potential solution is to adjust the function\nresolution logic. For instance, if the caller function belongs to a\nspecific schema, functions in the same schema should be given preference.\nAlthough I haven’t reviewed the feasibility in the code, this is one\npossible approach.\n\nAnother solution could be to categorize extension-created functions to\navoid duplication. This might not be an ideal solution, but it's another\nconsideration worth sharing.\n\nThoughts?\n\n--\nWith Regards,\nAshutosh Sharma.\n\nHi All,We all know that installing an extension typically requires superuser privileges, which means the database objects it creates are owned by the superuser.If the extension creates any SECURITY DEFINER functions, it can introduce security vulnerabilities. For example, consider an extension that creates the following functions, outer_func and inner_func, in the schema s1 when installed:CREATE OR REPLACE FUNCTION s1.inner_func(data text)RETURNS void AS $$BEGIN    INSERT INTO tab1(data_column) VALUES (data);END;$$ LANGUAGE plpgsql;CREATE OR REPLACE FUNCTION s1.outer_func(data text)RETURNS void AS $$BEGIN    PERFORM inner_func(data);END;$$ LANGUAGE plpgsql SECURITY DEFINER;If a regular user creates another function with name \"inner_func\" with the same signature in the public schema and sets the search path to public, s1, the function created by the regular user in the public schema takes precedence when outer_func is called. Since outer_func is marked as SECURITY DEFINER, the inner_func created by the user in the public schema is executed with superuser privileges. This allows the execution of any statements within the function block, leading to potential security issues.To address this problem, one potential solution is to adjust the function resolution logic. For instance, if the caller function belongs to a specific schema, functions in the same schema should be given preference. Although I haven’t reviewed the feasibility in the code, this is one possible approach.Another solution could be to categorize extension-created functions to avoid duplication. 
This might not be an ideal solution, but it's another consideration worth sharing.Thoughts?--With Regards,Ashutosh Sharma.", "msg_date": "Fri, 24 May 2024 12:51:43 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Addressing SECURITY DEFINER Function Vulnerabilities in PostgreSQL\n Extensions" }, { "msg_contents": "On Fri, May 24, 2024 at 12:52 PM Ashutosh Sharma <[email protected]>\nwrote:\n\n> Hi All,\n>\n> We all know that installing an extension typically requires superuser\n> privileges, which means the database objects it creates are owned by the\n> superuser.\n>\n> If the extension creates any SECURITY DEFINER functions, it can introduce\n> security vulnerabilities. For example, consider an extension that creates\n> the following functions, outer_func and inner_func, in the schema s1 when\n> installed:\n>\n> CREATE OR REPLACE FUNCTION s1.inner_func(data text)\n> RETURNS void AS $$\n> BEGIN\n> INSERT INTO tab1(data_column) VALUES (data);\n> END;\n> $$ LANGUAGE plpgsql;\n>\n> CREATE OR REPLACE FUNCTION s1.outer_func(data text)\n> RETURNS void AS $$\n> BEGIN\n> PERFORM inner_func(data);\n> END;\n> $$ LANGUAGE plpgsql SECURITY DEFINER;\n>\n> If a regular user creates another function with name \"inner_func\" with the\n> same signature in the public schema and sets the search path to public, s1,\n> the function created by the regular user in the public schema takes\n> precedence when outer_func is called. Since outer_func is marked as\n> SECURITY DEFINER, the inner_func created by the user in the public schema\n> is executed with superuser privileges. This allows the execution of any\n> statements within the function block, leading to potential security issues.\n>\n> To address this problem, one potential solution is to adjust the function\n> resolution logic. For instance, if the caller function belongs to a\n> specific schema, functions in the same schema should be given preference.\n> Although I haven’t reviewed the feasibility in the code, this is one\n> possible approach.\n>\n> Another solution could be to categorize extension-created functions to\n> avoid duplication. This might not be an ideal solution, but it's another\n> consideration worth sharing.\n>\n>\nFunction call should schema qualify it. That's painful, but it can be\navoided by setting a search path from inside the function. There was some\ndiscussion about setting a search path for a function at [1]. But the last\nmessage there is non-conclusive. We may want to extend it to extensions\nsuch that all the object references in a given extension are resolved using\nextension specific search_path.\n\n[1]\nhttps://www.postgresql.org/message-id/2710f56add351a1ed553efb677408e51b060e67c.camel%40j-davis.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, May 24, 2024 at 12:52 PM Ashutosh Sharma <[email protected]> wrote:Hi All,We all know that installing an extension typically requires superuser privileges, which means the database objects it creates are owned by the superuser.If the extension creates any SECURITY DEFINER functions, it can introduce security vulnerabilities. 
For example, consider an extension that creates the following functions, outer_func and inner_func, in the schema s1 when installed:CREATE OR REPLACE FUNCTION s1.inner_func(data text)RETURNS void AS $$BEGIN    INSERT INTO tab1(data_column) VALUES (data);END;$$ LANGUAGE plpgsql;CREATE OR REPLACE FUNCTION s1.outer_func(data text)RETURNS void AS $$BEGIN    PERFORM inner_func(data);END;$$ LANGUAGE plpgsql SECURITY DEFINER;If a regular user creates another function with name \"inner_func\" with the same signature in the public schema and sets the search path to public, s1, the function created by the regular user in the public schema takes precedence when outer_func is called. Since outer_func is marked as SECURITY DEFINER, the inner_func created by the user in the public schema is executed with superuser privileges. This allows the execution of any statements within the function block, leading to potential security issues.To address this problem, one potential solution is to adjust the function resolution logic. For instance, if the caller function belongs to a specific schema, functions in the same schema should be given preference. Although I haven’t reviewed the feasibility in the code, this is one possible approach.Another solution could be to categorize extension-created functions to avoid duplication. This might not be an ideal solution, but it's another consideration worth sharing.Function call should schema qualify it. That's painful, but it can be avoided by setting a search path from inside the function. There was some discussion about setting a search path for a function at [1]. But the last message there is non-conclusive. We may want to extend it to extensions such that all the object references in a given extension are resolved using extension specific search_path.[1] https://www.postgresql.org/message-id/2710f56add351a1ed553efb677408e51b060e67c.camel%40j-davis.com-- Best Wishes,Ashutosh Bapat", "msg_date": "Fri, 24 May 2024 14:24:52 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Fri, May 24, 2024 at 2:25 PM Ashutosh Bapat <[email protected]>\nwrote:\n\n>\n>\n> On Fri, May 24, 2024 at 12:52 PM Ashutosh Sharma <[email protected]>\n> wrote:\n>\n>> Hi All,\n>>\n>> We all know that installing an extension typically requires superuser\n>> privileges, which means the database objects it creates are owned by the\n>> superuser.\n>>\n>> If the extension creates any SECURITY DEFINER functions, it can introduce\n>> security vulnerabilities. For example, consider an extension that creates\n>> the following functions, outer_func and inner_func, in the schema s1 when\n>> installed:\n>>\n>> CREATE OR REPLACE FUNCTION s1.inner_func(data text)\n>> RETURNS void AS $$\n>> BEGIN\n>> INSERT INTO tab1(data_column) VALUES (data);\n>> END;\n>> $$ LANGUAGE plpgsql;\n>>\n>> CREATE OR REPLACE FUNCTION s1.outer_func(data text)\n>> RETURNS void AS $$\n>> BEGIN\n>> PERFORM inner_func(data);\n>> END;\n>> $$ LANGUAGE plpgsql SECURITY DEFINER;\n>>\n>> If a regular user creates another function with name \"inner_func\" with\n>> the same signature in the public schema and sets the search path to public,\n>> s1, the function created by the regular user in the public schema takes\n>> precedence when outer_func is called. 
Since outer_func is marked as\n>> SECURITY DEFINER, the inner_func created by the user in the public schema\n>> is executed with superuser privileges. This allows the execution of any\n>> statements within the function block, leading to potential security issues.\n>>\n>> To address this problem, one potential solution is to adjust the function\n>> resolution logic. For instance, if the caller function belongs to a\n>> specific schema, functions in the same schema should be given preference.\n>> Although I haven’t reviewed the feasibility in the code, this is one\n>> possible approach.\n>>\n>> Another solution could be to categorize extension-created functions to\n>> avoid duplication. This might not be an ideal solution, but it's another\n>> consideration worth sharing.\n>>\n>>\n> Function call should schema qualify it. That's painful, but it can be\n> avoided by setting a search path from inside the function. There was some\n> discussion about setting a search path for a function at [1]. But the last\n> message there is non-conclusive. We may want to extend it to extensions\n> such that all the object references in a given extension are resolved using\n> extension specific search_path.\n>\n> [1]\n> https://www.postgresql.org/message-id/2710f56add351a1ed553efb677408e51b060e67c.camel%40j-davis.com\n>\n\nThank you, Ashutosh, for the quick response. I've drafted a patch aimed at\naddressing this issue. The patch attempts to solve this issue by\nconfiguring the search_path for all security definer functions created by\nthe extension. It ensures they are set to trusted schemas, which includes\nthe schema where the extension and the function are created. PFA patch.\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Wed, 5 Jun 2024 14:36:54 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Wed, 2024-06-05 at 14:36 +0530, Ashutosh Sharma wrote:\n> Thank you, Ashutosh, for the quick response. I've drafted a patch\n> aimed at addressing this issue. The patch attempts to solve this\n> issue by configuring the search_path for all security definer\n> functions created by the extension.\n\nI like the general direction you propose, but I think it needs more\ndiscussion about the details.\n\n* What exactly is the right search_path for a function defined in an\nextension?\n\n* Do we need a new magic search_path value of \"$extension_schema\" that\nresolves to the extension's schema, so that it can handle ALTER\nEXTENSION ... SET SCHEMA?\n\n* What do we do for functions that want the current behavior and how do\nwe handle migration issues?\n\n* What about SECURITY INVOKER functions? Those can still be vulnerable\nto manipulation by the caller by setting search_path, which can cause\nan incorrect value to be returned. That can matter in some contexts\nlike a CHECK constraint.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 05 Jun 2024 14:06:11 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 6, 2024 at 2:36 AM Jeff Davis <[email protected]> wrote:\n\n> On Wed, 2024-06-05 at 14:36 +0530, Ashutosh Sharma wrote:\n> > Thank you, Ashutosh, for the quick response. I've drafted a patch\n> > aimed at addressing this issue. 
The patch attempts to solve this\n> > issue by configuring the search_path for all security definer\n> > functions created by the extension.\n\n\n> I like the general direction you propose, but I think it needs more\n> discussion about the details.\n>\n\nI agree.\n\n\n>\n> * What exactly is the right search_path for a function defined in an\n> extension?\n>\n\nDetermining the precise value can be challenging. However, since it's a\nfunction installed by an extension, typically, the search_path should\ninclude the extension's search_path and the schema where the function\nresides. If the function relies on a schema other than the one we set in\nits search_path, which would usually be the one created by the extension,\nthis approach will enforce the extension developers to set the extension's\nspecific search_path in the create function statement, if it's not set. The\nprimary goal here is to ensure that the security definer functions created\nby an extension do not refer to any untrusted schema(s).\n\n\n>\n> * Do we need a new magic search_path value of \"$extension_schema\" that\n> resolves to the extension's schema, so that it can handle ALTER\n> EXTENSION ... SET SCHEMA?\n>\n\nPossibly yes, we can think about it, I think it would be something like the\n$user we have now.\n\n\n>\n> * What do we do for functions that want the current behavior and how do\n> we handle migration issues?\n>\n\n\nThat can be controlled via some GUC if needed, I guess.\n\n\n>\n> * What about SECURITY INVOKER functions? Those can still be vulnerable\n> to manipulation by the caller by setting search_path, which can cause\n> an incorrect value to be returned. That can matter in some contexts\n> like a CHECK constraint.\n>\n\nI didn't get you completely here. w.r.t extensions how will this have an\nimpact if we set the search_path for definer functions.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nHi,On Thu, Jun 6, 2024 at 2:36 AM Jeff Davis <[email protected]> wrote:On Wed, 2024-06-05 at 14:36 +0530, Ashutosh Sharma wrote:\n> Thank you, Ashutosh, for the quick response. I've drafted a patch\n> aimed at addressing this issue. The patch attempts to solve this\n> issue by configuring the search_path for all security definer\n> functions created by the extension.\n\nI like the general direction you propose, but I think it needs more\ndiscussion about the details.I agree. \n\n* What exactly is the right search_path for a function defined in an\nextension?Determining the precise value can be challenging. However, since it's a function installed by an extension, typically, the search_path should include the extension's search_path and the schema where the function resides. If the function relies on a schema other than the one we set in its search_path, which would usually be the one created by the extension, this approach will enforce the extension developers to set the extension's specific search_path in the create function statement, if it's not set. The primary goal here is to ensure that the security definer functions created by an extension do not refer to any untrusted schema(s). \n\n* Do we need a new magic search_path value of \"$extension_schema\" that\nresolves to the extension's schema, so that it can handle ALTER\nEXTENSION ... SET SCHEMA?Possibly yes, we can think about it, I think it would be something like the $user we have now. \n\n* What do we do for functions that want the current behavior and how do\nwe handle migration issues?That can be controlled via some GUC if needed, I guess. 
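As a concrete reference point for the questions above: pinning the search_path on an individual function is already possible today through the SET clause of CREATE FUNCTION. A minimal sketch, reusing the s1.outer_func/s1.inner_func example from the first message (the particular list of trusted schemas is an assumption for illustration, not something mandated by the patch):

CREATE OR REPLACE FUNCTION s1.outer_func(data text)
RETURNS void AS $$
BEGIN
    PERFORM inner_func(data);  -- resolved through the pinned search_path below
END;
$$ LANGUAGE plpgsql SECURITY DEFINER
SET search_path = s1, pg_catalog, pg_temp;  -- caller's session search_path is ignored while this runs

With the search_path pinned like this, a rogue public.inner_func can no longer capture the call; the proposal being discussed would essentially automate this per-function setting for functions created by an extension without an explicit search_path.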
\n\n* What about SECURITY INVOKER functions? Those can still be vulnerable\nto manipulation by the caller by setting search_path, which can cause\nan incorrect value to be returned. That can matter in some contexts\nlike a CHECK constraint.I didn't get you completely here. w.r.t extensions how will this have an impact if we set the search_path for definer functions. --With Regards,Ashutosh Sharma.", "msg_date": "Thu, 6 Jun 2024 21:17:17 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Thu, 2024-06-06 at 21:17 +0530, Ashutosh Sharma wrote:\n> That can be controlled via some GUC if needed, I guess.\n\nThat's a possibility, but it's easy to create a mess that way. I don't\nnecessarily oppose it, but we'd need some pretty strong agreement that\nwe are somehow moving users in a better direction and not just creating\ntwo behaviors that last forever.\n\nI also think there should be a way to explicitly request the old\nbehavior -- obtaining search_path from the session -- regardless of how\nthe GUC is set.\n\n> I didn't get you completely here. w.r.t extensions how will this have\n> an impact if we set the search_path for definer functions. \n\nIf we only set the search path for SECURITY DEFINER functions, I don't\nthink that solves the whole problem.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 06 Jun 2024 09:53:19 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Thu, 6 Jun 2024 at 12:53, Jeff Davis <[email protected]> wrote:\n\n\n> > I didn't get you completely here. w.r.t extensions how will this have\n> > an impact if we set the search_path for definer functions.\n>\n> If we only set the search path for SECURITY DEFINER functions, I don't\n> think that solves the whole problem.\n\n\nIndeed. While the ability for a caller to set the search_path for a\nsecurity definer functions introduces security problems that are different\nthan for security invoker functions, it's still weird for the behaviour of\na function to depend on the caller's search_path. It’s even weirder for the\ndefault search path behaviour to be different depending on whether or not\nthe function is security definer.\n\nOn Thu, 6 Jun 2024 at 12:53, Jeff Davis <[email protected]> wrote: \n> I didn't get you completely here. w.r.t extensions how will this have\n> an impact if we set the search_path for definer functions. \n\nIf we only set the search path for SECURITY DEFINER functions, I don't\nthink that solves the whole problem.Indeed. While the ability for a caller to set the search_path for a security definer functions introduces security problems that are different than for security invoker functions, it's still weird for the behaviour of a function to depend on the caller's search_path. 
It’s even weirder for the default search path behaviour to be different depending on whether or not the function is security definer.", "msg_date": "Thu, 6 Jun 2024 14:09:55 -0400", "msg_from": "Isaac Morland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Thu, 6 Jun 2024 at 20:10, Isaac Morland <[email protected]> wrote:\n>\n> On Thu, 6 Jun 2024 at 12:53, Jeff Davis <[email protected]> wrote:\n>\n>>\n>> > I didn't get you completely here. w.r.t extensions how will this have\n>> > an impact if we set the search_path for definer functions.\n>>\n>> If we only set the search path for SECURITY DEFINER functions, I don't\n>> think that solves the whole problem.\n>\n>\n> Indeed. While the ability for a caller to set the search_path for a security definer functions introduces security problems that are different than for security invoker functions, it's still weird for the behaviour of a function to depend on the caller's search_path. It’s even weirder for the default search path behaviour to be different depending on whether or not the function is security definer.\n\n+1\n\nAnd +1 to the general idea and direction this thread is going in. I\ndefinitely think we should be making extensions more secure by\ndefault, and this is an important piece of it.\n\nEven by default making the search_path \"pg_catalog, pg_temp\" for\nfunctions created by extensions would be very useful.\n\n\n", "msg_date": "Fri, 7 Jun 2024 00:19:16 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Fri, 2024-06-07 at 00:19 +0200, Jelte Fennema-Nio wrote:\n> Even by default making the search_path \"pg_catalog, pg_temp\" for\n> functions created by extensions would be very useful.\n\nRight now there's no syntax to override that. We'd need something to\nsay \"get the search_path from the session\".\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 06 Jun 2024 16:20:16 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Thu, Jun 6, 2024 at 2:36 AM Jeff Davis <[email protected]> wrote:\n>\n> On Wed, 2024-06-05 at 14:36 +0530, Ashutosh Sharma wrote:\n> > Thank you, Ashutosh, for the quick response. I've drafted a patch\n> > aimed at addressing this issue. The patch attempts to solve this\n> > issue by configuring the search_path for all security definer\n> > functions created by the extension.\n>\n> I like the general direction you propose, but I think it needs more\n> discussion about the details.\n>\n> * What exactly is the right search_path for a function defined in an\n> extension?\n>\n> * Do we need a new magic search_path value of \"$extension_schema\" that\n> resolves to the extension's schema, so that it can handle ALTER\n> EXTENSION ... SET SCHEMA?\n>\n> * What do we do for functions that want the current behavior and how do\n> we handle migration issues?\n>\n> * What about SECURITY INVOKER functions? Those can still be vulnerable\n> to manipulation by the caller by setting search_path, which can cause\n> an incorrect value to be returned. That can matter in some contexts\n> like a CHECK constraint.\n>\n\nAttached is the new version of patch addressing aforementioned\ncomments. 
It implements the following changes:\n\n1) Extends the CREATE EXTENSION command to support a new option, SET\nSEARCH_PATH.\n2) If the SET SEARCH_PATH option is specified with the CREATE\nEXTENSION command, the implicit search_path for functions created by\nan extension is set, if not already configured. This is true for both\nSECURITY DEFINER and SECURITY INVOKER functions.\n3) When the ALTER EXTENSION SET SCHEMA command is executed and if the\nfunction's search_path contains the old schema of the extension, it is\nupdated with the new schema.\n\nPlease have a look and let me know your comments.\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Tue, 11 Jun 2024 15:24:30 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Tue, 11 Jun 2024 at 11:54, Ashutosh Sharma <[email protected]> wrote:\n> 1) Extends the CREATE EXTENSION command to support a new option, SET\n> SEARCH_PATH.\n\n\nI don't think it makes sense to add such an option to CREATE EXTENSION.\nI feel like such a thing should be part of the extension control file\ninstead. That way the extension author controls the search path, not\nthe person that installs the extension.\n\n\n", "msg_date": "Tue, 11 Jun 2024 13:31:56 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 11, 2024 at 5:02 PM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Tue, 11 Jun 2024 at 11:54, Ashutosh Sharma <[email protected]> wrote:\n> > 1) Extends the CREATE EXTENSION command to support a new option, SET\n> > SEARCH_PATH.\n>\n>\n> I don't think it makes sense to add such an option to CREATE EXTENSION.\n> I feel like such a thing should be part of the extension control file\n> instead. 
That way the extension author controls the search path, not\n> the person that installs the extension.\n\nIf the author has configured the search_path for any desired function,\nusing this option with the CREATE EXTENSION command will not affect\nthose functions.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Tue, 11 Jun 2024 18:19:56 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi,\n\nOn Tue, 11 Jun 2024 at 14:50, Ashutosh Sharma <[email protected]> wrote:\n\n> If the author has configured the search_path for any desired function,\n> using this option with the CREATE EXTENSION command will not affect\n> those functions.\n>\n\nThen effectively this feature is useless.\nNow attackers can just set search_path for the current session.\nWith this feature they will also be able to influence search_path of not\nprotected functions when they create an extension.\n\nRegards,\n--\nAlexander Kukushkin\n\nHi,On Tue, 11 Jun 2024 at 14:50, Ashutosh Sharma <[email protected]> wrote:If the author has configured the search_path for any desired function,\nusing this option with the CREATE EXTENSION command will not affect\nthose functions.Then effectively this feature is useless.Now attackers can just set search_path for the current session.With this feature they will also be able to influence search_path of not protected functions when they create an extension.Regards,--Alexander Kukushkin", "msg_date": "Tue, 11 Jun 2024 14:56:26 +0200", "msg_from": "Alexander Kukushkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi Alexander,\n\nOn Tue, Jun 11, 2024 at 6:26 PM Alexander Kukushkin <[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, 11 Jun 2024 at 14:50, Ashutosh Sharma <[email protected]> wrote:\n>>\n>> If the author has configured the search_path for any desired function,\n>> using this option with the CREATE EXTENSION command will not affect\n>> those functions.\n>\n>\n> Then effectively this feature is useless.\n> Now attackers can just set search_path for the current session.\n> With this feature they will also be able to influence search_path of not protected functions when they create an extension.\n>\n\nApologies for any confusion, but I'm not entirely following your\nexplanation. 
Could you kindly provide further clarification?\nAdditionally, would you mind reviewing the problem description\noutlined in the initial email?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Tue, 11 Jun 2024 18:57:16 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Tue, 2024-06-11 at 14:56 +0200, Alexander Kukushkin wrote:\n> Now attackers can just set search_path for the current session.\n\nIIUC, the proposal is that only the function's \"SET\" clause can\noverride the behavior, not a top-level SET command.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 11 Jun 2024 14:31:39 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Tue, 2024-06-11 at 15:24 +0530, Ashutosh Sharma wrote:\n> 3) When the ALTER EXTENSION SET SCHEMA command is executed and if the\n> function's search_path contains the old schema of the extension, it\n> is\n> updated with the new schema.\n\nI don't think it's reasonable to search-and-replace within a function's\nSET clause at ALTER time.\n\nI believe we need a new special search_path item, like\n\"$extension_schema\", to mean the schema of the extension owning the\nfunction. It would, like \"$user\", automatically adjust to the current\nvalue when changed.\n\nThat sounds like a useful and non-controversial change.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 11 Jun 2024 14:37:11 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi Jeff,\n\nOn Wed, Jun 12, 2024 at 3:07 AM Jeff Davis <[email protected]> wrote:\n>\n> On Tue, 2024-06-11 at 15:24 +0530, Ashutosh Sharma wrote:\n> > 3) When the ALTER EXTENSION SET SCHEMA command is executed and if the\n> > function's search_path contains the old schema of the extension, it\n> > is\n> > updated with the new schema.\n>\n> I don't think it's reasonable to search-and-replace within a function's\n> SET clause at ALTER time.\n>\n> I believe we need a new special search_path item, like\n> \"$extension_schema\", to mean the schema of the extension owning the\n> function. It would, like \"$user\", automatically adjust to the current\n> value when changed.\n>\n> That sounds like a useful and non-controversial change.\n\nI've definitely thought about it, and I agree that this approach could\nhelp simplify things. However, we do have some challenges with the\nresolution of $extension_schema variable compared to $user. Firstly,\nwe need something like CurrentExtensionId to resolve\n$extension_schema, and we need to decide where to manage it\n(CurrentExtensionId). From what I understand, we should set\nCurrentExtensionId when we're getting ready to execute a function, as\nthat's when we recompute the namespace path. But to set\nCurrentExtensionId at this point, we need information in pg_proc to\ndistinguish between extension-specific and non-extension functions. If\nit's an extension specific function, it should have the extension OID\nin pg_proc, which we can use to find the extension's namespace and\neventually resolve $extension_schema. 
So, to summarize this at a high\nlevel, here's is what I think can be done:\n\n1) Include extension-specific details, possibly the extension OID, for\nfunctions in pg_proc during function creation.\n\n2) Check if a function is extension-specific while preparing for\nfunction execution and set CurrentExtensionId accordingly.\n\n3) Utilize CurrentExtensionId to resolve the namespace path during\nrecomputation.\n\n4) Above steps will automatically repeat if the function is nested.\n\n5) Upon completion of function execution, reset CurrentExtensionId to\nInvalidOid.\n\nI think this should effectively tackle the outlined challenges with\nresolution of $extension_schema during namespace path recomputation.\nWhat are your thoughts on it?\n\nThanks,\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 12 Jun 2024 09:35:58 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Wed, Jun 12, 2024 at 9:35 AM Ashutosh Sharma <[email protected]> wrote:\n>\n> Hi Jeff,\n>\n> On Wed, Jun 12, 2024 at 3:07 AM Jeff Davis <[email protected]> wrote:\n> >\n> > On Tue, 2024-06-11 at 15:24 +0530, Ashutosh Sharma wrote:\n> > > 3) When the ALTER EXTENSION SET SCHEMA command is executed and if the\n> > > function's search_path contains the old schema of the extension, it\n> > > is\n> > > updated with the new schema.\n> >\n> > I don't think it's reasonable to search-and-replace within a function's\n> > SET clause at ALTER time.\n> >\n> > I believe we need a new special search_path item, like\n> > \"$extension_schema\", to mean the schema of the extension owning the\n> > function. It would, like \"$user\", automatically adjust to the current\n> > value when changed.\n> >\n> > That sounds like a useful and non-controversial change.\n>\n> I've definitely thought about it, and I agree that this approach could\n> help simplify things. However, we do have some challenges with the\n> resolution of $extension_schema variable compared to $user. Firstly,\n> we need something like CurrentExtensionId to resolve\n> $extension_schema, and we need to decide where to manage it\n> (CurrentExtensionId). From what I understand, we should set\n> CurrentExtensionId when we're getting ready to execute a function, as\n> that's when we recompute the namespace path. But to set\n> CurrentExtensionId at this point, we need information in pg_proc to\n> distinguish between extension-specific and non-extension functions. If\n> it's an extension specific function, it should have the extension OID\n> in pg_proc, which we can use to find the extension's namespace and\n> eventually resolve $extension_schema. 
So, to summarize this at a high\n> level, here's is what I think can be done:\n>\n> 1) Include extension-specific details, possibly the extension OID, for\n> functions in pg_proc during function creation.\n>\n\nAlternatively, we could leverage the extension dependency information\nto determine whether the function is created by an extension or not.\n\n> 2) Check if a function is extension-specific while preparing for\n> function execution and set CurrentExtensionId accordingly.\n>\n> 3) Utilize CurrentExtensionId to resolve the namespace path during\n> recomputation.\n>\n> 4) Above steps will automatically repeat if the function is nested.\n>\n> 5) Upon completion of function execution, reset CurrentExtensionId to\n> InvalidOid.\n>\n> I think this should effectively tackle the outlined challenges with\n> resolution of $extension_schema during namespace path recomputation.\n> What are your thoughts on it?\n>\n> Thanks,\n>\n> --\n> With Regards,\n> Ashutosh Sharma.\n\n\n", "msg_date": "Wed, 12 Jun 2024 09:51:22 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Wed, Jun 12, 2024 at 9:51 AM Ashutosh Sharma <[email protected]>\nwrote:\n\n> On Wed, Jun 12, 2024 at 9:35 AM Ashutosh Sharma <[email protected]>\n> wrote:\n> >\n> > Hi Jeff,\n> >\n> > On Wed, Jun 12, 2024 at 3:07 AM Jeff Davis <[email protected]> wrote:\n> > >\n> > > On Tue, 2024-06-11 at 15:24 +0530, Ashutosh Sharma wrote:\n> > > > 3) When the ALTER EXTENSION SET SCHEMA command is executed and if the\n> > > > function's search_path contains the old schema of the extension, it\n> > > > is\n> > > > updated with the new schema.\n> > >\n> > > I don't think it's reasonable to search-and-replace within a function's\n> > > SET clause at ALTER time.\n> > >\n> > > I believe we need a new special search_path item, like\n> > > \"$extension_schema\", to mean the schema of the extension owning the\n> > > function. It would, like \"$user\", automatically adjust to the current\n> > > value when changed.\n> > >\n> > > That sounds like a useful and non-controversial change.\n> >\n> > I've definitely thought about it, and I agree that this approach could\n> > help simplify things. However, we do have some challenges with the\n> > resolution of $extension_schema variable compared to $user. Firstly,\n> > we need something like CurrentExtensionId to resolve\n> > $extension_schema, and we need to decide where to manage it\n> > (CurrentExtensionId). From what I understand, we should set\n> > CurrentExtensionId when we're getting ready to execute a function, as\n> > that's when we recompute the namespace path. But to set\n> > CurrentExtensionId at this point, we need information in pg_proc to\n> > distinguish between extension-specific and non-extension functions. If\n> > it's an extension specific function, it should have the extension OID\n> > in pg_proc, which we can use to find the extension's namespace and\n> > eventually resolve $extension_schema. So, to summarize this at a high\n> > level, here's is what I think can be done:\n> >\n> > 1) Include extension-specific details, possibly the extension OID, for\n> > functions in pg_proc during function creation.\n> >\n>\n> Alternatively, we could leverage the extension dependency information\n> to determine whether the function is created by an extension or not.\n>\n>\nThat will be simpler. We do that sort of thing for identity sequences. 
So\nthere's a precedent to do that kind of stuff.\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Wed, Jun 12, 2024 at 9:51 AM Ashutosh Sharma <[email protected]> wrote:On Wed, Jun 12, 2024 at 9:35 AM Ashutosh Sharma <[email protected]> wrote:\n>\n> Hi Jeff,\n>\n> On Wed, Jun 12, 2024 at 3:07 AM Jeff Davis <[email protected]> wrote:\n> >\n> > On Tue, 2024-06-11 at 15:24 +0530, Ashutosh Sharma wrote:\n> > > 3) When the ALTER EXTENSION SET SCHEMA command is executed and if the\n> > > function's search_path contains the old schema of the extension, it\n> > > is\n> > > updated with the new schema.\n> >\n> > I don't think it's reasonable to search-and-replace within a function's\n> > SET clause at ALTER time.\n> >\n> > I believe we need a new special search_path item, like\n> > \"$extension_schema\", to mean the schema of the extension owning the\n> > function. It would, like \"$user\", automatically adjust to the current\n> > value when changed.\n> >\n> > That sounds like a useful and non-controversial change.\n>\n> I've definitely thought about it, and I agree that this approach could\n> help simplify things. However, we do have some challenges with the\n> resolution of $extension_schema variable compared to $user. Firstly,\n> we need something like CurrentExtensionId to resolve\n> $extension_schema, and we need to decide where to manage it\n> (CurrentExtensionId). From what I understand, we should set\n> CurrentExtensionId when we're getting ready to execute a function, as\n> that's when we recompute the namespace path. But to set\n> CurrentExtensionId at this point, we need information in pg_proc to\n> distinguish between extension-specific and non-extension functions. If\n> it's an extension specific function, it should have the extension OID\n> in pg_proc, which we can use to find the extension's namespace and\n> eventually resolve $extension_schema. So, to summarize this at a high\n> level, here's is what I think can be done:\n>\n> 1) Include extension-specific details, possibly the extension OID, for\n> functions in pg_proc during function creation.\n>\n\nAlternatively, we could leverage the extension dependency information\nto determine whether the function is created by an extension or not.\nThat will be simpler. We do that sort of thing for identity sequences. So there's a precedent to do that kind of stuff. -- Best Wishes,Ashutosh Bapat", "msg_date": "Wed, 12 Jun 2024 12:13:56 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi Ashutosh,\n\nApologies for any confusion, but I'm not entirely following your\n> explanation. Could you kindly provide further clarification?\n> Additionally, would you mind reviewing the problem description\n> outlined in the initial email?\n>\n\nI know about the problem and have seen the original email.\nWhat confused me, is that your email didn't specify that SET SEARCH_PATH in\nthe CREATE EXTENSION is a boolean flag, hence I made an assumption that it\nis a TEXT (similar to GUC with the same name). Now after looking at your\ncode it makes more sense. Sorry about the confusion.\n\nBut, I also agree with Jelte, it should be a property of a control file,\nrather than a user controlled parameter, so that an attacker can't opt out.\n\nRegards,\n--\nAlexander Kukushkin\n\nHi Ashutosh,\nApologies for any confusion, but I'm not entirely following your\nexplanation. 
Could you kindly provide further clarification?\nAdditionally, would you mind reviewing the problem description\noutlined in the initial email?I know about the problem and have seen the original email.What confused me, is that your email didn't specify that SET SEARCH_PATH in the CREATE EXTENSION is a boolean flag, hence I made an assumption that it is a TEXT (similar to GUC with the same name). Now after looking at your code it makes more sense. Sorry about the confusion.But, I also agree with Jelte, it should be a property of a control file, rather than a user controlled parameter, so that an attacker can't opt out.Regards,--Alexander Kukushkin", "msg_date": "Wed, 12 Jun 2024 09:11:03 +0200", "msg_from": "Alexander Kukushkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi,\n\n> I know about the problem and have seen the original email.\n\nI'm sympathetic to the problem of potential privilege escalation when\nexecuting functions. At the same time I'm not sure if there's that\nmuch of a difference between 'CREATE EXTENSION' vs superusers copying\na series of 'CREATE FUNCTION' where they don't understand these same\nnuances.\n\nCREATE FUNCTION already provides an ability to set the search_path. So\nI'm wondering what the problem we want to solve here is. Is it that\nthere's too much friction for extension authors to have to set\nsearch_path as part of the function definition and we want to make it\neasier for them to \"set once and forget\"?\n\n\n> But, I also agree with Jelte, it should be a property of a control file, rather than a user controlled parameter, so that an attacker can't opt out.\n\n+1. Also curious what happens if an extension author has search_path\nalready set in proconfig for a function that doesn't match what's in\nthe control file. I'm guessing the function one should take\nprecedence.\n\n\n--\nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Wed, 12 Jun 2024 11:05:33 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Wed, 2024-06-12 at 12:13 +0530, Ashutosh Bapat wrote:\n> > Alternatively, we could leverage the extension dependency\n> > information\n> > to determine whether the function is created by an extension or\n> > not.\n> \n> That will be simpler. We do that sort of thing for identity\n> sequences. So there's a precedent to do that kind of stuff. \n\nI did not look at the details, but +1 for using information we already\nhave. There's a little bit of extra work to resolve it, but thanks to\nthe search_path cache it should only need to be done once per unique\nsearch_path setting per session.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 12:10:29 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Wed, Jun 12, 2024 at 2:05 PM John H <[email protected]> wrote:>\n> I'm sympathetic to the problem of potential privilege escalation when\n> executing functions. 
At the same time I'm not sure if there's that\n> much of a difference between 'CREATE EXTENSION' vs superusers copying\n> a series of 'CREATE FUNCTION' where they don't understand these same\n> nuances.\n\n+1.\n\n> CREATE FUNCTION already provides an ability to set the search_path. So\n> I'm wondering what the problem we want to solve here is. Is it that\n> there's too much friction for extension authors to have to set\n> search_path as part of the function definition and we want to make it\n> easier for them to \"set once and forget\"?\n\nIf that's the problem we want to solve, I'm unconvinced that it's\nworth doing anything. But I think there's another problem, which is\nthat if the extension is relocatable, how do you set a secure\nsearch_path? You could say SET search_path = foo, pg_catalog if you\nknow the extension will be installed in schema foo, but if you don't\nknow in what schema the extension will be installed, then what are you\nsupposed to do? The proposal of litting $extension_schema could help\nwith that ...\n\n...except I'm not sure that's really a full solution either, because\nwhat if the extension is installed into a schema that's writable by\nothers, like public? If $extension_schema resolves to public and\npublic allows CREATE access to normal users, you're in a lot of\ntrouble again already, because now an attacker can try to capture\noperator references within your function, or function calls that are\napproximate matches to some existing function by introducing perfect\nmatches that take priority. Perhaps we want to consider the case where\nthe attacker can write to the schema containing the extension as an\nunsupported scenario, but if so, we'd better be explicit about that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 15:36:05 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Wed, 2024-06-12 at 15:36 -0400, Robert Haas wrote:\n> But I think there's another problem, which is\n> that if the extension is relocatable, how do you set a secure\n> search_path? You could say SET search_path = foo, pg_catalog if you\n> know the extension will be installed in schema foo, but if you don't\n> know in what schema the extension will be installed, then what are\n> you\n> supposed to do? The proposal of litting $extension_schema could help\n> with that ...\n> \n> ...except I'm not sure that's really a full solution either, because\n> what if the extension is installed into a schema that's writable by\n> others, like public?\n\nJelte proposed something to fix that here:\n\nhttps://www.postgresql.org/message-id/CAGECzQQzDqDzakBkR71ZkQ1N1ffTjAaruRSqppQAKu3WF%2B6rNQ%40mail.gmail.com\n\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 13:52:31 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 12, 2024 at 11:35 PM John H <[email protected]> wrote:\n>\n> > But, I also agree with Jelte, it should be a property of a control file, rather than a user controlled parameter, so that an attacker can't opt out.\n>\n\nThis will be addressed in the next patch version.\n\n> +1. 
Also curious what happens if an extension author has search_path\n> already set in proconfig for a function that doesn't match what's in\n> the control file. I'm guessing the function one should take\n> precedence.\n>\n\nYes, if the author has explicitly set the proconfig, it will take precedence.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Thu, 13 Jun 2024 08:39:37 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi,\n\nPlease find the attached patch addressing all the comments. I kindly\nrequest your review and feedback. Your thoughts are greatly\nappreciated.\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Thu, 13 Jun 2024 13:39:01 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi Ashutosh,\n\nThinking about this more, could you clarify the problem/issue at hand?\nI think it's still not clear to me.\nYes, CREATE EXTENSION can create functions that lead to unexpected\nprivilege escalation, regardless\n if they are SECURITY DEFINER or SECURITY INVOKER (if the function is\ninadvertently executed by superuser).\nBut that's also true for a general CREATE FUNCTION call outside of extensions.\n\nIf I read the commit message I see:\n\n> This flag controls PostgreSQL's behavior in setting the implicit\n> search_path within the proconfig for functions created by an extension\n> that does not have the search_path explicitly set in proconfig\n\nIf that's the \"general\" issue you're trying to solve, I would wonder\nwhy we we wouldn't for instance be extending\nthe functionality to normal CREATE FUNCTION calls (ie, schema qualify\nbased off search_path)\n\nThanks,\nJohn H\n\nOn Thu, Jun 13, 2024 at 1:09 AM Ashutosh Sharma <[email protected]> wrote:\n>\n> Hi,\n>\n> Please find the attached patch addressing all the comments. I kindly\n> request your review and feedback. Your thoughts are greatly\n> appreciated.\n>\n> --\n> With Regards,\n> Ashutosh Sharma.\n\n\n\n-- \nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Mon, 17 Jun 2024 14:05:37 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi John,\n\nOn Tue, Jun 18, 2024 at 2:35 AM John H <[email protected]> wrote:\n>\n> Hi Ashutosh,\n>\n> Thinking about this more, could you clarify the problem/issue at hand?\n> I think it's still not clear to me.\n> Yes, CREATE EXTENSION can create functions that lead to unexpected\n> privilege escalation, regardless\n> if they are SECURITY DEFINER or SECURITY INVOKER (if the function is\n> inadvertently executed by superuser).\n> But that's also true for a general CREATE FUNCTION call outside of extensions.\n>\n\nThis specifically applies to extension functions, not standalone\nfunctions created independently. 
The difference is that installing\nextensions typically requires superuser privileges, which is not the\ncase with standalone functions.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Tue, 18 Jun 2024 09:44:54 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi Ashutosh,\n\nThanks for clarifying.\n\n> not standalone functions created independently\n\nI'm wondering why we wouldn't want to extend that functionality so\nthat if users (including superusers) do want to automatically\nconfigure search_path\nwhen they're creating functions it's available.\n\n> The difference is that installing extensions typically requires superuser privileges,\n> which is not the case with standalone functions.\n\nYes, but inadvertent access to different roles can still occur even if\nit's not a superuser.\n\nIt's not clear to me who the audience of this commit is aimed at,\ncause I see two sort of\ndifferent use cases?\n\n1. Make it easier for extension authors to configure search_path when\ncreating functions\n\nThe proposed mechanism via control file makes sense, although I would\nlike to understand\nwhy specifying it today in CREATE FUNCTION doesn't work. Is it an\nawareness issue?\nI suppose it's easier if you only need to set it once in the control file?\nIs that ease-of-use what we want to solve?\n\n2. Make it easier to avoid inadvertent escalations when functions are created\n(e.g CREATE EXTENSION/CREATE FUNCTION)\n\nThen I think it's better to provide users a way to force the\nsearch_path on functions when\nextensions are created so superusers aren't reliant on extension authors.\n\nThanks,\n\n--\nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Tue, 18 Jun 2024 17:36:35 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 19, 2024 at 6:06 AM John H <[email protected]> wrote:\n>\n> Hi Ashutosh,\n>\n> Thanks for clarifying.\n>\n> > not standalone functions created independently\n>\n> I'm wondering why we wouldn't want to extend that functionality so\n> that if users (including superusers) do want to automatically\n> configure search_path\n> when they're creating functions it's available.\n>\n> > The difference is that installing extensions typically requires superuser privileges,\n> > which is not the case with standalone functions.\n>\n> Yes, but inadvertent access to different roles can still occur even if\n> it's not a superuser.\n>\n> It's not clear to me who the audience of this commit is aimed at,\n> cause I see two sort of\n> different use cases?\n>\n> 1. Make it easier for extension authors to configure search_path when\n> creating functions\n>\n> The proposed mechanism via control file makes sense, although I would\n> like to understand\n> why specifying it today in CREATE FUNCTION doesn't work. Is it an\n> awareness issue?\n> I suppose it's easier if you only need to set it once in the control file?\n> Is that ease-of-use what we want to solve?\n>\n> 2. 
Make it easier to avoid inadvertent escalations when functions are created\n> (e.g CREATE EXTENSION/CREATE FUNCTION)\n>\n> Then I think it's better to provide users a way to force the\n> search_path on functions when\n> extensions are created so superusers aren't reliant on extension authors.\n>\n\nFor standalone functions, users can easily adjust the search_path\nsettings as needed. However, managing this becomes challenging for\nfunctions created via extensions. Extensions are relocatable, making\nit difficult to determine and apply search_path settings in advance\nwithin the extension_name--*.sql file when defining functions or\nprocedures. Introducing a new control flag option allows users to set\nimplicit search_path settings for functions created by the extension,\nif needed. The main aim here is to enhance security for functions\ncreated by extensions by setting search_path at the function level.\nThis ensures precise control over how objects are accessed within each\nfunction, making behavior more predictable and minimizing security\nrisks, especially for SECURITY DEFINER functions associated with\nextensions created by superusers.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 19 Jun 2024 08:53:25 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Wed, 2024-06-19 at 08:53 +0530, Ashutosh Sharma wrote:\n> For standalone functions, users can easily adjust the search_path\n> settings as needed. However, managing this becomes challenging for\n> functions created via extensions. Extensions are relocatable, making\n> it difficult to determine and apply search_path settings in advance\n> within the extension_name--*.sql file when defining functions or\n> procedures.\n\nA special search_path variable \"$extension_schema\" would be a better\nsolution to this problem. We need something like that anyway, in case\nthe extension is relocated, so that we don't have to dig through the\ncatalog and update all the settings.\n\n> Introducing a new control flag option allows users to set\n> implicit search_path settings for functions created by the extension,\n> if needed. The main aim here is to enhance security for functions\n> created by extensions by setting search_path at the function level.\n> This ensures precise control over how objects are accessed within\n> each\n> function, making behavior more predictable and minimizing security\n> risks,\n\nThat leaves me wondering whether it's being addressed at the right\nlevel.\n\nFor instance, did you consider just having a GUC to control the default\nsearch_path for a function? That may be too magical, but if so, why\nwould an extension-only setting be better?\n\n> especially for SECURITY DEFINER functions associated with\n> extensions created by superusers.\n\nI'm not sure that it's specific to SECURITY DEFINER functions.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 12:10:02 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Tue, Jun 25, 2024 at 12:40 AM Jeff Davis <[email protected]> wrote:\n>\n> On Wed, 2024-06-19 at 08:53 +0530, Ashutosh Sharma wrote:\n> > For standalone functions, users can easily adjust the search_path\n> > settings as needed. 
However, managing this becomes challenging for\n> > functions created via extensions. Extensions are relocatable, making\n> > it difficult to determine and apply search_path settings in advance\n> > within the extension_name--*.sql file when defining functions or\n> > procedures.\n>\n> A special search_path variable \"$extension_schema\" would be a better\n> solution to this problem. We need something like that anyway, in case\n> the extension is relocated, so that we don't have to dig through the\n> catalog and update all the settings.\n>\n> > Introducing a new control flag option allows users to set\n> > implicit search_path settings for functions created by the extension,\n> > if needed. The main aim here is to enhance security for functions\n> > created by extensions by setting search_path at the function level.\n> > This ensures precise control over how objects are accessed within\n> > each\n> > function, making behavior more predictable and minimizing security\n> > risks,\n>\n> That leaves me wondering whether it's being addressed at the right\n> level.\n>\n> For instance, did you consider just having a GUC to control the default\n> search_path for a function? That may be too magical, but if so, why\n> would an extension-only setting be better?\n>\n\nI haven't given any such thought, my current focus is on extensions,\nspecifically increasing their security with respect to superuser\nelevation.\n\nRegarding your first question which you had raised earlier : \"What\nexactly is the right search_path for a function defined in an\nextension?\"\n\nAs I understand it, we cannot precisely determine the search_path for\nfunctions at the time of function creation in the code. The most\naccurate path (or something close to it) we can identify for functions\ncreated by extensions is the search_path set by the extension during\nits creation, which is what we aim to achieve with $extension_schema.\nThis setting is up to the discretion of the author as to whether they\nare comfortable with this implicit search_path configuration at the\nfunction level. If not, they have the option to disable it. However,\nAFAIU, in most cases, especially when the extension can be relocated,\nthis situation is unlikely to occur.\n\n--\nWith Regards,\nAshutosh Sharma,\n\n\n", "msg_date": "Wed, 26 Jun 2024 11:19:12 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Mon, Jun 24, 2024 at 3:10 PM Jeff Davis <[email protected]> wrote:\n> A special search_path variable \"$extension_schema\" would be a better\n> solution to this problem. We need something like that anyway, in case\n> the extension is relocated, so that we don't have to dig through the\n> catalog and update all the settings.\n\n+1. 
I think we need that in order for this proposal to make any progress.\n\nAnd it seems like the patch has something like that, but I don't\nreally understand exactly what's going on here, because it's also got\nhunks like this in a bunch of places:\n\n+ if (strcmp($2, \"$extension_schema\") == 0)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"search_path cannot be set to $extension_schema\"),\n+ parser_errposition(@2)));\n\nSo it seems like the patch is trying to support $extension_schema in\nsome cases but not others, which seems like an odd choice that, as far\nas I can see, is not explained anywhere: not in the commit message,\nnot in the comments (which are pretty minimally updated by the patch),\nand not in the documentation (which the patch doesn't update at all).\n\nAshutosh, can we please get some of that stuff updated, or at least a\nclearer explanation of what's going on with $extension_schema here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jul 2024 11:32:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi Robert,\n\nOn Fri, Jul 12, 2024 at 9:02 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Jun 24, 2024 at 3:10 PM Jeff Davis <[email protected]> wrote:\n> > A special search_path variable \"$extension_schema\" would be a better\n> > solution to this problem. We need something like that anyway, in case\n> > the extension is relocated, so that we don't have to dig through the\n> > catalog and update all the settings.\n>\n> +1. I think we need that in order for this proposal to make any progress.\n>\n> And it seems like the patch has something like that, but I don't\n> really understand exactly what's going on here, because it's also got\n> hunks like this in a bunch of places:\n>\n> + if (strcmp($2, \"$extension_schema\") == 0)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"search_path cannot be set to $extension_schema\"),\n> + parser_errposition(@2)));\n>\n> So it seems like the patch is trying to support $extension_schema in\n> some cases but not others, which seems like an odd choice that, as far\n> as I can see, is not explained anywhere: not in the commit message,\n> not in the comments (which are pretty minimally updated by the patch),\n> and not in the documentation (which the patch doesn't update at all).\n>\n> Ashutosh, can we please get some of that stuff updated, or at least a\n> clearer explanation of what's going on with $extension_schema here?\n>\n\nI've added these changes to restrict users from explicitly setting the\n$extension_schema in the search_path. This ensures that\n$extension_schema can only be set implicitly for functions created by\nthe extension when the \"protected\" flag is enabled.\n\nI apologize for not commenting on this change initially. I'll review\nthe patch, add comments where needed, and submit an updated version.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Mon, 15 Jul 2024 17:35:05 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Mon, Jul 15, 2024 at 8:05 AM Ashutosh Sharma <[email protected]> wrote:\n> I've added these changes to restrict users from explicitly setting the\n> $extension_schema in the search_path. 
This ensures that\n> $extension_schema can only be set implicitly for functions created by\n> the extension when the \"protected\" flag is enabled.\n\nBut ... why? I mean, what's the point of prohibiting that? In fact,\nmaybe we should have *that* and forget about the protected flag in the\ncontrol file.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jul 2024 13:44:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Mon, 2024-07-15 at 13:44 -0400, Robert Haas wrote:\n> But ... why? I mean, what's the point of prohibiting that?\n\nAgreed. We ignore all kinds of stuff in search_path that doesn't make\nsense, like non-existent schemas. Simpler is better.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 15 Jul 2024 11:33:08 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Mon, Jul 15, 2024 at 2:33 PM Jeff Davis <[email protected]> wrote:\n> On Mon, 2024-07-15 at 13:44 -0400, Robert Haas wrote:\n> > But ... why? I mean, what's the point of prohibiting that?\n>\n> Agreed. We ignore all kinds of stuff in search_path that doesn't make\n> sense, like non-existent schemas. Simpler is better.\n\nOh, I had the opposite idea: I wasn't proposing ignoring it. I was\nproposing making it work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jul 2024 16:04:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Mon, 2024-07-15 at 16:04 -0400, Robert Haas wrote:\n> Oh, I had the opposite idea: I wasn't proposing ignoring it. I was\n> proposing making it work.\n\nI meant: ignore $extension_schema if the search_path has nothing to do\nwith an extension. In other words, if it's in a search_path for the\nsession, or on a function that's not part of an extension.\n\nOn re-reading, I see that you mean it should work if they explicitly\nset it as a part of a function that *is* part of an extension. And I\nagree with that -- just make it work.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 15 Jul 2024 16:28:02 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi Robert.\n\nOn Mon, Jul 15, 2024 at 11:15 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Jul 15, 2024 at 8:05 AM Ashutosh Sharma <[email protected]> wrote:\n> > I've added these changes to restrict users from explicitly setting the\n> > $extension_schema in the search_path. This ensures that\n> > $extension_schema can only be set implicitly for functions created by\n> > the extension when the \"protected\" flag is enabled.\n>\n> But ... why? I mean, what's the point of prohibiting that? In fact,\n> maybe we should have *that* and forget about the protected flag in the\n> control file.\n>\n\nJust to confirm, are you suggesting to remove the protected flag and\nset the default search_path (as $extension_schema,) for all functions\nwithin an extension where no explicit search_path is set? 
In addition\nto that, also allow users to explicitly set $extension_schema as the\nsearch_path and bypass resolution of $extension_schema for objects\noutside the extension?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Tue, 16 Jul 2024 11:25:36 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "On Tue, Jul 16, 2024 at 1:55 AM Ashutosh Sharma <[email protected]> wrote:\n> Just to confirm, are you suggesting to remove the protected flag and\n> set the default search_path (as $extension_schema,) for all functions\n> within an extension where no explicit search_path is set?\n\nNo, I'm not saying that. In fact I'm not sure we should have the\nprotected flag at all.\n\n> In addition\n> to that, also allow users to explicitly set $extension_schema as the\n> search_path and bypass resolution of $extension_schema for objects\n> outside the extension?\n\nYes, I'm saying that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Jul 2024 12:09:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi Robert,\n\nOn Tue, Jul 16, 2024 at 9:40 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jul 16, 2024 at 1:55 AM Ashutosh Sharma <[email protected]> wrote:\n> > Just to confirm, are you suggesting to remove the protected flag and\n> > set the default search_path (as $extension_schema,) for all functions\n> > within an extension where no explicit search_path is set?\n>\n> No, I'm not saying that. In fact I'm not sure we should have the\n> protected flag at all.\n>\n\nBased on our previous discussions in this thread - [1], [2], we wanted\nto give extension authors the ability to decide if they would like to\ngo with the current behavior or if they would like to adopt the new\nbehavior where the default search_path will be enforced for functions\nthat do not have search_path explicitly set. That is why we considered\nintroducing this flag.\n\n> > In addition\n> > to that, also allow users to explicitly set $extension_schema as the\n> > search_path and bypass resolution of $extension_schema for objects\n> > outside the extension?\n>\n> Yes, I'm saying that.\n>\n\nSure, thanks for confirming. I'll make sure to address this in the\nnext patch version.\n\n[1] - https://www.postgresql.org/message-id/340cd4a0c30b46a185e66b1c7e91535921137da8.camel%40j-davis.com\n[2] - https://www.postgresql.org/message-id/CAGECzQSms%2BikWo7E0E1QAVvhM2%2B9FQydEywyCLztPaAYr9s%2BBw%40mail.gmail.com\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 17 Jul 2024 19:01:56 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" }, { "msg_contents": "Hi All,\n\nHere is the v4 patch with the following new changes:\n\n1) Now allows users to explicitly set search_path to $extension_schema.\n\n2) Includes updates to the documentation.\n\n3) Adds comments where previously absent.\n\nNote: The new control file parameter is currently named as \"protected\"\nwhich is actually not the precise name knowing that it just solves a\nsmall portion of security problems related to extensions. 
I intend to\nrename it to something more appropriate later; but any suggestions are\nwelcome.\n\nBesides, should we consider restricting the installation of extensions\nin schemas where a UDF with the same name that the extension intends\nto create already exists? Additionally, similar restrictions can also\nbe applied if UDF being created shares its name with a function\nalready created by an extension in that schema? I haven't looked at\nthe feasibility part, but just thought of sharing it just because\nsomething of this sort would probably solve most of the issues related\nto extensions.\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Wed, 24 Jul 2024 17:58:35 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Addressing SECURITY DEFINER Function Vulnerabilities in\n PostgreSQL Extensions" } ]
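(Illustration for the thread above: the hardening being discussed can be approximated today by pinning search_path on each function an extension script creates. This is a minimal sketch using only documented CREATE FUNCTION ... SET search_path syntax; the script name, function name and body are made up, and the proposed $extension_schema token and "protected" control-file flag are not part of any released PostgreSQL version.)

-- inside a hypothetical my_ext--1.0.sql extension script
CREATE FUNCTION my_ext_add(a int, b int) RETURNS int
LANGUAGE sql
SECURITY DEFINER
-- pin the lookup path so unqualified names cannot be captured by
-- objects in a caller-writable schema
SET search_path = pg_catalog, pg_temp
AS $$ SELECT a + b $$;

Under the proposal in this thread, a control-file flag would instead apply an implicit per-function search_path (the $extension_schema placeholder) automatically to functions the extension creates.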
[ { "msg_contents": "Hi,\n\nWhile experimenting on an explain option to display all plan candidates\n(very rough prototype here [1]), I've noticed some discrepancies in some\ngenerated plans.\n\nEXPLAIN (ALL_CANDIDATES) SELECT * FROM pgbench_accounts order by aid;\n Plan 1\n-> Gather Merge (cost=11108.32..22505.38 rows=100000 width=97)\n Workers Planned: 1\n -> Sort (cost=10108.31..10255.37 rows=58824 width=97)\n Sort Key: aid\n -> Parallel Seq Scan on pgbench_accounts (cost=0.00..2228.24\nrows=58824 width=97)\n Plan 2\n-> Gather Merge (cost=11108.32..17873.08 rows=58824 width=97)\n Workers Planned: 1\n -> Sort (cost=10108.31..10255.37 rows=58824 width=97)\n Sort Key: aid\n -> Parallel Seq Scan on pgbench_accounts (cost=0.00..2228.24\nrows=58824 width=97)\n\nThe 2 plans are similar except one Gather Merge has a lower 58K estimated\nrows.\n\nThe first plan is created with generate_useful_gather_paths with\noverride_rows=false then create_gather_merge_path and will use rel->rows as\nthe row count (so the 100K rows of pgbench_accounts).\n#0: create_gather_merge_path(...) at pathnode.c:1885:30\n#1: generate_useful_gather_paths(... override_rows=false) at\nallpaths.c:3286:11\n#2: apply_scanjoin_target_to_paths(...) at planner.c:7744:3\n#3: grouping_planner(...) at planner.c:1638:3\n\nThe second plan is created through create_ordered_paths then\ncreate_gather_merge_path and the number of rows is estimated to\n(worker_rows * parallel_workers). Since we only have 1 parallel worker,\nthis yields 58K rows.\n#0: create_gather_merge_path(...) at pathnode.c:1885:30\n#1: create_ordered_paths(...) at planner.c:5287:5\n#2: grouping_planner(...) at planner.c:1717:17\n\nThe 58K row estimate looks possibly incorrect. A worker row count is\nestimated using total_rows/parallel_divisor. The parallel_divisor will\ninclude the possible leader participation and will be 1.7 for 1 worker thus\nthe 58K rows (100K/1.7=58K)\nHowever, the gather merge will only do 58K*1, dropping the leader\nparticipation component.\n\nI have a tentative patch split in two changes:\n1: This is a related refactoring to remove an unnecessary and confusing\nassignment of rows in create_gather_merge_path. This value is never used\nand immediately overwritten in cost_gather_merge\n2: This changes the row estimation of gather path to use\nworker_rows*parallel_divisor to get a more accurate estimation.\nAdditionally, when creating a gather merge path in create_ordered_paths, we\ntry to use the source's rel rows when available.\n\nThe patch triggered a small change in the hash_join regression test. 
Pre\npatch, we had the following candidates.\nPlan 4\n-> Aggregate (cost=511.01..511.02 rows=1 width=8)\n -> Gather (cost=167.02..511.00 rows=2 width=0)\n Workers Planned: 1\n -> Parallel Hash Join (cost=167.02..510.80 rows=1 width=0)\n Hash Cond: (r.id = s.id)\n -> Parallel Seq Scan on simple r (cost=0.00..299.65\nrows=11765 width=4)\n -> Parallel Hash (cost=167.01..167.01 rows=1 width=4)\n -> Parallel Seq Scan on extremely_skewed s\n (cost=0.00..167.01 rows=1 width=4)\nPlan 5\n-> Finalize Aggregate (cost=510.92..510.93 rows=1 width=8)\n -> Gather (cost=510.80..510.91 rows=1 width=8)\n Workers Planned: 1\n -> Partial Aggregate (cost=510.80..510.81 rows=1 width=8)\n -> Parallel Hash Join (cost=167.02..510.80 rows=1\nwidth=0)\n Hash Cond: (r.id = s.id)\n -> Parallel Seq Scan on simple r\n (cost=0.00..299.65 rows=11765 width=4)\n -> Parallel Hash (cost=167.01..167.01 rows=1\nwidth=4)\n -> Parallel Seq Scan on extremely_skewed s\n (cost=0.00..167.01 rows=1 width=4)\n\nPost patch, the plan candidates became:\nPlan 4\n-> Finalize Aggregate (cost=511.02..511.03 rows=1 width=8)\n -> Gather (cost=510.80..511.01 rows=2 width=8)\n Workers Planned: 1\n -> Partial Aggregate (cost=510.80..510.81 rows=1 width=8)\n -> Parallel Hash Join (cost=167.02..510.80 rows=1\nwidth=0)\n Hash Cond: (r.id = s.id)\n -> Parallel Seq Scan on simple r\n (cost=0.00..299.65 rows=11765 width=4)\n -> Parallel Hash (cost=167.01..167.01 rows=1\nwidth=4)\n -> Parallel Seq Scan on extremely_skewed s\n (cost=0.00..167.01 rows=1 width=4)\nPlan 5\n-> Aggregate (cost=511.01..511.02 rows=1 width=8)\n -> Gather (cost=167.02..511.00 rows=2 width=0)\n Workers Planned: 1\n -> Parallel Hash Join (cost=167.02..510.80 rows=1 width=0)\n Hash Cond: (r.id = s.id)\n -> Parallel Seq Scan on simple r (cost=0.00..299.65\nrows=11765 width=4)\n -> Parallel Hash (cost=167.01..167.01 rows=1 width=4)\n -> Parallel Seq Scan on extremely_skewed s\n (cost=0.00..167.01 rows=1 width=4)\n\nThe FinalizeAggregate plan has an increased cost of 1 post patch due to the\nnumber of rows in the Gather node that went from 1 to 2 (rint(1 * 1.7)=2).\nThis was enough to make the Agggregate plan cheaper. 
The test is to check\nthe parallel hash join so updating it with the new cheapest plan looked\nfine.\n\nRegards,\nAnthonin\n\n[1]: https://github.com/bonnefoa/postgres/tree/plan-candidates", "msg_date": "Fri, 24 May 2024 11:35:10 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Possible incorrect row estimation for Gather paths" }, { "msg_contents": "Hello Anthonin,\n\nI spent some time on this problem and your proposed solution.\nAs I understand it, this is the correction for the row count when the\nnumber of parallel workers < 4.\nOnce the number of workers is 4 or more, the output from\nparallel_divisor is the same as the number of parallel workers.\nAnd then the number of rows for such cases would be the same with or\nwithout the proposed patch.\nSo that way I think it is good to fix this case for a smaller number of workers.\n\nBut I don't quite understood the purpose of this,\n+ total_groups = input_rel->rows;\n+\n+ /*\n+ * If the number of rows is unknown, fallback to gather rows\n+ * estimation\n+ */\n+ if (total_groups == 0)\n+ total_groups = gather_rows_estimate(input_path);\n\nwhy not just use total_groups = gather_rows_estimate(input_path), what\nis the importance of having total_groups = input_rel->rows?\n\nWith respect to the change introduced by the patch in the regression\ntest, I wonder if we should test it on the tables of a larger scale\nand check for performance issues.\nImagine the case when Parallel Seq Scan on extremely_skewed s\n(cost=0.00..167.01 rows=1 width=4) returns 1000 rows instead of 1 ...\nI wonder which plan would perform better then or will there be a\ntotally different plan.\n\nHowever, my hunch is that there should not be serious problems,\nbecause before this patch the number of estimated rows was incorrect\nanyway.\n\nI don't see a problem in merging the two patches.\n\n\nOn Fri, 24 May 2024 at 11:35, Anthonin Bonnefoy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> While experimenting on an explain option to display all plan candidates (very rough prototype here [1]), I've noticed some discrepancies in some generated plans.\n>\n> EXPLAIN (ALL_CANDIDATES) SELECT * FROM pgbench_accounts order by aid;\n> Plan 1\n> -> Gather Merge (cost=11108.32..22505.38 rows=100000 width=97)\n> Workers Planned: 1\n> -> Sort (cost=10108.31..10255.37 rows=58824 width=97)\n> Sort Key: aid\n> -> Parallel Seq Scan on pgbench_accounts (cost=0.00..2228.24 rows=58824 width=97)\n> Plan 2\n> -> Gather Merge (cost=11108.32..17873.08 rows=58824 width=97)\n> Workers Planned: 1\n> -> Sort (cost=10108.31..10255.37 rows=58824 width=97)\n> Sort Key: aid\n> -> Parallel Seq Scan on pgbench_accounts (cost=0.00..2228.24 rows=58824 width=97)\n>\n> The 2 plans are similar except one Gather Merge has a lower 58K estimated rows.\n>\n> The first plan is created with generate_useful_gather_paths with override_rows=false then create_gather_merge_path and will use rel->rows as the row count (so the 100K rows of pgbench_accounts).\n> #0: create_gather_merge_path(...) at pathnode.c:1885:30\n> #1: generate_useful_gather_paths(... override_rows=false) at allpaths.c:3286:11\n> #2: apply_scanjoin_target_to_paths(...) at planner.c:7744:3\n> #3: grouping_planner(...) at planner.c:1638:3\n>\n> The second plan is created through create_ordered_paths then create_gather_merge_path and the number of rows is estimated to (worker_rows * parallel_workers). Since we only have 1 parallel worker, this yields 58K rows.\n> #0: create_gather_merge_path(...) 
at pathnode.c:1885:30\n> #1: create_ordered_paths(...) at planner.c:5287:5\n> #2: grouping_planner(...) at planner.c:1717:17\n>\n> The 58K row estimate looks possibly incorrect. A worker row count is estimated using total_rows/parallel_divisor. The parallel_divisor will include the possible leader participation and will be 1.7 for 1 worker thus the 58K rows (100K/1.7=58K)\n> However, the gather merge will only do 58K*1, dropping the leader participation component.\n>\n> I have a tentative patch split in two changes:\n> 1: This is a related refactoring to remove an unnecessary and confusing assignment of rows in create_gather_merge_path. This value is never used and immediately overwritten in cost_gather_merge\n> 2: This changes the row estimation of gather path to use worker_rows*parallel_divisor to get a more accurate estimation. Additionally, when creating a gather merge path in create_ordered_paths, we try to use the source's rel rows when available.\n>\n> The patch triggered a small change in the hash_join regression test. Pre patch, we had the following candidates.\n> Plan 4\n> -> Aggregate (cost=511.01..511.02 rows=1 width=8)\n> -> Gather (cost=167.02..511.00 rows=2 width=0)\n> Workers Planned: 1\n> -> Parallel Hash Join (cost=167.02..510.80 rows=1 width=0)\n> Hash Cond: (r.id = s.id)\n> -> Parallel Seq Scan on simple r (cost=0.00..299.65 rows=11765 width=4)\n> -> Parallel Hash (cost=167.01..167.01 rows=1 width=4)\n> -> Parallel Seq Scan on extremely_skewed s (cost=0.00..167.01 rows=1 width=4)\n> Plan 5\n> -> Finalize Aggregate (cost=510.92..510.93 rows=1 width=8)\n> -> Gather (cost=510.80..510.91 rows=1 width=8)\n> Workers Planned: 1\n> -> Partial Aggregate (cost=510.80..510.81 rows=1 width=8)\n> -> Parallel Hash Join (cost=167.02..510.80 rows=1 width=0)\n> Hash Cond: (r.id = s.id)\n> -> Parallel Seq Scan on simple r (cost=0.00..299.65 rows=11765 width=4)\n> -> Parallel Hash (cost=167.01..167.01 rows=1 width=4)\n> -> Parallel Seq Scan on extremely_skewed s (cost=0.00..167.01 rows=1 width=4)\n>\n> Post patch, the plan candidates became:\n> Plan 4\n> -> Finalize Aggregate (cost=511.02..511.03 rows=1 width=8)\n> -> Gather (cost=510.80..511.01 rows=2 width=8)\n> Workers Planned: 1\n> -> Partial Aggregate (cost=510.80..510.81 rows=1 width=8)\n> -> Parallel Hash Join (cost=167.02..510.80 rows=1 width=0)\n> Hash Cond: (r.id = s.id)\n> -> Parallel Seq Scan on simple r (cost=0.00..299.65 rows=11765 width=4)\n> -> Parallel Hash (cost=167.01..167.01 rows=1 width=4)\n> -> Parallel Seq Scan on extremely_skewed s (cost=0.00..167.01 rows=1 width=4)\n> Plan 5\n> -> Aggregate (cost=511.01..511.02 rows=1 width=8)\n> -> Gather (cost=167.02..511.00 rows=2 width=0)\n> Workers Planned: 1\n> -> Parallel Hash Join (cost=167.02..510.80 rows=1 width=0)\n> Hash Cond: (r.id = s.id)\n> -> Parallel Seq Scan on simple r (cost=0.00..299.65 rows=11765 width=4)\n> -> Parallel Hash (cost=167.01..167.01 rows=1 width=4)\n> -> Parallel Seq Scan on extremely_skewed s (cost=0.00..167.01 rows=1 width=4)\n>\n> The FinalizeAggregate plan has an increased cost of 1 post patch due to the number of rows in the Gather node that went from 1 to 2 (rint(1 * 1.7)=2). This was enough to make the Agggregate plan cheaper. 
The test is to check the parallel hash join so updating it with the new cheapest plan looked fine.\n>\n> Regards,\n> Anthonin\n>\n> [1]: https://github.com/bonnefoa/postgres/tree/plan-candidates\n\n\n\n-- \nRegards,\nRafia Sabih\n\n\n", "msg_date": "Wed, 10 Jul 2024 16:54:22 +0200", "msg_from": "Rafia Sabih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible incorrect row estimation for Gather paths" }, { "msg_contents": "Hi Rafia,\n\nThanks for reviewing.\n\nOn Wed, Jul 10, 2024 at 4:54 PM Rafia Sabih <[email protected]> wrote:\n> But I don't quite understood the purpose of this,\n> + total_groups = input_rel->rows;\n> +\n> + /*\n> + * If the number of rows is unknown, fallback to gather rows\n> + * estimation\n> + */\n> + if (total_groups == 0)\n> + total_groups = gather_rows_estimate(input_path);\n>\n> why not just use total_groups = gather_rows_estimate(input_path), what\n> is the importance of having total_groups = input_rel->rows?\n\nThe initial goal was to use the source tuples if available and avoid\npossible rounding errors. Though I realise that the difference would\nbe minimal. For example, 200K tuples and 3 workers would yield\nint(int(200000 / 2.4) * 2.4)=199999. That is probably not worth the\nadditional complexity, I've updated the patch to just use\ngather_rows_estimate.\n\n> With respect to the change introduced by the patch in the regression\n> test, I wonder if we should test it on the tables of a larger scale\n> and check for performance issues.\n> Imagine the case when Parallel Seq Scan on extremely_skewed s\n> (cost=0.00..167.01 rows=1 width=4) returns 1000 rows instead of 1 ...\n> I wonder which plan would perform better then or will there be a\n> totally different plan.\nFor the extremely_skewed test, having the parallel seq scan returning\nmore rows won't impact the Gather since The Hash Join will reduce the\nnumber of rows to 1. I've found an example where we can see the plan\nchanges with the default settings:\n\nCREATE TABLE simple (id SERIAL PRIMARY KEY, other bigint);\ninsert into simple select generate_series(1,500000), ceil(random()*100);\nanalyze simple;\nEXPLAIN SELECT * FROM simple where other < 10 order by id;\n\nUnpatched:\n Gather Merge (cost=9377.85..12498.60 rows=27137 width=12)\n Workers Planned: 1\n -> Sort (cost=8377.84..8445.68 rows=27137 width=12)\n Sort Key: id\n -> Parallel Seq Scan on simple (cost=0.00..6379.47\nrows=27137 width=12)\n Filter: (other < 10)\n\nPatched:\n Sort (cost=12381.73..12492.77 rows=44417 width=12)\n Sort Key: id\n -> Seq Scan on simple (cost=0.00..8953.00 rows=44417 width=12)\n Filter: (other < 10)\n\nLooking at the candidates, the previous Gather Merge now has an\nestimated number of rows of 44418. 
The 1 difference compared to the\nother Gather Merge plan is due to rounding (26128 * 1.7 = 44417.6).\n\n Plan 3\n -> Gather Merge (cost=9296.40..14358.75 rows=44418 width=12)\n Workers Planned: 1\n -> Sort (cost=8296.39..8361.71 rows=26128 width=12)\n Sort Key: id\n -> Parallel Seq Scan on simple (cost=0.00..6379.47\nrows=26128 width=12)\n Filter: (other < 10)\n Plan 4\n -> Gather Merge (cost=9296.40..14358.63 rows=44417 width=12)\n Workers Planned: 1\n -> Sort (cost=8296.39..8361.71 rows=26128 width=12)\n Sort Key: id\n -> Parallel Seq Scan on simple (cost=0.00..6379.47\nrows=26128 width=12)\n Filter: (other < 10)\n Plan 5\n -> Sort (cost=12381.73..12492.77 rows=44417 width=12)\n Sort Key: id\n -> Seq Scan on simple (cost=0.00..8953.00 rows=44417 width=12)\n Filter: (other < 10)\n\nThe Sort plan is slightly slower than the Gather Merge plan: 100ms\naverage against 83ms but the Gather Merge comes at the additional cost\nof creating and using a parallel worker. The unpatched row estimation\nmakes the parallel plan look cheaper and running a parallel query for\na 17ms improvement doesn't seem like a good trade.\n\nI've also realised from the comments in optimizer.h that\nnodes/pathnodes.h should not be included there and fixed it.\n\nRegards,\nAnthonin", "msg_date": "Tue, 16 Jul 2024 09:56:31 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible incorrect row estimation for Gather paths" }, { "msg_contents": "I can reproduce this problem with the query below.\n\nexplain (costs on) select * from tenk1 order by twenty;\n QUERY PLAN\n---------------------------------------------------------------------------------\n Gather Merge (cost=772.11..830.93 rows=5882 width=244)\n Workers Planned: 1\n -> Sort (cost=772.10..786.80 rows=5882 width=244)\n Sort Key: twenty\n -> Parallel Seq Scan on tenk1 (cost=0.00..403.82 rows=5882 width=244)\n(5 rows)\n\nOn Tue, Jul 16, 2024 at 3:56 PM Anthonin Bonnefoy\n<[email protected]> wrote:\n> The initial goal was to use the source tuples if available and avoid\n> possible rounding errors. Though I realise that the difference would\n> be minimal. For example, 200K tuples and 3 workers would yield\n> int(int(200000 / 2.4) * 2.4)=199999. That is probably not worth the\n> additional complexity, I've updated the patch to just use\n> gather_rows_estimate.\n\nI wonder if the changes in create_ordered_paths should also be reduced\nto 'total_groups = gather_rows_estimate(path);'.\n\n> I've also realised from the comments in optimizer.h that\n> nodes/pathnodes.h should not be included there and fixed it.\n\nI think perhaps it's better to declare gather_rows_estimate() in\ncost.h rather than optimizer.h.\n(BTW, I wonder if compute_gather_rows() would be a better name?)\n\nI noticed another issue in generate_useful_gather_paths() -- *rowsp\nwould have a random value if override_rows is true and we use\nincremental sort for gather merge. 
I think we should fix this too.\n\nThanks\nRichard\n\n\n", "msg_date": "Wed, 17 Jul 2024 09:59:05 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible incorrect row estimation for Gather paths" }, { "msg_contents": "On Wed, Jul 17, 2024 at 3:59 AM Richard Guo <[email protected]> wrote:\n>\n> I can reproduce this problem with the query below.\n>\n> explain (costs on) select * from tenk1 order by twenty;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------\n> Gather Merge (cost=772.11..830.93 rows=5882 width=244)\n> Workers Planned: 1\n> -> Sort (cost=772.10..786.80 rows=5882 width=244)\n> Sort Key: twenty\n> -> Parallel Seq Scan on tenk1 (cost=0.00..403.82 rows=5882 width=244)\n> (5 rows)\nI was looking for a test to add in the regress checks that wouldn't\nrely on explain cost since it is disabled. However, I've realised I\ncould do something similar to what's done in stats_ext with the\ncheck_estimated_rows function. I've added the get_estimated_rows test\nfunction that extracts the estimated rows from the top node and uses\nit to check the gather nodes' estimates. get_estimated_rows uses a\nsimple explain compared to check_estimated_rows which relies on an\nexplain analyze.\n\n> I wonder if the changes in create_ordered_paths should also be reduced\n> to 'total_groups = gather_rows_estimate(path);'.\n\nIt should already be the case with the patch v2. It does create\nrounding errors that are visible in the tests but they shouldn't\nexceed +1 or -1.\n\n> I think perhaps it's better to declare gather_rows_estimate() in\n> cost.h rather than optimizer.h.\n> (BTW, I wonder if compute_gather_rows() would be a better name?)\n\nGood point, I've moved the declaration and renamed it.\n\n> I noticed another issue in generate_useful_gather_paths() -- *rowsp\n> would have a random value if override_rows is true and we use\n> incremental sort for gather merge. I think we should fix this too.\n\nThat seems to be the case. I've tried to find a query that could\ntrigger this codepath without success. All grouping and distinct paths\nI've tried where fully sorted and didn't trigger an incremental sort.\nI will need a bit more time to check this.\n\nIn the meantime, I've updated the patches with the review comments.\n\nRegards,\nAnthonin", "msg_date": "Mon, 22 Jul 2024 10:47:24 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible incorrect row estimation for Gather paths" }, { "msg_contents": "On Mon, Jul 22, 2024 at 4:47 PM Anthonin Bonnefoy\n<[email protected]> wrote:\n> On Wed, Jul 17, 2024 at 3:59 AM Richard Guo <[email protected]> wrote:\n> > I can reproduce this problem with the query below.\n> >\n> > explain (costs on) select * from tenk1 order by twenty;\n> > QUERY PLAN\n> > ---------------------------------------------------------------------------------\n> > Gather Merge (cost=772.11..830.93 rows=5882 width=244)\n> > Workers Planned: 1\n> > -> Sort (cost=772.10..786.80 rows=5882 width=244)\n> > Sort Key: twenty\n> > -> Parallel Seq Scan on tenk1 (cost=0.00..403.82 rows=5882 width=244)\n> > (5 rows)\n> I was looking for a test to add in the regress checks that wouldn't\n> rely on explain cost since it is disabled. However, I've realised I\n> could do something similar to what's done in stats_ext with the\n> check_estimated_rows function. 
I've added the get_estimated_rows test\n> function that extracts the estimated rows from the top node and uses\n> it to check the gather nodes' estimates. get_estimated_rows uses a\n> simple explain compared to check_estimated_rows which relies on an\n> explain analyze.\n\nHmm, I'm hesitant about adding the tests that verify the number of\nestimated rows in this patch. The table 'tenk1' isn't created with\nautovacuum_enabled = off, so we may have unstable results from\nauto-analyze happening. I think the plan change in join_hash.out is\nsufficient to verify the changes in this patch.\n\n> > I wonder if the changes in create_ordered_paths should also be reduced\n> > to 'total_groups = gather_rows_estimate(path);'.\n>\n> It should already be the case with the patch v2. It does create\n> rounding errors that are visible in the tests but they shouldn't\n> exceed +1 or -1.\n>\n> > I think perhaps it's better to declare gather_rows_estimate() in\n> > cost.h rather than optimizer.h.\n> > (BTW, I wonder if compute_gather_rows() would be a better name?)\n>\n> Good point, I've moved the declaration and renamed it.\n>\n> > I noticed another issue in generate_useful_gather_paths() -- *rowsp\n> > would have a random value if override_rows is true and we use\n> > incremental sort for gather merge. I think we should fix this too.\n>\n> That seems to be the case. I've tried to find a query that could\n> trigger this codepath without success. All grouping and distinct paths\n> I've tried where fully sorted and didn't trigger an incremental sort.\n> I will need a bit more time to check this.\n>\n> In the meantime, I've updated the patches with the review comments.\n\nOtherwise I think the v3 patch looks good to me.\n\nAttached is an updated version of this patch with cosmetic changes and\ncomment updates. It also squishes the two patches together into one.\nI'm planning to push it soon, barring any objections or comments.\n\nThanks\nRichard", "msg_date": "Mon, 22 Jul 2024 17:55:47 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible incorrect row estimation for Gather paths" }, { "msg_contents": "On Mon, Jul 22, 2024 at 5:55 PM Richard Guo <[email protected]> wrote:\n> Otherwise I think the v3 patch looks good to me.\n>\n> Attached is an updated version of this patch with cosmetic changes and\n> comment updates. It also squishes the two patches together into one.\n> I'm planning to push it soon, barring any objections or comments.\n\nPushed.\n\nThanks\nRichard\n\n\n", "msg_date": "Tue, 23 Jul 2024 10:37:32 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible incorrect row estimation for Gather paths" } ]
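(To make the arithmetic in this thread concrete: with leader participation and fewer than four workers, the parallel divisor is workers + (1.0 - 0.3 * workers), so one worker gives 1.7; 100000 rows become rint(100000 / 1.7) = 58824 rows per worker, and the fix scales that per-worker estimate back up by the same divisor. The C below is only a simplified sketch of that calculation, not the committed compute_gather_rows()/get_parallel_divisor() code.)

#include <math.h>

/* Sketch only: assumes parallel_leader_participation and workers < 4. */
static double
parallel_divisor_sketch(int workers)
{
	double	divisor = workers;
	double	leader = 1.0 - 0.3 * workers;	/* leader helps less as workers grow */

	if (leader > 0.0)
		divisor += leader;
	return divisor;			/* 1 worker => 1.7 */
}

static double
gather_rows_sketch(int workers, double total_rows)
{
	double	per_worker = total_rows / parallel_divisor_sketch(workers);

	/*
	 * Scale the per-worker estimate back up by the same divisor instead of
	 * multiplying by the worker count alone, so ~58824 * 1.7 lands back
	 * near the table's 100000 rows rather than at 58824.
	 */
	return rint(per_worker * parallel_divisor_sketch(workers));
}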
[ { "msg_contents": "Hi PostgreSQL Community, \nwhen a backend process starts, pq_init is called where it opens a FD during CreateWaitEventSet()\n\n\nif (!AcquireExternalFD())\n{\n/* treat this as though epoll_create1 itself returned EMFILE */\nelog(ERROR, \"epoll_create1 failed: %m\");\n}\nset->epoll_fd = epoll_create1(EPOLL_CLOEXEC);\n\n\nbut we didn't closed or called ReleaseExternalFD() for accounting,lets say if we have multiple clients connected and are actively running queries, won't the max number of open FDs (ulimit -n) limit of the system gets reached and cause \"Too many open files issue\"?\n\nRegards\nSrinath Reddy\n\n\n\n", "msg_date": "Fri, 24 May 2024 17:47:00 +0530", "msg_from": "Srinath Reddy Sadipiralla <[email protected]>", "msg_from_op": true, "msg_subject": "Question: Why Are File Descriptors Not Closed and Accounted for\n PostgreSQL Backends?" }, { "msg_contents": "On 24/05/2024 15:17, Srinath Reddy Sadipiralla wrote:\n> Hi PostgreSQL Community,\n> when a backend process starts, pq_init is called where it opens a FD during CreateWaitEventSet()\n> \n> \n> if (!AcquireExternalFD())\n> {\n> /* treat this as though epoll_create1 itself returned EMFILE */\n> elog(ERROR, \"epoll_create1 failed: %m\");\n> }\n> set->epoll_fd = epoll_create1(EPOLL_CLOEXEC);\n> \n> \n> but we didn't closed or called ReleaseExternalFD() for accounting\n\nYes we do, see FreeWaitEventSet().\n\nThe WaitEventSet created fro pq_init() is never explicitly free'd \nthough, because it's created in the per-connection backend process. When \nthe connection is terminated, the backend process exits, cleaning up any \nresources including the WaitEventSet.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 24 May 2024 16:45:54 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question: Why Are File Descriptors Not Closed and Accounted for\n PostgreSQL Backends?" }, { "msg_contents": "Thanks for the reply,yeah i know about FreeWaitEventSet() but that is being used in few places but not for handling backends.\n\ni got it that FDs like FeBeWaitSet->epoll_fd will be free'd when connection is terminated but as i mentioned wouldn't it be an issue if the connection is long living lets take idle which can be running queries for long time,what if we have multiple connections like this running queries using multiple system FDs and reach the limit,cause they are using FDs ,so they may not be free'd.\n\n\n\n\n\n\n\n---- On Fri, 24 May 2024 19:15:54 +0530 Heikki Linnakangas <[email protected]> wrote ---\n\n\n\nOn 24/05/2024 15:17, Srinath Reddy Sadipiralla wrote: \n> Hi PostgreSQL Community, \n> when a backend process starts, pq_init is called where it opens a FD during CreateWaitEventSet() \n> \n> \n> if (!AcquireExternalFD()) \n> { \n> /* treat this as though epoll_create1 itself returned EMFILE */ \n> elog(ERROR, \"epoll_create1 failed: %m\"); \n> } \n> set->epoll_fd = epoll_create1(EPOLL_CLOEXEC); \n> \n> \n> but we didn't closed or called ReleaseExternalFD() for accounting \n \nYes we do, see FreeWaitEventSet(). \n \nThe WaitEventSet created fro pq_init() is never explicitly free'd \nthough, because it's created in the per-connection backend process. When \nthe connection is terminated, the backend process exits, cleaning up any \nresources including the WaitEventSet. 
\n \n-- \nHeikki Linnakangas \nNeon (https://neon.tech)", "msg_date": "Fri, 24 May 2024 19:35:34 +0530", "msg_from": "Srinath Reddy Sadipiralla <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question: Why Are File Descriptors Not Closed and Accounted for\n PostgreSQL Backends?" } ]
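(For readers tracing the snippet quoted in this thread: the epoll descriptor counted by AcquireExternalFD() is given back in FreeWaitEventSet(), and for the per-connection FeBeWaitSet that happens implicitly when the backend process exits. The fragment below is a rough, from-memory simplification of the epoll branch of that function in src/backend/storage/ipc/latch.c, not a verbatim copy.)

void
FreeWaitEventSet(WaitEventSet *set)
{
#if defined(WAIT_USE_EPOLL)
	close(set->epoll_fd);	/* return the kernel fd */
	ReleaseExternalFD();	/* undo the AcquireExternalFD() accounting */
#endif
	/* other wait implementations and memory cleanup elided */
}

Each backend holds just this one long-lived epoll descriptor for its FeBeWaitSet, so many concurrent or idle sessions add one such fd per backend rather than an ever-growing number per connection.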
[ { "msg_contents": "1. Problem and reproducability\n\n\nAfter the release of PG17b1 I wanted to test it on a newly installed machine:\n\nI installed Fedora 40 Server on x86-64 and did a full dnf update (as of 23.may 2024).\n\n\nTo self-compile from source I did:\n\n\nsudo dnf group install \"C Development*\" \"Development*\"\n\nsudo dnf install wine mold meson perl zstd lz4 lz4-devel libzstd-devel\n(This may be a little too much, I was lazy for minimal installation)\n\nI built Postgres 17 beta1 from source with meson and installed it with ninja install.\nI started the server and created a new database with psql\nI restored a test database from dump with pg_restore.\n\nWhen I tried to connect to the restored database with psql \\c I got:\n\n\n[root@localhost local]# sudo -u postgres pgbeta/bin/psql -h /tmp -p 5431\npsql (17beta1)\nType \"help\" for help.\n\npostgres=# select version ();\n version\n--------------------------------------------------------------------\n PostgreSQL 17beta1 on x86_64-linux, compiled by gcc-14.1.1, 64-bit\n(1 row)\n\npostgres=# \\l\n List of databases\n Name | Owner | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges\n-----------+----------+----------+-----------------+---------+-------+--------+-----------+-----------------------\n cpsdb | postgres | UTF8 | builtin | C | C | C | |\n postgres | postgres | UTF8 | builtin | C | C | C | |\n template0 | postgres | UTF8 | builtin | C | C | C | | =c/postgres +\n | | | | | | | | postgres=CTc/postgres\n template1 | postgres | UTF8 | builtin | C | C | C | | =c/postgres +\n | | | | | | | | postgres=CTc/postgres\n(4 rows)\n\npostgres=# \\c cpsdb\npgbeta/bin/psql: symbol lookup error: pgbeta/bin/psql: undefined symbol: PQsocketPoll\n[root@localhost local]#\n\n----------------------------------------------------------------------- ERROR ^^^^^^^^^^^^^^^^^^^^^^\n\nSo it was not possible to use the database locally with psql.\n\n2. Analysis\n\n(To my understanding) the problem comes from incompatible libpq.so libraries on the system.\nThe installation of all the development packages installed :\n\n[root@localhost lib64]# dnf list installed libpq\nInstalled Packages\nlibpq.x86_64 16.1-4.fc40 @fedora\n\nThis is the older version provided by Fedora (Nov 2023!, even after 16.3 from May 2024)\n\n3. Questions\n\n- Why doesn't psql use the just created lib64/libpq.so.5.17 from ninja install?\n\nThe loading of the locally available libpq.so should always have priority over a system wide in /usr/lib64\n\n- Why is the Fedora supplied library 2 minor versions behind?\n\n- How to install the new libpq systemwide to make it usable for all applications (not only postgres supplied clients)?\nTo my understanding libpq is normally downward compatible, so it may be possible to use libpq17 against all supported older releases\n\n4. Proposals\n\n- The distributors like Fedora, Debian,Ubuntu etc. should be encouraged to update the minor versions IN A TIMELY FASHION like any other upgrades: Minor versions normally don't break anything and often contain security fixes and important bug fixes valuable to all update-willing users of the system. Perhaps somebody deeper involved can support the distributors in this case.\n\n- PGDG should provide the newest libpq.so (beta or stable) in its common repository starting at the first beta release of a major version.\nSo everybody can install it separately and test it against its own application. 
This should ease real world testing alot.\n\n- Due to the downward compatibility of libpq and the difficulty of handling multiple versions on the same machine I propose to always provide the newest libpq (highest stable version and latest beta version) for separate installation.\nThis should be independend installable from the main packages, much like the Visual runtime libraries for applications under Windows.\n\nThe user can choose between the latest stable version (at this time libpq 16.3), the latest beta (at this time libpq 17beta) or the version belonging to its major/minor version. This should be documented and be easyly changeable.\n\nThe buildfarm could be run always with the most current version. libpq could be thought of a separate product as base for all client applications.\n\n5. Solving the problem\n\nI don't know how to solve the problem in an official way.\n\nI haven't tried manual changes in the system (copying binaries, changing symbolic links, making own packages etc.)\n\nI am able to access the test database from outside (from a windows psql client of pg17b1), but this is not very practical.\n\nThe same problem occurs on my other machine running Fedora 39 and should occur in many other distributions also.\n\nI think a self compiled version of postgres should be self-confined and ready to run for testing.\n\n\nAny thoughts?\n\nHans Buschmann\n\n\n", "msg_date": "Fri, 24 May 2024 16:58:51 +0000", "msg_from": "Hans Buschmann <[email protected]>", "msg_from_op": true, "msg_subject": "PG17beta1: Unable to test Postgres on Fedora due to fatal Error in\n psql: undefined symbol: PQsocketPoll" }, { "msg_contents": "Hans Buschmann <[email protected]> writes:\n> When I tried to connect to the restored database with psql \\c I got:\n> ...\n> postgres=# \\c cpsdb\n> pgbeta/bin/psql: symbol lookup error: pgbeta/bin/psql: undefined symbol: PQsocketPoll\n\n> (To my understanding) the problem comes from incompatible libpq.so libraries on the system.\n\nRight, you must have a v16-or-earlier libpq lying around somewhere,\nand psql has bound to that not to the beta-test version.\nPQsocketPoll is new in v17.\n\n> - Why doesn't psql use the just created lib64/libpq.so.5.17 from ninja install?\n\nIt's on you to ensure that happens, especially on Linux systems which\nhave a strong bias towards pulling libraries from /usr/lib[64].\nNormally our --enable-rpath option is sufficient; while that's\ndefault in an autoconf-based build, I'm not sure that it is\nin a meson build. Also, if your beta libpq is not where the\nrpath option expected it to get installed, the linker will silently\nfall back to /usr/lib[64].\n\n> The loading of the locally available libpq.so should always have priority over a system wide in /usr/lib64\n\nTell it to the Linux developers --- they think the opposite.\nLikewise, all of your other proposals need to be addressed to\nthe various distros' packagers; this is not the place to complain.\n\nThe main thing that is bothering me about the behavior you\ndescribe is that it didn't fail until psql actually tried to\ncall PQsocketPoll. (AFAICT from a quick look, that occurs\nduring \\c but not during the startup connection.) I had thought\nthat we select link options that result in early binding and\nhence startup-time failure for a case like this. I can confirm\nthough that this acts as described on my RHEL8 box if I force\ncurrent psql to link to v16 libpq, so either we've broken that\nor it never did apply to frontend programs. But it doesn't\nseem to me to be a great thing for it to behave like this.\nYou could easily miss that you have a broken setup until\nafter you deploy it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 13:34:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG17beta1: Unable to test Postgres on Fedora due to fatal Error\n in psql: undefined symbol: PQsocketPoll" } ]
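(A practical, hedged follow-up to this thread: the commands below are standard Linux tooling for checking which libpq a self-built psql resolves at run time; the installation paths are examples modelled on the report, not fixed values.)

# which shared libpq does the binary actually pick up?
ldd pgbeta/bin/psql | grep libpq
#   libpq.so.5 => /usr/lib64/libpq.so.5 (...)   <- distro's v16 copy, wrong for 17beta1

# did the build embed an rpath/runpath pointing at the new lib directory?
readelf -d pgbeta/bin/psql | grep -iE 'rpath|runpath'

# quick workaround for the current shell: prefer the freshly installed library
export LD_LIBRARY_PATH=$PWD/pgbeta/lib:$LD_LIBRARY_PATH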
[ { "msg_contents": "Hi all,\n\nOur documentation implies that the ldapurl setting in pg_hba is used\nfor search+bind mode only. It was pointed out to me recently that this\nis not true, and if you're dealing with simple bind on a non-standard\nscheme or port, then ldapurl makes the HBA easier to read:\n\n ... ldap ldapurl=\"ldaps://ldap.example.net:49151\" ldapprefix=\"cn=\"\nldapsuffix=\", dc=example, dc=net\"\n\n0001 tries to document this helpful behavior a little better, and 0002\npins it with a test. WDYT?\n\nThanks,\n--Jacob", "msg_date": "Fri, 24 May 2024 11:54:49 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Document use of ldapurl with LDAP simple bind" }, { "msg_contents": "On 24.05.24 20:54, Jacob Champion wrote:\n> Our documentation implies that the ldapurl setting in pg_hba is used\n> for search+bind mode only. It was pointed out to me recently that this\n> is not true, and if you're dealing with simple bind on a non-standard\n> scheme or port, then ldapurl makes the HBA easier to read:\n> \n> ... ldap ldapurl=\"ldaps://ldap.example.net:49151\" ldapprefix=\"cn=\"\n> ldapsuffix=\", dc=example, dc=net\"\n> \n> 0001 tries to document this helpful behavior a little better, and 0002\n> pins it with a test. WDYT?\n\nYes, this looks correct. Since ldapurl is really just a shorthand that \nis expanded to various other parameters, it makes sense that it would \nwork for simple bind as well.\n\nhba.c has this error message:\n\n\"cannot use ldapbasedn, ldapbinddn, ldapbindpasswd, ldapsearchattribute, \nldapsearchfilter, or ldapurl together with ldapprefix\"\n\nThis appears to imply that specifying ldapurl is only applicable for \nsearch+bind. Maybe that whole message should be simplified to something \nlike\n\n\"configuration mixes arguments for simple bind and search+bind\"\n\n(The old wording also ignores that the error might arise via \"ldapsuffix\".)\n\n\n\n", "msg_date": "Fri, 28 Jun 2024 09:11:42 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document use of ldapurl with LDAP simple bind" }, { "msg_contents": "On Fri, Jun 28, 2024 at 12:11 AM Peter Eisentraut <[email protected]> wrote:\n> This appears to imply that specifying ldapurl is only applicable for\n> search+bind. Maybe that whole message should be simplified to something\n> like\n>\n> \"configuration mixes arguments for simple bind and search+bind\"\n>\n> (The old wording also ignores that the error might arise via \"ldapsuffix\".)\n\nI kept the imperative \"cannot\" and tried to match the terminology with\nour documentation at [1]:\n\n cannot mix options for simple bind and search+bind modes\n\nWDYT?\n\n--Jacob\n\n[1] https://www.postgresql.org/docs/17/auth-ldap.html", "msg_date": "Mon, 8 Jul 2024 14:27:12 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document use of ldapurl with LDAP simple bind" }, { "msg_contents": "On 08.07.24 23:27, Jacob Champion wrote:\n> On Fri, Jun 28, 2024 at 12:11 AM Peter Eisentraut <[email protected]> wrote:\n>> This appears to imply that specifying ldapurl is only applicable for\n>> search+bind. 
Maybe that whole message should be simplified to something\n>> like\n>>\n>> \"configuration mixes arguments for simple bind and search+bind\"\n>>\n>> (The old wording also ignores that the error might arise via \"ldapsuffix\".)\n> \n> I kept the imperative \"cannot\" and tried to match the terminology with\n> our documentation at [1]:\n> \n> cannot mix options for simple bind and search+bind modes\n\nCommitted.\n\n(I suppose this could be considered a bug fix, but I don't feel an \nurgency to go backpatching this. Let me know if there are different \nopinions.)\n\n\n\n", "msg_date": "Tue, 23 Jul 2024 10:37:36 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document use of ldapurl with LDAP simple bind" }, { "msg_contents": "On Tue, Jul 23, 2024 at 1:37 AM Peter Eisentraut <[email protected]> wrote:\n> Committed.\n\nThanks!\n\n> (I suppose this could be considered a bug fix, but I don't feel an\n> urgency to go backpatching this. Let me know if there are different\n> opinions.)\n\nCertainly no urgency. The docs part of the patch also could be\nbackported alone, but I don't feel strongly either way.\n\n--Jacob\n\n\n", "msg_date": "Tue, 23 Jul 2024 06:31:14 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document use of ldapurl with LDAP simple bind" } ]
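(Companion example for this thread, with hedges: the connection fields below -- host, database, user, address -- are placeholders, and only ldapurl, ldapscheme, ldapserver, ldapport, ldapprefix and ldapsuffix are taken from the documented pg_hba.conf options. The two entries are intended to be equivalent simple-bind configurations.)

# pg_hba.conf: ldapurl used with simple bind, as the thread's example shows
host all all 0.0.0.0/0 ldap ldapurl="ldaps://ldap.example.net:49151" ldapprefix="cn=" ldapsuffix=", dc=example, dc=net"

# the same configuration spelled out without the URL shorthand
host all all 0.0.0.0/0 ldap ldapscheme=ldaps ldapserver=ldap.example.net ldapport=49151 ldapprefix="cn=" ldapsuffix=", dc=example, dc=net"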
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a POC patch to implement $SUBJECT.\n\nAdding relfilenode statistics has been proposed in [1]. The idea is to allow\ntracking dirtied blocks, written blocks,... on a per relation basis.\n\nThe attached patch is not in a fully \"polished\" state yet: there is more places\nwe should add relfilenode counters, create more APIS to retrieve the relfilenode\nstats....\n\nBut I think that it is in a state that can be used to discuss the approach it\nis implementing (so that we can agree or not on it) before moving forward.\n\nThe approach that is implemented in this patch is the following:\n\n- A new PGSTAT_KIND_RELFILENODE is added\n- A new attribute (aka relfile) has been added to PgStat_HashKey so that we\ncan record (dboid, spcOid and relfile) to identify a relfilenode entry\n- pgstat_create_transactional() is used in RelationCreateStorage()\n- pgstat_drop_transactional() is used in RelationDropStorage()\n- RelationPreserveStorage() will remove the entry from the list of dropped stats\n\nThe current approach to deal with table rewrite is to:\n\n- copy the relfilenode stats in table_relation_set_new_filelocator() from\nthe relfilenode stats entry to the shared table stats entry\n- in the pg_statio_all_tables view: add the table stats entry (that contains\n\"previous\" relfilenode stats (due to the above) that were linked to this relation\n) to the current relfilenode stats linked to the relation\n\nAn example is done in the attached patch for the new heap_blks_written field\nin pg_statio_all_tables. Outcome is:\n\n\"\npostgres=# create table bdt (a int);\nCREATE TABLE\npostgres=# select heap_blks_written from pg_statio_all_tables where relname = 'bdt';\n heap_blks_written\n-------------------\n 0\n(1 row)\n\npostgres=# insert into bdt select generate_series(1,10000);\nINSERT 0 10000\npostgres=# select heap_blks_written from pg_statio_all_tables where relname = 'bdt';\n heap_blks_written\n-------------------\n 0\n(1 row)\n\npostgres=# checkpoint;\nCHECKPOINT\npostgres=# select heap_blks_written from pg_statio_all_tables where relname = 'bdt';\n heap_blks_written\n-------------------\n 45\n(1 row)\n\npostgres=# truncate table bdt;\nTRUNCATE TABLE\npostgres=# select heap_blks_written from pg_statio_all_tables where relname = 'bdt';\n heap_blks_written\n-------------------\n 45\n(1 row)\n\npostgres=# insert into bdt select generate_series(1,10000);\nINSERT 0 10000\npostgres=# select heap_blks_written from pg_statio_all_tables where relname = 'bdt';\n heap_blks_written\n-------------------\n 45\n(1 row)\n\npostgres=# checkpoint;\nCHECKPOINT\npostgres=# select heap_blks_written from pg_statio_all_tables where relname = 'bdt';\n heap_blks_written\n-------------------\n 90\n(1 row)\n\"\n\nSome remarks:\n\n- My first attempt has been to call the pgstat_create_transactional() and\npgstat_drop_transactional() at the same places it is done for the relations but\nthat did not work well (mainly due to corner cases in case of rewrite).\n\n- Please don't take care of the pgstat_count_buffer_read() and \npgstat_count_buffer_hit() calls in pgstat_report_relfilenode_buffer_read()\nand pgstat_report_relfilenode_buffer_hit(). 
Those stats will follow the same\nflow as the one done and explained above for the new heap_blks_written one (\nshould we agree on it).\n\nLooking forward to your comments, feedback.\n\nRegards,\n\n[1]: https://www.postgresql.org/message-id/20231113204439.r4lmys73tessqmak%40awork3.anarazel.de\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 25 May 2024 07:52:02 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "relfilenode statistics" }, { "msg_contents": "Hi Bertrand,\n\nIt would be helpful to me if the reasons why we're splitting out\nrelfilenodestats could be more clearly spelled out. I see Andres's\ncomment in the thread to which you linked, but it's pretty vague about\nwhy we should do this (\"it's not nice\") and whether we should do this\n(\"I wonder if this is an argument for\") and maybe that's all fine if\nAndres is going to be the one to review and commit this, but even if\nthen it would be nice if the rest of us could follow along from home,\nand right now I can't.\n\nThe commit message is often a good place to spell this kind of thing\nout, because then it's included with every version of the patch you\npost, and may be of some use to the eventual committer in writing\ntheir commit message. The body of the email where you post the patch\nset can be fine, too.\n\n...Robert\n\n\n", "msg_date": "Mon, 27 May 2024 09:10:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi Robert,\n\nOn Mon, May 27, 2024 at 09:10:13AM -0400, Robert Haas wrote:\n> Hi Bertrand,\n> \n> It would be helpful to me if the reasons why we're splitting out\n> relfilenodestats could be more clearly spelled out. I see Andres's\n> comment in the thread to which you linked, but it's pretty vague about\n> why we should do this (\"it's not nice\") and whether we should do this\n> (\"I wonder if this is an argument for\") and maybe that's all fine if\n> Andres is going to be the one to review and commit this, but even if\n> then it would be nice if the rest of us could follow along from home,\n> and right now I can't.\n\nThanks for the feedback! \n\nYou’re completely right, my previous message is missing clear explanation as to\nwhy I think that relfilenode stats could be useful. Let me try to fix this.\n\nThe main argument is that we currently don’t have writes counters for relations.\nThe reason is that we don’t have the relation OID when writing buffers out.\nTracking writes per relfilenode would allow us to track/consolidate writes per\nrelation (example in the v1 patch and in the message up-thread).\n\nI think that adding instrumentation in this area (writes counters) could be\nbeneficial (like it is for the ones we currently have for reads).\n\nSecond argument is that this is also beneficial for the \"Split index and\ntable statistics into different types of stats\" thread (mentioned in the previous\nmessage). 
It would allow us to avoid additional branches in some situations (like\nthe one mentioned by Andres in the link I provided up-thread).\n\nIf we agree that the main argument makes sense to think about having relfilenode\nstats then I think using them as proposed in the second argument makes sense too:\n\nWe’d move the current buffer read and buffer hit counters from the relation stats\nto the relfilenode stats (while still being able to retrieve them from the \npg_statio_all_tables/indexes views: see the example for the new heap_blks_written\nstat added in the patch). Generally speaking, I think that tracking counters at\na common level (i.e relfilenode level instead of table or index level) is\nbeneficial (avoid storing/allocating space for the same counters in multiple\nstructs) and sounds more intuitive to me.\n\nAlso I think this is open door for new ideas: for example, with relfilenode\nstatistics in place, we could probably also start thinking about tracking\nchecksum errors per relfllenode.\n\n> The commit message is often a good place to spell this kind of thing\n> out, because then it's included with every version of the patch you\n> post, and may be of some use to the eventual committer in writing\n> their commit message. The body of the email where you post the patch\n> set can be fine, too.\n> \n\nYeah, I’ll update the commit message in V2 with better explanations once I get\nfeedback on V1 (should we decide to move on with the relfilenode stats idea).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 3 Jun 2024 11:11:46 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "On Mon, Jun 3, 2024 at 7:11 AM Bertrand Drouvot\n<[email protected]> wrote:\n> The main argument is that we currently don’t have writes counters for relations.\n> The reason is that we don’t have the relation OID when writing buffers out.\n\nOK.\n\n> Second argument is that this is also beneficial for the \"Split index and\n> table statistics into different types of stats\" thread (mentioned in the previous\n> message). It would allow us to avoid additional branches in some situations (like\n> the one mentioned by Andres in the link I provided up-thread).\n\nOK.\n\n> We’d move the current buffer read and buffer hit counters from the relation stats\n> to the relfilenode stats (while still being able to retrieve them from the\n> pg_statio_all_tables/indexes views: see the example for the new heap_blks_written\n> stat added in the patch). Generally speaking, I think that tracking counters at\n> a common level (i.e relfilenode level instead of table or index level) is\n> beneficial (avoid storing/allocating space for the same counters in multiple\n> structs) and sounds more intuitive to me.\n\nHmm. So if I CLUSTER or VACUUM FULL the relation, the relfilenode\nchanges. Does that mean I lose all of those stats? Is that a problem?\nOr is it good? Or what?\n\nI also thought about the other direction. Suppose I drop the a\nrelation and create a new one that gets a different relation OID but\nthe same relfilenode. 
But I don't think that's a problem: dropping the\nrelation should forcibly remove the old stats, so there won't be any\nconflict in this case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Jun 2024 09:26:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "On Tue, Jun 04, 2024 at 09:26:27AM -0400, Robert Haas wrote:\n> On Mon, Jun 3, 2024 at 7:11 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > We’d move the current buffer read and buffer hit counters from the relation stats\n> > to the relfilenode stats (while still being able to retrieve them from the\n> > pg_statio_all_tables/indexes views: see the example for the new heap_blks_written\n> > stat added in the patch). Generally speaking, I think that tracking counters at\n> > a common level (i.e relfilenode level instead of table or index level) is\n> > beneficial (avoid storing/allocating space for the same counters in multiple\n> > structs) and sounds more intuitive to me.\n> \n> Hmm. So if I CLUSTER or VACUUM FULL the relation, the relfilenode\n> changes. Does that mean I lose all of those stats? Is that a problem?\n> Or is it good? Or what?\n\nI think we should keep the stats in the relation during relfilenode changes.\nAs a POC, v1 implemented a way to do so during TRUNCATE (see the changes in\ntable_relation_set_new_filelocator() and in pg_statio_all_tables): as you can\nsee in the example provided up-thread the new heap_blks_written statistic has\nbeen preserved during the TRUNCATE. \n\nPlease note that the v1 POC only takes care of the new heap_blks_written stat and\nthat the logic used in table_relation_set_new_filelocator() would probably need\nto be applied in rebuild_relation() or such to deal with CLUSTER or VACUUM FULL.\n\nFor the relation, the new counter \"blocks_written\" has been added to the\nPgStat_StatTabEntry struct (it's not needed in the PgStat_TableCounts one as the\nrelfilenode stat takes care of it). It's added in PgStat_StatTabEntry only\nto copy/preserve the relfilenode stats during rewrite operations and to retrieve\nthe stats in pg_statio_all_tables.\n\nThen, if later we split the relation stats to index/table stats, we'd have\nblocks_written defined in less structs (as compare to doing the split without\nrelfilenode stat in place).\n\nAs mentioned up-thread, the new logic has been implemented in v1 only for the\nnew blocks_written stat (we'd need to do the same for the existing buffer read /\nbuffer hit if we agree on the approach implemented in v1).\n\n> I also thought about the other direction. Suppose I drop the a\n> relation and create a new one that gets a different relation OID but\n> the same relfilenode. 
But I don't think that's a problem: dropping the\n> relation should forcibly remove the old stats, so there won't be any\n> conflict in this case.\n\nYeah.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 5 Jun 2024 05:52:33 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 03, 2024 at 11:11:46AM +0000, Bertrand Drouvot wrote:\n> Yeah, I’ll update the commit message in V2 with better explanations once I get\n> feedback on V1 (should we decide to move on with the relfilenode stats idea).\n> \n\nPlease find attached v2, mandatory rebase due to cd312adc56. In passing it\nprovides a more detailed commit message (also making clear that the goal of this\npatch is to start the discussion and agree on the design before moving forward.)\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 6 Jun 2024 07:02:41 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "On Wed, Jun 5, 2024 at 1:52 AM Bertrand Drouvot\n<[email protected]> wrote:\n> I think we should keep the stats in the relation during relfilenode changes.\n> As a POC, v1 implemented a way to do so during TRUNCATE (see the changes in\n> table_relation_set_new_filelocator() and in pg_statio_all_tables): as you can\n> see in the example provided up-thread the new heap_blks_written statistic has\n> been preserved during the TRUNCATE.\n\nYeah, I think there's something weird about this design. Somehow we're\nending up with both per-relation and per-relfilenode counters:\n\n+ pg_stat_get_blocks_written(C.oid) +\npg_stat_get_relfilenode_blocks_written(d.oid, CASE WHEN\nC.reltablespace <> 0 THEN C.reltablespace ELSE d.dattablespace END,\nC.relfilenode) AS heap_blks_written,\n\nI'll defer to Andres if he thinks that's awesome, but to me it does\nnot seem right to track some blocks written in a per-relation counter\nand others in a per-relfilenode counter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 12:27:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn 2024-06-06 12:27:49 -0400, Robert Haas wrote:\n> On Wed, Jun 5, 2024 at 1:52 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > I think we should keep the stats in the relation during relfilenode changes.\n> > As a POC, v1 implemented a way to do so during TRUNCATE (see the changes in\n> > table_relation_set_new_filelocator() and in pg_statio_all_tables): as you can\n> > see in the example provided up-thread the new heap_blks_written statistic has\n> > been preserved during the TRUNCATE.\n>\n> Yeah, I think there's something weird about this design. 
Somehow we're\n> ending up with both per-relation and per-relfilenode counters:\n>\n> + pg_stat_get_blocks_written(C.oid) +\n> pg_stat_get_relfilenode_blocks_written(d.oid, CASE WHEN\n> C.reltablespace <> 0 THEN C.reltablespace ELSE d.dattablespace END,\n> C.relfilenode) AS heap_blks_written,\n>\n> I'll defer to Andres if he thinks that's awesome, but to me it does\n> not seem right to track some blocks written in a per-relation counter\n> and others in a per-relfilenode counter.\n\nIt doesn't immediately sound awesome. Nor really necessary?\n\nIf we just want to keep prior stats upon arelation rewrite, we can just copy\nthe stats from the old relfilenode. Or we can decide that those stats don't\nreally make sense anymore, and start from scratch.\n\n\nI *guess* I could see an occasional benefit in having both counter for \"prior\nrelfilenodes\" and \"current relfilenode\" - except that stats get reset manually\nand upon crash anyway, making this less useful than if it were really\n\"lifetime\" stats.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Jun 2024 20:17:36 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn 2024-06-03 11:11:46 +0000, Bertrand Drouvot wrote:\n> The main argument is that we currently don’t have writes counters for relations.\n> The reason is that we don’t have the relation OID when writing buffers out.\n> Tracking writes per relfilenode would allow us to track/consolidate writes per\n> relation (example in the v1 patch and in the message up-thread).\n> \n> I think that adding instrumentation in this area (writes counters) could be\n> beneficial (like it is for the ones we currently have for reads).\n> \n> Second argument is that this is also beneficial for the \"Split index and\n> table statistics into different types of stats\" thread (mentioned in the previous\n> message). It would allow us to avoid additional branches in some situations (like\n> the one mentioned by Andres in the link I provided up-thread).\n\nI think there's another *very* significant benefit:\n\nRight now physical replication doesn't populate statistics fields like\nn_dead_tup, which can be a huge issue after failovers, because there's little\ninformation about what autovacuum needs to do.\n\nAuto-analyze *partially* can fix it at times, if it's lucky enough to see\nenough dead tuples - but that's not a given and even if it works, is often\nwildly inaccurate.\n\n\nOnce we put things like n_dead_tup into per-relfilenode stats, we can populate\nthem during WAL replay. Thus after a promotion autovacuum has much better\ndata.\n\n\nThis also is important when we crash: We've been talking about storing a\nsnapshot of the stats alongside each REDO pointer. 
Combined with updating\nstats during crash recovery, we'll have accurate dead-tuple stats once recovey\nhas finished.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Jun 2024 20:38:06 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 06, 2024 at 08:38:06PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-06-03 11:11:46 +0000, Bertrand Drouvot wrote:\n> > The main argument is that we currently don’t have writes counters for relations.\n> > The reason is that we don’t have the relation OID when writing buffers out.\n> > Tracking writes per relfilenode would allow us to track/consolidate writes per\n> > relation (example in the v1 patch and in the message up-thread).\n> > \n> > I think that adding instrumentation in this area (writes counters) could be\n> > beneficial (like it is for the ones we currently have for reads).\n> > \n> > Second argument is that this is also beneficial for the \"Split index and\n> > table statistics into different types of stats\" thread (mentioned in the previous\n> > message). It would allow us to avoid additional branches in some situations (like\n> > the one mentioned by Andres in the link I provided up-thread).\n> \n> I think there's another *very* significant benefit:\n> \n> Right now physical replication doesn't populate statistics fields like\n> n_dead_tup, which can be a huge issue after failovers, because there's little\n> information about what autovacuum needs to do.\n> \n> Auto-analyze *partially* can fix it at times, if it's lucky enough to see\n> enough dead tuples - but that's not a given and even if it works, is often\n> wildly inaccurate.\n> \n> \n> Once we put things like n_dead_tup into per-relfilenode stats,\n\nHm - I had in mind to populate relfilenode stats only with stats that are\nsomehow related to I/O activities. Which ones do you have in mind to put in \nrelfilenode stats?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 09:00:51 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 06, 2024 at 08:17:36PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-06-06 12:27:49 -0400, Robert Haas wrote:\n> > On Wed, Jun 5, 2024 at 1:52 AM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > > I think we should keep the stats in the relation during relfilenode changes.\n> > > As a POC, v1 implemented a way to do so during TRUNCATE (see the changes in\n> > > table_relation_set_new_filelocator() and in pg_statio_all_tables): as you can\n> > > see in the example provided up-thread the new heap_blks_written statistic has\n> > > been preserved during the TRUNCATE.\n> >\n> > Yeah, I think there's something weird about this design. Somehow we're\n> > ending up with both per-relation and per-relfilenode counters:\n> >\n> > + pg_stat_get_blocks_written(C.oid) +\n> > pg_stat_get_relfilenode_blocks_written(d.oid, CASE WHEN\n> > C.reltablespace <> 0 THEN C.reltablespace ELSE d.dattablespace END,\n> > C.relfilenode) AS heap_blks_written,\n> >\n> > I'll defer to Andres if he thinks that's awesome, but to me it does\n> > not seem right to track some blocks written in a per-relation counter\n> > and others in a per-relfilenode counter.\n> \n> It doesn't immediately sound awesome. 
Nor really necessary?\n> \n> If we just want to keep prior stats upon arelation rewrite, we can just copy\n> the stats from the old relfilenode.\n\nAgree, that's another option. But I think that would be in another field like\n\"cumulative_XXX\" to ensure one could still retrieve stats that are \"dedicated\"\nto this particular \"new\" relfilenode. Thoughts?\n\n> Or we can decide that those stats don't\n> really make sense anymore, and start from scratch.\n> \n> \n> I *guess* I could see an occasional benefit in having both counter for \"prior\n> relfilenodes\" and \"current relfilenode\" - except that stats get reset manually\n> and upon crash anyway, making this less useful than if it were really\n> \"lifetime\" stats.\n\nRight but currently they are not lost during a relation rewrite. If we decide to\nnot keep the relfilenode stats during a rewrite then things like heap_blks_read\nwould stop surviving a rewrite (if we move it to relfilenode stats) while it\ncurrently does. \n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 09:24:33 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "On Thu, Jun 6, 2024 at 11:17 PM Andres Freund <[email protected]> wrote:\n> If we just want to keep prior stats upon arelation rewrite, we can just copy\n> the stats from the old relfilenode. Or we can decide that those stats don't\n> really make sense anymore, and start from scratch.\n\nI think we need to think carefully about what we want the user\nexperience to be here. \"Per-relfilenode stats\" could mean \"sometimes I\ndon't know the relation OID so I want to use the relfilenumber\ninstead, without changing the user experience\" or it could mean \"some\nof these stats actually properly pertain to the relfilenode rather\nthan the relation so I want to associate them with the right object\nand that will affect how the user sees things.\" We need to decide\nwhich it is. If it's the former, then we need to examine whether the\ngoal of hiding the distinction between relfilenode stats and relation\nstats from the user is in fact feasible. If it's the latter, then we\nneed to make sure the whole patch reflects that design, which would\ninclude e.g. NOT copying stats from the old to the new relfilenode,\nand which would also include documenting the behavior in a way that\nwill be understandable to users.\n\nIn my experience, the worst thing you can do in cases like this is be\nsomewhere in the middle. Then you tend to end up with stuff like: the\ndifference isn't supposed to be something that the user knows or cares\nabout, except that they do have to know and care because you haven't\nthoroughly covered up the deception, and often they have to reverse\nengineer the behavior because you didn't document what was really\nhappening because you imagined that they wouldn't notice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 09:24:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn Fri, Jun 07, 2024 at 09:24:41AM -0400, Robert Haas wrote:\n> On Thu, Jun 6, 2024 at 11:17 PM Andres Freund <[email protected]> wrote:\n> > If we just want to keep prior stats upon arelation rewrite, we can just copy\n> > the stats from the old relfilenode. 
Or we can decide that those stats don't\n> > really make sense anymore, and start from scratch.\n> \n> I think we need to think carefully about what we want the user\n> experience to be here. \"Per-relfilenode stats\" could mean \"sometimes I\n> don't know the relation OID so I want to use the relfilenumber\n> instead, without changing the user experience\" or it could mean \"some\n> of these stats actually properly pertain to the relfilenode rather\n> than the relation so I want to associate them with the right object\n> and that will affect how the user sees things.\" We need to decide\n> which it is. If it's the former, then we need to examine whether the\n> goal of hiding the distinction between relfilenode stats and relation\n> stats from the user is in fact feasible. If it's the latter, then we\n> need to make sure the whole patch reflects that design, which would\n> include e.g. NOT copying stats from the old to the new relfilenode,\n> and which would also include documenting the behavior in a way that\n> will be understandable to users.\n\nThanks for sharing your thoughts!\n\nLet's take the current heap_blks_read as an example: it currently survives\na relation rewrite and I guess we don't want to change the existing user\nexperience for it.\n\nNow say we want to add \"heap_blks_written\" (like in this POC patch) then I think\nthat it makes sense for the user to 1) query this new stat from the same place\nas the existing heap_blks_read: from pg_statio_all_tables and 2) to have the same\nexperience as far the relation rewrite is concerned (keep the previous stats).\n\nTo achieve the rewrite behavior we could:\n\n1) copy the stats from the OLD relfilenode to the relation (like in the POC patch)\n2) copy the stats from the OLD relfilenode to the NEW one (could be in a dedicated\nfield)\n\nThe PROS of 1) is that the behavior is consistent with the current heap_blks_read\nand that the user could still see the current relfilenode stats (through a new API)\nif he wants to.\n\n> In my experience, the worst thing you can do in cases like this is be\n> somewhere in the middle. Then you tend to end up with stuff like: the\n> difference isn't supposed to be something that the user knows or cares\n> about, except that they do have to know and care because you haven't\n> thoroughly covered up the deception, and often they have to reverse\n> engineer the behavior because you didn't document what was really\n> happening because you imagined that they wouldn't notice.\n\nMy idea was to move all that is in pg_statio_all_tables to relfilenode stats\nand 1) add new stats to pg_statio_all_tables (like heap_blks_written), 2) ensure\nthe user can still retrieve the stats from pg_statio_all_tables in such a way\nthat it survives a rewrite, 3) provide dedicated APIs to retrieve\nrelfilenode stats but only for the current relfilenode, 4) document this\nbehavior. This is what the POC patch is doing for heap_blks_written (would\nneed to do the same for heap_blks_read and friends) except for the documentation\npart. 
What do you think?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Jun 2024 08:09:56 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "At Mon, 10 Jun 2024 08:09:56 +0000, Bertrand Drouvot <[email protected]> wrote in \r\n> Hi,\r\n> \r\n> On Fri, Jun 07, 2024 at 09:24:41AM -0400, Robert Haas wrote:\r\n> > On Thu, Jun 6, 2024 at 11:17 PM Andres Freund <[email protected]> wrote:\r\n> > > If we just want to keep prior stats upon arelation rewrite, we can just copy\r\n> > > the stats from the old relfilenode. Or we can decide that those stats don't\r\n> > > really make sense anymore, and start from scratch.\r\n> > \r\n> > I think we need to think carefully about what we want the user\r\n> > experience to be here. \"Per-relfilenode stats\" could mean \"sometimes I\r\n> > don't know the relation OID so I want to use the relfilenumber\r\n> > instead, without changing the user experience\" or it could mean \"some\r\n> > of these stats actually properly pertain to the relfilenode rather\r\n> > than the relation so I want to associate them with the right object\r\n> > and that will affect how the user sees things.\" We need to decide\r\n> > which it is. If it's the former, then we need to examine whether the\r\n> > goal of hiding the distinction between relfilenode stats and relation\r\n> > stats from the user is in fact feasible. If it's the latter, then we\r\n> > need to make sure the whole patch reflects that design, which would\r\n> > include e.g. NOT copying stats from the old to the new relfilenode,\r\n> > and which would also include documenting the behavior in a way that\r\n> > will be understandable to users.\r\n> \r\n> Thanks for sharing your thoughts!\r\n> \r\n> Let's take the current heap_blks_read as an example: it currently survives\r\n> a relation rewrite and I guess we don't want to change the existing user\r\n> experience for it.\r\n> \r\n> Now say we want to add \"heap_blks_written\" (like in this POC patch) then I think\r\n> that it makes sense for the user to 1) query this new stat from the same place\r\n> as the existing heap_blks_read: from pg_statio_all_tables and 2) to have the same\r\n> experience as far the relation rewrite is concerned (keep the previous stats).\r\n> \r\n> To achieve the rewrite behavior we could:\r\n> \r\n> 1) copy the stats from the OLD relfilenode to the relation (like in the POC patch)\r\n> 2) copy the stats from the OLD relfilenode to the NEW one (could be in a dedicated\r\n> field)\r\n> \r\n> The PROS of 1) is that the behavior is consistent with the current heap_blks_read\r\n> and that the user could still see the current relfilenode stats (through a new API)\r\n> if he wants to.\r\n> \r\n> > In my experience, the worst thing you can do in cases like this is be\r\n> > somewhere in the middle. 
Then you tend to end up with stuff like: the\r\n> > difference isn't supposed to be something that the user knows or cares\r\n> > about, except that they do have to know and care because you haven't\r\n> > thoroughly covered up the deception, and often they have to reverse\r\n> > engineer the behavior because you didn't document what was really\r\n> > happening because you imagined that they wouldn't notice.\r\n> \r\n> My idea was to move all that is in pg_statio_all_tables to relfilenode stats\r\n> and 1) add new stats to pg_statio_all_tables (like heap_blks_written), 2) ensure\r\n> the user can still retrieve the stats from pg_statio_all_tables in such a way\r\n> that it survives a rewrite, 3) provide dedicated APIs to retrieve\r\n> relfilenode stats but only for the current relfilenode, 4) document this\r\n> behavior. This is what the POC patch is doing for heap_blks_written (would\r\n> need to do the same for heap_blks_read and friends) except for the documentation\r\n> part. What do you think?\r\n\r\nIn my opinion, it is certainly strange that bufmgr is aware of\r\nrelation kinds, but introducing relfilenode stats to avoid this skew\r\ndoesn't seem to be the best way, as it invites inconclusive arguments\r\nlike the one raised above. The fact that we transfer counters from old\r\nrelfilenodes to new ones indicates that we are not really interested\r\nin counts by relfilenode. If that's the case, wouldn't it be simpler\r\nto call pgstat_count_relation_buffer_read() from bufmgr.c and then\r\nbranch according to relkind within that function? If you're concerned\r\nabout the additional branch, some ingenuity may be needed.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Tue, 11 Jun 2024 15:35:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 11, 2024 at 03:35:23PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 10 Jun 2024 08:09:56 +0000, Bertrand Drouvot <[email protected]> wrote in \n> > \n> > My idea was to move all that is in pg_statio_all_tables to relfilenode stats\n> > and 1) add new stats to pg_statio_all_tables (like heap_blks_written), 2) ensure\n> > the user can still retrieve the stats from pg_statio_all_tables in such a way\n> > that it survives a rewrite, 3) provide dedicated APIs to retrieve\n> > relfilenode stats but only for the current relfilenode, 4) document this\n> > behavior. This is what the POC patch is doing for heap_blks_written (would\n> > need to do the same for heap_blks_read and friends) except for the documentation\n> > part. What do you think?\n> \n> In my opinion,\n\nThanks for looking at it!\n\n> it is certainly strange that bufmgr is aware of\n> relation kinds, but introducing relfilenode stats to avoid this skew\n> doesn't seem to be the best way, as it invites inconclusive arguments\n> like the one raised above. The fact that we transfer counters from old\n> relfilenodes to new ones indicates that we are not really interested\n> in counts by relfilenode. If that's the case, wouldn't it be simpler\n> to call pgstat_count_relation_buffer_read() from bufmgr.c and then\n> branch according to relkind within that function? 
If you're concerned\n> about the additional branch, some ingenuity may be needed.\n\nThat may be doable for \"read\" activities but what about write activities?\nDo you mean not relying on relfilenode stats for reads but relying on relfilenode\nstats for writes?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 13:29:54 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "On Sat, May 25, 2024 at 07:52:02AM +0000, Bertrand Drouvot wrote:\n> But I think that it is in a state that can be used to discuss the approach it\n> is implementing (so that we can agree or not on it) before moving\n> forward.\n\nI have read through the patch to get an idea of how things are done,\nand I am troubled by the approach taken (mentioned down by you), but\nthat's invasive compared to how pgstats wants to be transparent with\nits stats kinds.\n\n+ Oid objoid; /* object ID, either table or function\nor tablespace. */\n+ RelFileNumber relfile; /* relfilenumber for RelFileLocator. */\n } PgStat_HashKey;\n\nThis adds a relfilenode component to the central hash key used for the\ndshash of pgstats, which is something most stats types don't care\nabout. That looks like the incorrect thing to do to me, particularly\nseeing a couple of lines down that a stats kind is assigned so the\nHashKey uniqueness is ensured by the KindInfo:\n+ [PGSTAT_KIND_RELFILENODE] = {\n+ .name = \"relfilenode\",\n\nFWIW, I have on my stack of patches something to switch the objoid to\n8 bytes, actually, which is something that would be required for\npg_stat_statements as query IDs are wider than that and affect all\ndatabases, FWIW. Relfilenodes are 4 bytes, okay still Robert has\nproposed a couple of years ago a patch set to bump that to 56 bits,\nchange reverted in a448e49bcbe4. The objoid is also not something\nspecific to OIDs, see replication slots with their idx for example.\n\nWhat you would be looking instead is to use the relfilenode as an\nobjoid and keep track of the OID of the original relation in each\nPgStat_StatRelFileNodeEntry so as it is possible to know where a past\nrelfilenode was used? That makes looking back at the past relation's\nelfilenodes stats more complicated as it would be necessary to keep a\nlist of the past relfilenodes for a relation, as well. Perhaps with\nsome kind of cache that maintains a mapping between the relation and\nits relfilenode history?\n--\nMichael", "msg_date": "Wed, 10 Jul 2024 15:02:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn Wed, Jul 10, 2024 at 03:02:34PM +0900, Michael Paquier wrote:\n> On Sat, May 25, 2024 at 07:52:02AM +0000, Bertrand Drouvot wrote:\n> > But I think that it is in a state that can be used to discuss the approach it\n> > is implementing (so that we can agree or not on it) before moving\n> > forward.\n> \n> I have read through the patch to get an idea of how things are done,\n\nThanks!\n\n> and I am troubled by the approach taken (mentioned down by you), but\n> that's invasive compared to how pgstats wants to be transparent with\n> its stats kinds.\n> \n> + Oid objoid; /* object ID, either table or function\n> or tablespace. */\n> + RelFileNumber relfile; /* relfilenumber for RelFileLocator. 
*/\n> } PgStat_HashKey;\n> \n> This adds a relfilenode component to the central hash key used for the\n> dshash of pgstats, which is something most stats types don't care\n> about.\n\nThat's right but that's an existing behavior without the patch as:\n\nPGSTAT_KIND_DATABASE does not care care about the objoid\nPGSTAT_KIND_REPLSLOT does not care care about the dboid\nPGSTAT_KIND_SUBSCRIPTION does not care care about the dboid\n\nThat's 3 kinds out of the 5 non fixed stats kind.\n\nNot saying it's good, just saying that's an existing behavior.\n\n> That looks like the incorrect thing to do to me, particularly\n> seeing a couple of lines down that a stats kind is assigned so the\n> HashKey uniqueness is ensured by the KindInfo:\n> + [PGSTAT_KIND_RELFILENODE] = {\n> + .name = \"relfilenode\",\n\nYou mean, just rely on kind, dboid and relfile to ensure uniqueness?\n\nI'm not sure that would work as there is this comment in relfilelocator.h:\n\n\"\n * Notice that relNumber is only unique within a database in a particular\n * tablespace.\n\"\n\nSo, I think it makes sense to link the hashkey to all the RelFileLocator\nfields, means:\n\ndboid (linked to RelFileLocator's dbOid)\nobjoid (linked to RelFileLocator's spcOid)\nrelfile (linked to RelFileLocator's relNumber)\n\n> FWIW, I have on my stack of patches something to switch the objoid to\n> 8 bytes, actually, which is something that would be required for\n> pg_stat_statements as query IDs are wider than that and affect all\n> databases, FWIW. Relfilenodes are 4 bytes, okay still Robert has\n> proposed a couple of years ago a patch set to bump that to 56 bits,\n> change reverted in a448e49bcbe4.\n\nRight, but it really looks like this extra field is needed to ensure\nuniqueness (see above).\n\n> What you would be looking instead is to use the relfilenode as an\n> objoid\n\nNot sure that works, as it looks like uniqueness won't be ensured (see above).\n\n> and keep track of the OID of the original relation in each\n> PgStat_StatRelFileNodeEntry so as it is possible to know where a past\n> relfilenode was used? That makes looking back at the past relation's\n> elfilenodes stats more complicated as it would be necessary to keep a\n> list of the past relfilenodes for a relation, as well. Perhaps with\n> some kind of cache that maintains a mapping between the relation and\n> its relfilenode history?\n\nYeah, I also thought about keeping a list of \"previous\" relfilenodes stats for a\nrelation but that would lead to:\n\n1. Keep previous relfilnode stats \n2. A more complicated way to look at relation stats (as you said)\n3. Extra memory usage\n\nI think the only reason \"previous\" relfilenode stats are needed is to provide\naccurate stats for the relation. 
Outside of this need, I don't think we would\nwant to retrieve \"individual\" previous relfilenode stats in the past.\n\nThat's why the POC patch \"simply\" copies the stats to the relation during a\nrewrite (before getting rid of the \"previous\" relfilenode stats).\n\nWhat do you think?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jul 2024 13:38:06 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "On Wed, Jul 10, 2024 at 01:38:06PM +0000, Bertrand Drouvot wrote:\n> On Wed, Jul 10, 2024 at 03:02:34PM +0900, Michael Paquier wrote:\n>> and I am troubled by the approach taken (mentioned down by you), but\n>> that's invasive compared to how pgstats wants to be transparent with\n>> its stats kinds.\n>> \n>> + Oid objoid; /* object ID, either table or function\n>> or tablespace. */\n>> + RelFileNumber relfile; /* relfilenumber for RelFileLocator. */\n>> } PgStat_HashKey;\n>> \n>> This adds a relfilenode component to the central hash key used for the\n>> dshash of pgstats, which is something most stats types don't care\n>> about.\n> \n> That's right but that's an existing behavior without the patch as:\n> \n> PGSTAT_KIND_DATABASE does not care care about the objoid\n> PGSTAT_KIND_REPLSLOT does not care care about the dboid\n> PGSTAT_KIND_SUBSCRIPTION does not care care about the dboid\n> \n> That's 3 kinds out of the 5 non fixed stats kind.\n\nI'd like to think that this is just going to increase across time.\n\n>> That looks like the incorrect thing to do to me, particularly\n>> seeing a couple of lines down that a stats kind is assigned so the\n>> HashKey uniqueness is ensured by the KindInfo:\n>> + [PGSTAT_KIND_RELFILENODE] = {\n>> + .name = \"relfilenode\",\n> \n> You mean, just rely on kind, dboid and relfile to ensure uniqueness?\n\nOr table OID for the objid, with a hardcoded number of past\nrelfilenodes stats stored, to limit bloating the dshash with too much\npast stats. See below.\n\n> So, I think it makes sense to link the hashkey to all the RelFileLocator\n> fields, means:\n> \n> dboid (linked to RelFileLocator's dbOid)\n> objoid (linked to RelFileLocator's spcOid)\n> relfile (linked to RelFileLocator's relNumber)\n\nHmm. How about using the table OID as objoid, but store in the stats\nof the new KindInfo an array of entries with the relfilenodes (current\nand past, perhaps with more data than the relfilenode to ensure the\nuniqueness tracking) and each of its stats? The number of past\nrelfilenodes would be fixed, meaning that there would be a strict\ncontrol with the retention of the past stats. When a table is\ndropped, removing its relfilenode stats would be as cheap as when its\nPGSTAT_KIND_RELATION is dropped.\n\n> Yeah, I also thought about keeping a list of \"previous\" relfilenodes stats for a\n> relation but that would lead to:\n> \n> 1. Keep previous relfilnode stats \n> 2. A more complicated way to look at relation stats (as you said)\n> 3. Extra memory usage\n> \n> I think the only reason \"previous\" relfilenode stats are needed is to provide\n> accurate stats for the relation. 
Outside of this need, I don't think we would\n> want to retrieve \"individual\" previous relfilenode stats in the past.\n> \n> That's why the POC patch \"simply\" copies the stats to the relation during a\n> rewrite (before getting rid of the \"previous\" relfilenode stats).\n\nHmm. Okay.\n--\nMichael", "msg_date": "Thu, 11 Jul 2024 13:58:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn Thu, Jul 11, 2024 at 01:58:19PM +0900, Michael Paquier wrote:\n> On Wed, Jul 10, 2024 at 01:38:06PM +0000, Bertrand Drouvot wrote:\n> > So, I think it makes sense to link the hashkey to all the RelFileLocator\n> > fields, means:\n> > \n> > dboid (linked to RelFileLocator's dbOid)\n> > objoid (linked to RelFileLocator's spcOid)\n> > relfile (linked to RelFileLocator's relNumber)\n> \n> Hmm. How about using the table OID as objoid,\n\nThe issue is that we don't have the relation OID when writing buffers out (that's\none of the reason explained in [1]).\n\n[1]: https://www.postgresql.org/message-id/Zl2k8u4HDTUW6QlC%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 11 Jul 2024 06:10:23 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn Thu, Jul 11, 2024 at 06:10:23AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Thu, Jul 11, 2024 at 01:58:19PM +0900, Michael Paquier wrote:\n> > On Wed, Jul 10, 2024 at 01:38:06PM +0000, Bertrand Drouvot wrote:\n> > > So, I think it makes sense to link the hashkey to all the RelFileLocator\n> > > fields, means:\n> > > \n> > > dboid (linked to RelFileLocator's dbOid)\n> > > objoid (linked to RelFileLocator's spcOid)\n> > > relfile (linked to RelFileLocator's relNumber)\n> > \n> > Hmm. 
How about using the table OID as objoid,\n> \n> The issue is that we don't have the relation OID when writing buffers out (that's\n> one of the reason explained in [1]).\n> \n> [1]: https://www.postgresql.org/message-id/Zl2k8u4HDTUW6QlC%40ip-10-97-1-34.eu-west-3.compute.internal\n> \n> Regards,\n> \n\nPlease find attached a mandatory rebase due to the recent changes around\nstatistics.\n\nAs mentioned up-thread:\n\nThe attached patch is not in a fully \"polished\" state yet: there is more places\nwe should add relfilenode counters, create more APIS to retrieve the relfilenode\nstats....\n\nIt is in a state that can be used to discuss the approach it is implementing (as\nwe have done so far) before moving forward.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 5 Aug 2024 05:28:22 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 05, 2024 at 05:28:22AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Thu, Jul 11, 2024 at 06:10:23AM +0000, Bertrand Drouvot wrote:\n> > Hi,\n> > \n> > On Thu, Jul 11, 2024 at 01:58:19PM +0900, Michael Paquier wrote:\n> > > On Wed, Jul 10, 2024 at 01:38:06PM +0000, Bertrand Drouvot wrote:\n> > > > So, I think it makes sense to link the hashkey to all the RelFileLocator\n> > > > fields, means:\n> > > > \n> > > > dboid (linked to RelFileLocator's dbOid)\n> > > > objoid (linked to RelFileLocator's spcOid)\n> > > > relfile (linked to RelFileLocator's relNumber)\n> > > \n> > > Hmm. How about using the table OID as objoid,\n> > \n> > The issue is that we don't have the relation OID when writing buffers out (that's\n> > one of the reason explained in [1]).\n> > \n> > [1]: https://www.postgresql.org/message-id/Zl2k8u4HDTUW6QlC%40ip-10-97-1-34.eu-west-3.compute.internal\n> > \n> > Regards,\n> > \n> \n> Please find attached a mandatory rebase due to the recent changes around\n> statistics.\n> \n> As mentioned up-thread:\n> \n> The attached patch is not in a fully \"polished\" state yet: there is more places\n> we should add relfilenode counters, create more APIS to retrieve the relfilenode\n> stats....\n> \n> It is in a state that can be used to discuss the approach it is implementing (as\n> we have done so far) before moving forward.\n\nPlease find attached a mandatory rebase.\n\nIn passing, checking if based on the previous discussion (and given that we\ndon't have the relation OID when writing buffers out) you see another approach\nthat the one this patch is implementing?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 5 Sep 2024 04:48:36 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" }, { "msg_contents": "Hi,\n\nOn Thu, Sep 05, 2024 at 04:48:36AM +0000, Bertrand Drouvot wrote:\n> Please find attached a mandatory rebase.\n> \n> In passing, checking if based on the previous discussion (and given that we\n> don't have the relation OID when writing buffers out) you see another approach\n> that the one this patch is implementing?\n\nAttached v5, mandatory rebase due to recent changes in the stats area.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 10 Sep 
2024 05:30:32 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: relfilenode statistics" } ]
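To make the design discussion above easier to follow, here is a rough C sketch of the hash-key change the POC patch proposes, reconstructed from the hunk quoted in the thread and from the dboid/spcOid/relNumber mapping Bertrand describes. It is an illustration of the proposal only, not committed PostgreSQL code, and the exact field layout is an assumption.

```c
/*
 * Approximate shape of the pgstats dshash key with the POC's extra field.
 * For the new relfilenode stats kind, (dboid, objoid, relfile) is meant to
 * mirror a RelFileLocator, because a relfilenumber alone is only unique
 * within one database and tablespace.
 */
typedef struct PgStat_HashKey
{
    PgStat_Kind kind;          /* statistics entry kind */
    Oid         dboid;         /* database OID (RelFileLocator.dbOid) */
    Oid         objoid;        /* object OID; reused as the tablespace OID
                                * (RelFileLocator.spcOid) for relfilenode stats */
    RelFileNumber relfile;     /* RelFileLocator.relNumber; unused by the
                                * existing stats kinds */
} PgStat_HashKey;
```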
[ { "msg_contents": "Hi hackers,\nWhen I read the code, I noticed that in SimpleLruWriteAll(), only the last error is \nrecorded when the file fails to close. Like the following,\n```void\nSimpleLruWriteAll(SlruCtl ctl, bool allow_redirtied)\n{\n SlruShared shared = ctl->shared;\n SlruWriteAllData fdata;\n int64 pageno = 0;\n int prevbank = SlotGetBankNumber(0);\n bool ok;\n...\n /*\n * Now close any files that were open\n */\n ok = true;\n for (int i = 0; i < fdata.num_files; i++)\n {\n if (CloseTransientFile(fdata.fd[i]) != 0)\n {\n slru_errcause = SLRU_CLOSE_FAILED;\n slru_errno = errno;\n pageno = fdata.segno[i] * SLRU_PAGES_PER_SEGMENT;\n ok = false;\n }\n }\n if (!ok)\n SlruReportIOError(ctl, pageno, InvalidTransactionId);\n```\n// Here, SlruReportIOError() is called only once, meaning that the last error message is\n recorded. In my opinion, since failure to close a file is not common, logging an error \nmessage every time a failure occurs will not result in much log growth, but detailed error \nlogging will help in most cases.\n\nSo, I changed the code to move the call to SlruReportIOError() inside the while loop.\n\nAttached is the patch, I hope it can help.", "msg_date": "Sat, 25 May 2024 18:29:00 +0800 (CST)", "msg_from": "\"Long Song\" <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH]A minor improvement to the error-report in\n SimpleLruWriteAll()" }, { "msg_contents": "\nHi,\nActually, I still wonder why only the error message\nof the last failure to close the file was recorded.\nFor this unusual situation, it is acceptable to\nrecord all failure information without causing\ntoo much logging.\nWas it designed that way on purpose?\n\n\n\n\nAt 2024-05-25 17:29:00, \"Long Song\" <[email protected]> wrote:\n\n\n\nHi hackers,\nWhen I read the code, I noticed that in SimpleLruWriteAll(), only the last error is \nrecorded when the file fails to close. Like the following,\n```void\nSimpleLruWriteAll(SlruCtl ctl, bool allow_redirtied)\n{\n        SlruShared      shared = ctl->shared;\n        SlruWriteAllData fdata;\n        int64           pageno = 0;\n        int                     prevbank = SlotGetBankNumber(0);\n        bool            ok;\n...\n        /*\n         * Now close any files that were open\n         */\n        ok = true;\n        for (int i = 0; i < fdata.num_files; i++)\n        {\n                if (CloseTransientFile(fdata.fd[i]) != 0)\n                {\n                        slru_errcause = SLRU_CLOSE_FAILED;\n                        slru_errno = errno;\n                        pageno = fdata.segno[i] * SLRU_PAGES_PER_SEGMENT;\n                        ok = false;\n                }\n        }\n        if (!ok)\n                SlruReportIOError(ctl, pageno, InvalidTransactionId);\n```\n// Here, SlruReportIOError() is called only once, meaning that the last error message is\n recorded. 
In my opinion, since failure to close a file is not common, logging an error \nmessage every time a failure occurs will not result in much log growth, but detailed error \nlogging will help in most cases.\n\nSo, I changed the code to move the call to SlruReportIOError() inside the while loop.\n\nAttached is the patch, I hope it can help.\n\n\n\n\n", "msg_date": "Tue, 28 May 2024 20:15:59 +0800 (CST)", "msg_from": "\"Long Song\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:[PATCH]A minor improvement to the error-report in\n SimpleLruWriteAll()" }, { "msg_contents": "At Tue, 28 May 2024 20:15:59 +0800 (CST), \"Long Song\" <[email protected]> wrote in \n> \n> Hi,\n> Actually, I still wonder why only the error message\n> of the last failure to close the file was recorded.\n> For this unusual situation, it is acceptable to\n> record all failure information without causing\n> too much logging.\n> Was it designed that way on purpose?\n\nNote that SlruReportIOError() causes a non-local exit. To me, the\npoint of the loop seems to be that we want to close every single file,\napart from the failed ones. From that perspective, the patch disrupts\nthat intended behavior by exiting in the middle of the loop. It seems\nwe didn't want to bother collecting errors for every failed file in\nthat part.\n \nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 04 Jun 2024 16:44:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH]A minor improvement to the error-report in\n SimpleLruWriteAll()" }, { "msg_contents": "Hi Kyotaro,\nThank you for the response.\n\n\n\nAt 2024-06-04 14:44:09, \"Kyotaro Horiguchi\" <[email protected]> wrote:\n>At Tue, 28 May 2024 20:15:59 +0800 (CST), \"Long Song\" <[email protected]> wrote in \n>> \n>> Hi,\n>> Actually, I still wonder why only the error message\n>> of the last failure to close the file was recorded.\n>> For this unusual situation, it is acceptable to\n>> record all failure information without causing\n>> too much logging.\n>> Was it designed that way on purpose?\n>\n>Note that SlruReportIOError() causes a non-local exit. To me, the\n>point of the loop seems to be that we want to close every single file,\n>apart from the failed ones. From that perspective, the patch disrupts\n>that intended behavior by exiting in the middle of the loop. It seems\n>we didn't want to bother collecting errors for every failed file in\n>that part.\n\nYeah, thanks for your reminder.\nIt was my mistake not to notice the ereport() exit in the function.\nBut is it necessary to record it in a log? If there is a benefit to\nlogging, I can submit a modified patch and record the necessary\nfailure information into the log in another way.\n\n> \n>regards.\n>\n>-- \n>Kyotaro Horiguchi\n>NTT Open Source Software Center\n\n\n\n\n\n\n\n\n\n\n\n\n--\nBest Regards,\n\nLong\n", "msg_date": "Tue, 4 Jun 2024 18:03:17 +0800 (CST)", "msg_from": "\"Long Song\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:Re: [PATCH]A minor improvement to the error-report in\n SimpleLruWriteAll()" }, { "msg_contents": "Hi,\nI modified the patch, kept the old call of SlruReportIOError()\noutside the for-loop, and add a call of erport() with LOG level\nwhen file-close failure occurs in the for-loop.\n\nPlease check whether this modification is appropriate\nand let me know if there is any problem. 
Thank you.\n\n\n\nAt 2024-06-04 14:44:09, \"Kyotaro Horiguchi\" <[email protected]> wrote:\n>At Tue, 28 May 2024 20:15:59 +0800 (CST), \"Long Song\" <[email protected]> wrote in \n>> \n>> Hi,\n>> Actually, I still wonder why only the error message\n>> of the last failure to close the file was recorded.\n>> For this unusual situation, it is acceptable to\n>> record all failure information without causing\n>> too much logging.\n>> Was it designed that way on purpose?\n>\n>Note that SlruReportIOError() causes a non-local exit. To me, the\n>point of the loop seems to be that we want to close every single file,\n>apart from the failed ones. From that perspective, the patch disrupts\n>that intended behavior by exiting in the middle of the loop. It seems\n>we didn't want to bother collecting errors for every failed file in\n>that part.\n> \n>regards.\n>\n>-- \n>Kyotaro Horiguchi\n>NTT Open Source Software Center\n\n\n\n\n\n\n\n\n\n\n\n\n--\nBest Regards,\n\nLong", "msg_date": "Wed, 5 Jun 2024 16:30:44 +0800 (CST)", "msg_from": "\"Long Song\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:Re: [PATCH]A minor improvement to the error-report in\n SimpleLruWriteAll()" } ]
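To make the outcome of the thread above concrete, below is a rough sketch of the revised close loop in SimpleLruWriteAll() as the last message describes it: each failed close is reported immediately at LOG level, while the existing SlruReportIOError() call (which raises an ERROR and exits non-locally) stays outside the loop so every file still gets a close attempt. The log-message wording and the use of the segment number are illustrative assumptions, not text from the actual patch.

```c
	/*
	 * Now close any files that were open; log each failure as it happens,
	 * but keep the single non-local-exit error report after the loop.
	 */
	ok = true;
	for (int i = 0; i < fdata.num_files; i++)
	{
		if (CloseTransientFile(fdata.fd[i]) != 0)
		{
			int			save_errno = errno;

			ereport(LOG,
					(errcode_for_file_access(),
					 errmsg("could not close SLRU segment %lld: %s",
							(long long) fdata.segno[i], strerror(save_errno))));

			slru_errcause = SLRU_CLOSE_FAILED;
			slru_errno = save_errno;
			pageno = fdata.segno[i] * SLRU_PAGES_PER_SEGMENT;
			ok = false;
		}
	}
	if (!ok)
		SlruReportIOError(ctl, pageno, InvalidTransactionId);
```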
[ { "msg_contents": "Hi Postgres gurus,\n\n I try to perform DELETE and INSERT queries in the Trigger\nfunction,\n\nBEGIN\nIF (TG_OP = 'DELETE') THEN\nDELETE FROM…;\nINSERT INTO….;\nRETURN OLD;\nELSIF (TG_OP = 'UPDATE' OR TG_OP = 'INSERT') THEN\nDELETE FROM…;\nINSERT INTO….;\nRETURN NEW;\nEND IF;\nRETURN NULL; -- result is ignored since this is an AFTER trigger\nEND;\n\nbut as it is synchronous the performance of the each queries will be high\nso I need to make the Queries in trigger function to be performed\nasynchronously, I had found some approaches like dblink and pg_background,\nIn db_link it creates a new connection which is also not suit for my case,\nit also comsumes time so I dropped it ☹.\n\nI tried pg_background to achieve async queries like\n\nDECLARE\n\nresult text;\n\nBEGIN\nIF (TG_OP = 'DELETE') THEN\nSELECT * FROM pg_background_result(pg_background_launch(sql_command)) as\n(result TEXT) INTO result;\n\n SELECT * FROM pg_background_result(pg_background_launch(sql_command)) as\n(result TEXT) INTO result;\nRETURN OLD;\nELSIF (TG_OP = 'UPDATE' OR TG_OP = 'INSERT') THEN\nSELECT * FROM pg_background_result(pg_background_launch(sql_command)) as\n(result TEXT) INTO result;\n\nSELECT * FROM pg_background_result(pg_background_launch(sql_command)) as\n(result TEXT) INTO result;\n\nRETURN NEW;\nEND IF;\nRETURN NULL; -- result is ignored since this is an AFTER trigger\nEND;\n\n Here also we are facing performance issue it consumes more\ntime than a direct sync Queries, So is this approach is correct for my case\nand how to achieve it by any other approach. I had tried with LISTEN NOTIFY\nas pg_notify() but I can’t listen and perform additional queries inside\npostgres itself so I have wrote a java application to listen for this\nnotification and perform the queries asynchronously it is working fine😊\nbut I need to reduce external dependency here so please look up this issue\nany suggestions most welcome..\n#postgresql\n\nHi Postgres gurus,              I try to perform DELETE and INSERT queries in the Trigger function,BEGIN  IF (TG_OP = 'DELETE') THEN    DELETE FROM…;    INSERT INTO….;    RETURN OLD;  ELSIF (TG_OP = 'UPDATE' OR TG_OP = 'INSERT') THEN     DELETE FROM…;    INSERT INTO….;    RETURN NEW;  END IF;  RETURN NULL; -- result is ignored since this is an AFTER triggerEND;but as it is synchronous the performance of the each queries will be high so I need to make the Queries in trigger function to be performed asynchronously, I had found some approaches like dblink and pg_background, In db_link it creates a new connection which is also not suit for my case, it also comsumes time so I dropped it ☹. I tried pg_background to achieve async queries likeDECLAREresult text;BEGIN  IF (TG_OP = 'DELETE') THEN     SELECT * FROM pg_background_result(pg_background_launch(sql_command)) as (result TEXT) INTO result;  SELECT * FROM pg_background_result(pg_background_launch(sql_command)) as (result TEXT) INTO result;    RETURN OLD;  ELSIF (TG_OP = 'UPDATE' OR TG_OP = 'INSERT') THEN    SELECT * FROM pg_background_result(pg_background_launch(sql_command)) as (result TEXT) INTO result;SELECT * FROM pg_background_result(pg_background_launch(sql_command)) as (result TEXT) INTO result;    RETURN NEW;  END IF;  RETURN NULL; -- result is ignored since this is an AFTER triggerEND;              Here also we are facing performance issue it consumes more time than a direct sync Queries, So is this approach is correct for my case and how to achieve it by any other approach. 
I had tried with LISTEN NOTIFY as pg_notify() but I can’t listen and perform additional queries inside postgres itself so I have wrote a java application to listen for this notification and perform the queries asynchronously it is working fine😊 but I need to reduce external dependency here so please look up this issue any suggestions most welcome..#postgresql", "msg_date": "Sat, 25 May 2024 18:58:44 +0530", "msg_from": "_sanjiv_ SK <[email protected]>", "msg_from_op": true, "msg_subject": "Reg - pg_background async triggers" } ]
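Since the goal is to drop the external Java dependency while keeping the LISTEN/NOTIFY design, the listener can also be a small libpq program in C; it is still a separate worker process, but no JVM is needed. A minimal sketch follows -- the channel name 'table_changed', the conninfo string, and the procedure process_pending_changes() are all assumptions made up for the example, and error handling is mostly omitted:

#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=mydb");  /* assumed conninfo */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* subscribe to the channel the trigger signals via pg_notify() */
    PQclear(PQexec(conn, "LISTEN table_changed"));

    for (;;)
    {
        int         sock = PQsocket(conn);
        fd_set      input_mask;
        PGnotify   *notify;

        FD_ZERO(&input_mask);
        FD_SET(sock, &input_mask);
        if (select(sock + 1, &input_mask, NULL, NULL, NULL) < 0)
            break;              /* interrupted; a real program would retry */

        PQconsumeInput(conn);
        while ((notify = PQnotifies(conn)) != NULL)
        {
            /* run a fixed, pre-agreed statement; the procedure name is
             * hypothetical -- do not execute notification payloads verbatim */
            PQclear(PQexec(conn, "CALL process_pending_changes()"));
            PQfreemem(notify);
        }
    }

    PQfinish(conn);
    return 0;
}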
[ { "msg_contents": "Hi hackers,\n\nIn commit 4908c58[^1], a vectored variant of smgrwrite() is added and\nthe assertion condition in mdwritev() no longer matches the comment.\nThis patch helps fix it.\n\n[^1]: https://github.com/postgres/postgres/commit/4908c5872059c409aa647bcde758dfeffe07996e\n\nBest Regards,\nXing", "msg_date": "Sat, 25 May 2024 23:52:22 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Fix an incorrect assertion condition in mdwritev()." }, { "msg_contents": "On Sat, May 25, 2024 at 11:52:22PM +0800, Xing Guo wrote:\n> In commit 4908c58[^1], a vectored variant of smgrwrite() is added and\n> the assertion condition in mdwritev() no longer matches the comment.\n> This patch helps fix it.\n>\n> \t/* This assert is too expensive to have on normally ... */\n> #ifdef CHECK_WRITE_VS_EXTEND\n> -\tAssert(blocknum < mdnblocks(reln, forknum));\n> +\tAssert(blocknum + nblocks <= mdnblocks(reln, forknum));\n> #endif\n\nYes, it looks like you're right that this can be made stricter,\ncomputing the number of blocks we're adding in the number calculated\n(aka adding one block to this number fails immediately at initdb).\n--\nMichael", "msg_date": "Sun, 26 May 2024 09:22:06 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix an incorrect assertion condition in mdwritev()." }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Sat, May 25, 2024 at 11:52:22PM +0800, Xing Guo wrote:\n>> #ifdef CHECK_WRITE_VS_EXTEND\n>> -\tAssert(blocknum < mdnblocks(reln, forknum));\n>> +\tAssert(blocknum + nblocks <= mdnblocks(reln, forknum));\n>> #endif\n\n> Yes, it looks like you're right that this can be made stricter,\n> computing the number of blocks we're adding in the number calculated\n> (aka adding one block to this number fails immediately at initdb).\n\nHmm ... I agree that this is better normally. But there's an\nedge case where it would fail to notice a problem that the\nexisting code does notice: if blocknum is close to UINT32_MAX\nand adding nblocks causes it to wrap around to a small value.\nIs there an inexpensive way to catch that? (If not, it's\nnot a reason to block this patch; but let's think about it\nwhile we're here.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 May 2024 23:59:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix an incorrect assertion condition in mdwritev()." }, { "msg_contents": "I wrote:\n> Hmm ... I agree that this is better normally. But there's an\n> edge case where it would fail to notice a problem that the\n> existing code does notice: if blocknum is close to UINT32_MAX\n> and adding nblocks causes it to wrap around to a small value.\n> Is there an inexpensive way to catch that?\n\nAfter a few minutes' thought, how about:\n\n\tAssert((uint64) blocknum + (uint64) nblocks <= (uint64) mdnblocks(reln, forknum));\n\nThis'd stop being helpful if we ever widen BlockNumber to 64 bits,\nbut I think that's unlikely. (Partitioning seems like a better answer\nfor giant tables.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 May 2024 00:08:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix an incorrect assertion condition in mdwritev()." 
}, { "msg_contents": "On Sun, May 26, 2024 at 12:08:46AM -0400, Tom Lane wrote:\n> After a few minutes' thought, how about:\n> \n> \tAssert((uint64) blocknum + (uint64) nblocks <= (uint64) mdnblocks(reln, forknum));\n\nLGTM. Yeah that should be OK this way.\n--\nMichael", "msg_date": "Sun, 26 May 2024 20:07:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix an incorrect assertion condition in mdwritev()." }, { "msg_contents": "On Sun, May 26, 2024 at 12:08:46AM -0400, Tom Lane wrote:\n> After a few minutes' thought, how about:\n> \n> \tAssert((uint64) blocknum + (uint64) nblocks <= (uint64) mdnblocks(reln, forknum));\n> \n> This'd stop being helpful if we ever widen BlockNumber to 64 bits,\n> but I think that's unlikely. (Partitioning seems like a better answer\n> for giant tables.)\n\nNo idea if this will happen or not, but that's not the only area where\nwe are going to need a native uint128 implementation to control the\noverflows with uint64.\n\nWhat you are suggesting is good enough for me, so I've applied on HEAD\na version using that.\n--\nMichael", "msg_date": "Tue, 4 Jun 2024 07:17:51 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix an incorrect assertion condition in mdwritev()." }, { "msg_contents": "Hi,\n\nOn 2024-06-04 07:17:51 +0900, Michael Paquier wrote:\n> On Sun, May 26, 2024 at 12:08:46AM -0400, Tom Lane wrote:\n> > After a few minutes' thought, how about:\n> > \n> > \tAssert((uint64) blocknum + (uint64) nblocks <= (uint64) mdnblocks(reln, forknum));\n> > \n> > This'd stop being helpful if we ever widen BlockNumber to 64 bits,\n> > but I think that's unlikely. (Partitioning seems like a better answer\n> > for giant tables.)\n> \n> No idea if this will happen or not, but that's not the only area where\n> we are going to need a native uint128 implementation to control the\n> overflows with uint64.\n\nI'm confused - isn't using common/int.h entirely sufficient for that? Nearly\nall architectures have more efficient ways to check for 64bit overflows than\ndoing actual 128 bit math.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Jun 2024 15:24:07 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix an incorrect assertion condition in mdwritev()." }, { "msg_contents": "On Mon, Jun 03, 2024 at 03:24:07PM -0700, Andres Freund wrote:\n> I'm confused - isn't using common/int.h entirely sufficient for that? Nearly\n> all architectures have more efficient ways to check for 64bit overflows than\n> doing actual 128 bit math.\n\nOh, right. We could just plug in pg_add_u32_overflow here. Funny\nthing is that I'm the one who committed these toys with\n__builtin_add_overflow(), still nobody has found a case where this one\nwould be useful. At least until now.\n--\nMichael", "msg_date": "Tue, 4 Jun 2024 07:43:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix an incorrect assertion condition in mdwritev()." }, { "msg_contents": "On Mon, Jun 03, 2024 at 03:24:07PM -0700, Andres Freund wrote:\n> I'm confused - isn't using common/int.h entirely sufficient for that? Nearly\n> all architectures have more efficient ways to check for 64bit overflows than\n> doing actual 128 bit math.\n\nOne simple way to change the assertion would be something like that, I\nassume. 
Andres, does it answer your concerns?\n--\nMichael", "msg_date": "Wed, 5 Jun 2024 13:10:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix an incorrect assertion condition in mdwritev()." } ]
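For anyone comparing the two spellings side by side: the committed assertion widens the arithmetic to uint64 so the addition cannot wrap, while a common/int.h-based variant would let the helper report the wraparound explicitly. The second form below is only a sketch of what Andres' suggestion might look like (it assumes nblocks is non-negative, which holds at this call site); it has not been proposed in this thread as a patch:

    /* committed form: do the arithmetic in uint64 so it cannot wrap */
    Assert((uint64) blocknum + (uint64) nblocks <=
           (uint64) mdnblocks(reln, forknum));

    /* sketch of an equivalent check using pg_add_u32_overflow()
     * from common/int.h */
    {
        uint32      lastblock;

        Assert(!pg_add_u32_overflow(blocknum, (uint32) nblocks, &lastblock) &&
               lastblock <= mdnblocks(reln, forknum));
    }

Either way, a blocknum near UINT32_MAX that wraps when nblocks is added now trips the assertion instead of slipping past it.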
[ { "msg_contents": "This patch refactors the alignment checks for direct I/O to preprocess phase,\nthereby reducing some CPU cycles.\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Sun, 26 May 2024 15:16:19 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Improve conditional compilation for direct I/O alignment checks" }, { "msg_contents": "On Sun, May 26, 2024 at 3:16 PM Junwang Zhao <[email protected]> wrote:\n>\n> This patch refactors the alignment checks for direct I/O to preprocess phase,\n> thereby reducing some CPU cycles.\n>\n> --\n> Regards\n> Junwang Zhao\n\nPatch v2 with some additional minor polishment of the comments in `mdwriteback`.\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Sun, 26 May 2024 18:35:10 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve conditional compilation for direct I/O alignment checks" }, { "msg_contents": "On 26.05.24 09:16, Junwang Zhao wrote:\n> This patch refactors the alignment checks for direct I/O to preprocess phase,\n> thereby reducing some CPU cycles.\n\nThis patch replaces for example\n\n if (PG_O_DIRECT != 0 && PG_IO_ALIGN_SIZE <= BLCKSZ)\n Assert((uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer));\n\nwith\n\n #if PG_O_DIRECT != 0 && PG_IO_ALIGN_SIZE <= BLCKSZ\n Assert((uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer));\n #endif\n\nYou appear to be assuming that this saves some CPU cycles. But this is \nnot the case. The compiler will remove the unused code in the first \ncase because all the constants in the if expression are known at \ncompile-time.\n\n\n", "msg_date": "Sun, 26 May 2024 23:52:34 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve conditional compilation for direct I/O alignment checks" } ]
[ { "msg_contents": "I got a report on the PgBouncer repo[1] that running DISCARD ALL was\nnot sufficient between connection handoffs to force replanning of\nstored procedures. Turns out that while DISCARD AL and DISCARD PLAN\nreset the plan cache they do not reset the num_custom_plans fields of\nthe existing PlanSources. So while the generic plan is re-planned\nafter DISCARD ALL, the decision on whether to choose it or not won't\nbe changed. See below for a minimally reproducing example:\n\n\ncreate table test_mode (a int);\ninsert into test_mode select 1 from generate_series(1,1000000) union\nall select 2;\ncreate index on test_mode (a);\nanalyze test_mode;\n\ncreate function test_mode_func(int)\nreturns integer as $$\ndeclare\n v_count integer;\nbegin\n select into v_count count(*) from test_mode where a = $1;\n return v_count;\nEND;\n$$ language plpgsql;\n\n\\timing on\n-- trigger execution 5 times\nSELECT test_mode_func(1);\nSELECT test_mode_func(1);\nSELECT test_mode_func(1);\nSELECT test_mode_func(1);\nSELECT test_mode_func(1);\nDISCARD ALL;\n-- slow because of bad plan, even after DISCARD ALL\nSELECT test_mode_func(2);\n\\c\n-- fast after re-connect, because of custom plan\nSELECT test_mode_func(2);\n\n\n\n[1]: https://github.com/pgbouncer/pgbouncer/issues/912#issuecomment-2131250109\n\n\n", "msg_date": "Sun, 26 May 2024 19:15:54 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "DISCARD ALL does not force re-planning of plpgsql\n functions/procedures" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> I got a report on the PgBouncer repo[1] that running DISCARD ALL was\n> not sufficient between connection handoffs to force replanning of\n> stored procedures. Turns out that while DISCARD AL and DISCARD PLAN\n> reset the plan cache they do not reset the num_custom_plans fields of\n> the existing PlanSources. So while the generic plan is re-planned\n> after DISCARD ALL, the decision on whether to choose it or not won't\n> be changed.\n\nHm, should it be? That's hard-won knowledge, and I'm not sure there\nis a good reason to believe it's no longer applicable.\n\nNote that any change in behavior there would affect prepared\nstatements in general, not only plpgsql.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 May 2024 13:39:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DISCARD ALL does not force re-planning of plpgsql\n functions/procedures" }, { "msg_contents": "On Sun, 26 May 2024 at 19:39, Tom Lane <[email protected]> wrote:\n> Hm, should it be? That's hard-won knowledge, and I'm not sure there\n> is a good reason to believe it's no longer applicable.\n\nI think for DISCARD ALL it would probably make sense to forget this\nknowledge . Since that is advertised as \"reset the session to its initial\nstate\". DISCARD PLANS should probably forget about it though indeed.\n\n> Note that any change in behavior there would affect prepared\n> statements in general, not only plpgsql.\n\nDISCARD ALL already removes all prepared statements and thus their run\ncounts, so for prepared statements there would be no difference there.\n\nOn Sun, 26 May 2024 at 19:39, Tom Lane <[email protected]> wrote:\n> Hm, should it be?  That's hard-won knowledge, and I'm not sure there\n> is a good reason to believe it's no longer applicable.\nI think for DISCARD ALL it would probably make sense to forget this knowledge . 
Since that is advertised as \"reset the session to its initial state\". DISCARD PLANS should probably forget about it though indeed. \n\n> Note that any change in behavior there would affect prepared\n> statements in general, not only plpgsql.DISCARD ALL already removes all prepared statements and thus their run counts, so for prepared statements there would be no difference there.", "msg_date": "Sun, 26 May 2024 12:26:51 -0700", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DISCARD ALL does not force re-planning of plpgsql\n functions/procedures" }, { "msg_contents": "ne 26. 5. 2024 v 21:27 odesílatel Jelte Fennema-Nio <[email protected]>\nnapsal:\n\n> On Sun, 26 May 2024 at 19:39, Tom Lane <[email protected]> wrote:\n> > Hm, should it be? That's hard-won knowledge, and I'm not sure there\n> > is a good reason to believe it's no longer applicable.\n>\n> I think for DISCARD ALL it would probably make sense to forget this\n> knowledge . Since that is advertised as \"reset the session to its initial\n> state\". DISCARD PLANS should probably forget about it though indeed.\n>\n> > Note that any change in behavior there would affect prepared\n> > statements in general, not only plpgsql.\n>\n> DISCARD ALL already removes all prepared statements and thus their run\n> counts, so for prepared statements there would be no difference there.\n>\n\n+1\n\nPavel\n\nne 26. 5. 2024 v 21:27 odesílatel Jelte Fennema-Nio <[email protected]> napsal:On Sun, 26 May 2024 at 19:39, Tom Lane <[email protected]> wrote:\n> Hm, should it be?  That's hard-won knowledge, and I'm not sure there\n> is a good reason to believe it's no longer applicable.\nI think for DISCARD ALL it would probably make sense to forget this knowledge . Since that is advertised as \"reset the session to its initial state\". DISCARD PLANS should probably forget about it though indeed. \n\n> Note that any change in behavior there would affect prepared\n> statements in general, not only plpgsql.DISCARD ALL already removes all prepared statements and thus their run counts, so for prepared statements there would be no difference there. +1Pavel", "msg_date": "Sun, 26 May 2024 21:30:36 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DISCARD ALL does not force re-planning of plpgsql\n functions/procedures" }, { "msg_contents": "On Sun, May 26, 2024, 12:26 Jelte Fennema-Nio <[email protected]> wrote:\n\n> DISCARD PLANS should probably forget about it though indeed.\n>\n\nDISCARD PLANS should probably **not** forget about it\n\n\n> > Note that any change in behavior there would affect prepared\n> > statements in general, not only plpgsql.\n>\n> DISCARD ALL already removes all prepared statements and thus their run\n> counts, so for prepared statements there would be no difference there.\n>\n\nOn Sun, May 26, 2024, 12:26 Jelte Fennema-Nio <[email protected]> wrote:DISCARD PLANS should probably forget about it though indeed. 
DISCARD PLANS should probably **not** forget about it\n\n> Note that any change in behavior there would affect prepared\n> statements in general, not only plpgsql.DISCARD ALL already removes all prepared statements and thus their run counts, so for prepared statements there would be no difference there.", "msg_date": "Sun, 26 May 2024 12:34:36 -0700", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DISCARD ALL does not force re-planning of plpgsql\n functions/procedures" }, { "msg_contents": "On Sun, 26 May 2024 at 19:39, Tom Lane <[email protected]> wrote:\n> Hm, should it be? That's hard-won knowledge, and I'm not sure there\n> is a good reason to believe it's no longer applicable.\n\nOkay, so I looked into this a bit more and there's a similar case here\nthat's imho very clearly doing something incorrectly: num_custom_plans\nof prepared statements is not reset when you change the search_path.\nWhen the search_path is changed, there's no reason to assume that the\nprevious generic plans have any relevance to the new generic plans,\nbecause the tables that are being accessed might be completely\ndifferent. See below for an (imho) obviously incorrect choice of using\nthe generic plan after changing search_path. Maybe the fix for this\nissue should be that if a plan gets invalidated, then num_custom_plans\nfor the source of that plan should be set to zero too. So to be clear,\nthat means I now think that DISCARD PLANS should also reset\nnum_custom_plans (as opposed to what I said before).\n\ncreate schema a;\ncreate schema b;\ncreate table a.test_mode (a int);\ncreate table b.test_mode (a int);\ninsert into a.test_mode select 1 from generate_series(1,1000000) union\nall select 2;\ninsert into b.test_mode select 2 from generate_series(1,1000000) union\nall select 1;\ncreate index on a.test_mode (a);\ncreate index on b.test_mode (a);\nanalyze a.test_mode;\nanalyze b.test_mode;\n\nSET search_path = a;\nPREPARE test_mode_func(int) as select count(*) from test_mode where a = $1;\n\n\\timing on\n-- trigger execution 5 times\nEXECUTE test_mode_func(1);\nEXECUTE test_mode_func(1);\nEXECUTE test_mode_func(1);\nEXECUTE test_mode_func(1);\nEXECUTE test_mode_func(1);\n-- slow because of bad plan, even after changing search_path\nSET search_path = b;\nEXECUTE test_mode_func(1);\n\\c\n-- fast after re-connect, because of custom plan\nSET search_path = a;\nPREPARE test_mode_func(int) as select count(*) from test_mode where a = $1;\nSET search_path = b;\nEXECUTE test_mode_func(1);\n\n\n", "msg_date": "Mon, 27 May 2024 08:47:51 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DISCARD ALL does not force re-planning of plpgsql\n functions/procedures" } ]
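For concreteness, the kind of change being floated -- forgetting the generic-vs-custom statistics whenever cached plans are reset or invalidated -- could be sketched inside plancache.c's ResetPlanCache() along the following lines. This is only an illustration of the idea; it reuses the existing saved_plan_list walk and the CachedPlanSource fields as I remember them, and it is not a tested or proposed patch:

    dlist_iter  iter;

    dlist_foreach(iter, &saved_plan_list)
    {
        CachedPlanSource *plansource = dlist_container(CachedPlanSource,
                                                       node, iter.cur);

        /* ... existing code already marks the cached plan as invalid ... */

        /*
         * Additionally forget the adaptive-plan history, so the first few
         * executions after the reset re-learn whether a generic plan is
         * actually competitive with custom plans.
         */
        plansource->num_custom_plans = 0;
        plansource->total_custom_cost = 0.0;
    }

A similar reset at plan-invalidation time would also cover the search_path example above, where the accumulated statistics describe plans against entirely different tables.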
[ { "msg_contents": "Hi.\n\nI think that commit 61461a3\n<https://github.com/postgres/postgres/commit/61461a300c1cb5d53955ecd792ad0ce75a104736>,\nleft some oversight.\nThe function *PQcancelCreate* fails in check,\nreturn of *calloc* function.\n\nTrivial fix is attached.\n\nBut, IMO, I think that has more problems.\nIf any allocation fails, all allocations must be cleared.\nOr is the current behavior acceptable?\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 27 May 2024 09:25:45 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Fix calloc check if oom (PQcancelCreate)" }, { "msg_contents": "> On 27 May 2024, at 14:25, Ranier Vilela <[email protected]> wrote:\n\n> I think that commit 61461a3, left some oversight.\n> The function *PQcancelCreate* fails in check,\n> return of *calloc* function.\n> \n> Trivial fix is attached.\n\nAgreed, this looks like a copy/paste from the calloc calls a few lines up.\n\n> But, IMO, I think that has more problems.\n> If any allocation fails, all allocations must be cleared.\n> Or is the current behavior acceptable?\n\nSince this is frontend library code I think we should free all the allocations\nin case of OOM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 27 May 2024 15:22:57 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix calloc check if oom (PQcancelCreate)" }, { "msg_contents": "Hi Daniel,\n\nEm seg., 27 de mai. de 2024 às 10:23, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 27 May 2024, at 14:25, Ranier Vilela <[email protected]> wrote:\n>\n> > I think that commit 61461a3, left some oversight.\n> > The function *PQcancelCreate* fails in check,\n> > return of *calloc* function.\n> >\n> > Trivial fix is attached.\n>\n> Agreed, this looks like a copy/paste from the calloc calls a few lines up.\n>\nYeah.\n\n\n>\n> > But, IMO, I think that has more problems.\n> > If any allocation fails, all allocations must be cleared.\n> > Or is the current behavior acceptable?\n>\n> Since this is frontend library code I think we should free all the\n> allocations\n> in case of OOM.\n>\nAgreed.\n\nWith v1 patch, it is handled.\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 27 May 2024 11:06:30 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix calloc check if oom (PQcancelCreate)" }, { "msg_contents": "On Mon, 27 May 2024 at 16:06, Ranier Vilela <[email protected]> wrote:\n> Em seg., 27 de mai. de 2024 às 10:23, Daniel Gustafsson <[email protected]> escreveu:\n>> > On 27 May 2024, at 14:25, Ranier Vilela <[email protected]> wrote:\n>> > I think that commit 61461a3, left some oversight.\n>> > The function *PQcancelCreate* fails in check,\n>> > return of *calloc* function.\n>> >\n>> > Trivial fix is attached.\n>>\n>> Agreed, this looks like a copy/paste from the calloc calls a few lines up.\n>\n> Yeah.\n\nAgreed, this was indeed a copy paste mistake\n\n\n>> > But, IMO, I think that has more problems.\n>> > If any allocation fails, all allocations must be cleared.\n>> > Or is the current behavior acceptable?\n>>\n>> Since this is frontend library code I think we should free all the allocations\n>> in case of OOM.\n>\n> Agreed.\n>\n> With v1 patch, it is handled.\n\nI much prefer the original trivial patch to the v1. Even in case of\nOOM users are expected to call PQcancelFinish on a non-NULL result,\nwhich in turn calls freePGConn. 
And that function will free any\npartially initialized PGconn correctly. This is also how\npqConnectOptions2 works.\n\n\n", "msg_date": "Mon, 27 May 2024 17:40:23 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix calloc check if oom (PQcancelCreate)" }, { "msg_contents": "Em seg., 27 de mai. de 2024 às 12:40, Jelte Fennema-Nio <[email protected]>\nescreveu:\n\n> On Mon, 27 May 2024 at 16:06, Ranier Vilela <[email protected]> wrote:\n> > Em seg., 27 de mai. de 2024 às 10:23, Daniel Gustafsson <[email protected]>\n> escreveu:\n> >> > On 27 May 2024, at 14:25, Ranier Vilela <[email protected]> wrote:\n> >> > I think that commit 61461a3, left some oversight.\n> >> > The function *PQcancelCreate* fails in check,\n> >> > return of *calloc* function.\n> >> >\n> >> > Trivial fix is attached.\n> >>\n> >> Agreed, this looks like a copy/paste from the calloc calls a few lines\n> up.\n> >\n> > Yeah.\n>\n> Agreed, this was indeed a copy paste mistake\n>\n>\n> >> > But, IMO, I think that has more problems.\n> >> > If any allocation fails, all allocations must be cleared.\n> >> > Or is the current behavior acceptable?\n> >>\n> >> Since this is frontend library code I think we should free all the\n> allocations\n> >> in case of OOM.\n> >\n> > Agreed.\n> >\n> > With v1 patch, it is handled.\n>\n> I much prefer the original trivial patch to the v1. Even in case of\n> OOM users are expected to call PQcancelFinish on a non-NULL result,\n> which in turn calls freePGConn.\n\nIs it mandatory to call PQcancelFinish in case PQcancelCreate fails?\nIMO, I would expect problems with users.\n\nAnd that function will free any\n> partially initialized PGconn correctly. This is also how\n> pqConnectOptions2 works.\n>\nWell, I think that function *pqReleaseConnHost*, is incomplete.\n1. Don't free connhost field;\n2. Don't free addr field;\n3. Leave nconnhost and naddr, non-zero.\n\nSo trust in *pqReleaseConnHost *, to clean properly, It's insecure.\n\nbest regards,\nRanier Vilela\n\nEm seg., 27 de mai. de 2024 às 12:40, Jelte Fennema-Nio <[email protected]> escreveu:On Mon, 27 May 2024 at 16:06, Ranier Vilela <[email protected]> wrote:\n> Em seg., 27 de mai. de 2024 às 10:23, Daniel Gustafsson <[email protected]> escreveu:\n>> > On 27 May 2024, at 14:25, Ranier Vilela <[email protected]> wrote:\n>> > I think that commit 61461a3, left some oversight.\n>> > The function *PQcancelCreate* fails in check,\n>> > return of *calloc* function.\n>> >\n>> > Trivial fix is attached.\n>>\n>> Agreed, this looks like a copy/paste from the calloc calls a few lines up.\n>\n> Yeah.\n\nAgreed, this was indeed a copy paste mistake\n\n\n>> > But, IMO, I think that has more problems.\n>> > If any allocation fails, all allocations must be cleared.\n>> > Or is the current behavior acceptable?\n>>\n>> Since this is frontend library code I think we should free all the allocations\n>> in case of OOM.\n>\n> Agreed.\n>\n> With v1 patch, it is handled.\n\nI much prefer the original trivial patch to the v1. Even in case of\nOOM users are expected to call PQcancelFinish on a non-NULL result,\nwhich in turn calls freePGConn.Is it mandatory to call PQcancelFinish in case PQcancelCreate fails?IMO, I would expect problems with users. And that function will free any\npartially initialized PGconn correctly. This is also how\npqConnectOptions2 works.Well, I think that function *pqReleaseConnHost*, is incomplete.1. Don't free connhost field;2. Don't free addr field;3. 
Leave nconnhost and naddr, non-zero.So trust in *pqReleaseConnHost\n\n*, to clean properly, It's insecure.best regards,Ranier Vilela", "msg_date": "Mon, 27 May 2024 13:16:15 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix calloc check if oom (PQcancelCreate)" }, { "msg_contents": "On Mon, 27 May 2024 at 18:16, Ranier Vilela <[email protected]> wrote:\n> Is it mandatory to call PQcancelFinish in case PQcancelCreate fails?\n\n\nYes, see the following line in the docs:\n\nNote that when PQcancelCreate returns a non-null pointer, you must\ncall PQcancelFinish when you are finished with it, in order to dispose\nof the structure and any associated memory blocks. This must be done\neven if the cancel request failed or was abandoned.\n\nSource: https://www.postgresql.org/docs/17/libpq-cancel.html#LIBPQ-PQCANCELCREATE\n\n> IMO, I would expect problems with users.\n\nThis pattern is taken from regular connection creation and is really\ncommon in libpq code so I don't expect issues. And even if there were\nan issue, your v1 patch would not be nearly enough. Because you're\nonly freeing the connhost and addr field now in case of OOM. But there\nare many more fields that would need to be freed.\n\n>> And that function will free any\n>> partially initialized PGconn correctly. This is also how\n>> pqConnectOptions2 works.\n>\n> Well, I think that function *pqReleaseConnHost*, is incomplete.\n> 1. Don't free connhost field;\n\nehm... it does free that afaict? It only doesn't set it to NULL. Which\nindeed would be good to do, but not doing so doesn't cause any issues\nwith it's current 2 usages afaict.\n\n> 2. Don't free addr field;\n\nThat's what release_conn_addrinfo is for.\n\n> 3. Leave nconnhost and naddr, non-zero.\n\nI agree that it would be good to do that pqReleaseConnHost and\nrelease_conn_addrinfo. But also here, I don't see any issues caused by\nnot doing that currently.\n\nSo overall I agree pqReleaseConnHost and release_conn_addrinfo can be\nimproved for easier safe usage in the future, but I don't think those\nimprovements should be grouped into the same commit with an actual\nbugfix.\n\n\n", "msg_date": "Mon, 27 May 2024 18:47:39 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix calloc check if oom (PQcancelCreate)" }, { "msg_contents": "Em seg., 27 de mai. de 2024 às 13:47, Jelte Fennema-Nio <[email protected]>\nescreveu:\n\n> On Mon, 27 May 2024 at 18:16, Ranier Vilela <[email protected]> wrote:\n> > Is it mandatory to call PQcancelFinish in case PQcancelCreate fails?\n>\n>\n> Yes, see the following line in the docs:\n>\n> Note that when PQcancelCreate returns a non-null pointer, you must\n> call PQcancelFinish when you are finished with it, in order to dispose\n> of the structure and any associated memory blocks. This must be done\n> even if the cancel request failed or was abandoned.\n>\n> Source:\n> https://www.postgresql.org/docs/17/libpq-cancel.html#LIBPQ-PQCANCELCREATE\n>\n> > IMO, I would expect problems with users.\n>\n> This pattern is taken from regular connection creation and is really\n> common in libpq code so I don't expect issues. And even if there were\n> an issue, your v1 patch would not be nearly enough. Because you're\n> only freeing the connhost and addr field now in case of OOM. But there\n> are many more fields that would need to be freed.\n>\n> >> And that function will free any\n> >> partially initialized PGconn correctly. 
This is also how\n> >> pqConnectOptions2 works.\n> >\n> > Well, I think that function *pqReleaseConnHost*, is incomplete.\n> > 1. Don't free connhost field;\n>\n> ehm... it does free that afaict? It only doesn't set it to NULL. Which\n> indeed would be good to do, but not doing so doesn't cause any issues\n> with it's current 2 usages afaict.\n>\n> > 2. Don't free addr field;\n>\n> That's what release_conn_addrinfo is for.\n>\n> > 3. Leave nconnhost and naddr, non-zero.\n>\n> I agree that it would be good to do that pqReleaseConnHost and\n> release_conn_addrinfo. But also here, I don't see any issues caused by\n> not doing that currently.\n>\n> So overall I agree pqReleaseConnHost and release_conn_addrinfo can be\n> improved for easier safe usage in the future, but I don't think those\n> improvements should be grouped into the same commit with an actual\n> bugfix.\n>\nThanks for detailed comments.\nI agreed to apply the trivial fix.\n\nWould you like to take charge of pqReleaseConnHost and\nrelease_conn_addrinfo improvements?\n\nbest regards,\nRanier Vilela\n\nEm seg., 27 de mai. de 2024 às 13:47, Jelte Fennema-Nio <[email protected]> escreveu:On Mon, 27 May 2024 at 18:16, Ranier Vilela <[email protected]> wrote:\n> Is it mandatory to call PQcancelFinish in case PQcancelCreate fails?\n\n\nYes, see the following line in the docs:\n\nNote that when PQcancelCreate returns a non-null pointer, you must\ncall PQcancelFinish when you are finished with it, in order to dispose\nof the structure and any associated memory blocks. This must be done\neven if the cancel request failed or was abandoned.\n\nSource: https://www.postgresql.org/docs/17/libpq-cancel.html#LIBPQ-PQCANCELCREATE\n\n> IMO, I would expect problems with users.\n\nThis pattern is taken from regular connection creation and is really\ncommon in libpq code so I don't expect issues. And even if there were\nan issue, your v1 patch would not be nearly enough. Because you're\nonly freeing the connhost and addr field now in case of OOM. But there\nare many more fields that would need to be freed.\n\n>> And that function will free any\n>> partially initialized PGconn correctly. This is also how\n>> pqConnectOptions2 works.\n>\n> Well, I think that function *pqReleaseConnHost*, is incomplete.\n> 1. Don't free connhost field;\n\nehm... it does free that afaict? It only doesn't set it to NULL. Which\nindeed would be good to do, but not doing so doesn't cause any issues\nwith it's current 2 usages afaict.\n\n> 2. Don't free addr field;\n\nThat's what release_conn_addrinfo is for.\n\n> 3. Leave nconnhost and naddr, non-zero.\n\nI agree that it would be good to do that pqReleaseConnHost and\nrelease_conn_addrinfo. 
But also here, I don't see any issues caused by\nnot doing that currently.\n\nSo overall I agree pqReleaseConnHost and release_conn_addrinfo can be\nimproved for easier safe usage in the future, but I don't think those\nimprovements should be grouped into the same commit with an actual\nbugfix.Thanks for detailed comments.I agreed to apply the trivial fix.Would you like to take charge of \npqReleaseConnHost and release_conn_addrinfo improvements?best regards,Ranier Vilela", "msg_date": "Mon, 27 May 2024 14:28:35 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix calloc check if oom (PQcancelCreate)" }, { "msg_contents": "> On 27 May 2024, at 18:47, Jelte Fennema-Nio <[email protected]> wrote:\n\n> So overall I agree pqReleaseConnHost and release_conn_addrinfo can be\n> improved for easier safe usage in the future, but I don't think those\n> improvements should be grouped into the same commit with an actual\n> bugfix.\n\nI have pushed the fix for the calloc check for now.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 27 May 2024 19:42:47 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix calloc check if oom (PQcancelCreate)" }, { "msg_contents": "Em seg., 27 de mai. de 2024 às 14:42, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 27 May 2024, at 18:47, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> > So overall I agree pqReleaseConnHost and release_conn_addrinfo can be\n> > improved for easier safe usage in the future, but I don't think those\n> > improvements should be grouped into the same commit with an actual\n> > bugfix.\n>\n> I have pushed the fix for the calloc check for now.\n>\nThanks Daniel.\n\nbest regards,\nRanier Vilela\n\nEm seg., 27 de mai. de 2024 às 14:42, Daniel Gustafsson <[email protected]> escreveu:> On 27 May 2024, at 18:47, Jelte Fennema-Nio <[email protected]> wrote:\n\n> So overall I agree pqReleaseConnHost and release_conn_addrinfo can be\n> improved for easier safe usage in the future, but I don't think those\n> improvements should be grouped into the same commit with an actual\n> bugfix.\n\nI have pushed the fix for the calloc check for now.Thanks Daniel.best regards,Ranier Vilela", "msg_date": "Mon, 27 May 2024 14:51:04 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix calloc check if oom (PQcancelCreate)" } ]
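For readers who want to see the shape of the fix without opening fe-cancel.c: the problem was simply that the null check after the calloc tested the source connection's pointer instead of the one just allocated for the cancel connection. Approximately (the surrounding details and allocation arguments are quoted from memory, not a literal diff):

    /* inside PQcancelCreate() */
    cancelConn->connhost = calloc(1, sizeof(pg_conn_host));
    if (cancelConn->connhost == NULL)   /* the bug tested conn->connhost here */
        goto oom_error;

Per the discussion above, no extra per-field cleanup is needed in the OOM path itself: callers are documented to call PQcancelFinish() even on failure, and that ends up in freePGConn(), which frees whatever was partially allocated.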
[ { "msg_contents": "On 2024-May-27, Alvaro Herrera wrote:\n\n> > JSON_SERIALIZE()\n\nI just noticed this behavior, which looks like a bug to me:\n\nselect json_serialize('{\"a\":1, \"a\":2}' returning varchar(5));\n json_serialize \n────────────────\n {\"a\":\n\nI think this function should throw an error if the destination type\ndoesn't have room for the output json. Otherwise, what good is the\nserialization function?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 27 May 2024 15:26:58 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Monday, May 27, 2024, Alvaro Herrera <[email protected]> wrote:\n\n> On 2024-May-27, Alvaro Herrera wrote:\n>\n> > > JSON_SERIALIZE()\n>\n> I just noticed this behavior, which looks like a bug to me:\n>\n> select json_serialize('{\"a\":1, \"a\":2}' returning varchar(5));\n> json_serialize\n> ────────────────\n> {\"a\":\n>\n> I think this function should throw an error if the destination type\n> doesn't have room for the output json. Otherwise, what good is the\n> serialization function?\n>\n>\nIt’s not a self-evident bug given that this is exactly how casting data to\nvarchar(n) behaves as directed by the SQL Standard.\n\nI'd probably leave the internal consistency and take the opportunity to\neducate the reader that text is the preferred type in PostgreSQL and,\nespecially here, there is little good reason to use anything else.\n\nDavid J.\n\nOn Monday, May 27, 2024, Alvaro Herrera <[email protected]> wrote:On 2024-May-27, Alvaro Herrera wrote:\n\n> > JSON_SERIALIZE()\n\nI just noticed this behavior, which looks like a bug to me:\n\nselect json_serialize('{\"a\":1, \"a\":2}' returning varchar(5));\n json_serialize \n────────────────\n {\"a\":\n\nI think this function should throw an error if the destination type\ndoesn't have room for the output json.  Otherwise, what good is the\nserialization function?It’s not a self-evident bug given that this is exactly how casting data to varchar(n) behaves as directed by the SQL Standard.I'd probably leave the internal consistency and take the opportunity to educate the reader that text is the preferred type in PostgreSQL and, especially here, there is little good reason to use anything else.David J.", "msg_date": "Tue, 28 May 2024 06:41:54 -0600", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "Hi Alvaro,\n\nOn Mon, May 27, 2024 at 7:10 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-May-27, Alvaro Herrera wrote:\n>\n> > > JSON_SERIALIZE()\n>\n> I just noticed this behavior, which looks like a bug to me:\n>\n> select json_serialize('{\"a\":1, \"a\":2}' returning varchar(5));\n> json_serialize\n> ────────────────\n> {\"a\":\n>\n> I think this function should throw an error if the destination type\n> doesn't have room for the output json. 
Otherwise, what good is the\n> serialization function?\n\nI remember using the reasoning mentioned by David G when testing\njson_query() et al with varchar(n), so you get:\n\nselect json_query('{\"a\":1, \"a\":2}', '$' returning varchar(5));\n json_query\n------------\n {\"a\":\n(1 row)\n\nwhich is the same as:\n\nselect '{\"a\":1, \"a\":2}'::varchar(5);\n varchar\n---------\n {\"a\":\n(1 row)\n\nAlso,\n\nselect json_value('{\"a\":\"abcdef\"}', '$.a' returning varchar(5) error on error);\n json_value\n------------\n abcde\n(1 row)\n\nThis behavior comes from using COERCE_EXPLICIT_CAST when creating the\ncoercion expression to convert json_*() functions' argument to the\nRETURNING type.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 28 May 2024 18:47:19 -0700", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "Amit Langote <[email protected]> writes:\n> On Mon, May 27, 2024 at 7:10 PM Alvaro Herrera <[email protected]> wrote:\n>> On 2024-May-27, Alvaro Herrera wrote:\n>> I just noticed this behavior, which looks like a bug to me:\n>> \n>> select json_serialize('{\"a\":1, \"a\":2}' returning varchar(5));\n>> json_serialize\n>> ────────────────\n>> {\"a\":\n>> \n>> I think this function should throw an error if the destination type\n>> doesn't have room for the output json. Otherwise, what good is the\n>> serialization function?\n\n> This behavior comes from using COERCE_EXPLICIT_CAST when creating the\n> coercion expression to convert json_*() functions' argument to the\n> RETURNING type.\n\nYeah, I too think this is a cast, and truncation is the spec-defined\nbehavior for casting to varchar with a specific length limit. I see\nlittle reason that this should work differently from\n\nselect json_serialize('{\"a\":1, \"a\":2}' returning text)::varchar(5);\n json_serialize \n----------------\n {\"a\":\n(1 row)\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 May 2024 09:44:35 -0700", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On 29.05.24 18:44, Tom Lane wrote:\n> Amit Langote <[email protected]> writes:\n>> On Mon, May 27, 2024 at 7:10 PM Alvaro Herrera <[email protected]> wrote:\n>>> On 2024-May-27, Alvaro Herrera wrote:\n>>> I just noticed this behavior, which looks like a bug to me:\n>>>\n>>> select json_serialize('{\"a\":1, \"a\":2}' returning varchar(5));\n>>> json_serialize\n>>> ────────────────\n>>> {\"a\":\n>>>\n>>> I think this function should throw an error if the destination type\n>>> doesn't have room for the output json. Otherwise, what good is the\n>>> serialization function?\n> \n>> This behavior comes from using COERCE_EXPLICIT_CAST when creating the\n>> coercion expression to convert json_*() functions' argument to the\n>> RETURNING type.\n> \n> Yeah, I too think this is a cast, and truncation is the spec-defined\n> behavior for casting to varchar with a specific length limit. I see\n> little reason that this should work differently from\n> \n> select json_serialize('{\"a\":1, \"a\":2}' returning text)::varchar(5);\n> json_serialize\n> ----------------\n> {\"a\":\n> (1 row)\n\nThe SQL standard says essentially that the output of json_serialize() is \nsome string that when parsed back in gives you an equivalent JSON value \nas the input. 
That doesn't seem compatible with truncating the output.\n\nIf you want output truncation, you can of course use an actual cast. \nBut it makes sense that the RETURNING clause is separate from that.\n\n\n\n", "msg_date": "Sun, 2 Jun 2024 23:17:06 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 29.05.24 18:44, Tom Lane wrote:\n>> Yeah, I too think this is a cast, and truncation is the spec-defined\n>> behavior for casting to varchar with a specific length limit.\n\n> The SQL standard says essentially that the output of json_serialize() is \n> some string that when parsed back in gives you an equivalent JSON value \n> as the input. That doesn't seem compatible with truncating the output.\n\nMaybe you should take this up with the SQL committee? If you don't\nlike our current behavior, then either you have to say that RETURNING\nwith a length-limited target type is illegal (which is problematic\nfor the spec, since they have no such type) or that the cast behaves\nlike an implicit cast, with errors for overlength input (which I find\nto be an unintuitive definition for a construct that names the target\ntype explicitly).\n\n> If you want output truncation, you can of course use an actual cast. \n> But it makes sense that the RETURNING clause is separate from that.\n\nAre you trying to say that the RETURNING clause's specified type\nisn't the actual output type? I can't buy that either.\n\nAgain, if you think our existing behavior isn't right, I think\nit's a problem for the SQL committee not us.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Jun 2024 00:46:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On 02.06.24 21:46, Tom Lane wrote:\n> If you don't\n> like our current behavior, then either you have to say that RETURNING\n> with a length-limited target type is illegal (which is problematic\n> for the spec, since they have no such type) or that the cast behaves\n> like an implicit cast, with errors for overlength input (which I find\n> to be an unintuitive definition for a construct that names the target\n> type explicitly).\n\nIt asks for the latter behavior, essentially (but it's not defined in \nterms of casts). 
It says:\n\n\"\"\"\nii) Let JV be an implementation-dependent (UV097) value of type TT and \nencoding ENC such that these two conditions hold:\n\n1) JV is a JSON text.\n\n2) When the General Rules of Subclause 9.42, “Parsing JSON text”, are \napplied with JV as JSON TEXT, FO as FORMAT OPTION, and WITHOUT UNIQUE \nKEYS as UNIQUENESS CONSTRAINT; let CST be the STATUS and let CSJI be the \nSQL/JSON ITEM returned from the application of those General Rules, CST \nis successful completion (00000) and CSJI is an SQL/JSON item that is \nequivalent to SJI.\n\nIf there is no such JV, then let ST be the exception condition: data \nexception — invalid JSON text (22032).\n\niii) If JV is longer than the length or maximum length of TT, then an \nexception condition is raised: data exception — string data, right \ntruncation (22001).\n\"\"\"\n\nOracle also behaves accordingly:\n\nSQL> select json_serialize('{\"a\":1, \"a\":2}' returning varchar2(20)) from \ndual;\n\nJSON_SERIALIZE('{\"A\"\n--------------------\n{\"a\":1,\"a\":2}\n\nSQL> select json_serialize('{\"a\":1, \"a\":2}' returning varchar2(5)) from \ndual;\nselect json_serialize('{\"a\":1, \"a\":2}' returning varchar2(5)) from dual\n *\nERROR at line 1:\nORA-40478: output value too large (maximum: 5)\nJZN-00018: Input to serializer is too large\nHelp: https://docs.oracle.com/error-help/db/ora-40478/\n\n\nAs opposed to:\n\nSQL> select cast(json_serialize('{\"a\":1, \"a\":2}') as varchar2(5)) from dual;\n\nCAST(\n-----\n{\"a\":\n\n\n\n", "msg_date": "Mon, 3 Jun 2024 11:15:37 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 02.06.24 21:46, Tom Lane wrote:\n>> If you don't\n>> like our current behavior, then either you have to say that RETURNING\n>> with a length-limited target type is illegal (which is problematic\n>> for the spec, since they have no such type) or that the cast behaves\n>> like an implicit cast, with errors for overlength input (which I find\n>> to be an unintuitive definition for a construct that names the target\n>> type explicitly).\n\n> It asks for the latter behavior, essentially (but it's not defined in \n> terms of casts). It says:\n\nMeh. Who needs consistency? But I guess the answer is to do what was\nsuggested earlier and change the code to use COERCE_IMPLICIT_CAST.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Jun 2024 13:20:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Tue, Jun 4, 2024 at 2:20 AM Tom Lane <[email protected]> wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > On 02.06.24 21:46, Tom Lane wrote:\n> >> If you don't\n> >> like our current behavior, then either you have to say that RETURNING\n> >> with a length-limited target type is illegal (which is problematic\n> >> for the spec, since they have no such type) or that the cast behaves\n> >> like an implicit cast, with errors for overlength input (which I find\n> >> to be an unintuitive definition for a construct that names the target\n> >> type explicitly).\n>\n> > It asks for the latter behavior, essentially (but it's not defined in\n> > terms of casts). It says:\n>\n> Meh. Who needs consistency? 
But I guess the answer is to do what was\n> suggested earlier and change the code to use COERCE_IMPLICIT_CAST.\n\nOK, will post a patch to do so in a new thread on -hackers.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 4 Jun 2024 19:03:18 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Tue, Jun 4, 2024 at 7:03 PM Amit Langote <[email protected]> wrote:\n> On Tue, Jun 4, 2024 at 2:20 AM Tom Lane <[email protected]> wrote:\n> > Peter Eisentraut <[email protected]> writes:\n> > > On 02.06.24 21:46, Tom Lane wrote:\n> > >> If you don't\n> > >> like our current behavior, then either you have to say that RETURNING\n> > >> with a length-limited target type is illegal (which is problematic\n> > >> for the spec, since they have no such type) or that the cast behaves\n> > >> like an implicit cast, with errors for overlength input (which I find\n> > >> to be an unintuitive definition for a construct that names the target\n> > >> type explicitly).\n> >\n> > > It asks for the latter behavior, essentially (but it's not defined in\n> > > terms of casts). It says:\n> >\n> > Meh. Who needs consistency? But I guess the answer is to do what was\n> > suggested earlier and change the code to use COERCE_IMPLICIT_CAST.\n>\n> OK, will post a patch to do so in a new thread on -hackers.\n\nOops, didn't realize that this is already on -hackers.\n\nAttached is a patch to use COERCE_IMPLICIT_CAST when the RETURNING\ntype specifies a length limit.\n\nGiven that this also affects JSON_OBJECT() et al that got added in\nv16, maybe back-patching is in order but I'd like to hear opinions on\nthat.\n\n\n--\nThanks, Amit Langote", "msg_date": "Tue, 18 Jun 2024 18:02:03 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Tue, Jun 18, 2024 at 5:02 PM Amit Langote <[email protected]> wrote:\n>\n> On Tue, Jun 4, 2024 at 7:03 PM Amit Langote <[email protected]> wrote:\n> > On Tue, Jun 4, 2024 at 2:20 AM Tom Lane <[email protected]> wrote:\n> > > Peter Eisentraut <[email protected]> writes:\n> > > > On 02.06.24 21:46, Tom Lane wrote:\n> > > >> If you don't\n> > > >> like our current behavior, then either you have to say that RETURNING\n> > > >> with a length-limited target type is illegal (which is problematic\n> > > >> for the spec, since they have no such type) or that the cast behaves\n> > > >> like an implicit cast, with errors for overlength input (which I find\n> > > >> to be an unintuitive definition for a construct that names the target\n> > > >> type explicitly).\n> > >\n> > > > It asks for the latter behavior, essentially (but it's not defined in\n> > > > terms of casts). It says:\n> > >\n> > > Meh. Who needs consistency? But I guess the answer is to do what was\n> > > suggested earlier and change the code to use COERCE_IMPLICIT_CAST.\n> >\n> > OK, will post a patch to do so in a new thread on -hackers.\n>\n> Oops, didn't realize that this is already on -hackers.\n>\n> Attached is a patch to use COERCE_IMPLICIT_CAST when the RETURNING\n> type specifies a length limit.\n>\n\nhi.\ni am a little confused.\n\nhere[1] tom says:\n> Yeah, I too think this is a cast, and truncation is the spec-defined\n> behavior for casting to varchar with a specific length limit. 
I see\n> little reason that this should work differently from\n>\n> select json_serialize('{\"a\":1, \"a\":2}' returning text)::varchar(5);\n> json_serialize\n> ----------------\n> {\"a\":\n> (1 row)\n\nif i understand it correctly, and my english interpretation is fine.\ni think tom means something like:\n\nselect json_serialize('{\"a\":1, \"a\":2}' returning text)::varchar(5) =\njson_serialize('{\"a\":1, \"a\":2}' returning varchar(5));\n\nshould return true.\nthe master will return true, but apply your patch, the above query\nwill yield an error.\n\n\nyour patch will make domain and char(n) behavior inconsistent.\ncreate domain char2 as char(2);\nSELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char2 ERROR ON ERROR);\nSELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char(2) ERROR ON ERROR);\n\n\nanother example:\nSELECT JSON_query(jsonb '\"aaa\"', '$' RETURNING char(2) keep quotes\ndefault '\"aaa\"'::jsonb ON ERROR);\nsame value (jsonb \"aaa\") error on error will yield error,\nbut `default expression on error` can coerce the value to char(2),\nwhich looks a little bit inconsistent, I think.\n\n\n------------------------------------------\ncurrent in ExecInitJsonExpr we have\n\nif (jsexpr->coercion_expr)\n...\nelse if (jsexpr->use_json_coercion)\n...\nelse if (jsexpr->use_io_coercion)\n...\n\ndo you think it is necessary to add following asserts:\nAssert (!(jsexpr->coercion_expr && jsexpr->use_json_coercion))\nAssert (!(jsexpr->coercion_expr && jsexpr->use_io_coercion))\n\n\n\n[1] https://www.postgresql.org/message-id/3189.1717001075%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 21 Jun 2024 15:05:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "Hi,\n\nThanks for taking a look.\n\nOn Fri, Jun 21, 2024 at 4:05 PM jian he <[email protected]> wrote:\n> On Tue, Jun 18, 2024 at 5:02 PM Amit Langote <[email protected]> wrote:\n> > On Tue, Jun 4, 2024 at 7:03 PM Amit Langote <[email protected]> wrote:\n> > > On Tue, Jun 4, 2024 at 2:20 AM Tom Lane <[email protected]> wrote:\n> > > > Peter Eisentraut <[email protected]> writes:\n> > > > > On 02.06.24 21:46, Tom Lane wrote:\n> > > > >> If you don't\n> > > > >> like our current behavior, then either you have to say that RETURNING\n> > > > >> with a length-limited target type is illegal (which is problematic\n> > > > >> for the spec, since they have no such type) or that the cast behaves\n> > > > >> like an implicit cast, with errors for overlength input (which I find\n> > > > >> to be an unintuitive definition for a construct that names the target\n> > > > >> type explicitly).\n> > > >\n> > > > > It asks for the latter behavior, essentially (but it's not defined in\n> > > > > terms of casts). It says:\n> > > >\n> > > > Meh. Who needs consistency? But I guess the answer is to do what was\n> > > > suggested earlier and change the code to use COERCE_IMPLICIT_CAST.\n> > >\n> > > OK, will post a patch to do so in a new thread on -hackers.\n> >\n> > Oops, didn't realize that this is already on -hackers.\n> >\n> > Attached is a patch to use COERCE_IMPLICIT_CAST when the RETURNING\n> > type specifies a length limit.\n>\n> hi.\n> i am a little confused.\n>\n> here[1] tom says:\n> > Yeah, I too think this is a cast, and truncation is the spec-defined\n> > behavior for casting to varchar with a specific length limit. 
I see\n> > little reason that this should work differently from\n> >\n> > select json_serialize('{\"a\":1, \"a\":2}' returning text)::varchar(5);\n> > json_serialize\n> > ----------------\n> > {\"a\":\n> > (1 row)\n>\n> if i understand it correctly, and my english interpretation is fine.\n> i think tom means something like:\n>\n> select json_serialize('{\"a\":1, \"a\":2}' returning text)::varchar(5) =\n> json_serialize('{\"a\":1, \"a\":2}' returning varchar(5));\n>\n> should return true.\n> the master will return true, but apply your patch, the above query\n> will yield an error.\n\nThe RETURNING variant giving an error is what the standard asks us to\ndo apparently. I read Tom's last message on this thread as agreeing\nto that, even though hesitantly. He can correct me if I got that\nwrong.\n\n> your patch will make domain and char(n) behavior inconsistent.\n> create domain char2 as char(2);\n> SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char2 ERROR ON ERROR);\n> SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char(2) ERROR ON ERROR);\n>\n>\n> another example:\n> SELECT JSON_query(jsonb '\"aaa\"', '$' RETURNING char(2) keep quotes\n> default '\"aaa\"'::jsonb ON ERROR);\n> same value (jsonb \"aaa\") error on error will yield error,\n> but `default expression on error` can coerce the value to char(2),\n> which looks a little bit inconsistent, I think.\n\nInteresting examples, thanks for sharing.\n\nAttached updated version should take into account that typmod may be\nhiding under domains. Please test.\n\n> ------------------------------------------\n> current in ExecInitJsonExpr we have\n>\n> if (jsexpr->coercion_expr)\n> ...\n> else if (jsexpr->use_json_coercion)\n> ...\n> else if (jsexpr->use_io_coercion)\n> ...\n>\n> do you think it is necessary to add following asserts:\n> Assert (!(jsexpr->coercion_expr && jsexpr->use_json_coercion))\n> Assert (!(jsexpr->coercion_expr && jsexpr->use_io_coercion))\n\nYeah, perhaps, but let's consider this independently please.\n\n--\nThanks, Amit Langote", "msg_date": "Fri, 21 Jun 2024 22:48:04 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Fri, Jun 21, 2024 at 10:48 PM Amit Langote <[email protected]> wrote:\n> On Fri, Jun 21, 2024 at 4:05 PM jian he <[email protected]> wrote:\n> > hi.\n> > i am a little confused.\n> >\n> > here[1] tom says:\n> > > Yeah, I too think this is a cast, and truncation is the spec-defined\n> > > behavior for casting to varchar with a specific length limit. I see\n> > > little reason that this should work differently from\n> > >\n> > > select json_serialize('{\"a\":1, \"a\":2}' returning text)::varchar(5);\n> > > json_serialize\n> > > ----------------\n> > > {\"a\":\n> > > (1 row)\n> >\n> > if i understand it correctly, and my english interpretation is fine.\n> > i think tom means something like:\n> >\n> > select json_serialize('{\"a\":1, \"a\":2}' returning text)::varchar(5) =\n> > json_serialize('{\"a\":1, \"a\":2}' returning varchar(5));\n> >\n> > should return true.\n> > the master will return true, but apply your patch, the above query\n> > will yield an error.\n>\n> The RETURNING variant giving an error is what the standard asks us to\n> do apparently. I read Tom's last message on this thread as agreeing\n> to that, even though hesitantly. 
He can correct me if I got that\n> wrong.\n>\n> > your patch will make domain and char(n) behavior inconsistent.\n> > create domain char2 as char(2);\n> > SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char2 ERROR ON ERROR);\n> > SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char(2) ERROR ON ERROR);\n> >\n> >\n> > another example:\n> > SELECT JSON_query(jsonb '\"aaa\"', '$' RETURNING char(2) keep quotes\n> > default '\"aaa\"'::jsonb ON ERROR);\n> > same value (jsonb \"aaa\") error on error will yield error,\n> > but `default expression on error` can coerce the value to char(2),\n> > which looks a little bit inconsistent, I think.\n>\n> Interesting examples, thanks for sharing.\n>\n> Attached updated version should take into account that typmod may be\n> hiding under domains. Please test.\n\nI'd like to push this one tomorrow, barring objections.\n\nI could use some advice on backpatching. As I mentioned upthread,\nthis changes the behavior for JSON_OBJECT(), JSON_ARRAY(),\nJSON_ARRAYAGG(), JSON_OBJECTAGG() too, which were added in v16.\nShould this change be backpatched? In general, what's our stance on\nchanges that cater to improving standard compliance, but are not\nnecessarily bugs.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Wed, 26 Jun 2024 21:38:57 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Wed, Jun 26, 2024 at 8:39 PM Amit Langote <[email protected]> wrote:\n>\n> >\n> > The RETURNING variant giving an error is what the standard asks us to\n> > do apparently. I read Tom's last message on this thread as agreeing\n> > to that, even though hesitantly. He can correct me if I got that\n> > wrong.\n> >\n> > > your patch will make domain and char(n) behavior inconsistent.\n> > > create domain char2 as char(2);\n> > > SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char2 ERROR ON ERROR);\n> > > SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char(2) ERROR ON ERROR);\n> > >\n> > >\n> > > another example:\n> > > SELECT JSON_query(jsonb '\"aaa\"', '$' RETURNING char(2) keep quotes\n> > > default '\"aaa\"'::jsonb ON ERROR);\n> > > same value (jsonb \"aaa\") error on error will yield error,\n> > > but `default expression on error` can coerce the value to char(2),\n> > > which looks a little bit inconsistent, I think.\n> >\n> > Interesting examples, thanks for sharing.\n> >\n> > Attached updated version should take into account that typmod may be\n> > hiding under domains. 
Please test.\n>\n\nSELECT JSON_VALUE(jsonb '111', '$' RETURNING queryfuncs_char2 default\n'13' on error);\nreturn\nERROR: value too long for type character(2)\nshould return 13\n\nI found out the source of the problem is in coerceJsonExprOutput\n/*\n* Use cast expression for domain types; we need CoerceToDomain here.\n*/\nif (get_typtype(returning->typid) != TYPTYPE_DOMAIN)\n{\njsexpr->use_io_coercion = true;\nreturn;\n}\n\n>\n> I'd like to push this one tomorrow, barring objections.\n>\n\nCurrently the latest patch available cannot be `git apply` cleanly.\n\n@@ -464,3 +466,9 @@ SELECT JSON_QUERY(jsonb 'null', '$xyz' PASSING 1 AS xyz);\n SELECT JSON_EXISTS(jsonb '1', '$' DEFAULT 1 ON ERROR);\n SELECT JSON_VALUE(jsonb '1', '$' EMPTY ON ERROR);\n SELECT JSON_QUERY(jsonb '1', '$' TRUE ON ERROR);\n+\n+-- Test implicit coercion domain over fixed-legth type specified in RETURNING\n+CREATE DOMAIN queryfuncs_char2 AS char(2) CHECK (VALUE NOT IN ('12'));\n+SELECT JSON_QUERY(jsonb '123', '$' RETURNING queryfuncs_char2 ERROR ON ERROR);\n+SELECT JSON_VALUE(jsonb '123', '$' RETURNING queryfuncs_char2 ERROR ON ERROR);\n+SELECT JSON_VALUE(jsonb '12', '$' RETURNING queryfuncs_char2 ERROR ON ERROR);\n\ncannot found `SELECT JSON_QUERY(jsonb '1', '$' TRUE ON ERROR);` in\nhttps://git.postgresql.org/cgit/postgresql.git/tree/src/test/regress/sql/sqljson_queryfuncs.sql\n\n\n", "msg_date": "Wed, 26 Jun 2024 22:46:26 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "hi.\nI have assembled a list of simple examples, some works (for comparison\nsake), most not work\nas intended.\n\nCREATE DOMAIN queryfuncs_char2 AS char(2) CHECK (VALUE NOT IN ('12'));\nCREATE DOMAIN queryfuncs_d_interval AS interval(2) CHECK (VALUE is not null);\n\nSELECT JSON_VALUE(jsonb '111', '$' RETURNING queryfuncs_char2 default\n'12' on error);\nSELECT JSON_VALUE(jsonb '12', '$' RETURNING queryfuncs_char2 default\n'11' on error);\nSELECT JSON_VALUE(jsonb '111', '$' RETURNING queryfuncs_char2 default\n'13' on error);\nSELECT JSON_VALUE(jsonb '\"111\"', '$' RETURNING queryfuncs_char2\ndefault '17' on error);\nSELECT JSON_QUERY(jsonb '111', '$' RETURNING queryfuncs_char2 default\n'14' on error);\nSELECT JSON_QUERY(jsonb '111', '$' RETURNING queryfuncs_char2 omit\nquotes default '15' on error);\nSELECT JSON_QUERY(jsonb '111', '$' RETURNING queryfuncs_char2 keep\nquotes default '16' on error);\n\n\nSELECT JSON_VALUE(jsonb '\"01:23:45.6789\"', '$' RETURNING\nqueryfuncs_d_interval default '01:23:45.6789' on error);\nSELECT JSON_VALUE(jsonb '\"01:23:45.6789\"', '$' RETURNING\nqueryfuncs_d_interval default '01:23:45.6789' on empty);\nSELECT JSON_QUERY(jsonb '\"01:23:45.6789\"', '$' RETURNING\nqueryfuncs_d_interval default '01:23:45.6789' on error);\nSELECT JSON_QUERY(jsonb '\"01:23:45.6789\"', '$' RETURNING\nqueryfuncs_d_interval default '01:23:45.6789' on empty);\nabove 4 queries fails, meaning the changes you propose within\ntransformJsonBehavior is wrong?\ni think it's because the COERCION_IMPLICIT cast from text to domain\nqueryfuncs_d_interval is not doable.\n\n\njson_table seems also have problem with \"exists\" cast to other type, example:\nSELECT * FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a char(2) EXISTS\nPATH '$.a' ));\nSELECT * FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a queryfuncs_char2\nEXISTS PATH '$.a'));\nSELECT * FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a queryfuncs_char2\nEXISTS PATH '$.a' error on error));\nSELECT * 
FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a queryfuncs_char2\nEXISTS PATH '$.a' error on empty));\n\n\n----------------------------------------------------------------------------------------------------\nSELECT JSON_VALUE(jsonb '111', '$' RETURNING queryfuncs_char2 default\n'13' on error);\nfor the above example:\ncoerceJsonExprOutput, coerceJsonFuncExpr set the result datum coercion\nnode to RelabelType:\nRelabelType is not error safe. so the above query will fail converting\ntext 111 to queryfuncs_char2\nwhich is not what we want.\n\nI think making coerceJsonExprOutput the following way can solve this problem.\nyour patch cannot apply cleanly, I just posted the actual code snippet\nof coerceJsonExprOutput, not a diff file.\n\nstatic void\ncoerceJsonExprOutput(ParseState *pstate, JsonExpr *jsexpr)\n{\nJsonReturning *returning = jsexpr->returning;\nNode *context_item = jsexpr->formatted_expr;\nint default_typmod;\nOid default_typid;\nbool omit_quotes =\njsexpr->op == JSON_QUERY_OP && jsexpr->omit_quotes;\nNode *coercion_expr = NULL;\nint32 baseTypmod = returning->typmod;\n\nAssert(returning);\n\n/*\n* Check for cases where the coercion should be handled at runtime, that\n* is, without using a cast expression.\n*/\nif (jsexpr->op == JSON_VALUE_OP)\n{\n/*\n* Use cast expression for domain types; we need CoerceToDomain here.\n*/\nif (get_typtype(returning->typid) != TYPTYPE_DOMAIN)\n{\njsexpr->use_io_coercion = true;\nreturn;\n}\nelse\n{\n/* domain type, typmod > 0 can only use use_io_coercion */\n(void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\nif (baseTypmod > 0)\n{\njsexpr->use_io_coercion = true;\nreturn;\n}\n}\n}\nelse if (jsexpr->op == JSON_QUERY_OP)\n{\n/*\n* Cast functions from jsonb to the following types (jsonb_bool() et\n* al) don't handle errors softly, so coerce either by calling\n* json_populate_type() or the type's input function so that any\n* errors are handled appropriately. The latter only if OMIT QUOTES is\n* true.\n*/\nswitch (returning->typid)\n{\ncase BOOLOID:\ncase NUMERICOID:\ncase INT2OID:\ncase INT4OID:\ncase INT8OID:\ncase FLOAT4OID:\ncase FLOAT8OID:\nif (jsexpr->omit_quotes)\njsexpr->use_io_coercion = true;\nelse\njsexpr->use_json_coercion = true;\nreturn;\ndefault:\nbreak;\n}\n\n/*\n* for returning domain type, we cannot use coercion expression.\n* it may not be able to catch the error, for example RelabelType\n* for we either use_io_coercion or use_json_coercion.\n*/\nif (get_typtype(returning->typid) == TYPTYPE_DOMAIN)\n(void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\n\n/*\n* coerceJsonFuncExpr() creates implicit casts for types with typmod,\n* which (if present) don't handle errors softly, so use runtime\n* coercion.\n*/\nif (baseTypmod > 0)\n{\nif (jsexpr->omit_quotes)\njsexpr->use_io_coercion = true;\nelse\njsexpr->use_json_coercion = true;\nreturn;\n}\n}\n...\n-------------------------------\n\n\n", "msg_date": "Thu, 27 Jun 2024 14:25:25 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 26, 2024 at 11:46 PM jian he <[email protected]> wrote:\n> On Wed, Jun 26, 2024 at 8:39 PM Amit Langote <[email protected]> wrote:\n> >\n> > >\n> > > The RETURNING variant giving an error is what the standard asks us to\n> > > do apparently. I read Tom's last message on this thread as agreeing\n> > > to that, even though hesitantly. 
He can correct me if I got that\n> > > wrong.\n> > >\n> > > > your patch will make domain and char(n) behavior inconsistent.\n> > > > create domain char2 as char(2);\n> > > > SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char2 ERROR ON ERROR);\n> > > > SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char(2) ERROR ON ERROR);\n> > > >\n> > > >\n> > > > another example:\n> > > > SELECT JSON_query(jsonb '\"aaa\"', '$' RETURNING char(2) keep quotes\n> > > > default '\"aaa\"'::jsonb ON ERROR);\n> > > > same value (jsonb \"aaa\") error on error will yield error,\n> > > > but `default expression on error` can coerce the value to char(2),\n> > > > which looks a little bit inconsistent, I think.\n> > >\n> > > Interesting examples, thanks for sharing.\n> > >\n> > > Attached updated version should take into account that typmod may be\n> > > hiding under domains. Please test.\n> >\n>\n> SELECT JSON_VALUE(jsonb '111', '$' RETURNING queryfuncs_char2 default\n> '13' on error);\n> return\n> ERROR: value too long for type character(2)\n> should return 13\n>\n> I found out the source of the problem is in coerceJsonExprOutput\n> /*\n> * Use cast expression for domain types; we need CoerceToDomain here.\n> */\n> if (get_typtype(returning->typid) != TYPTYPE_DOMAIN)\n> {\n> jsexpr->use_io_coercion = true;\n> return;\n> }\n\nThanks for this test case and the analysis. Yes, using a cast\nexpression for coercion to the RETURNING type generally seems to be a\nsource of many problems that could've been solved by fixing things so\nthat only use_io_coercion and use_json_coercion are enough to handle\nall the cases.\n\nI've attempted that in the attached 0001, which removes\nJsonExpr.coercion_expr and a bunch of code around it.\n\n0002 is now the original patch minus the changes to make\nJSON_EXISTS(), JSON_QUERY(), and JSON_VALUE() behave as we would like,\nbecause the changes in 0001 covers them. The changes for JsonBehavior\nexpression coercion as they were in the last version of the patch are\nstill needed, but I decided to move those into 0001 so that the\nchanges for query functions are all in 0001 and those for constructors\nin 0002. It would be nice to get rid of that coerce_to_target_type()\ncall to coerce the \"behavior expression\" to RETURNING type, but I'm\nleaving that as a task for another day.\n\n-- \nThanks, Amit Langote", "msg_date": "Thu, 27 Jun 2024 18:57:09 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Thu, Jun 27, 2024 at 6:57 PM Amit Langote <[email protected]> wrote:\n> On Wed, Jun 26, 2024 at 11:46 PM jian he <[email protected]> wrote:\n> > On Wed, Jun 26, 2024 at 8:39 PM Amit Langote <[email protected]> wrote:\n> > >\n> > > >\n> > > > The RETURNING variant giving an error is what the standard asks us to\n> > > > do apparently. I read Tom's last message on this thread as agreeing\n> > > > to that, even though hesitantly. 
He can correct me if I got that\n> > > > wrong.\n> > > >\n> > > > > your patch will make domain and char(n) behavior inconsistent.\n> > > > > create domain char2 as char(2);\n> > > > > SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char2 ERROR ON ERROR);\n> > > > > SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING char(2) ERROR ON ERROR);\n> > > > >\n> > > > >\n> > > > > another example:\n> > > > > SELECT JSON_query(jsonb '\"aaa\"', '$' RETURNING char(2) keep quotes\n> > > > > default '\"aaa\"'::jsonb ON ERROR);\n> > > > > same value (jsonb \"aaa\") error on error will yield error,\n> > > > > but `default expression on error` can coerce the value to char(2),\n> > > > > which looks a little bit inconsistent, I think.\n> > > >\n> > > > Interesting examples, thanks for sharing.\n> > > >\n> > > > Attached updated version should take into account that typmod may be\n> > > > hiding under domains. Please test.\n> > >\n> >\n> > SELECT JSON_VALUE(jsonb '111', '$' RETURNING queryfuncs_char2 default\n> > '13' on error);\n> > return\n> > ERROR: value too long for type character(2)\n> > should return 13\n> >\n> > I found out the source of the problem is in coerceJsonExprOutput\n> > /*\n> > * Use cast expression for domain types; we need CoerceToDomain here.\n> > */\n> > if (get_typtype(returning->typid) != TYPTYPE_DOMAIN)\n> > {\n> > jsexpr->use_io_coercion = true;\n> > return;\n> > }\n>\n> Thanks for this test case and the analysis. Yes, using a cast\n> expression for coercion to the RETURNING type generally seems to be a\n> source of many problems that could've been solved by fixing things so\n> that only use_io_coercion and use_json_coercion are enough to handle\n> all the cases.\n>\n> I've attempted that in the attached 0001, which removes\n> JsonExpr.coercion_expr and a bunch of code around it.\n>\n> 0002 is now the original patch minus the changes to make\n> JSON_EXISTS(), JSON_QUERY(), and JSON_VALUE() behave as we would like,\n> because the changes in 0001 covers them. The changes for JsonBehavior\n> expression coercion as they were in the last version of the patch are\n> still needed, but I decided to move those into 0001 so that the\n> changes for query functions are all in 0001 and those for constructors\n> in 0002. It would be nice to get rid of that coerce_to_target_type()\n> call to coerce the \"behavior expression\" to RETURNING type, but I'm\n> leaving that as a task for another day.\n\nUpdated 0001 to remove outdated references, remove some more unnecessary code.\n\n-- \nThanks, Amit Langote", "msg_date": "Thu, 27 Jun 2024 20:48:37 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Thu, Jun 27, 2024 at 7:48 PM Amit Langote <[email protected]> wrote:\n> >\n> > I've attempted that in the attached 0001, which removes\n> > JsonExpr.coercion_expr and a bunch of code around it.\n> >\n> > 0002 is now the original patch minus the changes to make\n> > JSON_EXISTS(), JSON_QUERY(), and JSON_VALUE() behave as we would like,\n> > because the changes in 0001 covers them. The changes for JsonBehavior\n> > expression coercion as they were in the last version of the patch are\n> > still needed, but I decided to move those into 0001 so that the\n> > changes for query functions are all in 0001 and those for constructors\n> > in 0002. 
It would be nice to get rid of that coerce_to_target_type()\n> > call to coerce the \"behavior expression\" to RETURNING type, but I'm\n> > leaving that as a task for another day.\n>\n> Updated 0001 to remove outdated references, remove some more unnecessary code.\n>\n\ni found some remaining references of \"coercion_expr\" should be removed.\n\nsrc/include/nodes/primnodes.h\n/* JsonExpr's collation, if coercion_expr is NULL. */\n\n\nsrc/include/nodes/execnodes.h\n/*\n* Address of the step to coerce the result value of jsonpath evaluation\n* to the RETURNING type using JsonExpr.coercion_expr. -1 if no coercion\n* is necessary or if either JsonExpr.use_io_coercion or\n* JsonExpr.use_json_coercion is true.\n*/\nint jump_eval_coercion;\n\nsrc/backend/jit/llvm/llvmjit_expr.c\n/* coercion_expr code */\nLLVMPositionBuilderAtEnd(b, b_coercion);\nif (jsestate->jump_eval_coercion >= 0)\nLLVMBuildBr(b, opblocks[jsestate->jump_eval_coercion]);\nelse\nLLVMBuildUnreachable(b);\n\n\nsrc/backend/executor/execExprInterp.c\n/*\n * Checks if an error occurred either when evaluating JsonExpr.coercion_expr or\n * in ExecEvalJsonCoercion(). If so, this sets JsonExprState.error to trigger\n * the ON ERROR handling steps.\n */\nvoid\nExecEvalJsonCoercionFinish(ExprState *state, ExprEvalStep *op)\n{\n}\n\nif (jbv == NULL)\n{\n/* Will be coerced with coercion_expr, if any. */\n*op->resvalue = (Datum) 0;\n*op->resnull = true;\n}\n\n\nsrc/backend/executor/execExpr.c\n/*\n* Jump to coerce the NULL using coercion_expr if present. Coercing NULL\n* is only interesting when the RETURNING type is a domain whose\n* constraints must be checked. jsexpr->coercion_expr containing a\n* CoerceToDomain node must have been set in that case.\n*/\n\n/*\n* Jump to coerce the NULL using coercion_expr if present. Coercing NULL\n* is only interesting when the RETURNING type is a domain whose\n* constraints must be checked. jsexpr->coercion_expr containing a\n* CoerceToDomain node must have been set in that case.\n*/\n\n\n", "msg_date": "Fri, 28 Jun 2024 14:13:53 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Fri, Jun 28, 2024 at 3:14 PM jian he <[email protected]> wrote:\n> On Thu, Jun 27, 2024 at 7:48 PM Amit Langote <[email protected]> wrote:\n> > >\n> > > I've attempted that in the attached 0001, which removes\n> > > JsonExpr.coercion_expr and a bunch of code around it.\n> > >\n> > > 0002 is now the original patch minus the changes to make\n> > > JSON_EXISTS(), JSON_QUERY(), and JSON_VALUE() behave as we would like,\n> > > because the changes in 0001 covers them. The changes for JsonBehavior\n> > > expression coercion as they were in the last version of the patch are\n> > > still needed, but I decided to move those into 0001 so that the\n> > > changes for query functions are all in 0001 and those for constructors\n> > > in 0002. It would be nice to get rid of that coerce_to_target_type()\n> > > call to coerce the \"behavior expression\" to RETURNING type, but I'm\n> > > leaving that as a task for another day.\n> >\n> > Updated 0001 to remove outdated references, remove some more unnecessary code.\n> >\n>\n> i found some remaining references of \"coercion_expr\" should be removed.\n>\n> src/include/nodes/primnodes.h\n> /* JsonExpr's collation, if coercion_expr is NULL. 
*/\n>\n>\n> src/include/nodes/execnodes.h\n> /*\n> * Address of the step to coerce the result value of jsonpath evaluation\n> * to the RETURNING type using JsonExpr.coercion_expr. -1 if no coercion\n> * is necessary or if either JsonExpr.use_io_coercion or\n> * JsonExpr.use_json_coercion is true.\n> */\n> int jump_eval_coercion;\n>\n> src/backend/jit/llvm/llvmjit_expr.c\n> /* coercion_expr code */\n> LLVMPositionBuilderAtEnd(b, b_coercion);\n> if (jsestate->jump_eval_coercion >= 0)\n> LLVMBuildBr(b, opblocks[jsestate->jump_eval_coercion]);\n> else\n> LLVMBuildUnreachable(b);\n>\n>\n> src/backend/executor/execExprInterp.c\n> /*\n> * Checks if an error occurred either when evaluating JsonExpr.coercion_expr or\n> * in ExecEvalJsonCoercion(). If so, this sets JsonExprState.error to trigger\n> * the ON ERROR handling steps.\n> */\n> void\n> ExecEvalJsonCoercionFinish(ExprState *state, ExprEvalStep *op)\n> {\n> }\n>\n> if (jbv == NULL)\n> {\n> /* Will be coerced with coercion_expr, if any. */\n> *op->resvalue = (Datum) 0;\n> *op->resnull = true;\n> }\n>\n>\n> src/backend/executor/execExpr.c\n> /*\n> * Jump to coerce the NULL using coercion_expr if present. Coercing NULL\n> * is only interesting when the RETURNING type is a domain whose\n> * constraints must be checked. jsexpr->coercion_expr containing a\n> * CoerceToDomain node must have been set in that case.\n> */\n>\n> /*\n> * Jump to coerce the NULL using coercion_expr if present. Coercing NULL\n> * is only interesting when the RETURNING type is a domain whose\n> * constraints must be checked. jsexpr->coercion_expr containing a\n> * CoerceToDomain node must have been set in that case.\n> */\n\nThanks for checking.\n\nWill push the attached 0001 and 0002 shortly.\n\n-- \nThanks, Amit Langote", "msg_date": "Fri, 28 Jun 2024 20:53:31 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "> diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c\n> index 233b7b1cc9..df766cdec1 100644\n> --- a/src/backend/parser/parse_expr.c\n> +++ b/src/backend/parser/parse_expr.c\n> @@ -3583,6 +3583,7 @@ coerceJsonFuncExpr(ParseState *pstate, Node *expr,\n> \tNode\t *res;\n> \tint\t\t\tlocation;\n> \tOid\t\t\texprtype = exprType(expr);\n> +\tint32\t\tbaseTypmod = returning->typmod;\n> \n> \t/* if output type is not specified or equals to function type, return */\n> \tif (!OidIsValid(returning->typid) || returning->typid == exprtype)\n> @@ -3611,10 +3612,19 @@ coerceJsonFuncExpr(ParseState *pstate, Node *expr,\n> \t\treturn (Node *) fexpr;\n> \t}\n> \n> +\t/*\n> +\t * For domains, consider the base type's typmod to decide whether to setup\n> +\t * an implicit or explicit cast.\n> +\t */\n> +\tif (get_typtype(returning->typid) == TYPTYPE_DOMAIN)\n> +\t\t(void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\n\nI didn't review this patch in detail, but I just noticed this tiny bit\nand wanted to say that I don't like this coding style where you\ninitialize a variable to a certain value, and much later you override it\nwith a completely different value. 
It seems much clearer to leave it\nuninitialized at first, and have both places that determine the value\ntogether,\n\n if (get_typtype(returning->typid) == TYPTYPE_DOMAIN)\n (void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\n else\n baseTypmod = returning->typmod;\n\nNot only because in the domain case the initializer value is a downright\nlie, but also because of considerations such as if you later add code\nthat uses the variable in between those two places, you'd be introducing\na bug in the domain case because it hasn't been set. With the coding I\npropose, the compiler immediately tells you that the initialization is\nmissing.\n\n\nTBH I'm not super clear on why we decide on explicit or implicit cast\nbased on presence of a typmod. Why isn't it better to always use an\nimplicit one?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Small aircraft do not crash frequently ... usually only once!\"\n (ponder, http://thedailywtf.com/)\n\n\n", "msg_date": "Sat, 29 Jun 2024 20:24:27 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n>> +\t/*\n>> +\t * For domains, consider the base type's typmod to decide whether to setup\n>> +\t * an implicit or explicit cast.\n>> +\t */\n>> +\tif (get_typtype(returning->typid) == TYPTYPE_DOMAIN)\n>> +\t\t(void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\n\n> TBH I'm not super clear on why we decide on explicit or implicit cast\n> based on presence of a typmod. Why isn't it better to always use an\n> implicit one?\n\nHmm ... there are a bunch of existing places that seem to have similar\nlogic, but they are all in new-ish SQL/JSON functionality, and I would\nnot be surprised if they are all wrong. parse_coerce.c is quite\nopinionated about what a domain's typtypmod means (see comments in\ncoerce_type() for instance); see also the logic in coerce_to_domain:\n\n * If the domain applies a typmod to its base type, build the appropriate\n * coercion step. Mark it implicit for display purposes, because we don't\n * want it shown separately by ruleutils.c; but the isExplicit flag passed\n * to the conversion function depends on the manner in which the domain\n * coercion is invoked, so that the semantics of implicit and explicit\n * coercion differ. (Is that really the behavior we want?)\n\nI don't think that this SQL/JSON behavior quite matches that.\n\nWhile I'm bitching ... this coding style is bogus anyway:\n\n>> +\tif (get_typtype(returning->typid) == TYPTYPE_DOMAIN)\n>> +\t\t(void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\n\nbecause it results in two syscache lookups not one. You are supposed\nto apply getBaseTypeAndTypmod unconditionally, as is done everywhere\nexcept in the SQL/JSON logic. I am also wondering how it can possibly\nbe sensible to throw away the function result of getBaseTypeAndTypmod\nin this context.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 29 Jun 2024 14:56:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Sun, Jun 30, 2024 at 2:24 AM Alvaro Herrera <[email protected]> wrote:\n>\n> TBH I'm not super clear on why we decide on explicit or implicit cast\n> based on presence of a typmod. 
Why isn't it better to always use an\n> implicit one?\n>\n\nI am using an example to explain it.\nSELECT JSON_SERIALIZE(JSON('{ \"a\" : 1 } '));\nwe cannot directly use implicit cast from json to text in\n{coerceJsonFuncExpr, coerce_to_target_type}\nbecause function calls:\ncoerceJsonFuncExpr->coerce_to_target_type->can_coerce_type\n->find_coercion_pathway\nwill look up pg_cast entries.\nbut we don't have text & json implicit cast entries, we will fail at:\n\n````\nif (!res && report_error)\nereport(ERROR,\nerrcode(ERRCODE_CANNOT_COERCE),\nerrmsg(\"cannot cast type %s to %s\",\n format_type_be(exprtype),\n format_type_be(returning->typid)),\nparser_coercion_errposition(pstate, location, expr));\n````\n\nMost of the cast uses explicit cast, which is what we previously did,\nthen in this thread, we found out for the returning type typmod(\n(varchar, or varchar's domain)\nWe need to first cast the expression to text then text to varchar via\nimplicit cast.\nTo trap the error:\nfor example: SELECT JSON_SERIALIZE('{ \"a\" : 1 } ' RETURNING varchar(2);\nalso see the comment:\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=c2d93c3802b205d135d1ae1d7ac167d74e08a274\n+ /*\n+ * Convert the source expression to text, because coerceJsonFuncExpr()\n+ * will create an implicit cast to the RETURNING types with typmod and\n+ * there are no implicit casts from json(b) to such types. For domains,\n+ * the base type's typmod will be considered, so do so here too.\n+ */\nIn general, I think implicit cast here is an exception.\n\noverall I come up with following logic:\n-----------------\nint32 baseTypmod = -1;\nif (returning->typmod < 0)\n(void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\nelse\nbaseTypmod = returning->typmod;\n\nres = coerce_to_target_type(pstate, expr, exprtype,\nreturning->typid, baseTypmod,\nbaseTypmod > 0 ? COERCION_IMPLICIT :\nCOERCION_EXPLICIT,\nbaseTypmod > 0 ? COERCE_IMPLICIT_CAST :\nCOERCE_EXPLICIT_CAST,\nlocation);\n-----------------\nBy the same way we are dealing with varchar,\nI came up with a verbose patch for transformJsonBehavior,\nwhich can cope with all the corner cases of bit and varbit data type.\nI also attached a test sql file (scratch169.sql) for it.\nsome examples:\n--fail\nSELECT JSON_VALUE(jsonb '\"111a\"', '$' RETURNING bit(3) default '1111'\non error);\n--ok\nSELECT JSON_VALUE(jsonb '\"111a\"', '$' RETURNING bit(3) default '111' on error);\n--ok\nSELECT JSON_VALUE(jsonb '\"111a\"', '$' RETURNING bit(3) default 32 on error);\n\n\nmakeJsonConstructorExpr we called (void)\ngetBaseTypeAndTypmod(returning->typid, &baseTypmod);\nlater in coerceJsonFuncExpr\nwe may also call (void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\nmaybe we can do some refactoring.", "msg_date": "Mon, 1 Jul 2024 15:10:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Sun, Jun 30, 2024 at 3:56 AM Tom Lane <[email protected]> wrote:\n> Alvaro Herrera <[email protected]> writes:\n> >> + /*\n> >> + * For domains, consider the base type's typmod to decide whether to setup\n> >> + * an implicit or explicit cast.\n> >> + */\n> >> + if (get_typtype(returning->typid) == TYPTYPE_DOMAIN)\n> >> + (void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\n>\n> > TBH I'm not super clear on why we decide on explicit or implicit cast\n> > based on presence of a typmod. Why isn't it better to always use an\n> > implicit one?\n>\n> Hmm ... 
there are a bunch of existing places that seem to have similar\n> logic, but they are all in new-ish SQL/JSON functionality, and I would\n> not be surprised if they are all wrong. parse_coerce.c is quite\n> opinionated about what a domain's typtypmod means (see comments in\n> coerce_type() for instance); see also the logic in coerce_to_domain:\n>\n> * If the domain applies a typmod to its base type, build the appropriate\n> * coercion step. Mark it implicit for display purposes, because we don't\n> * want it shown separately by ruleutils.c; but the isExplicit flag passed\n> * to the conversion function depends on the manner in which the domain\n> * coercion is invoked, so that the semantics of implicit and explicit\n> * coercion differ. (Is that really the behavior we want?)\n>\n> I don't think that this SQL/JSON behavior quite matches that.\n\nThe reason I decided to go for the implicit cast only when there is a\ntypmod is that the behavior with COERCION_EXPLICIT is only problematic\nwhen there's a typmod because of this code in\nbuild_coercion_expression:\n\n if (nargs == 3)\n {\n /* Pass it a boolean isExplicit parameter, too */\n cons = makeConst(BOOLOID,\n -1,\n InvalidOid,\n sizeof(bool),\n BoolGetDatum(ccontext == COERCION_EXPLICIT),\n false,\n true);\n\n args = lappend(args, cons);\n }\n\nYeah, we could have fixed that by always using COERCION_IMPLICIT for\nSQL/JSON but, as Jian said, we don't have a bunch of casts that these\nSQL/JSON functions need, which is why I guess we ended up with\nCOERCION_EXPLICIT here in the first place.\n\nOne option I hadn't tried was using COERCION_ASSIGNMENT instead, which\nseems to give coerceJsonFuncExpr() the casts it needs with the\nbehavior it wants, so how about applying the attached?\n\n-- \nThanks, Amit Langote", "msg_date": "Mon, 1 Jul 2024 19:45:11 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Mon, Jul 1, 2024 at 6:45 PM Amit Langote <[email protected]> wrote:\n>\n> On Sun, Jun 30, 2024 at 3:56 AM Tom Lane <[email protected]> wrote:\n> > Alvaro Herrera <[email protected]> writes:\n> > >> + /*\n> > >> + * For domains, consider the base type's typmod to decide whether to setup\n> > >> + * an implicit or explicit cast.\n> > >> + */\n> > >> + if (get_typtype(returning->typid) == TYPTYPE_DOMAIN)\n> > >> + (void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\n> >\n> > > TBH I'm not super clear on why we decide on explicit or implicit cast\n> > > based on presence of a typmod. Why isn't it better to always use an\n> > > implicit one?\n> >\n> > Hmm ... there are a bunch of existing places that seem to have similar\n> > logic, but they are all in new-ish SQL/JSON functionality, and I would\n> > not be surprised if they are all wrong. parse_coerce.c is quite\n> > opinionated about what a domain's typtypmod means (see comments in\n> > coerce_type() for instance); see also the logic in coerce_to_domain:\n> >\n> > * If the domain applies a typmod to its base type, build the appropriate\n> > * coercion step. Mark it implicit for display purposes, because we don't\n> > * want it shown separately by ruleutils.c; but the isExplicit flag passed\n> > * to the conversion function depends on the manner in which the domain\n> > * coercion is invoked, so that the semantics of implicit and explicit\n> > * coercion differ. 
(Is that really the behavior we want?)\n> >\n> > I don't think that this SQL/JSON behavior quite matches that.\n>\n> The reason I decided to go for the implicit cast only when there is a\n> typmod is that the behavior with COERCION_EXPLICIT is only problematic\n> when there's a typmod because of this code in\n> build_coercion_expression:\n>\n> if (nargs == 3)\n> {\n> /* Pass it a boolean isExplicit parameter, too */\n> cons = makeConst(BOOLOID,\n> -1,\n> InvalidOid,\n> sizeof(bool),\n> BoolGetDatum(ccontext == COERCION_EXPLICIT),\n> false,\n> true);\n>\n> args = lappend(args, cons);\n> }\n>\n> Yeah, we could have fixed that by always using COERCION_IMPLICIT for\n> SQL/JSON but, as Jian said, we don't have a bunch of casts that these\n> SQL/JSON functions need, which is why I guess we ended up with\n> COERCION_EXPLICIT here in the first place.\n>\n> One option I hadn't tried was using COERCION_ASSIGNMENT instead, which\n> seems to give coerceJsonFuncExpr() the casts it needs with the\n> behavior it wants, so how about applying the attached?\n\nyou patched works.\ni think it's because of you mentioned build_coercion_expression ` if\n(nargs == 3)` related code\nand\n\nfind_coercion_pathway:\nif (result == COERCION_PATH_NONE)\n{\nif (ccontext >= COERCION_ASSIGNMENT &&\nTypeCategory(targetTypeId) == TYPCATEGORY_STRING)\nresult = COERCION_PATH_COERCEVIAIO;\nelse if (ccontext >= COERCION_EXPLICIT &&\nTypeCategory(sourceTypeId) == TYPCATEGORY_STRING)\nresult = COERCION_PATH_COERCEVIAIO;\n}\n\nfunctions: JSON_OBJECT,JSON_ARRAY, JSON_ARRAYAGG,JSON_OBJECTAGG,\nJSON_SERIALIZE\nthe returning type can only be string type or json. json type already\nbeing handled in other code.\nso the targetTypeId category will be only TYPCATEGORY_STRING.\n\n\n", "msg_date": "Tue, 2 Jul 2024 14:19:04 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Tue, Jul 2, 2024 at 3:19 PM jian he <[email protected]> wrote:\n> On Mon, Jul 1, 2024 at 6:45 PM Amit Langote <[email protected]> wrote:\n> >\n> > On Sun, Jun 30, 2024 at 3:56 AM Tom Lane <[email protected]> wrote:\n> > > Alvaro Herrera <[email protected]> writes:\n> > > >> + /*\n> > > >> + * For domains, consider the base type's typmod to decide whether to setup\n> > > >> + * an implicit or explicit cast.\n> > > >> + */\n> > > >> + if (get_typtype(returning->typid) == TYPTYPE_DOMAIN)\n> > > >> + (void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\n> > >\n> > > > TBH I'm not super clear on why we decide on explicit or implicit cast\n> > > > based on presence of a typmod. Why isn't it better to always use an\n> > > > implicit one?\n> > >\n> > > Hmm ... there are a bunch of existing places that seem to have similar\n> > > logic, but they are all in new-ish SQL/JSON functionality, and I would\n> > > not be surprised if they are all wrong. parse_coerce.c is quite\n> > > opinionated about what a domain's typtypmod means (see comments in\n> > > coerce_type() for instance); see also the logic in coerce_to_domain:\n> > >\n> > > * If the domain applies a typmod to its base type, build the appropriate\n> > > * coercion step. Mark it implicit for display purposes, because we don't\n> > > * want it shown separately by ruleutils.c; but the isExplicit flag passed\n> > > * to the conversion function depends on the manner in which the domain\n> > > * coercion is invoked, so that the semantics of implicit and explicit\n> > > * coercion differ. 
(Is that really the behavior we want?)\n> > >\n> > > I don't think that this SQL/JSON behavior quite matches that.\n> >\n> > The reason I decided to go for the implicit cast only when there is a\n> > typmod is that the behavior with COERCION_EXPLICIT is only problematic\n> > when there's a typmod because of this code in\n> > build_coercion_expression:\n> >\n> > if (nargs == 3)\n> > {\n> > /* Pass it a boolean isExplicit parameter, too */\n> > cons = makeConst(BOOLOID,\n> > -1,\n> > InvalidOid,\n> > sizeof(bool),\n> > BoolGetDatum(ccontext == COERCION_EXPLICIT),\n> > false,\n> > true);\n> >\n> > args = lappend(args, cons);\n> > }\n> >\n> > Yeah, we could have fixed that by always using COERCION_IMPLICIT for\n> > SQL/JSON but, as Jian said, we don't have a bunch of casts that these\n> > SQL/JSON functions need, which is why I guess we ended up with\n> > COERCION_EXPLICIT here in the first place.\n> >\n> > One option I hadn't tried was using COERCION_ASSIGNMENT instead, which\n> > seems to give coerceJsonFuncExpr() the casts it needs with the\n> > behavior it wants, so how about applying the attached?\n>\n> you patched works.\n> i think it's because of you mentioned build_coercion_expression ` if\n> (nargs == 3)` related code\n> and\n>\n> find_coercion_pathway:\n> if (result == COERCION_PATH_NONE)\n> {\n> if (ccontext >= COERCION_ASSIGNMENT &&\n> TypeCategory(targetTypeId) == TYPCATEGORY_STRING)\n> result = COERCION_PATH_COERCEVIAIO;\n> else if (ccontext >= COERCION_EXPLICIT &&\n> TypeCategory(sourceTypeId) == TYPCATEGORY_STRING)\n> result = COERCION_PATH_COERCEVIAIO;\n> }\n>\n> functions: JSON_OBJECT,JSON_ARRAY, JSON_ARRAYAGG,JSON_OBJECTAGG,\n> JSON_SERIALIZE\n> the returning type can only be string type or json. json type already\n> being handled in other code.\n> so the targetTypeId category will be only TYPCATEGORY_STRING.\n\nYes, thanks for confirming that.\n\nI checked other sites that use COERCION_ASSIGNMENT and I don't see a\nreason why it can't be used in this context.\n\nI'll push the patch tomorrow unless there are objections.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 2 Jul 2024 17:03:48 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Tue, Jul 2, 2024 at 5:03 PM Amit Langote <[email protected]> wrote:\n> On Tue, Jul 2, 2024 at 3:19 PM jian he <[email protected]> wrote:\n> > On Mon, Jul 1, 2024 at 6:45 PM Amit Langote <[email protected]> wrote:\n> > >\n> > > On Sun, Jun 30, 2024 at 3:56 AM Tom Lane <[email protected]> wrote:\n> > > > Alvaro Herrera <[email protected]> writes:\n> > > > >> + /*\n> > > > >> + * For domains, consider the base type's typmod to decide whether to setup\n> > > > >> + * an implicit or explicit cast.\n> > > > >> + */\n> > > > >> + if (get_typtype(returning->typid) == TYPTYPE_DOMAIN)\n> > > > >> + (void) getBaseTypeAndTypmod(returning->typid, &baseTypmod);\n> > > >\n> > > > > TBH I'm not super clear on why we decide on explicit or implicit cast\n> > > > > based on presence of a typmod. Why isn't it better to always use an\n> > > > > implicit one?\n> > > >\n> > > > Hmm ... there are a bunch of existing places that seem to have similar\n> > > > logic, but they are all in new-ish SQL/JSON functionality, and I would\n> > > > not be surprised if they are all wrong. 
parse_coerce.c is quite\n> > > > opinionated about what a domain's typtypmod means (see comments in\n> > > > coerce_type() for instance); see also the logic in coerce_to_domain:\n> > > >\n> > > > * If the domain applies a typmod to its base type, build the appropriate\n> > > > * coercion step. Mark it implicit for display purposes, because we don't\n> > > > * want it shown separately by ruleutils.c; but the isExplicit flag passed\n> > > > * to the conversion function depends on the manner in which the domain\n> > > > * coercion is invoked, so that the semantics of implicit and explicit\n> > > > * coercion differ. (Is that really the behavior we want?)\n> > > >\n> > > > I don't think that this SQL/JSON behavior quite matches that.\n> > >\n> > > The reason I decided to go for the implicit cast only when there is a\n> > > typmod is that the behavior with COERCION_EXPLICIT is only problematic\n> > > when there's a typmod because of this code in\n> > > build_coercion_expression:\n> > >\n> > > if (nargs == 3)\n> > > {\n> > > /* Pass it a boolean isExplicit parameter, too */\n> > > cons = makeConst(BOOLOID,\n> > > -1,\n> > > InvalidOid,\n> > > sizeof(bool),\n> > > BoolGetDatum(ccontext == COERCION_EXPLICIT),\n> > > false,\n> > > true);\n> > >\n> > > args = lappend(args, cons);\n> > > }\n> > >\n> > > Yeah, we could have fixed that by always using COERCION_IMPLICIT for\n> > > SQL/JSON but, as Jian said, we don't have a bunch of casts that these\n> > > SQL/JSON functions need, which is why I guess we ended up with\n> > > COERCION_EXPLICIT here in the first place.\n> > >\n> > > One option I hadn't tried was using COERCION_ASSIGNMENT instead, which\n> > > seems to give coerceJsonFuncExpr() the casts it needs with the\n> > > behavior it wants, so how about applying the attached?\n> >\n> > you patched works.\n> > i think it's because of you mentioned build_coercion_expression ` if\n> > (nargs == 3)` related code\n> > and\n> >\n> > find_coercion_pathway:\n> > if (result == COERCION_PATH_NONE)\n> > {\n> > if (ccontext >= COERCION_ASSIGNMENT &&\n> > TypeCategory(targetTypeId) == TYPCATEGORY_STRING)\n> > result = COERCION_PATH_COERCEVIAIO;\n> > else if (ccontext >= COERCION_EXPLICIT &&\n> > TypeCategory(sourceTypeId) == TYPCATEGORY_STRING)\n> > result = COERCION_PATH_COERCEVIAIO;\n> > }\n> >\n> > functions: JSON_OBJECT,JSON_ARRAY, JSON_ARRAYAGG,JSON_OBJECTAGG,\n> > JSON_SERIALIZE\n> > the returning type can only be string type or json. 
json type already\n> > being handled in other code.\n> > so the targetTypeId category will be only TYPCATEGORY_STRING.\n>\n> Yes, thanks for confirming that.\n>\n> I checked other sites that use COERCION_ASSIGNMENT and I don't see a\n> reason why it can't be used in this context.\n>\n> I'll push the patch tomorrow unless there are objections.\n\nSorry, I dropped the ball on this one.\n\nI've pushed the patch now.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Wed, 17 Jul 2024 17:53:21 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "we still have problem in transformJsonBehavior\n\ncurrently transformJsonBehavior:\nSELECT JSON_VALUE(jsonb '1234', '$' RETURNING bit(3) DEFAULT 010111 ON ERROR);\nERROR: cannot cast behavior expression of type text to bit\nLINE 1: ...VALUE(jsonb '1234', '$' RETURNING bit(3) DEFAULT 010111 ON ...\n\nhere, 010111 will default to int4, so \"cannot cast behavior expression\nof type text to bit\"\nis wrong?\nalso int4/int8 can be explicitly cast to bit(3), in this case, it\nshould return 111.\n\n\nAlso, do we want to deal with bit data type's typmod like we did for\nstring type in transformJsonBehavior?\nlike:\nSELECT JSON_VALUE(jsonb '\"111\"', '$' RETURNING bit(3) default '1111' on error);\nshould return error:\nERROR: bit string length 2 does not match type bit(3)\nor success\n\nThe attached patch makes it return an error, similar to what we did\nfor the fixed length string type.", "msg_date": "Thu, 18 Jul 2024 14:04:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Thu, Jul 18, 2024 at 3:04 PM jian he <[email protected]> wrote:\n> we still have problem in transformJsonBehavior\n>\n> currently transformJsonBehavior:\n> SELECT JSON_VALUE(jsonb '1234', '$' RETURNING bit(3) DEFAULT 010111 ON ERROR);\n> ERROR: cannot cast behavior expression of type text to bit\n> LINE 1: ...VALUE(jsonb '1234', '$' RETURNING bit(3) DEFAULT 010111 ON ...\n>\n> here, 010111 will default to int4, so \"cannot cast behavior expression\n> of type text to bit\"\n> is wrong?\n> also int4/int8 can be explicitly cast to bit(3), in this case, it\n> should return 111.\n\nI think we shouldn't try too hard in the code to \"automatically\" cast\nthe DEFAULT expression, especially if that means having to add special\ncase code for all sorts of source-target-type combinations.\n\nI'm inclined to just give a HINT to the user to cast the DEFAULT\nexpression by hand, because they *can* do that with the syntax that\nexists.\n\nOn the other hand, transformJsonBehavior() should handle other\n\"internal\" expressions for which the cast cannot be specified by hand.\n\n> Also, do we want to deal with bit data type's typmod like we did for\n> string type in transformJsonBehavior?\n> like:\n> SELECT JSON_VALUE(jsonb '\"111\"', '$' RETURNING bit(3) default '1111' on error);\n> should return error:\n> ERROR: bit string length 2 does not match type bit(3)\n> or success\n>\n> The attached patch makes it return an error, similar to what we did\n> for the fixed length string type.\n\nYeah, that makes sense.\n\nI'm planning to push the attached 2 patches. 0001 is to fix\ntransformJsonBehavior() for these cases and 0002 to adjust the\nbehavior of casting the result of JSON_EXISTS() and EXISTS columns to\ninteger type. I've included the tests in your patch in 0001. 
I\nnoticed using cast expression to coerce the boolean constants to\nfixed-length types would produce unexpected errors when the planner's\nconst-simplification calls the cast functions. So in 0001, I've made\nthat case also use runtime coercion using json_populate_type().\n\n--\nThanks, Amit Langote", "msg_date": "Mon, 22 Jul 2024 17:46:18 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Mon, Jul 22, 2024 at 4:46 PM Amit Langote <[email protected]> wrote:\n>\n> On Thu, Jul 18, 2024 at 3:04 PM jian he <[email protected]> wrote:\n> > we still have problem in transformJsonBehavior\n> >\n> > currently transformJsonBehavior:\n> > SELECT JSON_VALUE(jsonb '1234', '$' RETURNING bit(3) DEFAULT 010111 ON ERROR);\n> > ERROR: cannot cast behavior expression of type text to bit\n> > LINE 1: ...VALUE(jsonb '1234', '$' RETURNING bit(3) DEFAULT 010111 ON ...\n> >\n> > here, 010111 will default to int4, so \"cannot cast behavior expression\n> > of type text to bit\"\n> > is wrong?\n> > also int4/int8 can be explicitly cast to bit(3), in this case, it\n> > should return 111.\n>\n> I think we shouldn't try too hard in the code to \"automatically\" cast\n> the DEFAULT expression, especially if that means having to add special\n> case code for all sorts of source-target-type combinations.\n>\n> I'm inclined to just give a HINT to the user to cast the DEFAULT\n> expression by hand, because they *can* do that with the syntax that\n> exists.\n\nselect typname, typinput, pg_get_function_identity_arguments(typinput)\nfrom pg_type pt join pg_proc proc on proc.oid = pt.typinput\nwhere typtype = 'b' and typarray <> 0 and proc.pronargs > 1;\n\nAs you can see from the query result, we only need to deal with bit\nand character type\nin this context.\n\nSELECT JSON_VALUE(jsonb '1234', '$.a' RETURNING bit(3) DEFAULT 10111 ON empty);\nSELECT JSON_VALUE(jsonb '1234', '$.a' RETURNING char(3) DEFAULT 10111\nON empty) ;\n\nthe single quote literal ', no explicit cast, resolve to text type.\nno single quote like 11, no explicit cast, resolve to int type.\nwe actually can cast int to bit, also have pg_cast entry.\nso the above these 2 examples should behave the same, given there is\nno pg_cast entry for int to text.\n\nselect castsource::regtype ,casttarget::regtype ,castfunc,castcontext,\ncastmethod\nfrom pg_cast where 'int'::regtype in (castsource::regtype ,casttarget::regtype);\n\nbut i won't insist on it, since bit/varbit don't use that much.\n\n\n>\n> I'm planning to push the attached 2 patches. 0001 is to fix\n> transformJsonBehavior() for these cases and 0002 to adjust the\n> behavior of casting the result of JSON_EXISTS() and EXISTS columns to\n> integer type. I've included the tests in your patch in 0001. I\n> noticed using cast expression to coerce the boolean constants to\n> fixed-length types would produce unexpected errors when the planner's\n> const-simplification calls the cast functions. 
So in 0001, I've made\n> that case also use runtime coercion using json_populate_type().\n>\n\n+ <note>\n+ <para>\n+ If an <literal>ON ERROR</literal> or <literal>ON EMPTY</literal>\n+ expression can't be coerced to the <literal>RETURNING</literal> type\n+ successfully, an SQL NULL value will be returned.\n+ </para>\n+ </note>\n\nI think this change will have some controversy.\nthe following are counterexamples\n\nSELECT JSON_value(jsonb '\"aaa\"', '$.a' RETURNING bool DEFAULT\n'\"2022-01-01\"' ON empty);\nreturn error, based on your change, should return NULL?\n\n\nSELECT JSON_QUERY(jsonb '\"[3,4]\"', '$.a' RETURNING bigint[] EMPTY\narray ON empty);\nSELECT JSON_QUERY(jsonb '\"[3,4]\"', '$.a' RETURNING bigint[] EMPTY\nobject ON empty);\nNow things get more confusing. empty array, empty object refers to\njsonb '[]' and jsonb '{}',\nboth cannot explicitly be cast to bigint[],\nso both should return NULL based on your new implementation?\n\n\nomit/keep quotes applied when casting '\"[1,2]\"' to int4range.\nSELECT JSON_query(jsonb '\"aaa\"', '$.a' RETURNING int4range omit quotes\nDEFAULT '\"[1,2]\"'::jsonb ON empty);\nSELECT JSON_query(jsonb '\"aaa\"', '$.a' RETURNING int4range keep quotes\nDEFAULT '\"[1,2]\"'::jsonb ON empty);\n\n\nSELECT JSON_value(jsonb '\"aaa\"', '$.a' RETURNING date DEFAULT\n'\"2022-01-01\"'::jsonb ON empty);\nbut jsonb cannot coerce to date., the example \"select\n('\"2022-01-01\"'::jsonb)::date; \" yields an error.\nbut why does this query still return a date?\n\n\n", "msg_date": "Tue, 23 Jul 2024 10:44:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "While reviewing the patch, I found some inconsistency on json_table EXISTS.\n\n--tested based on your patch and master.\nsrc4=# SELECT * FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a jsonb\nEXISTS PATH '$'));\nERROR: cannot cast behavior expression of type boolean to jsonb\nsrc4=# SELECT * FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a jsonb\nEXISTS PATH '$' error on error));\n a\n------\n true\n(1 row)\n\nWhy explicitly \"error on error\" not report error while not explicitly\nmentioning it yields an error?\n\n\"(a jsonb EXISTS PATH '$' error on error)\" returns jsonb 'true'\nimply that no errors happened.\nso \"(a jsonb EXISTS PATH '$')\" should not have any errors.\n\n\nbut boolean cannot cast to jsonb so for JSON_TABLE,\nwe should reject\nCOLUMNS (a jsonb EXISTS PATH '$' error on error ));\nCOLUMNS (a jsonb EXISTS PATH '$' unknown on error ));\nat an earlier stage.\n\nbecause json_populate_type will use literal 'true'/'false' cast to\njsonb, which will not fail.\nbut JsonPathExists returns are *not* quoted true/false.\nso rejecting it earlier is better than handling it at ExecEvalJsonExprPath.\n\n\nattached patch trying to solve the problem, changes applied based on\nyour 0001, 0002.\nafter apply attached patch:\n\n\ncreate domain djsonb as jsonb check(value = 'true');\nSELECT * FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a djsonb EXISTS\nPATH '$' error on error));\nERROR: cannot cast type boolean to djsonb\nSELECT * FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a djsonb EXISTS\nPATH '$' unknown on error));\nERROR: cannot cast type boolean to djsonb\nSELECT * FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a jsonb EXISTS PATH '$'));\nERROR: cannot cast type boolean to jsonb\n\n\n\ni found out a typo 
in\nsrc/test/regress/expected/sqljson_queryfuncs.out,\nsrc/test/regress/sql/sqljson_queryfuncs.sql\n\"fixed-legth\" should be \"fixed-length\"", "msg_date": "Tue, 23 Jul 2024 20:06:33 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Tue, Jul 23, 2024 at 11:45 AM jian he <[email protected]> wrote:\n> On Mon, Jul 22, 2024 at 4:46 PM Amit Langote <[email protected]> wrote:\n> >\n> > On Thu, Jul 18, 2024 at 3:04 PM jian he <[email protected]> wrote:\n> > > we still have problem in transformJsonBehavior\n> > >\n> > > currently transformJsonBehavior:\n> > > SELECT JSON_VALUE(jsonb '1234', '$' RETURNING bit(3) DEFAULT 010111 ON ERROR);\n> > > ERROR: cannot cast behavior expression of type text to bit\n> > > LINE 1: ...VALUE(jsonb '1234', '$' RETURNING bit(3) DEFAULT 010111 ON ...\n> > >\n> > > here, 010111 will default to int4, so \"cannot cast behavior expression\n> > > of type text to bit\"\n> > > is wrong?\n> > > also int4/int8 can be explicitly cast to bit(3), in this case, it\n> > > should return 111.\n> >\n> > I think we shouldn't try too hard in the code to \"automatically\" cast\n> > the DEFAULT expression, especially if that means having to add special\n> > case code for all sorts of source-target-type combinations.\n> >\n> > I'm inclined to just give a HINT to the user to cast the DEFAULT\n> > expression by hand, because they *can* do that with the syntax that\n> > exists.\n>\n> select typname, typinput, pg_get_function_identity_arguments(typinput)\n> from pg_type pt join pg_proc proc on proc.oid = pt.typinput\n> where typtype = 'b' and typarray <> 0 and proc.pronargs > 1;\n>\n> As you can see from the query result, we only need to deal with bit\n> and character type\n> in this context.\n>\n> SELECT JSON_VALUE(jsonb '1234', '$.a' RETURNING bit(3) DEFAULT 10111 ON empty);\n> SELECT JSON_VALUE(jsonb '1234', '$.a' RETURNING char(3) DEFAULT 10111\n> ON empty) ;\n>\n> the single quote literal ', no explicit cast, resolve to text type.\n> no single quote like 11, no explicit cast, resolve to int type.\n> we actually can cast int to bit, also have pg_cast entry.\n> so the above these 2 examples should behave the same, given there is\n> no pg_cast entry for int to text.\n>\n> select castsource::regtype ,casttarget::regtype ,castfunc,castcontext,\n> castmethod\n> from pg_cast where 'int'::regtype in (castsource::regtype ,casttarget::regtype);\n>\n> but i won't insist on it, since bit/varbit don't use that much.\n\nThe cast from int to bit that exists in pg_cast is only good for\nexplicit casts, so would truncate user's value instead of flagging it\nas invalid input, and this whole discussion is about not doing that.\nWith the DEFAULT expression specified or interpreted as a text string,\nwe don't have that problem because we can then use CoerceViaIO as an\nassignment-level cast, whereby the invalid input *is* flagged as it\nshould, like this:\n\nSELECT JSON_VALUE(jsonb '1234', '$' RETURNING bit(3) DEFAULT '11111' ON ERROR);\nERROR: bit string length 5 does not match type bit(3)\n\nSo it seems fair to me to flag it when the user specifies an integer\nin DEFAULT we can't create a cast expression that does not truncate a\nvalue to fit the RETURNING type.\n\n> > I'm planning to push the attached 2 patches. 0001 is to fix\n> > transformJsonBehavior() for these cases and 0002 to adjust the\n> > behavior of casting the result of JSON_EXISTS() and EXISTS columns to\n> > integer type. 
I've included the tests in your patch in 0001. I\n> > noticed using cast expression to coerce the boolean constants to\n> > fixed-length types would produce unexpected errors when the planner's\n> > const-simplification calls the cast functions. So in 0001, I've made\n> > that case also use runtime coercion using json_populate_type().\n> >\n>\n> + <note>\n> + <para>\n> + If an <literal>ON ERROR</literal> or <literal>ON EMPTY</literal>\n> + expression can't be coerced to the <literal>RETURNING</literal> type\n> + successfully, an SQL NULL value will be returned.\n> + </para>\n> + </note>\n>\n> I think this change will have some controversy.\n\nOn second thought, I agree. I've made some changes to *throw* the\nerror when the JsonBehavior values fail being coerced to the RETURNING\ntype. Please check the attached.\n\nIn the attached patch, I've also taken care of the problem mentioned\nin your latest email -- the solution I've chosen is not to produce the\nerror when ERROR ON ERROR is specified but to use runtime coercion\nalso for the jsonb type or any type that is not integer. Also fixed\nthe typos.\n\nThanks for your attention!\n\n--\nThanks, Amit Langote", "msg_date": "Tue, 23 Jul 2024 21:52:39 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Tue, Jul 23, 2024 at 8:52 PM Amit Langote <[email protected]> wrote:\n>\n> In the attached patch, I've also taken care of the problem mentioned\n> in your latest email -- the solution I've chosen is not to produce the\n> error when ERROR ON ERROR is specified but to use runtime coercion\n> also for the jsonb type or any type that is not integer. Also fixed\n> the typos.\n>\n> Thanks for your attention!\n>\n\n\nCOLUMNS (col_name jsonb EXISTS PATH 'pah_expression') inconsistency\nseems resolved.\nI also tested the domain over jsonb, it works.\n\n\ntransformJsonFuncExpr we have:\n case JSON_QUERY_OP:\n if (jsexpr->returning->typid != JSONBOID || jsexpr->omit_quotes)\n jsexpr->use_json_coercion = true;\n\n case JSON_VALUE_OP:\n if (jsexpr->returning->typid != TEXTOID)\n {\n if (get_typtype(jsexpr->returning->typid) == TYPTYPE_DOMAIN &&\n DomainHasConstraints(jsexpr->returning->typid))\n jsexpr->use_json_coercion = true;\n else\n jsexpr->use_io_coercion = true;\n }\n\nJSONBOID won't be a domain. for domain type, json_value, json_query\nwill use jsexpr->use_json_coercion.\njsexpr->use_json_coercion can handle whether the domain has constraints or not.\n\nso i don't know the purpose of following code in ExecInitJsonExpr\n if (get_typtype(jsexpr->returning->typid) == TYPTYPE_DOMAIN &&\n DomainHasConstraints(jsexpr->returning->typid))\n {\n Assert(jsexpr->use_json_coercion);\n scratch->opcode = EEOP_JUMP;\n scratch->d.jump.jumpdone = state->steps_len + 1;\n ExprEvalPushStep(state, scratch);\n }\n\n\n\njson_table exits works fine with int4, not domain over int4. 
The\nfollowing are test suites.\n\ndrop domain if exists dint4, dint4_1,dint4_0;\ncreate domain dint4 as int;\ncreate domain dint4_1 as int check ( value <> 1 );\ncreate domain dint4_0 as int check ( value <> 0 );\nSELECT a, a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4\nEXISTS PATH '$.a' ));\nSELECT a, a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4\nEXISTS PATH '$.a' false ON ERROR));\nSELECT a, a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4\nEXISTS PATH '$.a' ERROR ON ERROR));\nSELECT a, a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4_0\nEXISTS PATH '$.a'));\nSELECT a, a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4_0\nEXISTS PATH '$'));\nSELECT a,a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4_1\nEXISTS PATH '$'));\nSELECT a,a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4_1\nEXISTS PATH '$.a'));\nSELECT a,a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4_1\nEXISTS PATH '$.a' ERROR ON ERROR));\n\n\n", "msg_date": "Wed, 24 Jul 2024 14:24:55 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "drop domain if exists djs;\ncreate domain djs as jsonb check ( value <> '\"11\"' );\nSELECT JSON_QUERY(jsonb '\"aaa\"', '$.a' RETURNING djs keep quotes\nDEFAULT '\"11\"' ON empty);\nSELECT JSON_QUERY(jsonb '\"aaa\"', '$.a' RETURNING djs omit quotes\nDEFAULT '\"11\"' ON empty);\nSELECT JSON_QUERY(jsonb '\"11\"', '$' RETURNING djs omit quotes DEFAULT\n'\"11\"' ON empty);\n\nSELECT JSON_QUERY(jsonb '\"aaa\"', '$.a' RETURNING jsonb keep quotes\nDEFAULT '\"11\"' ON empty);\nSELECT JSON_QUERY(jsonb '\"aaa\"', '$.a' RETURNING jsonb omit quotes\nDEFAULT '\"11\"' ON empty);\nSELECT JSON_QUERY(jsonb '\"aaa\"', '$.a' RETURNING int4range omit quotes\nDEFAULT '\"[1,2]\"'::jsonb ON empty);\nSELECT JSON_QUERY(jsonb '\"aaa\"', '$.a' RETURNING int4range keep quotes\nDEFAULT '\"[1,2]\"'::jsonb ON empty);\nSELECT JSON_value(jsonb '\"aaa\"', '$.a' RETURNING int4range DEFAULT\n'\"[1,2]\"'::jsonb ON empty);\n----------------------------\n\nI found out 2 issues for the above tests.\n1. RETURNING types is jsonb/domain over jsonb, default expression does\nnot respect omit/keep quotes,\nbut other RETURNING types do. Maybe this will be fine.\n\n2. 
domain over jsonb should fail just like domain over other types?\nRETURNING djs keep quotes DEFAULT '\"11\"' ON empty\nshould fail as\nERROR: could not coerce ON EMPTY expression (DEFAULT) to the RETURNING type\nDETAIL: value for domain djs violates check constraint \"djs_check\"\"\n\n\n\n errcode(ERRCODE_CANNOT_COERCE),\n errmsg(\"cannot cast behavior expression of\ntype %s to %s\",\n format_type_be(exprType(expr)),\n format_type_be(returning->typid)),\n errhint(\"You will need to cast the expression.\"),\n parser_errposition(pstate, exprLocation(expr)));\n\nmaybe\nerrhint(\"You will need to explicitly cast the expression to type %s\",\nformat_type_be(returning->typid))\n\n\n", "msg_date": "Wed, 24 Jul 2024 16:47:15 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Wed, Jul 24, 2024 at 3:25 PM jian he <[email protected]> wrote:\n> transformJsonFuncExpr we have:\n> case JSON_QUERY_OP:\n> if (jsexpr->returning->typid != JSONBOID || jsexpr->omit_quotes)\n> jsexpr->use_json_coercion = true;\n>\n> case JSON_VALUE_OP:\n> if (jsexpr->returning->typid != TEXTOID)\n> {\n> if (get_typtype(jsexpr->returning->typid) == TYPTYPE_DOMAIN &&\n> DomainHasConstraints(jsexpr->returning->typid))\n> jsexpr->use_json_coercion = true;\n> else\n> jsexpr->use_io_coercion = true;\n> }\n>\n> JSONBOID won't be a domain. for domain type, json_value, json_query\n> will use jsexpr->use_json_coercion.\n> jsexpr->use_json_coercion can handle whether the domain has constraints or not.\n>\n> so i don't know the purpose of following code in ExecInitJsonExpr\n> if (get_typtype(jsexpr->returning->typid) == TYPTYPE_DOMAIN &&\n> DomainHasConstraints(jsexpr->returning->typid))\n> {\n> Assert(jsexpr->use_json_coercion);\n> scratch->opcode = EEOP_JUMP;\n> scratch->d.jump.jumpdone = state->steps_len + 1;\n> ExprEvalPushStep(state, scratch);\n> }\n\nYeah, it's a useless JUMP. I forget why it's there. I have attached\na patch (0005) to remove it.\n\n> json_table exits works fine with int4, not domain over int4. The\n> following are test suites.\n>\n> drop domain if exists dint4, dint4_1,dint4_0;\n> create domain dint4 as int;\n> create domain dint4_1 as int check ( value <> 1 );\n> create domain dint4_0 as int check ( value <> 0 );\n> SELECT a, a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4\n> EXISTS PATH '$.a' ));\n> SELECT a, a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4\n> EXISTS PATH '$.a' false ON ERROR));\n> SELECT a, a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4\n> EXISTS PATH '$.a' ERROR ON ERROR));\n> SELECT a, a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4_0\n> EXISTS PATH '$.a'));\n> SELECT a, a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4_0\n> EXISTS PATH '$'));\n> SELECT a,a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4_1\n> EXISTS PATH '$'));\n> SELECT a,a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4_1\n> EXISTS PATH '$.a'));\n> SELECT a,a::bool FROM JSON_TABLE(jsonb '\"a\"', '$' COLUMNS (a dint4_1\n> EXISTS PATH '$.a' ERROR ON ERROR));\n\nDomain-over-integer case should be fixed with the attached updated 0002.\n\n> I found out 2 issues for the above tests.\n> 1. RETURNING types is jsonb/domain over jsonb, default expression does\n> not respect omit/keep quotes,\n> but other RETURNING types do. 
Maybe this will be fine.\n\nYeah, I am not sure whether and how we could implement OMIT/KEEP\nQUOTES for the DEFAULT expression. I might try later or simply\ndocument that OMIT/KEEP QUOTE is only applied to the query result but\nnot the DEFAULT expression.\n\n> 2. domain over jsonb should fail just like domain over other types?\n> RETURNING djs keep quotes DEFAULT '\"11\"' ON empty\n> should fail as\n> ERROR: could not coerce ON EMPTY expression (DEFAULT) to the RETURNING type\n> DETAIL: value for domain djs violates check constraint \"djs_check\"\"\n\nI think this should be fixed with the attached patch 0004.\n\n> errcode(ERRCODE_CANNOT_COERCE),\n> errmsg(\"cannot cast behavior expression of\n> type %s to %s\",\n> format_type_be(exprType(expr)),\n> format_type_be(returning->typid)),\n> errhint(\"You will need to cast the expression.\"),\n> parser_errposition(pstate, exprLocation(expr)));\n>\n> maybe\n> errhint(\"You will need to explicitly cast the expression to type %s\",\n> format_type_be(returning->typid))\n\nOK, done.\n\nPlease check.\n\n-- \nThanks, Amit Langote", "msg_date": "Thu, 25 Jul 2024 23:16:45 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Thu, Jul 25, 2024 at 11:16 PM Amit Langote <[email protected]> wrote:\n> On Wed, Jul 24, 2024 at 3:25 PM jian he <[email protected]> wrote:\n> > 2. domain over jsonb should fail just like domain over other types?\n> > RETURNING djs keep quotes DEFAULT '\"11\"' ON empty\n> > should fail as\n> > ERROR: could not coerce ON EMPTY expression (DEFAULT) to the RETURNING type\n> > DETAIL: value for domain djs violates check constraint \"djs_check\"\"\n>\n> I think this should be fixed with the attached patch 0004.\n\nIt is fixed but with the patch 0003, not 0004.\n\nAlso, the test cases in 0004, which is a patch to fix a problem with\nOMIT QUOTES being disregarded when RETURNING domain-over-jsonb, didn't\ntest that problem. So I have updated the test case to use a domain\nover jsonb.\n\n-- \nThanks, Amit Langote", "msg_date": "Fri, 26 Jul 2024 11:12:27 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Fri, Jul 26, 2024 at 11:12 AM Amit Langote <[email protected]> wrote:\n> On Thu, Jul 25, 2024 at 11:16 PM Amit Langote <[email protected]> wrote:\n> > On Wed, Jul 24, 2024 at 3:25 PM jian he <[email protected]> wrote:\n> > > 2. domain over jsonb should fail just like domain over other types?\n> > > RETURNING djs keep quotes DEFAULT '\"11\"' ON empty\n> > > should fail as\n> > > ERROR: could not coerce ON EMPTY expression (DEFAULT) to the RETURNING type\n> > > DETAIL: value for domain djs violates check constraint \"djs_check\"\"\n> >\n> > I think this should be fixed with the attached patch 0004.\n>\n> It is fixed but with the patch 0003, not 0004.\n>\n> Also, the test cases in 0004, which is a patch to fix a problem with\n> OMIT QUOTES being disregarded when RETURNING domain-over-jsonb, didn't\n> test that problem. So I have updated the test case to use a domain\n> over jsonb.\n\nPushed 0003-0005 ahead of 0001-0002. Will try to push them over the\nweekend. 
Rebased for now.\n\n-- \nThanks, Amit Langote", "msg_date": "Fri, 26 Jul 2024 17:53:32 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Fri, Jul 26, 2024 at 4:53 PM Amit Langote <[email protected]> wrote:\n>\n>\n> Pushed 0003-0005 ahead of 0001-0002. Will try to push them over the\n> weekend. Rebased for now.\n\n{\n...\n /*\n * For expression nodes that support soft errors. Should be set to NULL\n * before calling ExecInitExprRec() if the caller wants errors thrown.\n */\n ErrorSaveContext *escontext;\n} ExprState;\n\ni believe by default makeNode will set escontext to NULL.\nSo the comment should be, if you want to catch the soft errors, make\nsure the escontext pointing to an allocated ErrorSaveContext.\nor maybe just say, default is NULL.\n\nOtherwise, the original comment's meaning feels like: we need to\nexplicitly set it to NULL\nfor certain operations, which I believe is false?\n\n\n struct\n {\n Oid targettype;\n int32 targettypmod;\n bool omit_quotes;\n bool exists_coerce;\n bool exists_cast_to_int;\n bool check_domain;\n void *json_coercion_cache;\n ErrorSaveContext *escontext;\n } jsonexpr_coercion;\ndo we need to comment that \"check_domain\" is only used for JSON_EXISTS_OP?\n\n\n\nWhile reviewing it, I found another minor issue.\n\n\njson_behavior_type:\n ERROR_P { $$ = JSON_BEHAVIOR_ERROR; }\n | NULL_P { $$ = JSON_BEHAVIOR_NULL; }\n | TRUE_P { $$ = JSON_BEHAVIOR_TRUE; }\n | FALSE_P { $$ = JSON_BEHAVIOR_FALSE; }\n | UNKNOWN { $$ = JSON_BEHAVIOR_UNKNOWN; }\n | EMPTY_P ARRAY { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n | EMPTY_P OBJECT_P { $$ = JSON_BEHAVIOR_EMPTY_OBJECT; }\n /* non-standard, for Oracle compatibility only */\n | EMPTY_P { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n ;\n\nEMPTY_P behaves the same as EMPTY_P ARRAY\nso for function GetJsonBehaviorConst, the following \"case\nJSON_BEHAVIOR_EMPTY:\" is wrong?\n\n case JSON_BEHAVIOR_NULL:\n case JSON_BEHAVIOR_UNKNOWN:\n case JSON_BEHAVIOR_EMPTY:\n val = (Datum) 0;\n isnull = true;\n typid = INT4OID;\n len = sizeof(int32);\n isbyval = true;\n break;\n\n\nalso src/backend/utils/adt/ruleutils.c\n if (jexpr->on_error->btype != JSON_BEHAVIOR_EMPTY)\n get_json_behavior(jexpr->on_error, context, \"ERROR\");\n\nfor json_value, json_query, i believe we can save some circles in\nExecInitJsonExpr\nif you don't specify on error, on empty\n\ncan you please check the attached, based on your latest attachment.", "msg_date": "Fri, 26 Jul 2024 22:19:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "Hi,\n\nOn Fri, Jul 26, 2024 at 11:19 PM jian he <[email protected]> wrote:\n> On Fri, Jul 26, 2024 at 4:53 PM Amit Langote <[email protected]> wrote:\n> >\n> >\n> > Pushed 0003-0005 ahead of 0001-0002. Will try to push them over the\n> > weekend. Rebased for now.\n\nPushed them now.\n\n> {\n> ...\n> /*\n> * For expression nodes that support soft errors. 
Should be set to NULL\n> * before calling ExecInitExprRec() if the caller wants errors thrown.\n> */\n> ErrorSaveContext *escontext;\n> } ExprState;\n>\n> i believe by default makeNode will set escontext to NULL.\n> So the comment should be, if you want to catch the soft errors, make\n> sure the escontext pointing to an allocated ErrorSaveContext.\n> or maybe just say, default is NULL.\n>\n> Otherwise, the original comment's meaning feels like: we need to\n> explicitly set it to NULL\n> for certain operations, which I believe is false?\n\nOK, I'll look into updating this.\n\n> struct\n> {\n> Oid targettype;\n> int32 targettypmod;\n> bool omit_quotes;\n> bool exists_coerce;\n> bool exists_cast_to_int;\n> bool check_domain;\n> void *json_coercion_cache;\n> ErrorSaveContext *escontext;\n> } jsonexpr_coercion;\n> do we need to comment that \"check_domain\" is only used for JSON_EXISTS_OP?\n\nI've renamed it to exists_check_domain and added a comment that\nexists_* fields are relevant only for JSON_EXISTS_OP.\n\n> json_behavior_type:\n> ERROR_P { $$ = JSON_BEHAVIOR_ERROR; }\n> | NULL_P { $$ = JSON_BEHAVIOR_NULL; }\n> | TRUE_P { $$ = JSON_BEHAVIOR_TRUE; }\n> | FALSE_P { $$ = JSON_BEHAVIOR_FALSE; }\n> | UNKNOWN { $$ = JSON_BEHAVIOR_UNKNOWN; }\n> | EMPTY_P ARRAY { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n> | EMPTY_P OBJECT_P { $$ = JSON_BEHAVIOR_EMPTY_OBJECT; }\n> /* non-standard, for Oracle compatibility only */\n> | EMPTY_P { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n> ;\n>\n> EMPTY_P behaves the same as EMPTY_P ARRAY\n> so for function GetJsonBehaviorConst, the following \"case\n> JSON_BEHAVIOR_EMPTY:\" is wrong?\n>\n> case JSON_BEHAVIOR_NULL:\n> case JSON_BEHAVIOR_UNKNOWN:\n> case JSON_BEHAVIOR_EMPTY:\n> val = (Datum) 0;\n> isnull = true;\n> typid = INT4OID;\n> len = sizeof(int32);\n> isbyval = true;\n> break;\n>\n> also src/backend/utils/adt/ruleutils.c\n> if (jexpr->on_error->btype != JSON_BEHAVIOR_EMPTY)\n> get_json_behavior(jexpr->on_error, context, \"ERROR\");\n\nSomething like the attached makes sense? While this meaningfully\nchanges the deparsing output, there is no behavior change for\nJsonTable top-level path execution. That's because the behavior when\nthere's an error in the execution of the top-level path is to throw it\nor return an empty set, which is handled in jsonpath_exec.c, not\nexecExprInterp.c.\n\n> for json_value, json_query, i believe we can save some circles in\n> ExecInitJsonExpr\n> if you don't specify on error, on empty\n>\n> can you please check the attached, based on your latest attachment.\n\nPerhaps makes sense, though I haven't checked closely. I'll take a\nlook next week.\n\n--\nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 30 Jul 2024 13:59:22 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Tue, Jul 30, 2024 at 12:59 PM Amit Langote <[email protected]> wrote:\n>\n> Hi,\n>\n> On Fri, Jul 26, 2024 at 11:19 PM jian he <[email protected]> wrote:\n> > On Fri, Jul 26, 2024 at 4:53 PM Amit Langote <[email protected]> wrote:\n> > >\n> > >\n> > > Pushed 0003-0005 ahead of 0001-0002. Will try to push them over the\n> > > weekend. Rebased for now.\n>\n> Pushed them now.\n>\n> > {\n> > ...\n> > /*\n> > * For expression nodes that support soft errors. 
Should be set to NULL\n> > * before calling ExecInitExprRec() if the caller wants errors thrown.\n> > */\n> > ErrorSaveContext *escontext;\n> > } ExprState;\n> >\n> > i believe by default makeNode will set escontext to NULL.\n> > So the comment should be, if you want to catch the soft errors, make\n> > sure the escontext pointing to an allocated ErrorSaveContext.\n> > or maybe just say, default is NULL.\n> >\n> > Otherwise, the original comment's meaning feels like: we need to\n> > explicitly set it to NULL\n> > for certain operations, which I believe is false?\n>\n> OK, I'll look into updating this.\n>\n> > struct\n> > {\n> > Oid targettype;\n> > int32 targettypmod;\n> > bool omit_quotes;\n> > bool exists_coerce;\n> > bool exists_cast_to_int;\n> > bool check_domain;\n> > void *json_coercion_cache;\n> > ErrorSaveContext *escontext;\n> > } jsonexpr_coercion;\n> > do we need to comment that \"check_domain\" is only used for JSON_EXISTS_OP?\n>\n> I've renamed it to exists_check_domain and added a comment that\n> exists_* fields are relevant only for JSON_EXISTS_OP.\n>\n> > json_behavior_type:\n> > ERROR_P { $$ = JSON_BEHAVIOR_ERROR; }\n> > | NULL_P { $$ = JSON_BEHAVIOR_NULL; }\n> > | TRUE_P { $$ = JSON_BEHAVIOR_TRUE; }\n> > | FALSE_P { $$ = JSON_BEHAVIOR_FALSE; }\n> > | UNKNOWN { $$ = JSON_BEHAVIOR_UNKNOWN; }\n> > | EMPTY_P ARRAY { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n> > | EMPTY_P OBJECT_P { $$ = JSON_BEHAVIOR_EMPTY_OBJECT; }\n> > /* non-standard, for Oracle compatibility only */\n> > | EMPTY_P { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n> > ;\n> >\n> > EMPTY_P behaves the same as EMPTY_P ARRAY\n> > so for function GetJsonBehaviorConst, the following \"case\n> > JSON_BEHAVIOR_EMPTY:\" is wrong?\n> >\n> > case JSON_BEHAVIOR_NULL:\n> > case JSON_BEHAVIOR_UNKNOWN:\n> > case JSON_BEHAVIOR_EMPTY:\n> > val = (Datum) 0;\n> > isnull = true;\n> > typid = INT4OID;\n> > len = sizeof(int32);\n> > isbyval = true;\n> > break;\n> >\n> > also src/backend/utils/adt/ruleutils.c\n> > if (jexpr->on_error->btype != JSON_BEHAVIOR_EMPTY)\n> > get_json_behavior(jexpr->on_error, context, \"ERROR\");\n>\n> Something like the attached makes sense? While this meaningfully\n> changes the deparsing output, there is no behavior change for\n> JsonTable top-level path execution. That's because the behavior when\n> there's an error in the execution of the top-level path is to throw it\n> or return an empty set, which is handled in jsonpath_exec.c, not\n> execExprInterp.c.\n>\n\nhi amit.\nseems you forgot to attach the patch?\n\n\n", "msg_date": "Thu, 22 Aug 2024 10:02:05 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Thu, Aug 22, 2024 at 11:02 jian he <[email protected]> wrote:\n\n> On Tue, Jul 30, 2024 at 12:59 PM Amit Langote <[email protected]>\n> wrote:\n> >\n> > Hi,\n> >\n> > On Fri, Jul 26, 2024 at 11:19 PM jian he <[email protected]>\n> wrote:\n> > > On Fri, Jul 26, 2024 at 4:53 PM Amit Langote <[email protected]>\n> wrote:\n> > > >\n> > > >\n> > > > Pushed 0003-0005 ahead of 0001-0002. Will try to push them over the\n> > > > weekend. Rebased for now.\n> >\n> > Pushed them now.\n> >\n> > > {\n> > > ...\n> > > /*\n> > > * For expression nodes that support soft errors. 
Should be set\n> to NULL\n> > > * before calling ExecInitExprRec() if the caller wants errors\n> thrown.\n> > > */\n> > > ErrorSaveContext *escontext;\n> > > } ExprState;\n> > >\n> > > i believe by default makeNode will set escontext to NULL.\n> > > So the comment should be, if you want to catch the soft errors, make\n> > > sure the escontext pointing to an allocated ErrorSaveContext.\n> > > or maybe just say, default is NULL.\n> > >\n> > > Otherwise, the original comment's meaning feels like: we need to\n> > > explicitly set it to NULL\n> > > for certain operations, which I believe is false?\n> >\n> > OK, I'll look into updating this.\n> >\n> > > struct\n> > > {\n> > > Oid targettype;\n> > > int32 targettypmod;\n> > > bool omit_quotes;\n> > > bool exists_coerce;\n> > > bool exists_cast_to_int;\n> > > bool check_domain;\n> > > void *json_coercion_cache;\n> > > ErrorSaveContext *escontext;\n> > > } jsonexpr_coercion;\n> > > do we need to comment that \"check_domain\" is only used for\n> JSON_EXISTS_OP?\n> >\n> > I've renamed it to exists_check_domain and added a comment that\n> > exists_* fields are relevant only for JSON_EXISTS_OP.\n> >\n> > > json_behavior_type:\n> > > ERROR_P { $$ = JSON_BEHAVIOR_ERROR; }\n> > > | NULL_P { $$ = JSON_BEHAVIOR_NULL; }\n> > > | TRUE_P { $$ = JSON_BEHAVIOR_TRUE; }\n> > > | FALSE_P { $$ = JSON_BEHAVIOR_FALSE; }\n> > > | UNKNOWN { $$ = JSON_BEHAVIOR_UNKNOWN; }\n> > > | EMPTY_P ARRAY { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n> > > | EMPTY_P OBJECT_P { $$ = JSON_BEHAVIOR_EMPTY_OBJECT; }\n> > > /* non-standard, for Oracle compatibility only */\n> > > | EMPTY_P { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n> > > ;\n> > >\n> > > EMPTY_P behaves the same as EMPTY_P ARRAY\n> > > so for function GetJsonBehaviorConst, the following \"case\n> > > JSON_BEHAVIOR_EMPTY:\" is wrong?\n> > >\n> > > case JSON_BEHAVIOR_NULL:\n> > > case JSON_BEHAVIOR_UNKNOWN:\n> > > case JSON_BEHAVIOR_EMPTY:\n> > > val = (Datum) 0;\n> > > isnull = true;\n> > > typid = INT4OID;\n> > > len = sizeof(int32);\n> > > isbyval = true;\n> > > break;\n> > >\n> > > also src/backend/utils/adt/ruleutils.c\n> > > if (jexpr->on_error->btype != JSON_BEHAVIOR_EMPTY)\n> > > get_json_behavior(jexpr->on_error, context, \"ERROR\");\n> >\n> > Something like the attached makes sense? While this meaningfully\n> > changes the deparsing output, there is no behavior change for\n> > JsonTable top-level path execution. That's because the behavior when\n> > there's an error in the execution of the top-level path is to throw it\n> > or return an empty set, which is handled in jsonpath_exec.c, not\n> > execExprInterp.c.\n> >\n>\n> hi amit.\n> seems you forgot to attach the patch?\n\n\nYeah, I had planned to look at this after my vacation earlier this month,\nbut I context switched into working on another project and lost track of\nthis. I’ll make some time next week to fix whatever remains go be fixed\nhere. Thanks for the reminder.\n\n>\n\nOn Thu, Aug 22, 2024 at 11:02 jian he <[email protected]> wrote:On Tue, Jul 30, 2024 at 12:59 PM Amit Langote <[email protected]> wrote:\n>\n> Hi,\n>\n> On Fri, Jul 26, 2024 at 11:19 PM jian he <[email protected]> wrote:\n> > On Fri, Jul 26, 2024 at 4:53 PM Amit Langote <[email protected]> wrote:\n> > >\n> > >\n> > > Pushed 0003-0005 ahead of 0001-0002.  Will try to push them over the\n> > > weekend.  Rebased for now.\n>\n> Pushed them now.\n>\n> > {\n> > ...\n> >     /*\n> >      * For expression nodes that support soft errors.  
Should be set to NULL\n> >      * before calling ExecInitExprRec() if the caller wants errors thrown.\n> >      */\n> >     ErrorSaveContext *escontext;\n> > } ExprState;\n> >\n> > i believe by default makeNode will set escontext to NULL.\n> > So the comment should be, if you want to catch the soft errors, make\n> > sure the escontext pointing to an allocated ErrorSaveContext.\n> > or maybe just say, default is NULL.\n> >\n> > Otherwise, the original comment's meaning feels like: we need to\n> > explicitly set it to NULL\n> > for certain operations, which I believe is false?\n>\n> OK, I'll look into updating this.\n>\n> >         struct\n> >         {\n> >             Oid            targettype;\n> >             int32        targettypmod;\n> >             bool        omit_quotes;\n> >             bool        exists_coerce;\n> >             bool        exists_cast_to_int;\n> >             bool        check_domain;\n> >             void       *json_coercion_cache;\n> >             ErrorSaveContext *escontext;\n> >         }            jsonexpr_coercion;\n> > do we need to comment that \"check_domain\" is only used for JSON_EXISTS_OP?\n>\n> I've renamed it to exists_check_domain and added a comment that\n> exists_* fields are relevant only for JSON_EXISTS_OP.\n>\n> > json_behavior_type:\n> >             ERROR_P        { $$ = JSON_BEHAVIOR_ERROR; }\n> >             | NULL_P    { $$ = JSON_BEHAVIOR_NULL; }\n> >             | TRUE_P    { $$ = JSON_BEHAVIOR_TRUE; }\n> >             | FALSE_P    { $$ = JSON_BEHAVIOR_FALSE; }\n> >             | UNKNOWN    { $$ = JSON_BEHAVIOR_UNKNOWN; }\n> >             | EMPTY_P ARRAY    { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n> >             | EMPTY_P OBJECT_P    { $$ = JSON_BEHAVIOR_EMPTY_OBJECT; }\n> >             /* non-standard, for Oracle compatibility only */\n> >             | EMPTY_P    { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n> >         ;\n> >\n> > EMPTY_P behaves the same as EMPTY_P ARRAY\n> > so for function GetJsonBehaviorConst, the following \"case\n> > JSON_BEHAVIOR_EMPTY:\" is wrong?\n> >\n> >         case JSON_BEHAVIOR_NULL:\n> >         case JSON_BEHAVIOR_UNKNOWN:\n> >         case JSON_BEHAVIOR_EMPTY:\n> >             val = (Datum) 0;\n> >             isnull = true;\n> >             typid = INT4OID;\n> >             len = sizeof(int32);\n> >             isbyval = true;\n> >             break;\n> >\n> > also src/backend/utils/adt/ruleutils.c\n> >     if (jexpr->on_error->btype != JSON_BEHAVIOR_EMPTY)\n> >         get_json_behavior(jexpr->on_error, context, \"ERROR\");\n>\n> Something like the attached makes sense?  While this meaningfully\n> changes the deparsing output, there is no behavior change for\n> JsonTable top-level path execution.  That's because the behavior when\n> there's an error in the execution of the top-level path is to throw it\n> or return an empty set, which is handled in jsonpath_exec.c, not\n> execExprInterp.c.\n>\n\nhi amit.\nseems you forgot to attach the patch?Yeah, I had planned to look at this after my vacation earlier this month, but I context switched into working on another project and lost track of this. I’ll make some time next week to fix whatever remains go be fixed here. 
Thanks for the reminder.", "msg_date": "Thu, 22 Aug 2024 12:44:33 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Thu, Aug 22, 2024 at 12:44 PM Amit Langote <[email protected]> wrote:\n> On Thu, Aug 22, 2024 at 11:02 jian he <[email protected]> wrote:\n>> On Tue, Jul 30, 2024 at 12:59 PM Amit Langote <[email protected]> wrote:\n>> > On Fri, Jul 26, 2024 at 11:19 PM jian he <[email protected]> wrote:\n>> > > {\n>> > > ...\n>> > > /*\n>> > > * For expression nodes that support soft errors. Should be set to NULL\n>> > > * before calling ExecInitExprRec() if the caller wants errors thrown.\n>> > > */\n>> > > ErrorSaveContext *escontext;\n>> > > } ExprState;\n>> > >\n>> > > i believe by default makeNode will set escontext to NULL.\n>> > > So the comment should be, if you want to catch the soft errors, make\n>> > > sure the escontext pointing to an allocated ErrorSaveContext.\n>> > > or maybe just say, default is NULL.\n>> > >\n>> > > Otherwise, the original comment's meaning feels like: we need to\n>> > > explicitly set it to NULL\n>> > > for certain operations, which I believe is false?\n>> >\n>> > OK, I'll look into updating this.\n\nSee 0001.\n\n>> > > json_behavior_type:\n>> > > ERROR_P { $$ = JSON_BEHAVIOR_ERROR; }\n>> > > | NULL_P { $$ = JSON_BEHAVIOR_NULL; }\n>> > > | TRUE_P { $$ = JSON_BEHAVIOR_TRUE; }\n>> > > | FALSE_P { $$ = JSON_BEHAVIOR_FALSE; }\n>> > > | UNKNOWN { $$ = JSON_BEHAVIOR_UNKNOWN; }\n>> > > | EMPTY_P ARRAY { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n>> > > | EMPTY_P OBJECT_P { $$ = JSON_BEHAVIOR_EMPTY_OBJECT; }\n>> > > /* non-standard, for Oracle compatibility only */\n>> > > | EMPTY_P { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n>> > > ;\n>> > >\n>> > > EMPTY_P behaves the same as EMPTY_P ARRAY\n>> > > so for function GetJsonBehaviorConst, the following \"case\n>> > > JSON_BEHAVIOR_EMPTY:\" is wrong?\n>> > >\n>> > > case JSON_BEHAVIOR_NULL:\n>> > > case JSON_BEHAVIOR_UNKNOWN:\n>> > > case JSON_BEHAVIOR_EMPTY:\n>> > > val = (Datum) 0;\n>> > > isnull = true;\n>> > > typid = INT4OID;\n>> > > len = sizeof(int32);\n>> > > isbyval = true;\n>> > > break;\n>> > >\n>> > > also src/backend/utils/adt/ruleutils.c\n>> > > if (jexpr->on_error->btype != JSON_BEHAVIOR_EMPTY)\n>> > > get_json_behavior(jexpr->on_error, context, \"ERROR\");\n>> >\n>> > Something like the attached makes sense? While this meaningfully\n>> > changes the deparsing output, there is no behavior change for\n>> > JsonTable top-level path execution. 
That's because the behavior when\n>> > there's an error in the execution of the top-level path is to throw it\n>> > or return an empty set, which is handled in jsonpath_exec.c, not\n>> > execExprInterp.c.\n\nSee 0002.\n\nI'm also attaching 0003 to fix a minor annoyance that JSON_TABLE()\ncolumns' default ON ERROR, ON EMPTY behaviors are unnecessarily\nemitted in the deparsed output when the top-level ON ERROR behavior is\nERROR.\n\nWill push these on Monday.\n\nI haven't had a chance to take a closer look at your patch to optimize\nthe code in ExecInitJsonExpr() yet.\n\n-- \nThanks, Amit Langote", "msg_date": "Fri, 30 Aug 2024 16:32:51 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Fri, Aug 30, 2024 at 4:32 PM Amit Langote <[email protected]> wrote:\n> On Thu, Aug 22, 2024 at 12:44 PM Amit Langote <[email protected]> wrote:\n> > On Thu, Aug 22, 2024 at 11:02 jian he <[email protected]> wrote:\n> >> On Tue, Jul 30, 2024 at 12:59 PM Amit Langote <[email protected]> wrote:\n> >> > On Fri, Jul 26, 2024 at 11:19 PM jian he <[email protected]> wrote:\n> >> > > {\n> >> > > ...\n> >> > > /*\n> >> > > * For expression nodes that support soft errors. Should be set to NULL\n> >> > > * before calling ExecInitExprRec() if the caller wants errors thrown.\n> >> > > */\n> >> > > ErrorSaveContext *escontext;\n> >> > > } ExprState;\n> >> > >\n> >> > > i believe by default makeNode will set escontext to NULL.\n> >> > > So the comment should be, if you want to catch the soft errors, make\n> >> > > sure the escontext pointing to an allocated ErrorSaveContext.\n> >> > > or maybe just say, default is NULL.\n> >> > >\n> >> > > Otherwise, the original comment's meaning feels like: we need to\n> >> > > explicitly set it to NULL\n> >> > > for certain operations, which I believe is false?\n> >> >\n> >> > OK, I'll look into updating this.\n>\n> See 0001.\n>\n> >> > > json_behavior_type:\n> >> > > ERROR_P { $$ = JSON_BEHAVIOR_ERROR; }\n> >> > > | NULL_P { $$ = JSON_BEHAVIOR_NULL; }\n> >> > > | TRUE_P { $$ = JSON_BEHAVIOR_TRUE; }\n> >> > > | FALSE_P { $$ = JSON_BEHAVIOR_FALSE; }\n> >> > > | UNKNOWN { $$ = JSON_BEHAVIOR_UNKNOWN; }\n> >> > > | EMPTY_P ARRAY { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n> >> > > | EMPTY_P OBJECT_P { $$ = JSON_BEHAVIOR_EMPTY_OBJECT; }\n> >> > > /* non-standard, for Oracle compatibility only */\n> >> > > | EMPTY_P { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n> >> > > ;\n> >> > >\n> >> > > EMPTY_P behaves the same as EMPTY_P ARRAY\n> >> > > so for function GetJsonBehaviorConst, the following \"case\n> >> > > JSON_BEHAVIOR_EMPTY:\" is wrong?\n> >> > >\n> >> > > case JSON_BEHAVIOR_NULL:\n> >> > > case JSON_BEHAVIOR_UNKNOWN:\n> >> > > case JSON_BEHAVIOR_EMPTY:\n> >> > > val = (Datum) 0;\n> >> > > isnull = true;\n> >> > > typid = INT4OID;\n> >> > > len = sizeof(int32);\n> >> > > isbyval = true;\n> >> > > break;\n> >> > >\n> >> > > also src/backend/utils/adt/ruleutils.c\n> >> > > if (jexpr->on_error->btype != JSON_BEHAVIOR_EMPTY)\n> >> > > get_json_behavior(jexpr->on_error, context, \"ERROR\");\n> >> >\n> >> > Something like the attached makes sense? While this meaningfully\n> >> > changes the deparsing output, there is no behavior change for\n> >> > JsonTable top-level path execution. 
That's because the behavior when\n> >> > there's an error in the execution of the top-level path is to throw it\n> >> > or return an empty set, which is handled in jsonpath_exec.c, not\n> >> > execExprInterp.c.\n>\n> See 0002.\n>\n> I'm also attaching 0003 to fix a minor annoyance that JSON_TABLE()\n> columns' default ON ERROR, ON EMPTY behaviors are unnecessarily\n> emitted in the deparsed output when the top-level ON ERROR behavior is\n> ERROR.\n>\n> Will push these on Monday.\n\nDidn't as there's a release freeze in effect for the v17 branch. Will\npush to both master and v17 once the freeze is over.\n\n> I haven't had a chance to take a closer look at your patch to optimize\n> the code in ExecInitJsonExpr() yet.\n\nI've simplified your patch a bit and attached it as 0004.\n\n-- \nThanks, Amit Langote", "msg_date": "Mon, 2 Sep 2024 17:17:52 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Mon, Sep 2, 2024 at 4:18 PM Amit Langote <[email protected]> wrote:\n>\n> > See 0001.\n> >\n\n> >\n> > See 0002.\n> >\n> > I'm also attaching 0003 to fix a minor annoyance that JSON_TABLE()\n> > columns' default ON ERROR, ON EMPTY behaviors are unnecessarily\n> > emitted in the deparsed output when the top-level ON ERROR behavior is\n> > ERROR.\n> >\n> > Will push these on Monday.\n\n\n\nv2-0001 looks good to me.\n\n+-- Test JSON_TABLE() column deparsing -- don't emit default ON ERROR / EMPTY\n+-- behavior\n+EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text PATH '$'));\n+EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\nPATH '$') ERROR ON ERROR);\n\nAre these tests duplicated? appears both in v2-0002 and v2-0003.\n\n\n0002 output is:\n+EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\nPATH '$') ERROR ON ERROR);\n+\nQUERY PLAN\n+------------------------------------------------------------------------------------------------------------------------------------------------\n+ Table Function Scan on \"json_table\" (cost=0.01..1.00 rows=100 width=32)\n+ Output: a\n+ Table Function Call: JSON_TABLE('\"a\"'::jsonb, '$' AS\njson_table_path_0 COLUMNS (a text PATH '$' NULL ON EMPTY NULL ON\nERROR) ERROR ON ERROR)\n+(3 rows)\n\n0003 output is:\nEXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\nPATH '$') ERROR ON ERROR);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Table Function Scan on \"json_table\" (cost=0.01..1.00 rows=100 width=32)\n Output: a\n Table Function Call: JSON_TABLE('\"a\"'::jsonb, '$' AS\njson_table_path_0 COLUMNS (a text PATH '$') ERROR ON ERROR)\n(3 rows)\n\ntwo patches with different output,\noverall we should merge 0002 and 0003?\n\n\n\n if (jsexpr->on_error &&\n jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n\nwe can be simplified as\n if ( jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n\nsince if jsexpr->on_error is NULL, then segfault will appear at the beginning of\nExecInitJsonExpr\n\n\n+ *\n+ * Only add the extra steps for a NULL-valued expression when RETURNING a\n+ * domain type to check the constraints, if any.\n */\n+ jsestate->jump_error = state->steps_len;\n if (jsexpr->on_error &&\n- jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR)\n+ 
jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n+ (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n\n+ *\n+ * Only add the extra steps for a NULL-valued expression when RETURNING a\n+ * domain type to check the constraints, if any.\n */\n+ jsestate->jump_empty = state->steps_len;\n if (jsexpr->on_empty != NULL &&\n- jsexpr->on_empty->btype != JSON_BEHAVIOR_ERROR)\n+ jsexpr->on_empty->btype != JSON_BEHAVIOR_ERROR &&\n+ (jsexpr->on_empty->btype != JSON_BEHAVIOR_NULL || returning_domain))\n\nI am a little bit confused with the comments.\nnot sure the \"NULL-valued expression\" refers to.\n\ni think it is:\nimplicitly default for ON EMPTY | ERROR clause is NULL (JSON_BEHAVIOR_NULL)\nfor that situation, we can skip the json coercion process,\nbut this only applies when the returning type of JsonExpr is not domain,\n\n\n", "msg_date": "Tue, 3 Sep 2024 17:05:02 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "Hi,\n\nOn Tue, Sep 3, 2024 at 6:05 PM jian he <[email protected]> wrote:\n> On Mon, Sep 2, 2024 at 4:18 PM Amit Langote <[email protected]> wrote:\n> v2-0001 looks good to me.\n> +-- Test JSON_TABLE() column deparsing -- don't emit default ON ERROR / EMPTY\n> +-- behavior\n> +EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text PATH '$'));\n> +EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\n> PATH '$') ERROR ON ERROR);\n>\n> Are these tests duplicated? appears both in v2-0002 and v2-0003.\n>\n> 0002 output is:\n> +EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\n> PATH '$') ERROR ON ERROR);\n> +\n> QUERY PLAN\n> +------------------------------------------------------------------------------------------------------------------------------------------------\n> + Table Function Scan on \"json_table\" (cost=0.01..1.00 rows=100 width=32)\n> + Output: a\n> + Table Function Call: JSON_TABLE('\"a\"'::jsonb, '$' AS\n> json_table_path_0 COLUMNS (a text PATH '$' NULL ON EMPTY NULL ON\n> ERROR) ERROR ON ERROR)\n> +(3 rows)\n>\n> 0003 output is:\n> EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\n> PATH '$') ERROR ON ERROR);\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------\n> Table Function Scan on \"json_table\" (cost=0.01..1.00 rows=100 width=32)\n> Output: a\n> Table Function Call: JSON_TABLE('\"a\"'::jsonb, '$' AS\n> json_table_path_0 COLUMNS (a text PATH '$') ERROR ON ERROR)\n> (3 rows)\n>\n> two patches with different output,\n> overall we should merge 0002 and 0003?\n\nLooks like I ordered the patches wrong. 
I'd like to commit the two separately.\n\n> if (jsexpr->on_error &&\n> jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n> (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n>\n> we can be simplified as\n> if ( jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n> (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n>\n> since if jsexpr->on_error is NULL, then segfault will appear at the beginning of\n> ExecInitJsonExpr\n\nOk, done.\n\n> + *\n> + * Only add the extra steps for a NULL-valued expression when RETURNING a\n> + * domain type to check the constraints, if any.\n> */\n> + jsestate->jump_error = state->steps_len;\n> if (jsexpr->on_error &&\n> - jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR)\n> + jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n> + (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n>\n> + *\n> + * Only add the extra steps for a NULL-valued expression when RETURNING a\n> + * domain type to check the constraints, if any.\n> */\n> + jsestate->jump_empty = state->steps_len;\n> if (jsexpr->on_empty != NULL &&\n> - jsexpr->on_empty->btype != JSON_BEHAVIOR_ERROR)\n> + jsexpr->on_empty->btype != JSON_BEHAVIOR_ERROR &&\n> + (jsexpr->on_empty->btype != JSON_BEHAVIOR_NULL || returning_domain))\n>\n> I am a little bit confused with the comments.\n> not sure the \"NULL-valued expression\" refers to.\n\nIt refers to on_error/on_empty->expr, which is a Const node encoding\nthe NULL (constisnull is true) for that behavior.\n\nThat's actually the case also for behaviors UNKNOWN and EMPTY, so the\ncondition should actually be checking whether the expr is a\nNULL-valued expression.\n\nI've updated the patch.\n\n> i think it is:\n> implicitly default for ON EMPTY | ERROR clause is NULL (JSON_BEHAVIOR_NULL)\n> for that situation, we can skip the json coercion process,\n> but this only applies when the returning type of JsonExpr is not domain,\n\nI've reworded the comment to mention that this speeds up the default\nON ERROR / EMPTY handling for JSON_QUERY() and JSON_VALUE().\n\nPlan to push these tomorrow.\n\n--\nThanks, Amit Langote", "msg_date": "Thu, 5 Sep 2024 21:58:01 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Thu, Sep 5, 2024 at 9:58 PM Amit Langote <[email protected]> wrote:\n> On Tue, Sep 3, 2024 at 6:05 PM jian he <[email protected]> wrote:\n> > On Mon, Sep 2, 2024 at 4:18 PM Amit Langote <[email protected]> wrote:\n> > v2-0001 looks good to me.\n> > +-- Test JSON_TABLE() column deparsing -- don't emit default ON ERROR / EMPTY\n> > +-- behavior\n> > +EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text PATH '$'));\n> > +EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\n> > PATH '$') ERROR ON ERROR);\n> >\n> > Are these tests duplicated? 
appears both in v2-0002 and v2-0003.\n> >\n> > 0002 output is:\n> > +EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\n> > PATH '$') ERROR ON ERROR);\n> > +\n> > QUERY PLAN\n> > +------------------------------------------------------------------------------------------------------------------------------------------------\n> > + Table Function Scan on \"json_table\" (cost=0.01..1.00 rows=100 width=32)\n> > + Output: a\n> > + Table Function Call: JSON_TABLE('\"a\"'::jsonb, '$' AS\n> > json_table_path_0 COLUMNS (a text PATH '$' NULL ON EMPTY NULL ON\n> > ERROR) ERROR ON ERROR)\n> > +(3 rows)\n> >\n> > 0003 output is:\n> > EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\n> > PATH '$') ERROR ON ERROR);\n> > QUERY PLAN\n> > --------------------------------------------------------------------------------------------------------------------\n> > Table Function Scan on \"json_table\" (cost=0.01..1.00 rows=100 width=32)\n> > Output: a\n> > Table Function Call: JSON_TABLE('\"a\"'::jsonb, '$' AS\n> > json_table_path_0 COLUMNS (a text PATH '$') ERROR ON ERROR)\n> > (3 rows)\n> >\n> > two patches with different output,\n> > overall we should merge 0002 and 0003?\n>\n> Looks like I ordered the patches wrong. I'd like to commit the two separately.\n>\n> > if (jsexpr->on_error &&\n> > jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n> > (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n> >\n> > we can be simplified as\n> > if ( jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n> > (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n> >\n> > since if jsexpr->on_error is NULL, then segfault will appear at the beginning of\n> > ExecInitJsonExpr\n>\n> Ok, done.\n>\n> > + *\n> > + * Only add the extra steps for a NULL-valued expression when RETURNING a\n> > + * domain type to check the constraints, if any.\n> > */\n> > + jsestate->jump_error = state->steps_len;\n> > if (jsexpr->on_error &&\n> > - jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR)\n> > + jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n> > + (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n> >\n> > + *\n> > + * Only add the extra steps for a NULL-valued expression when RETURNING a\n> > + * domain type to check the constraints, if any.\n> > */\n> > + jsestate->jump_empty = state->steps_len;\n> > if (jsexpr->on_empty != NULL &&\n> > - jsexpr->on_empty->btype != JSON_BEHAVIOR_ERROR)\n> > + jsexpr->on_empty->btype != JSON_BEHAVIOR_ERROR &&\n> > + (jsexpr->on_empty->btype != JSON_BEHAVIOR_NULL || returning_domain))\n> >\n> > I am a little bit confused with the comments.\n> > not sure the \"NULL-valued expression\" refers to.\n>\n> It refers to on_error/on_empty->expr, which is a Const node encoding\n> the NULL (constisnull is true) for that behavior.\n>\n> That's actually the case also for behaviors UNKNOWN and EMPTY, so the\n> condition should actually be checking whether the expr is a\n> NULL-valued expression.\n>\n> I've updated the patch.\n>\n> > i think it is:\n> > implicitly default for ON EMPTY | ERROR clause is NULL (JSON_BEHAVIOR_NULL)\n> > for that situation, we can skip the json coercion process,\n> > but this only applies when the returning type of JsonExpr is not domain,\n>\n> I've reworded the comment to mention that this speeds up the default\n> ON ERROR / EMPTY handling for JSON_QUERY() and JSON_VALUE().\n>\n> Plan to push these tomorrow.\n\nPushed.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 6 Sep 2024 12:07:49 +0900", 
"msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Fri, Sep 6, 2024 at 12:07 PM Amit Langote <[email protected]> wrote:\n> On Thu, Sep 5, 2024 at 9:58 PM Amit Langote <[email protected]> wrote:\n> > On Tue, Sep 3, 2024 at 6:05 PM jian he <[email protected]> wrote:\n> > > On Mon, Sep 2, 2024 at 4:18 PM Amit Langote <[email protected]> wrote:\n> > > v2-0001 looks good to me.\n> > > +-- Test JSON_TABLE() column deparsing -- don't emit default ON ERROR / EMPTY\n> > > +-- behavior\n> > > +EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text PATH '$'));\n> > > +EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\n> > > PATH '$') ERROR ON ERROR);\n> > >\n> > > Are these tests duplicated? appears both in v2-0002 and v2-0003.\n> > >\n> > > 0002 output is:\n> > > +EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\n> > > PATH '$') ERROR ON ERROR);\n> > > +\n> > > QUERY PLAN\n> > > +------------------------------------------------------------------------------------------------------------------------------------------------\n> > > + Table Function Scan on \"json_table\" (cost=0.01..1.00 rows=100 width=32)\n> > > + Output: a\n> > > + Table Function Call: JSON_TABLE('\"a\"'::jsonb, '$' AS\n> > > json_table_path_0 COLUMNS (a text PATH '$' NULL ON EMPTY NULL ON\n> > > ERROR) ERROR ON ERROR)\n> > > +(3 rows)\n> > >\n> > > 0003 output is:\n> > > EXPLAIN VERBOSE SELECT * from JSON_TABLE('\"a\"', '$' COLUMNS (a text\n> > > PATH '$') ERROR ON ERROR);\n> > > QUERY PLAN\n> > > --------------------------------------------------------------------------------------------------------------------\n> > > Table Function Scan on \"json_table\" (cost=0.01..1.00 rows=100 width=32)\n> > > Output: a\n> > > Table Function Call: JSON_TABLE('\"a\"'::jsonb, '$' AS\n> > > json_table_path_0 COLUMNS (a text PATH '$') ERROR ON ERROR)\n> > > (3 rows)\n> > >\n> > > two patches with different output,\n> > > overall we should merge 0002 and 0003?\n> >\n> > Looks like I ordered the patches wrong. 
I'd like to commit the two separately.\n> >\n> > > if (jsexpr->on_error &&\n> > > jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n> > > (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n> > >\n> > > we can be simplified as\n> > > if ( jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n> > > (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n> > >\n> > > since if jsexpr->on_error is NULL, then segfault will appear at the beginning of\n> > > ExecInitJsonExpr\n> >\n> > Ok, done.\n> >\n> > > + *\n> > > + * Only add the extra steps for a NULL-valued expression when RETURNING a\n> > > + * domain type to check the constraints, if any.\n> > > */\n> > > + jsestate->jump_error = state->steps_len;\n> > > if (jsexpr->on_error &&\n> > > - jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR)\n> > > + jsexpr->on_error->btype != JSON_BEHAVIOR_ERROR &&\n> > > + (jsexpr->on_error->btype != JSON_BEHAVIOR_NULL || returning_domain))\n> > >\n> > > + *\n> > > + * Only add the extra steps for a NULL-valued expression when RETURNING a\n> > > + * domain type to check the constraints, if any.\n> > > */\n> > > + jsestate->jump_empty = state->steps_len;\n> > > if (jsexpr->on_empty != NULL &&\n> > > - jsexpr->on_empty->btype != JSON_BEHAVIOR_ERROR)\n> > > + jsexpr->on_empty->btype != JSON_BEHAVIOR_ERROR &&\n> > > + (jsexpr->on_empty->btype != JSON_BEHAVIOR_NULL || returning_domain))\n> > >\n> > > I am a little bit confused with the comments.\n> > > not sure the \"NULL-valued expression\" refers to.\n> >\n> > It refers to on_error/on_empty->expr, which is a Const node encoding\n> > the NULL (constisnull is true) for that behavior.\n> >\n> > That's actually the case also for behaviors UNKNOWN and EMPTY, so the\n> > condition should actually be checking whether the expr is a\n> > NULL-valued expression.\n> >\n> > I've updated the patch.\n> >\n> > > i think it is:\n> > > implicitly default for ON EMPTY | ERROR clause is NULL (JSON_BEHAVIOR_NULL)\n> > > for that situation, we can skip the json coercion process,\n> > > but this only applies when the returning type of JsonExpr is not domain,\n> >\n> > I've reworded the comment to mention that this speeds up the default\n> > ON ERROR / EMPTY handling for JSON_QUERY() and JSON_VALUE().\n> >\n> > Plan to push these tomorrow.\n>\n> Pushed.\n\nReverted 0002-0004 from both master and REL_17_STABLE due to BF failures.\n\n0002-0003 are easily fixed by changing the newly added tests to not\nuse EXPLAIN VERBOSE to test deparsing related changes, so will re-push\nthose shortly.\n\n0004 perhaps doesn't play nicely with LLVM compilation but I don't yet\nunderstand why.\n\n\n--\nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 6 Sep 2024 13:34:15 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" }, { "msg_contents": "On Fri, Sep 6, 2024 at 1:34 PM Amit Langote <[email protected]> wrote:\n> On Fri, Sep 6, 2024 at 12:07 PM Amit Langote <[email protected]> wrote:\n> > On Thu, Sep 5, 2024 at 9:58 PM Amit Langote <[email protected]> wr\n> > Pushed.\n>\n> Reverted 0002-0004 from both master and REL_17_STABLE due to BF failures.\n>\n> 0002-0003 are easily fixed by changing the newly added tests to not\n> use EXPLAIN VERBOSE to test deparsing related changes, so will re-push\n> those shortly.\n\nDone.\n\n> 0004 perhaps doesn't play nicely with LLVM compilation but I don't yet\n> understand why.\n\nAttached is an updated patch that takes care of the issue. 
The bug\nwas that llvm_compile_expr() didn't like that jump_error, jump_empty,\nand jump_end could all point to the same step. In the attached,\njump_empty / jump_error are left to be -1 if ON ERROR, ON EMPTY steps\nare not added, instead of making them also point to the step address\nthat jump_end points to. ExecEvalJsonExprPath() are also updated to\ncheck if jump_error or jump_empty is -1 and return jump_end if so.\n\n-- \nThanks, Amit Langote", "msg_date": "Fri, 6 Sep 2024 17:01:40 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add more SQL/JSON constructor functions" } ]
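A minimal, self-contained C sketch of the sentinel scheme described in the last message above (the struct and function names are invented for illustration and are not taken from the PostgreSQL source): jump_error and jump_empty stay at -1 when no dedicated ON ERROR / ON EMPTY steps were emitted, and the evaluator resolves -1 to jump_end at run time, so no two jump fields ever alias the same step address.

#include <stdio.h>

/* Invented stand-in for the per-JsonExpr jump bookkeeping discussed above. */
typedef struct JsonExprJumps
{
    int jump_error;             /* -1: no dedicated ON ERROR steps were added */
    int jump_empty;             /* -1: no dedicated ON EMPTY steps were added */
    int jump_end;               /* first step after the JsonExpr's own steps */
} JsonExprJumps;

/* Resolve a possibly-unset jump target to the end of the expression. */
static int
resolve_jump(int target, int jump_end)
{
    return (target >= 0) ? target : jump_end;
}

int
main(void)
{
    JsonExprJumps j = {-1, 7, 12};      /* ON ERROR omitted, ON EMPTY at step 7 */

    printf("ON ERROR jumps to step %d\n", resolve_jump(j.jump_error, j.jump_end));
    printf("ON EMPTY jumps to step %d\n", resolve_jump(j.jump_empty, j.jump_end));
    return 0;
}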
[ { "msg_contents": "Hi.\n\nThe function *plpgsql_inline_handler* can use uninitialized\nvariable retval, if PG_TRY fails.\nFix like function*plpgsql_call_handler* wich declare retval as\nvolatile and initialize to (Datum 0).\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 27 May 2024 11:31:24 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Fix use of possible uninitialized variable retval\n (src/pl/plpgsql/src/pl_handler.c)" }, { "msg_contents": "At Mon, 27 May 2024 11:31:24 -0300, Ranier Vilela <[email protected]> wrote in \n> Hi.\n> \n> The function *plpgsql_inline_handler* can use uninitialized\n> variable retval, if PG_TRY fails.\n> Fix like function*plpgsql_call_handler* wich declare retval as\n> volatile and initialize to (Datum 0).\n\nIf PG_TRY fails, retval is not actually accessed, so no real issue\nexists. Commit 7292fd8f1c changed plpgsql_call_handler() to the\ncurrent form, but as stated in its commit message, it did not fix a\nreal issue and was solely to silence compiler.\n\nI believe we do not need to modify plpgsql_inline_handler() unless\ncompiler actually issues a false warning for it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 05 Jun 2024 13:12:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix use of possible uninitialized variable retval\n (src/pl/plpgsql/src/pl_handler.c)" }, { "msg_contents": "On Wed, Jun 05, 2024 at 01:12:41PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 27 May 2024 11:31:24 -0300, Ranier Vilela <[email protected]> wrote in \n>> The function *plpgsql_inline_handler* can use uninitialized\n>> variable retval, if PG_TRY fails.\n>> Fix like function*plpgsql_call_handler* wich declare retval as\n>> volatile and initialize to (Datum 0).\n\nYou forgot to read elog.h, explaining under which circumstances\nvariables related to TRY/CATCH block should be marked as volatile.\nThere is a big \"Note:\" paragraph.\n\nIt is not the first time that this is mentioned on this list: but\nsending a report without looking at the reason why a change is\njustified makes everybody waste time. That's not productive.\n\n> If PG_TRY fails, retval is not actually accessed, so no real issue\n> exists. Commit 7292fd8f1c changed plpgsql_call_handler() to the\n> current form, but as stated in its commit message, it did not fix a\n> real issue and was solely to silence compiler.\n\nThis complain was from lapwing, that uses a version of gcc which\nproduces a lot of noise with incorrect issues. It is one of the only\n32b buildfarm members, so it still has a lot of value.\n\n> I believe we do not need to modify plpgsql_inline_handler() unless\n> compiler actually issues a false warning for it.\n\nIf we were to do something, that would be to remove the volatile from\nplpgsql_call_handler() at the end once we don't have in the buildfarm\ncompilers that complain about it, because there is no reason to use a\nvolatile in this case. :)\n--\nMichael", "msg_date": "Wed, 5 Jun 2024 14:04:39 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix use of possible uninitialized variable retval\n (src/pl/plpgsql/src/pl_handler.c)" }, { "msg_contents": "Em qua., 5 de jun. 
de 2024 às 01:12, Kyotaro Horiguchi <\[email protected]> escreveu:\n\n> At Mon, 27 May 2024 11:31:24 -0300, Ranier Vilela <[email protected]>\n> wrote in\n> > Hi.\n> >\n> > The function *plpgsql_inline_handler* can use uninitialized\n> > variable retval, if PG_TRY fails.\n> > Fix like function*plpgsql_call_handler* wich declare retval as\n> > volatile and initialize to (Datum 0).\n>\n> If PG_TRY fails, retval is not actually accessed, so no real issue\n> exists.\n\nYou say it for this call\nPG_RE_THROW();\n\n\n> Commit 7292fd8f1c changed plpgsql_call_handler() to the\n> current form, but as stated in its commit message, it did not fix a\n> real issue and was solely to silence compiler.\n>\n\n> I believe we do not need to modify plpgsql_inline_handler() unless\n> compiler actually issues a false warning for it.\n>\nYeah, there is a warning, but not from the compiler.\n\nbest regards,\nRanier Vilela", "msg_date": "Wed, 5 Jun 2024 09:28:05 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix use of possible uninitialized variable retval\n (src/pl/plpgsql/src/pl_handler.c)" }, { "msg_contents": "Em qua., 5 de jun. de 2024 às 02:04, Michael Paquier <[email protected]>\nescreveu:\n\n> On Wed, Jun 05, 2024 at 01:12:41PM +0900, Kyotaro Horiguchi wrote:\n> > At Mon, 27 May 2024 11:31:24 -0300, Ranier Vilela <[email protected]>\n> wrote in\n> >> The function *plpgsql_inline_handler* can use uninitialized\n> >> variable retval, if PG_TRY fails.\n> >> Fix like function*plpgsql_call_handler* wich declare retval as\n> >> volatile and initialize to (Datum 0).\n>\n> You forgot to read elog.h, explaining under which circumstances\n> variables related to TRY/CATCH block should be marked as volatile.\n> There is a big \"Note:\" paragraph.\n>\n> It is not the first time that this is mentioned on this list: but\n> sending a report without looking at the reason why a change is\n> justified makes everybody waste time. That's not productive.\n>\nOf course, this is very bad when it happens.\n\n\n>\n> > If PG_TRY fails, retval is not actually accessed, so no real issue\n> > exists. Commit 7292fd8f1c changed plpgsql_call_handler() to the\n> > current form, but as stated in its commit message, it did not fix a\n> > real issue and was solely to silence compiler.\n>\n> This complain was from lapwing, that uses a version of gcc which\n> produces a lot of noise with incorrect issues.
It is one of the only\n> 32b buildfarm members, so it still has a lot of value.\n>\nI posted the report, because of an uninitialized variable warning.\nWhich is one of the most problematic situations, when it *actually exists*.\n\n\n> > I believe we do not need to modify plpgsql_inline_handler() unless\n> > compiler actually issues a false warning for it.\n>\n> If we were to do something, that would be to remove the volatile from\n> plpgsql_call_handler() at the end once we don't have in the buildfarm\n> compilers that complain about it, because there is no reason to use a\n> volatile in this case. :)\n>\nI don't see any motivation, since there are no reports.\n\nbest regards,\nRanier Vilela\n\nEm qua., 5 de jun. de 2024 às 02:04, Michael Paquier <[email protected]> escreveu:On Wed, Jun 05, 2024 at 01:12:41PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 27 May 2024 11:31:24 -0300, Ranier Vilela <[email protected]> wrote in \n>> The function *plpgsql_inline_handler* can use uninitialized\n>> variable retval, if PG_TRY fails.\n>> Fix like function*plpgsql_call_handler* wich declare retval as\n>> volatile and initialize to (Datum 0).\n\nYou forgot to read elog.h, explaining under which circumstances\nvariables related to TRY/CATCH block should be marked as volatile.\nThere is a big \"Note:\" paragraph.\n\nIt is not the first time that this is mentioned on this list: but\nsending a report without looking at the reason why a change is\njustified makes everybody waste time.  That's not productive.Of course, this is very bad when it happens. \n\n> If PG_TRY fails, retval is not actually accessed, so no real issue\n> exists. Commit 7292fd8f1c changed plpgsql_call_handler() to the\n> current form, but as stated in its commit message, it did not fix a\n> real issue and was solely to silence compiler.\n\nThis complain was from lapwing, that uses a version of gcc which\nproduces a lot of noise with incorrect issues.  It is one of the only\n32b buildfarm members, so it still has a lot of value.I posted the report, because of an uninitialized variable warning.Which is one of the most problematic situations, when it *actually exists*.\n\n> I believe we do not need to modify plpgsql_inline_handler() unless\n> compiler actually issues a false warning for it.\n\nIf we were to do something, that would be to remove the volatile from\nplpgsql_call_handler() at the end once we don't have in the buildfarm\ncompilers that complain about it, because there is no reason to use a\nvolatile in this case.  :)I don't see any motivation, since there are no reports.best regards,Ranier Vilela", "msg_date": "Wed, 5 Jun 2024 09:34:12 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix use of possible uninitialized variable retval\n (src/pl/plpgsql/src/pl_handler.c)" }, { "msg_contents": "On Wed, Jun 5, 2024 at 1:05 PM Michael Paquier <[email protected]> wrote:\n>\n> This complain was from lapwing, that uses a version of gcc which\n> produces a lot of noise with incorrect issues. 
It is one of the only\n> 32b buildfarm members, so it still has a lot of value.\n\nNote that I removed the -Werror from lapwing a long time ago, so at\nleast this animal shouldn't lead to hackish fixes for false positive\nanymore.\n\n\n", "msg_date": "Wed, 5 Jun 2024 22:27:51 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix use of possible uninitialized variable retval\n (src/pl/plpgsql/src/pl_handler.c)" }, { "msg_contents": "On Wed, Jun 05, 2024 at 10:27:51PM +0800, Julien Rouhaud wrote:\n> Note that I removed the -Werror from lapwing a long time ago, so at\n> least this animal shouldn't lead to hackish fixes for false positive\n> anymore.\n\nThat's good to know. Thanks for the update.\n--\nMichael", "msg_date": "Thu, 6 Jun 2024 13:29:26 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix use of possible uninitialized variable retval\n (src/pl/plpgsql/src/pl_handler.c)" } ]
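The volatile requirement referenced in this thread (the "Note:" paragraph in elog.h) comes from the C rule for setjmp()/longjmp(), on which PG_TRY/PG_CATCH are built: an automatic variable modified between the two calls has an indeterminate value after the jump unless it is volatile-qualified. Below is a minimal standalone sketch of just that underlying rule -- plain C, not PostgreSQL code; the name retval is reused purely for illustration.

/*
 * Standalone illustration (not PostgreSQL code) of the C rule the elog.h
 * note is about: a local variable modified between setjmp() and longjmp()
 * and read after the jump is indeterminate unless it is volatile-qualified.
 * PG_TRY/PG_CATCH are built on sigsetjmp/siglongjmp, so the same rule
 * applies to "retval" only if it is actually read in or after the catch.
 */
#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void
fail(void)
{
    longjmp(env, 1);            /* analogous to PG_RE_THROW() */
}

int
main(void)
{
    volatile int retval = 0;    /* volatile => value is reliable after longjmp */

    if (setjmp(env) == 0)       /* analogous to PG_TRY() */
    {
        retval = 42;            /* modified after setjmp ... */
        fail();                 /* ... and then we jump out */
    }
    else                        /* analogous to PG_CATCH() */
    {
        /* without "volatile" this read would be unreliable */
        printf("retval after longjmp: %d\n", retval);
    }
    return 0;
}

Since plpgsql_inline_handler() never reads retval after PG_RE_THROW(), that rule is not violated there, which matches the conclusion in the thread that adding volatile would only serve to silence a compiler.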
[ { "msg_contents": "Hi.\n\nIn the function *PQprint*, the variable po->fieldName\ncan be NULL.\nSee the checks a few lines up.\n\nfor (numFieldName = 0;\npo->fieldName && po->fieldName[numFieldName];\nnumFieldName++)\n\nSo, I think that must be checked, when used,\nin the loop below.\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 27 May 2024 11:52:20 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Fix possible dereference null pointer (PQprint)" }, { "msg_contents": "> On 27 May 2024, at 16:52, Ranier Vilela <[email protected]> wrote:\n\n> In the function *PQprint*, the variable po->fieldName can be NULL.\n\nYes.\n\n> See the checks a few lines up.\n\nIndeed, let's check it.\n\n for (numFieldName = 0;\n po->fieldName && po->fieldName[numFieldName];\n numFieldName++)\n ;\n for (j = 0; j < nFields; j++)\n {\n int len;\n const char *s = (j < numFieldName && po->fieldName[j][0]) ?\n po->fieldName[j] : PQfname(res, j);\n\nIf po->fieldName is NULL then numFieldName won't be incremented and will remain\nzero. In the check you reference we check (j < numFieldName) which will check\nthe j in the range 0..nFields for being less than zero. The code thus does\nseem quite correct to me.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 31 May 2024 10:03:24 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix possible dereference null pointer (PQprint)" }, { "msg_contents": "Em sex., 31 de mai. de 2024 às 05:03, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 27 May 2024, at 16:52, Ranier Vilela <[email protected]> wrote:\n>\n> > In the function *PQprint*, the variable po->fieldName can be NULL.\n>\n> Yes.\n>\n> > See the checks a few lines up.\n>\n> Indeed, let's check it.\n>\n> for (numFieldName = 0;\n> po->fieldName && po->fieldName[numFieldName];\n> numFieldName++)\n> ;\n> for (j = 0; j < nFields; j++)\n> {\n> int len;\n> const char *s = (j < numFieldName && po->fieldName[j][0]) ?\n> po->fieldName[j] : PQfname(res, j);\n>\n> If po->fieldName is NULL then numFieldName won't be incremented and will\n> remain\n> zero. In the check you reference we check (j < numFieldName) which will\n> check\n> the j in the range 0..nFields for being less than zero. The code thus does\n> seem quite correct to me.\n>\nYou are completely correct. My bad.\n\nThank you Daniel.\n\nbest regards,\nRanier Vilela\n\nEm sex., 31 de mai. de 2024 às 05:03, Daniel Gustafsson <[email protected]> escreveu:> On 27 May 2024, at 16:52, Ranier Vilela <[email protected]> wrote:\n\n> In the function *PQprint*, the variable po->fieldName can be NULL.\n\nYes.\n\n> See the checks a few lines up.\n\nIndeed, let's check it.\n\n        for (numFieldName = 0;\n             po->fieldName && po->fieldName[numFieldName];\n             numFieldName++)\n            ;\n        for (j = 0; j < nFields; j++)\n        {\n            int         len;\n            const char *s = (j < numFieldName && po->fieldName[j][0]) ?\n                po->fieldName[j] : PQfname(res, j);\n\nIf po->fieldName is NULL then numFieldName won't be incremented and will remain\nzero.  In the check you reference we check (j < numFieldName) which will check\nthe j in the range 0..nFields for being less than zero.  The code thus does\nseem quite correct to me.You are completely correct. 
My bad.Thank you Daniel.best regards,Ranier Vilela", "msg_date": "Sun, 2 Jun 2024 18:06:56 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix possible dereference null pointer (PQprint)" } ]
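For reference, here is a standalone model of the control flow described in this thread. It is deliberately simplified from the real PQprint() in libpq's fe-print.c; the helper name pick_name() and the fallback argument are invented for the sketch. It shows why the "j < numFieldName" test makes the later fieldName[j] dereference safe even when fieldName is NULL.

/*
 * Simplified, standalone model of the PQprint() logic discussed above.
 */
#include <stdio.h>

static const char *
pick_name(const char **fieldName, int j, const char *fallback)
{
    int         numFieldName;

    /* counts zero entries when fieldName is NULL, because && short-circuits */
    for (numFieldName = 0;
         fieldName && fieldName[numFieldName];
         numFieldName++)
        ;

    /*
     * With numFieldName == 0, "j < numFieldName" is false for every j >= 0,
     * so fieldName[j] is never evaluated.
     */
    return (j < numFieldName && fieldName[j][0]) ? fieldName[j] : fallback;
}

int
main(void)
{
    const char *names[] = {"id", "name", NULL};

    printf("%s\n", pick_name(names, 1, "f1"));  /* -> "name" */
    printf("%s\n", pick_name(NULL, 1, "f1"));   /* -> "f1", no NULL dereference */
    return 0;
}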
[ { "msg_contents": "Hello hackers,\n\nAs a recent buildfarm test failure on olingo (which builds postgres with\n-O1 and address sanitizer) [1] shows:\n...\n[23:12:02.127](0.166s) not ok 6 - snapshot conflict: stats show conflict on standby\n[23:12:02.130](0.003s) #   Failed test 'snapshot conflict: stats show conflict on standby'\n#   at /home/bf/bf-build/olingo/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 332.\n[23:12:02.130](0.000s) #          got: '2'\n#     expected: '1'\n...\n[23:12:06.848](1.291s) not ok 17 - 5 recovery conflicts shown in pg_stat_database\n[23:12:06.887](0.040s) #   Failed test '5 recovery conflicts shown in pg_stat_database'\n#   at /home/bf/bf-build/olingo/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 286.\n[23:12:06.887](0.000s) #          got: '6'\n#     expected: '5'\nWaiting for replication conn standby's replay_lsn to pass 0/3459160 on primary\ndone\n...\n\npgsql.build/testrun/recovery/031_recovery_conflict/log/031_recovery_conflict_standby.log:\n2024-05-15 23:12:01.616 UTC [1299981][client backend][2/2:0] LOG: statement: FETCH FORWARD FROM \ntest_recovery_conflict_cursor;\n2024-05-15 23:12:01.617 UTC [1299981][client backend][2/2:0] LOG: statement: ;\n2024-05-15 23:12:01.910 UTC [1297595][startup][34/0:0] LOG: recovery still waiting after 15.289 ms: recovery conflict on \nsnapshot\n2024-05-15 23:12:01.910 UTC [1297595][startup][34/0:0] DETAIL: Conflicting process: 1299981.\n2024-05-15 23:12:01.910 UTC [1297595][startup][34/0:0] CONTEXT:  WAL redo at 0/344F468 for Heap2/PRUNE_VACUUM_SCAN: \nsnapshotConflictHorizon: 746, isCatalogRel: F, nplans: 2, nredirected: 18, ndead: 0, nunused: 0, plans: [{ xmax: 0, \ninfomask: 2816, infomask2: 2, ntuples: 2, offsets: [21, 22] }, { xmax: 0, infomask: 11008, infomask2: 32770, ntuples: \n18, offsets: [41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58] }], redirected: [23->41, 24->42, \n25->43, 26->44, 27->45, 28->46, 29->47, 30->48, 31->49, 32->50, 33->51, 34->52, 35->53, 36->54, 37->55, 38->56, 39->57, \n40->58]; blkref #0: rel 1663/16385/16386, blk 0\n2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] FATAL: terminating connection due to conflict with recovery\n2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] DETAIL:  User query might have needed to see row versions \nthat must be removed.\n2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] HINT: In a moment you should be able to reconnect to the \ndatabase and repeat your command.\nvvv an important difference from a successful test run\n2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] LOG: could not send data to client: Broken pipe\n2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] FATAL: terminating connection due to conflict with recovery\n2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] DETAIL:  User query might have needed to see row versions \nthat must be removed.\n2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] HINT: In a moment you should be able to reconnect to the \ndatabase and repeat your command.\n^^^\n\ntest  031_recovery_conflict may fail in the following scenario:\n\n031_recovery_conflict.pl:\n     executes  a query, which produces a conflict:\n     ## RECOVERY CONFLICT 2: Snapshot conflict\n     ...\n     $psql_standby->query_safe(...)\n\nstartup process:\n         detects a snapshot conflict and sends\n         PROCSIG_RECOVERY_CONFLICT_SNAPSHOT:\n         (ResolveRecoveryConflictWithVirtualXIDs ->\n         
CancelVirtualTransaction) to the client backend\n\nclient backend:\n             receives and processes the signal:\n             HandleRecoveryConflictInterrupt; ProcessClientReadInterrupt ->\n             ProcessInterrupts -> ProcessRecoveryConflictInterrupts ->\n             ProcessRecoveryConflictInterrupt,\n\n             reports the recovery conflict:\n             pgstat_report_recovery_conflict(reason);\n\n             and reports the error:\n             ereport(FATAL, ... \"terminating connection due to conflict with\n             recovery\" ...)\n             sends the message to the server log:\n             errfinish -> EmitErrorReport -> send_message_to_server_log\n\n031_recovery_conflict.pl:\n     # finds the message in the log and resets psql connection:\n     check_conflict_log(\n       \"User query might have needed to see row versions that must\n        be removed\");\n     $psql_standby->reconnect_and_clear();\n\nstartup process:\n         keeps sending PROCSIG_RECOVERY_CONFLICT_SNAPSHOT to the client\n         backend in a loop inside ResolveRecoveryConflictWithVirtualXIDs\n\nclient backend:\n             tries to send the message to the client:\n             send_message_to_frontend -> socket_flush ->\n             internal_flush_buffer,\n             gets an error (EPIPE) from secure_write, and calls\n             ereport(COMMERROR,\n               (errcode_for_socket_access(),\n                errmsg(\"could not send data to client: %m\")));\n\n            receives the following PROCSIG_RECOVERY_CONFLICT_SNAPSHOT\n            signal and processes it the same way:\n\n            HandleRecoveryConflictInterrupt; ProcessClientReadInterrupt ->\n            ProcessInterrupts -> ProcessRecoveryConflictInterrupts ->\n            ProcessRecoveryConflictInterrupt,\n\n            reports the recovery conflict:\n            pgstat_report_recovery_conflict(reason);\n            // now the conflict is counted twice\n\n            and reports the error:\n            ereport(FATAL, ... \"terminating connection due to conflict with\n            recovery\" ...)\n            sends the message to the server log:\n            errfinish -> EmitErrorReport -> send_message_to_server_log\n\n031_recovery_conflict.pl:\n     calls\n     check_conflict_stat(\"snapshot\");\n     and gets 2 instead of 1.\n\nThe patch adding delays to reproduce the issue is attached.\n\nWith the patch applied, I run the test (against an \"-O0\" build) in a loop:\nfor i in `seq 20`; do echo \"I $i\"; make check -s -C \\\nsrc/test/recovery/ PROVE_TESTS=\"t/031*\"; grep ' not ok 6 ' \\\nsrc/test/recovery/tmp_check/log/regress_log_031_recovery_conflict && break;\ndone\n\nand get exactly the same failure as on olingo:\nI 1\n# +++ tap check in src/test/recovery +++\nt/031_recovery_conflict.pl .. 6/?\n#   Failed test 'snapshot conflict: stats show conflict on standby'\n#   at t/031_recovery_conflict.pl line 333.\n#          got: '2'\n#     expected: '1'\nt/031_recovery_conflict.pl .. 13/?\n#   Failed test '5 recovery conflicts shown in pg_stat_database'\n#   at t/031_recovery_conflict.pl line 287.\n#          got: '6'\n#     expected: '5'\n# Looks like you failed 2 tests of 18.\nt/031_recovery_conflict.pl .. 
Dubious, test returned 2 (wstat 512, 0x200)\nFailed 2/18 subtests\n\nTest Summary Report\n-------------------\nt/031_recovery_conflict.pl (Wstat: 512 Tests: 18 Failed: 2)\n   Failed tests:  6, 17\n   Non-zero exit status: 2\n\n(Similar failures can be seen with other sections of 031_recovery_conflict.)\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-05-15%2023%3A03%3A30\n\nBest regards,\nAlexander", "msg_date": "Mon, 27 May 2024 18:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Test 031_recovery_conflict fails when a conflict counted twice" }, { "msg_contents": "> On Mon, May 27, 2024 at 06:00:00PM GMT, Alexander Lakhin wrote:\n> Hello hackers,\n>\n> As a recent buildfarm test failure on olingo (which builds postgres with\n> -O1 and address sanitizer) [1] shows:\n> ...\n> [23:12:02.127](0.166s) not ok 6 - snapshot conflict: stats show conflict on standby\n> [23:12:02.130](0.003s) #�� Failed test 'snapshot conflict: stats show conflict on standby'\n> #�� at /home/bf/bf-build/olingo/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 332.\n> [23:12:02.130](0.000s) #��������� got: '2'\n> #���� expected: '1'\n> ...\n> [23:12:06.848](1.291s) not ok 17 - 5 recovery conflicts shown in pg_stat_database\n> [23:12:06.887](0.040s) #�� Failed test '5 recovery conflicts shown in pg_stat_database'\n> #�� at /home/bf/bf-build/olingo/HEAD/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 286.\n> [23:12:06.887](0.000s) #��������� got: '6'\n> #���� expected: '5'\n> Waiting for replication conn standby's replay_lsn to pass 0/3459160 on primary\n> done\n> ...\n>\n> pgsql.build/testrun/recovery/031_recovery_conflict/log/031_recovery_conflict_standby.log:\n> 2024-05-15 23:12:01.616 UTC [1299981][client backend][2/2:0] LOG: statement:\n> FETCH FORWARD FROM test_recovery_conflict_cursor;\n> 2024-05-15 23:12:01.617 UTC [1299981][client backend][2/2:0] LOG: statement: ;\n> 2024-05-15 23:12:01.910 UTC [1297595][startup][34/0:0] LOG: recovery still\n> waiting after 15.289 ms: recovery conflict on snapshot\n> 2024-05-15 23:12:01.910 UTC [1297595][startup][34/0:0] DETAIL: Conflicting process: 1299981.\n> 2024-05-15 23:12:01.910 UTC [1297595][startup][34/0:0] CONTEXT:� WAL redo at\n> 0/344F468 for Heap2/PRUNE_VACUUM_SCAN: snapshotConflictHorizon: 746,\n> isCatalogRel: F, nplans: 2, nredirected: 18, ndead: 0, nunused: 0, plans: [{\n> xmax: 0, infomask: 2816, infomask2: 2, ntuples: 2, offsets: [21, 22] }, {\n> xmax: 0, infomask: 11008, infomask2: 32770, ntuples: 18, offsets: [41, 42,\n> 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58] }],\n> redirected: [23->41, 24->42, 25->43, 26->44, 27->45, 28->46, 29->47, 30->48,\n> 31->49, 32->50, 33->51, 34->52, 35->53, 36->54, 37->55, 38->56, 39->57,\n> 40->58]; blkref #0: rel 1663/16385/16386, blk 0\n> 2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] FATAL: terminating connection due to conflict with recovery\n> 2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] DETAIL:� User\n> query might have needed to see row versions that must be removed.\n> 2024-05-15 23:12:01.959 UTC [1299981][client backend][2/2:0] HINT: In a\n> moment you should be able to reconnect to the database and repeat your\n> command.\n> vvv an important difference from a successful test run\n> 2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] LOG: could not send data to client: Broken pipe\n> 2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] FATAL: 
terminating connection due to conflict with recovery\n> 2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] DETAIL:� User\n> query might have needed to see row versions that must be removed.\n> 2024-05-15 23:12:01.966 UTC [1299981][client backend][2/2:0] HINT: In a\n> moment you should be able to reconnect to the database and repeat your\n> command.\n> ^^^\n>\n> test� 031_recovery_conflict may fail in the following scenario:\n>\n> 031_recovery_conflict.pl:\n> ��� executes� a query, which produces a conflict:\n> ��� ## RECOVERY CONFLICT 2: Snapshot conflict\n> ��� ...\n> ��� $psql_standby->query_safe(...)\n>\n> startup process:\n> ������� detects a snapshot conflict and sends\n> ������� PROCSIG_RECOVERY_CONFLICT_SNAPSHOT:\n> ������� (ResolveRecoveryConflictWithVirtualXIDs ->\n> ������� CancelVirtualTransaction) to the client backend\n>\n> client backend:\n> ����������� receives and processes the signal:\n> ����������� HandleRecoveryConflictInterrupt; ProcessClientReadInterrupt ->\n> ����������� ProcessInterrupts -> ProcessRecoveryConflictInterrupts ->\n> ����������� ProcessRecoveryConflictInterrupt,\n>\n> ����������� reports the recovery conflict:\n> ����������� pgstat_report_recovery_conflict(reason);\n>\n> ����������� and reports the error:\n> ����������� ereport(FATAL, ... \"terminating connection due to conflict with\n> ����������� recovery\" ...)\n> ����������� sends the message to the server log:\n> ����������� errfinish -> EmitErrorReport -> send_message_to_server_log\n>\n> 031_recovery_conflict.pl:\n> ��� # finds the message in the log and resets psql connection:\n> ��� check_conflict_log(\n> ����� \"User query might have needed to see row versions that must\n> ������ be removed\");\n> ��� $psql_standby->reconnect_and_clear();\n>\n> startup process:\n> ������� keeps sending PROCSIG_RECOVERY_CONFLICT_SNAPSHOT to the client\n> ������� backend in a loop inside ResolveRecoveryConflictWithVirtualXIDs\n>\n> client backend:\n> ����������� tries to send the message to the client:\n> ����������� send_message_to_frontend -> socket_flush ->\n> ����������� internal_flush_buffer,\n> ����������� gets an error (EPIPE) from secure_write, and calls\n> ����������� ereport(COMMERROR,\n> ������������� (errcode_for_socket_access(),\n> �������������� errmsg(\"could not send data to client: %m\")));\n>\n> ���������� receives the following PROCSIG_RECOVERY_CONFLICT_SNAPSHOT\n> ���������� signal and processes it the same way:\n>\n> ���������� HandleRecoveryConflictInterrupt; ProcessClientReadInterrupt ->\n> ���������� ProcessInterrupts -> ProcessRecoveryConflictInterrupts ->\n> ���������� ProcessRecoveryConflictInterrupt,\n>\n> ���������� reports the recovery conflict:\n> ���������� pgstat_report_recovery_conflict(reason);\n> ���������� // now the conflict is counted twice\n>\n> ���������� and reports the error:\n> ���������� ereport(FATAL, ... \"terminating connection due to conflict with\n> ���������� recovery\" ...)\n> ���������� sends the message to the server log:\n> ���������� errfinish -> EmitErrorReport -> send_message_to_server_log\n>\n> 031_recovery_conflict.pl:\n> ��� calls\n> ��� check_conflict_stat(\"snapshot\");\n> ��� and gets 2 instead of 1.\n\nThanks for the breakdown of the steps, a very interesting case. If I\nunderstand everything correctly, it could be summarized as: the startup\nprocess tries to stop the client backend, which struggles with error\nreporting just enough to receive a second signal. It seems this issue is\nrare, even on olingo it has appeared only once. 
Yet it raises a question\nwhether is it an incorrect behaviour or the test is not robust enough?\n\n From what I see in ProcessRecoveryConflictInterrupt, there is no defence\nagainst duplicated recovery conflict getting reported, except an\nexpectation that ereport FATAL will terminate the session (the\nlimitations on the recursive error reporting do not seem to stop that).\nI guess this points into the direction that it's expected, and\ncheck_conflict_stat might be checking for number of conflicts > 1.\n\nIf that's not the case, and recovery conflicts have to be reported only\nonce, then either the startup process or the client backend have to take\ncare of that. My first idea was to experiment with longer waiting\nintervals for an unresponsive client backend (currently it's a constant\nof 5000 us) -- it seems to reduce chances of reproducing the error (with\nthe attached patch applied), but I'm not sure it's an appropriate\nsolution.\n\n\n", "msg_date": "Wed, 5 Jun 2024 22:02:02 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 031_recovery_conflict fails when a conflict counted twice" } ]
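If the conclusion were that a recovery conflict must be reported only once per backend, one conceivable shape of a fix is to make the per-reason accounting idempotent. The following is only a standalone sketch of that idea, with assumed names and array sizes; it is not PostgreSQL code and not a claim about how ProcessRecoveryConflictInterrupt() or pgstat should actually be patched.

/*
 * Hypothetical sketch: remember which conflict reasons this backend has
 * already reported and skip the counter bump on a repeated signal.
 */
#include <stdbool.h>
#include <stdio.h>

#define NUM_CONFLICT_REASONS 8          /* assumed size, for the sketch only */

static bool reported[NUM_CONFLICT_REASONS];
static int  conflict_count[NUM_CONFLICT_REASONS];

static void
report_recovery_conflict_once(int reason)
{
    if (reported[reason])
        return;                         /* second signal: do not count again */
    reported[reason] = true;
    conflict_count[reason]++;           /* stands in for pgstat_report_recovery_conflict() */
}

int
main(void)
{
    /* the startup process may signal the same backend more than once */
    report_recovery_conflict_once(2);
    report_recovery_conflict_once(2);

    printf("conflicts counted for reason 2: %d\n", conflict_count[2]);  /* 1 */
    return 0;
}

The alternative discussed above -- letting the test accept a count greater than one -- avoids touching the backend at all, at the cost of a weaker assertion in 031_recovery_conflict.pl.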
[ { "msg_contents": "Hi ,\nWhile we try to install PG 16.2 from the source code in macbook we getting\nthis following errors\n```\n\n\n\n\n\n\n\n\n\n\n\n*explicit_bzero.c:22:9: error: call to undeclared function 'memset_s'; ISO\nC99 and later do not support implicit function declarations\n[-Wimplicit-function-declaration] (void) memset_s(buf, len, 0,\nlen); ^explicit_bzero.c:22:9: note: did you mean\n'memset'?/Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h:74:7:\nnote: 'memset' declared herevoid memset(void __b, int __c, size_t\n__len); ^1 error generated.make[2]: *** [explicit_bzero.o] Error\n1make[2]: *** Waiting for unfinished jobs....make[1]: ***\n[all-port-recurse] Error 2make: *** [all-src-recurse] Error 2*\n```\nthen I changed the function memset_s(buf, len, 0, len) to memset(buf, 0,\nlen) and it's working. need a clarification on this?\n\nThanks and regards\nPradeep\n\nHi ,While we try to install PG 16.2 from the source code in macbook we getting this following errors```explicit_bzero.c:22:9: error: call to undeclared function 'memset_s'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]        (void) memset_s(buf, len, 0, len);               ^explicit_bzero.c:22:9: note: did you mean 'memset'?/Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h:74:7: note: 'memset' declared herevoid    memset(void __b, int __c, size_t __len);         ^1 error generated.make[2]: *** [explicit_bzero.o] Error 1make[2]: *** Waiting for unfinished jobs....make[1]: *** [all-port-recurse] Error 2make: *** [all-src-recurse] Error 2```then I changed the function memset_s(buf, len, 0, len) to memset(buf, 0, len) and it's working. need a clarification on this?Thanks and regardsPradeep", "msg_date": "Tue, 28 May 2024 11:07:08 +0530", "msg_from": "Pradeep Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Need clarification on compilation errors in PG 16.2" }, { "msg_contents": "> On 28 May 2024, at 07:37, Pradeep Kumar <[email protected]> wrote:\n\nThis requires more information to be shared in order to figure out what could\nbe happening.\n \n> ```\n> explicit_bzero.c:22:9: error: call to undeclared function 'memset_s'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]\n> (void) memset_s(buf, len, 0, len);\n> ^\n\nThis codepath would only be reached if the buildsystem determined that memset_s\nwas available so something is fairly wrong here. Did you change any builfiles\nafter running configure? Re-install or upgrade XCode after running configure?\n\n> then I changed the function memset_s(buf, len, 0, len) to memset(buf, 0, len) and it's working. need a clarification on this?\n\nmemset_s and memset have the same prototype, and are functionally equivalent,\nbut memset_s have certain properties which are required in this codepath.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 28 May 2024 08:22:33 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need clarification on compilation errors in PG 16.2" }, { "msg_contents": "Hi,\n\n>This codepath would only be reached if the buildsystem determined that\nmemset_s\n>was available so something is fairly wrong here. Did you change any\nbuilfiles\n>after running configure? Re-install or upgrade XCode after running\nconfigure?\n\nI didn't touch any of the buildfiles , even didn't touch the PG's source\ncode and didn't reinstall or upgrade Xcode. 
Just configure the PG and gave\n'make' and I got this error.\n\n>memset_s and memset have the same prototype, and are functionally\nequivalent,\n>but memset_s have certain properties which are required in this codepath.\n\nOk\n\nThanks and regards\n\nOn Tue, May 28, 2024 at 11:52 AM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 28 May 2024, at 07:37, Pradeep Kumar <[email protected]>\n> wrote:\n>\n> This requires more information to be shared in order to figure out what\n> could\n> be happening.\n>\n> > ```\n> > explicit_bzero.c:22:9: error: call to undeclared function 'memset_s';\n> ISO C99 and later do not support implicit function declarations\n> [-Wimplicit-function-declaration]\n> > (void) memset_s(buf, len, 0, len);\n> > ^\n>\n> This codepath would only be reached if the buildsystem determined that\n> memset_s\n> was available so something is fairly wrong here. Did you change any\n> builfiles\n> after running configure? Re-install or upgrade XCode after running\n> configure?\n>\n> > then I changed the function memset_s(buf, len, 0, len) to memset(buf, 0,\n> len) and it's working. need a clarification on this?\n>\n> memset_s and memset have the same prototype, and are functionally\n> equivalent,\n> but memset_s have certain properties which are required in this codepath.\n>\n> --\n> Daniel Gustafsson\n>\n>\n\nHi,>This codepath would only be reached if the buildsystem determined that memset_s>was available so something is fairly wrong here.  Did you change any builfiles>after running configure?  Re-install or upgrade XCode after running configure?I didn't touch any of the buildfiles , even didn't touch the PG's source code and didn't reinstall or upgrade Xcode. Just configure the PG and gave 'make' and I got this error.>memset_s and memset have the same prototype, and are functionally equivalent,>but memset_s have certain properties which are required in this codepath.OkThanks and regards On Tue, May 28, 2024 at 11:52 AM Daniel Gustafsson <[email protected]> wrote:> On 28 May 2024, at 07:37, Pradeep Kumar <[email protected]> wrote:\n\nThis requires more information to be shared in order to figure out what could\nbe happening.\n\n> ```\n> explicit_bzero.c:22:9: error: call to undeclared function 'memset_s'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]\n>         (void) memset_s(buf, len, 0, len);\n>                ^\n\nThis codepath would only be reached if the buildsystem determined that memset_s\nwas available so something is fairly wrong here.  Did you change any builfiles\nafter running configure?  Re-install or upgrade XCode after running configure?\n\n> then I changed the function memset_s(buf, len, 0, len) to memset(buf, 0, len) and it's working. need a clarification on this?\n\nmemset_s and memset have the same prototype, and are functionally equivalent,\nbut memset_s have certain properties which are required in this codepath.\n\n--\nDaniel Gustafsson", "msg_date": "Tue, 28 May 2024 12:16:20 +0530", "msg_from": "Pradeep Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need clarification on compilation errors in PG 16.2" }, { "msg_contents": "Hi Long,\n\n>In fact, whether the HAVE_MEMSET_S macro is defined determines whether the\nimplementation\n>of the explicit_bzero() function calls memset_s() or memset(). 
This macro\nis undefined by default\n>in pg_config.h, so check to see if your build environment has a\nHAVE_MEMSET_S macro defined.\n\nYes it was defined in \"pg_config.h\"\n/* Define to 1 if you have the `memset_s' function. */\n#define HAVE_MEMSET_S 1\n\nThanks\n\nOn Tue, May 28, 2024 at 12:27 PM Long Song <[email protected]> wrote:\n\n>\n>\n>\n>\n>\n> Hi Pradeep,\n>\n>\n> At 2024-05-28 12:37:08, \"Pradeep Kumar\" <[email protected]> wrote:\n>\n> > Hi ,\n> > While we try to install PG 16.2 from the source code in macbook we\n> getting this following errors\n> > ```\n> >\n> > explicit_bzero.c:22:9: error: call to undeclared function 'memset_s';\n> ISO C99 and later do not support implicit function declarations\n> [-Wimplicit-function-declaration]\n> > (void) memset_s(buf, len, 0, len);\n> > ^\n> > explicit_bzero.c:22:9: note: did you mean 'memset'?\n> >\n> /Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h:74:7:\n> note: 'memset' declared here\n> > void memset(void __b, int __c, size_t __len);\n> > ^\n> > 1 error generated.\n> > make[2]: *** [explicit_bzero.o] Error 1\n> > make[2]: *** Waiting for unfinished jobs....\n> > make[1]: *** [all-port-recurse] Error 2\n> > make: *** [all-src-recurse] Error 2\n> >\n> > ```\n> > then I changed the function memset_s(buf, len, 0, len) to memset(buf, 0,\n> len) and it's working. need a clarification on this?\n> In fact, whether the HAVE_MEMSET_S macro is defined determines whether the\n> implementation\n> of the explicit_bzero() function calls memset_s() or memset(). This macro\n> is undefined by default\n> in pg_config.h, so check to see if your build environment has a\n> HAVE_MEMSET_S macro defined.\n>\n> Best Regards,\n> Long\n\nHi Long,>In fact, whether the HAVE_MEMSET_S macro is defined determines whether the implementation>of the explicit_bzero() function calls memset_s() or memset(). This macro is undefined by default>in pg_config.h, so check to see if your build environment has a HAVE_MEMSET_S macro defined.Yes it was defined in \"pg_config.h\"/* Define to 1 if you have the `memset_s' function. */#define HAVE_MEMSET_S 1ThanksOn Tue, May 28, 2024 at 12:27 PM Long Song <[email protected]> wrote:\n\n\n\n\nHi Pradeep,\n\n\nAt 2024-05-28 12:37:08, \"Pradeep Kumar\" <[email protected]> wrote:\n\n> Hi ,\n> While we try to install PG 16.2 from the source code in macbook we getting this following errors\n> ```\n> \n> explicit_bzero.c:22:9: error: call to undeclared function 'memset_s'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]\n>         (void) memset_s(buf, len, 0, len);\n>                ^\n> explicit_bzero.c:22:9: note: did you mean 'memset'?\n> /Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h:74:7: note: 'memset' declared here\n> void    memset(void __b, int __c, size_t __len);\n>          ^\n> 1 error generated.\n> make[2]: *** [explicit_bzero.o] Error 1\n> make[2]: *** Waiting for unfinished jobs....\n> make[1]: *** [all-port-recurse] Error 2\n> make: *** [all-src-recurse] Error 2\n> \n> ```\n> then I changed the function memset_s(buf, len, 0, len) to memset(buf, 0, len) and it's working. need a clarification on this?\nIn fact, whether the HAVE_MEMSET_S macro is defined determines whether the implementation\nof the explicit_bzero() function calls memset_s() or memset(). 
This macro is undefined by default\nin pg_config.h, so check to see if your build environment has a HAVE_MEMSET_S macro defined.\n\nBest Regards,\nLong", "msg_date": "Tue, 28 May 2024 12:40:47 +0530", "msg_from": "Pradeep Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need clarification on compilation errors in PG 16.2" }, { "msg_contents": "Pradeep Kumar <[email protected]> writes:\n> Yes it was defined in \"pg_config.h\"\n> /* Define to 1 if you have the `memset_s' function. */\n> #define HAVE_MEMSET_S 1\n\nThat's correct for recent versions of macOS. I see you are\nbuilding against a recent SDK:\n\n/Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h\n\nbut I wonder if maybe the actual OS version is back-rev?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 May 2024 13:51:17 -0700", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need clarification on compilation errors in PG 16.2" }, { "msg_contents": "> On 28 May 2024, at 22:51, Tom Lane <[email protected]> wrote:\n> \n> Pradeep Kumar <[email protected]> writes:\n>> Yes it was defined in \"pg_config.h\"\n>> /* Define to 1 if you have the `memset_s' function. */\n>> #define HAVE_MEMSET_S 1\n> \n> That's correct for recent versions of macOS. I see you are\n> building against a recent SDK:\n> \n> /Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h\n> \n> but I wonder if maybe the actual OS version is back-rev?\n\nSkimming the releases on https://opensource.apple.com/releases/ it seems that\nmemset_s has been available since Mavericks (10.9) AFAICT.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 28 May 2024 23:10:05 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need clarification on compilation errors in PG 16.2" }, { "msg_contents": "Hello Tom,\n\n>That's correct for recent versions of macOS. I see you are\n>building against a recent SDK:\n>\n>/Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h\n>\n>but I wonder if maybe the actual OS version is back-rev?\n\nCurrently Im using \"MacOSX14.4.sdk\" , path is\n\"/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/usr/include/string.h\".\nWhen I go through the header file and search for the memset_s(), I found\nthat library is defined in a conditional macro refer below, am I breaking\nthe macro below?\n\n#if defined(__STDC_WANT_LIB_EXT1__) && __STDC_WANT_LIB_EXT1__ >= 1\n#include <sys/_types/_rsize_t.h>\n#include <sys/_types/_errno_t.h>\n\n__BEGIN_DECLS\nerrno_t memset_s(void *__s, rsize_t __smax, int __c, rsize_t __n)\n__OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_7_0);\n__END_DECLS\n#endif\n\nThanks and Regards\nPradeep\n\nOn Wed, May 29, 2024 at 2:21 AM Tom Lane <[email protected]> wrote:\n\n> Pradeep Kumar <[email protected]> writes:\n> > Yes it was defined in \"pg_config.h\"\n> > /* Define to 1 if you have the `memset_s' function. */\n> > #define HAVE_MEMSET_S 1\n>\n> That's correct for recent versions of macOS. I see you are\n> building against a recent SDK:\n>\n>\n> /Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h\n>\n> but I wonder if maybe the actual OS version is back-rev?\n>\n> regards, tom lane\n>\n\nHello Tom,>That's correct for recent versions of macOS.  
I see you are>building against a recent SDK:>>/Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h>>but I wonder if maybe the actual OS version is back-rev?Currently Im using \"MacOSX14.4.sdk\" , path is \"/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/usr/include/string.h\". When I go through the header file and search for the memset_s(), I found that library is defined in a conditional macro refer below, am I breaking the macro below?#if defined(__STDC_WANT_LIB_EXT1__) && __STDC_WANT_LIB_EXT1__ >= 1#include <sys/_types/_rsize_t.h>#include <sys/_types/_errno_t.h>__BEGIN_DECLSerrno_t memset_s(void *__s, rsize_t __smax, int __c, rsize_t __n) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_7_0);__END_DECLS#endifThanks and RegardsPradeepOn Wed, May 29, 2024 at 2:21 AM Tom Lane <[email protected]> wrote:Pradeep Kumar <[email protected]> writes:\n> Yes it was defined in \"pg_config.h\"\n> /* Define to 1 if you have the `memset_s' function. */\n> #define HAVE_MEMSET_S 1\n\nThat's correct for recent versions of macOS.  I see you are\nbuilding against a recent SDK:\n\n/Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h\n\nbut I wonder if maybe the actual OS version is back-rev?\n\n                        regards, tom lane", "msg_date": "Wed, 29 May 2024 11:30:56 +0530", "msg_from": "Pradeep Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need clarification on compilation errors in PG 16.2" }, { "msg_contents": "Hi,\n\nI found out that for using memset() library is not referred from\n\"/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/usr/include/string.h\"\n, it referred from\n\"/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/usr/include/secure/_string.h\",\nin that file didn't defined the memset_s() macro.\n\nThanks and regards\nPradeep\n\nOn Wed, May 29, 2024 at 11:30 AM Pradeep Kumar <[email protected]>\nwrote:\n\n> Hello Tom,\n>\n> >That's correct for recent versions of macOS. I see you are\n> >building against a recent SDK:\n> >\n>\n> >/Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h\n> >\n> >but I wonder if maybe the actual OS version is back-rev?\n>\n> Currently Im using \"MacOSX14.4.sdk\" , path is\n> \"/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/usr/include/string.h\".\n> When I go through the header file and search for the memset_s(), I found\n> that library is defined in a conditional macro refer below, am I breaking\n> the macro below?\n>\n> #if defined(__STDC_WANT_LIB_EXT1__) && __STDC_WANT_LIB_EXT1__ >= 1\n> #include <sys/_types/_rsize_t.h>\n> #include <sys/_types/_errno_t.h>\n>\n> __BEGIN_DECLS\n> errno_t memset_s(void *__s, rsize_t __smax, int __c, rsize_t __n)\n> __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_7_0);\n> __END_DECLS\n> #endif\n>\n> Thanks and Regards\n> Pradeep\n>\n> On Wed, May 29, 2024 at 2:21 AM Tom Lane <[email protected]> wrote:\n>\n>> Pradeep Kumar <[email protected]> writes:\n>> > Yes it was defined in \"pg_config.h\"\n>> > /* Define to 1 if you have the `memset_s' function. */\n>> > #define HAVE_MEMSET_S 1\n>>\n>> That's correct for recent versions of macOS. 
I see you are\n>> building against a recent SDK:\n>>\n>>\n>> /Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h\n>>\n>> but I wonder if maybe the actual OS version is back-rev?\n>>\n>> regards, tom lane\n>>\n>\n\nHi,I found out that for using memset() library is not referred from \"/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/usr/include/string.h\" , it referred from \"/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/usr/include/secure/_string.h\", in that file didn't defined the memset_s() macro.Thanks and regardsPradeepOn Wed, May 29, 2024 at 11:30 AM Pradeep Kumar <[email protected]> wrote:Hello Tom,>That's correct for recent versions of macOS.  I see you are>building against a recent SDK:>>/Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h>>but I wonder if maybe the actual OS version is back-rev?Currently Im using \"MacOSX14.4.sdk\" , path is \"/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/usr/include/string.h\". When I go through the header file and search for the memset_s(), I found that library is defined in a conditional macro refer below, am I breaking the macro below?#if defined(__STDC_WANT_LIB_EXT1__) && __STDC_WANT_LIB_EXT1__ >= 1#include <sys/_types/_rsize_t.h>#include <sys/_types/_errno_t.h>__BEGIN_DECLSerrno_t memset_s(void *__s, rsize_t __smax, int __c, rsize_t __n) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_7_0);__END_DECLS#endifThanks and RegardsPradeepOn Wed, May 29, 2024 at 2:21 AM Tom Lane <[email protected]> wrote:Pradeep Kumar <[email protected]> writes:\n> Yes it was defined in \"pg_config.h\"\n> /* Define to 1 if you have the `memset_s' function. */\n> #define HAVE_MEMSET_S 1\n\nThat's correct for recent versions of macOS.  I see you are\nbuilding against a recent SDK:\n\n/Library/Developer/CommandLineTools/SDKs/MacOSX14.2.sdk/usr/include/string.h\n\nbut I wonder if maybe the actual OS version is back-rev?\n\n                        regards, tom lane", "msg_date": "Wed, 29 May 2024 11:49:02 +0530", "msg_from": "Pradeep Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need clarification on compilation errors in PG 16.2" } ]
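Given the conditional declaration quoted from the Apple SDK header in this thread, a quick standalone check of the toolchain is to define __STDC_WANT_LIB_EXT1__ before including <string.h> and see whether memset_s() compiles. The small program below is a sketch for probing the build environment only, not part of PostgreSQL; on an SDK that provides memset_s() it should build and run cleanly, and removing the macro line should reproduce the "call to undeclared function 'memset_s'" error.

/*
 * Standalone probe for memset_s() availability on macOS.  The macro must be
 * defined before <string.h> is included, per the SDK header quoted above.
 */
#define __STDC_WANT_LIB_EXT1__ 1        /* must come before <string.h> */
#include <string.h>
#include <stdio.h>

int
main(void)
{
    char        buf[16] = "secret";

    (void) memset_s(buf, sizeof(buf), 0, sizeof(buf));
    printf("first byte after memset_s: %d\n", buf[0]);  /* 0 */
    return 0;
}

If this probe compiles but the PostgreSQL build still fails, the mismatch is more likely between the configure-time test that set HAVE_MEMSET_S and the headers actually seen at compile time (for example a changed SDK or include path) than in the SDK itself.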
[ { "msg_contents": "Hi\r\n\r\nI am playing with examples for P2D2, and I found few issues related to\r\nmemoize\r\n\r\n1. I use dataset https://pgsql.cz/files/obce.sql - it is data about czech\r\npopulation\r\n\r\nDictionary - \"obec\" -> \"village\", \"pocet_muzu\" -> \"number_of_men\",\r\n\"pocet_zen\" -> \"number_of_woman\", \"okres\" -> \"district\", \"nazev\" -> \"name\"\r\n\r\nI wrote the query - biggest village per district\r\n\r\nselect nazev\r\n from obce o\r\n where pocet_muzu + pocet_zen = (select max(pocet_muzu + pocet_zen)\r\n from obce\r\n where o.okres_id = okres_id);\r\n\r\n\r\n\r\nI expected usage of memoize, because in this query, it can be very\r\neffective https://explain.depesz.com/s/0ubC\r\n\r\n(2024-05-28 09:09:58) postgres=# explain select nazev from obce o where\r\npocet_muzu + pocet_zen = (select max(pocet_muzu + pocet_zen) from obce\r\nwhere o.okres_id = okres_id);\r\n QUERY PLAN\r\n\r\n══════════════════════════════════════════════════════════════════════════════════════════════════════════\r\nSeq Scan on obce o (cost=0.00..33588.33 rows=31 width=10)\r\n Filter: ((pocet_muzu + pocet_zen) = (SubPlan 2))\r\n SubPlan 2\r\n -> Result (cost=5.34..5.35 rows=1 width=4)\r\n InitPlan 1\r\n -> Limit (cost=0.28..5.34 rows=1 width=4)\r\n -> Index Scan Backward using obce_expr_idx on obce\r\n (cost=0.28..409.92 rows=81 width=4)\r\n Index Cond: ((pocet_muzu + pocet_zen) IS NOT NULL)\r\n Filter: ((o.okres_id)::text = (okres_id)::text)\r\n(9 rows)\r\n\r\nBut it doesn't do. I rewrote this query to lateral join, and memoize was\r\nused, but the result was not good, because filter wa pushed to subquery\r\n\r\nexplain select * from obce o, lateral (select max(pocet_zen + pocet_muzu)\r\nfrom obce where o.okres_id = okres_id) where pocet_zen + pocet_muzu = max;\r\n QUERY PLAN\r\n\r\n══════════════════════════════════════════════════════════════════════════════════════════════════════\r\nNested Loop (cost=12.83..19089.82 rows=31 width=45)\r\n -> Seq Scan on obce o (cost=0.00..121.50 rows=6250 width=41)\r\n -> Memoize (cost=12.83..12.85 rows=1 width=4)\r\n Cache Key: (o.pocet_zen + o.pocet_muzu), o.okres_id\r\n Cache Mode: binary\r\n -> Subquery Scan on unnamed_subquery (cost=12.82..12.84 rows=1\r\nwidth=4)\r\n Filter: ((o.pocet_zen + o.pocet_muzu) = unnamed_subquery.max)\r\n -> Aggregate (cost=12.82..12.83 rows=1 width=4)\r\n -> Index Scan using obce_okres_id_idx on obce\r\n (cost=0.28..12.41 rows=81 width=8)\r\n Index Cond: ((okres_id)::text =\r\n(o.okres_id)::text)\r\n(10 rows)\r\n\r\nand then the effect of memoize is almost negative\r\nhttps://explain.depesz.com/s/TKLL\r\n\r\nWhen I used optimization fence, then memoize was used effectively\r\nhttps://explain.depesz.com/s/hhgi\r\n\r\nexplain select * from (select * from obce o, lateral (select max(pocet_zen\r\n+ pocet_muzu) from obce where o.okres_id = okres_id) offset 0) where\r\npocet_zen + pocet_muzu = max;\r\n QUERY PLAN\r\n\r\n══════════════════════════════════════════════════════════════════════════════════════════════════════\r\nSubquery Scan on unnamed_subquery (cost=12.83..1371.93 rows=31 width=45)\r\n Filter: ((unnamed_subquery.pocet_zen + unnamed_subquery.pocet_muzu) =\r\nunnamed_subquery.max)\r\n -> Nested Loop (cost=12.83..1278.18 rows=6250 width=45)\r\n -> Seq Scan on obce o (cost=0.00..121.50 rows=6250 width=41)\r\n -> Memoize (cost=12.83..12.84 rows=1 width=4)\r\n Cache Key: o.okres_id\r\n Cache Mode: binary\r\n -> Aggregate (cost=12.82..12.83 rows=1 width=4)\r\n -> Index Scan using obce_okres_id_idx on obce\r\n 
(cost=0.28..12.41 rows=81 width=8)\r\n Index Cond: ((okres_id)::text =\r\n(o.okres_id)::text)\r\n(10 rows)\r\n\r\nMy question is - does memoize support subqueries? And can be enhanced to\r\nsupport this exercise without LATERAL and optimization fences?\r\n\r\nRegards\r\n\r\nPavel\r\n\nHiI am playing with examples for P2D2, and I found few issues related to memoize 1. I use dataset https://pgsql.cz/files/obce.sql - it is data about czech populationDictionary - \"obec\" -> \"village\", \"pocet_muzu\" -> \"number_of_men\", \"pocet_zen\" -> \"number_of_woman\", \"okres\" -> \"district\", \"nazev\" -> \"name\"I wrote the query - biggest village per districtselect nazev   from obce o   where pocet_muzu + pocet_zen = (select max(pocet_muzu + pocet_zen)                                    from obce                                   where o.okres_id = okres_id);I expected usage of memoize, because in this query, it can be very effective https://explain.depesz.com/s/0ubC(2024-05-28 09:09:58) postgres=# explain select nazev from obce o where pocet_muzu + pocet_zen = (select max(pocet_muzu + pocet_zen) from obce where o.okres_id = okres_id);                                                QUERY PLAN                                                 ══════════════════════════════════════════════════════════════════════════════════════════════════════════Seq Scan on obce o  (cost=0.00..33588.33 rows=31 width=10)  Filter: ((pocet_muzu + pocet_zen) = (SubPlan 2))  SubPlan 2    ->  Result  (cost=5.34..5.35 rows=1 width=4)          InitPlan 1            ->  Limit  (cost=0.28..5.34 rows=1 width=4)                  ->  Index Scan Backward using obce_expr_idx on obce  (cost=0.28..409.92 rows=81 width=4)                        Index Cond: ((pocet_muzu + pocet_zen) IS NOT NULL)                        Filter: ((o.okres_id)::text = (okres_id)::text)(9 rows)But it doesn't do. 
I rewrote this query to lateral join, and memoize was used, but the result was not good, because filter wa pushed to subqueryexplain select * from obce o, lateral (select max(pocet_zen + pocet_muzu) from obce where o.okres_id = okres_id) where pocet_zen + pocet_muzu = max;                                              QUERY PLAN                                               ══════════════════════════════════════════════════════════════════════════════════════════════════════Nested Loop  (cost=12.83..19089.82 rows=31 width=45)  ->  Seq Scan on obce o  (cost=0.00..121.50 rows=6250 width=41)  ->  Memoize  (cost=12.83..12.85 rows=1 width=4)        Cache Key: (o.pocet_zen + o.pocet_muzu), o.okres_id        Cache Mode: binary        ->  Subquery Scan on unnamed_subquery  (cost=12.82..12.84 rows=1 width=4)              Filter: ((o.pocet_zen + o.pocet_muzu) = unnamed_subquery.max)              ->  Aggregate  (cost=12.82..12.83 rows=1 width=4)                    ->  Index Scan using obce_okres_id_idx on obce  (cost=0.28..12.41 rows=81 width=8)                          Index Cond: ((okres_id)::text = (o.okres_id)::text)(10 rows)and then the effect of memoize is almost negative https://explain.depesz.com/s/TKLLWhen I used optimization fence, then memoize was used effectively https://explain.depesz.com/s/hhgiexplain select * from (select * from obce o, lateral (select max(pocet_zen + pocet_muzu) from obce where o.okres_id = okres_id) offset 0) where pocet_zen + pocet_muzu = max;                                              QUERY PLAN                                               ══════════════════════════════════════════════════════════════════════════════════════════════════════Subquery Scan on unnamed_subquery  (cost=12.83..1371.93 rows=31 width=45)  Filter: ((unnamed_subquery.pocet_zen + unnamed_subquery.pocet_muzu) = unnamed_subquery.max)  ->  Nested Loop  (cost=12.83..1278.18 rows=6250 width=45)        ->  Seq Scan on obce o  (cost=0.00..121.50 rows=6250 width=41)        ->  Memoize  (cost=12.83..12.84 rows=1 width=4)              Cache Key: o.okres_id              Cache Mode: binary              ->  Aggregate  (cost=12.82..12.83 rows=1 width=4)                    ->  Index Scan using obce_okres_id_idx on obce  (cost=0.28..12.41 rows=81 width=8)                          Index Cond: ((okres_id)::text = (o.okres_id)::text)(10 rows)My question is - does memoize support subqueries? And can be enhanced to support this exercise without LATERAL and optimization fences?RegardsPavel", "msg_date": "Tue, 28 May 2024 09:31:03 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "why memoize is not used for correlated subquery" }, { "msg_contents": "Pavel Stehule <[email protected]> 于2024年5月28日周二 15:31写道:\n\n> Hi\n>\n>\n> My question is - does memoize support subqueries? And can be enhanced to\n> support this exercise without LATERAL and optimization fences?\n>\n>\nThe commit messages in memoize may answer your question:\n\n \"For now, the planner will only consider using a result cache for\n parameterized nested loop joins. This works for both normal joins and\n also for LATERAL type joins to subqueries. It is possible to use this\nnew\n node for other uses in the future. For example, to cache results from\n correlated subqueries. However, that's not done here due to some\n difficulties obtaining a distinct estimation on the outer plan to\n calculate the estimated cache hit ratio. 
Currently we plan the inner\nplan\n before planning the outer plan so there is no good way to know if a\nresult\n cache would be useful or not since we can't estimate the number of times\n the subplan will be called until the outer plan is generated.\"\n\ngit show b6002a796d\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nPavel Stehule <[email protected]> 于2024年5月28日周二 15:31写道:HiMy question is - does memoize support subqueries? And can be enhanced to support this exercise without LATERAL and optimization fences?\nThe commit messages in memoize may answer your question:   \"For now, the planner will only consider using a result cache for    parameterized nested loop joins.  This works for both normal joins and    also for LATERAL type joins to subqueries.  It is possible to use this new    node for other uses in the future.  For example, to cache results from    correlated subqueries.  However, that's not done here due to some    difficulties obtaining a distinct estimation on the outer plan to    calculate the estimated cache hit ratio.  Currently we plan the inner plan    before planning the outer plan so there is no good way to know if a result    cache would be useful or not since we can't estimate the number of times    the subplan will be called until the outer plan is generated.\" git show b6002a796d-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Tue, 28 May 2024 15:46:21 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why memoize is not used for correlated subquery" }, { "msg_contents": "On Tue, 28 May 2024 at 19:31, Pavel Stehule <[email protected]> wrote:\n> My question is - does memoize support subqueries? And can be enhanced to support this exercise without LATERAL and optimization fences?\n\nIt's only currently considered for parameterized nested loop joins,\nnot for subplans.\n\nI wrote a bit about this in [1] and there's even a patch. The problem\nwith it is that we plan subqueries and generate an actual plan before\nplanning the outer query. This means we don't have an ndistinct\nestimate for the parameters to the subquery when we plan it, therefore\nwe can't tell if Memoize is a good choice or not. It isn't a good\nchoice if each set of parameters the subplan is called with is unique.\nThat would always be a cache miss and would only result in making the\nquery run more slowly.\n\nI imagined making this work by delaying the plan creation for\nsubqueries until the same time as create_plan() for the outer query.\nIf we have a Path with and without a Memoize node, at some point after\nplanning the outer query, we can choose which Path is the cheapest\nbased on the ndistinct estimate for the parameters.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvpGX7RN%2Bsh7Hn9HWZQKp53SjKaL%3DGtDzYheHWiEd-8moQ%40mail.gmail.com\n\n\n", "msg_date": "Tue, 28 May 2024 19:48:39 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why memoize is not used for correlated subquery" }, { "msg_contents": "út 28. 5. 2024 v 9:48 odesílatel David Rowley <[email protected]> napsal:\n\n> On Tue, 28 May 2024 at 19:31, Pavel Stehule <[email protected]>\n> wrote:\n> > My question is - does memoize support subqueries? And can be enhanced to\n> support this exercise without LATERAL and optimization fences?\n>\n> It's only currently considered for parameterized nested loop joins,\n> not for subplans.\n>\n> I wrote a bit about this in [1] and there's even a patch. 
The problem\n> with it is that we plan subqueries and generate an actual plan before\n> planning the outer query. This means we don't have an ndistinct\n> estimate for the parameters to the subquery when we plan it, therefore\n> we can't tell if Memoize is a good choice or not. It isn't a good\n> choice if each set of parameters the subplan is called with is unique.\n> That would always be a cache miss and would only result in making the\n> query run more slowly.\n>\n> I imagined making this work by delaying the plan creation for\n> subqueries until the same time as create_plan() for the outer query.\n> If we have a Path with and without a Memoize node, at some point after\n> planning the outer query, we can choose which Path is the cheapest\n> based on the ndistinct estimate for the parameters.\n>\n\nThank you for explanation\n\nPavel\n\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/CAApHDvpGX7RN%2Bsh7Hn9HWZQKp53SjKaL%3DGtDzYheHWiEd-8moQ%40mail.gmail.com\n>\n\nút 28. 5. 2024 v 9:48 odesílatel David Rowley <[email protected]> napsal:On Tue, 28 May 2024 at 19:31, Pavel Stehule <[email protected]> wrote:\n> My question is - does memoize support subqueries? And can be enhanced to support this exercise without LATERAL and optimization fences?\n\nIt's only currently considered for parameterized nested loop joins,\nnot for subplans.\n\nI wrote a bit about this in [1] and there's even a patch.  The problem\nwith it is that we plan subqueries and generate an actual plan before\nplanning the outer query.  This means we don't have an ndistinct\nestimate for the parameters to the subquery when we plan it, therefore\nwe can't tell if Memoize is a good choice or not.  It isn't a good\nchoice if each set of parameters the subplan is called with is unique.\nThat would always be a cache miss and would only result in making the\nquery run more slowly.\n\nI imagined making this work by delaying the plan creation for\nsubqueries until the same time as create_plan() for the outer query.\nIf we have a Path with and without a Memoize node, at some point after\nplanning the outer query, we can choose which Path is the cheapest\nbased on the ndistinct estimate for the parameters.Thank you for explanation Pavel \n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvpGX7RN%2Bsh7Hn9HWZQKp53SjKaL%3DGtDzYheHWiEd-8moQ%40mail.gmail.com", "msg_date": "Tue, 28 May 2024 09:56:50 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why memoize is not used for correlated subquery" }, { "msg_contents": "\n\n> I imagined making this work by delaying the plan creation for\n> subqueries until the same time as create_plan() for the outer query.\n\nDo you mean sublinks rather than subqueries? if so, we can get another\nbenefit I want very much.\n\nexplain (costs off) select * from t1 where t1.a = 1\n and exists (select 1 from t2 where t2.a = t1.a and random() > 0);\n QUERY PLAN \n-----------------------------------------------------------------------\n Seq Scan on t1\n Filter: ((a = 1) AND EXISTS(SubPlan 1))\n SubPlan 1\n -> Seq Scan on t2\n Filter: ((a = t1.a) AND (random() > '0'::double precision))\n\nAs for now, when we are planing the sublinks, we don't know t1.a = 1\nwhich may lost some optimization chance. 
Considering that t2.a is a\npartition key of t2, this would yield a big improvement when planning\na large number of partitioned tables.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 28 May 2024 09:47:17 +0000", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why memoize is not used for correlated subquery" } ]
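To make the LATERAL route discussed in the thread above concrete, here is a small sketch; the tables t1(a, b) and t2(a, b) are assumed for illustration only and are not taken from the messages:

    -- Correlated scalar subquery: planned as a SubPlan, for which
    -- Memoize is currently not considered.
    EXPLAIN (COSTS OFF)
    SELECT * FROM t1
    WHERE t1.b = (SELECT max(t2.b) FROM t2 WHERE t2.a = t1.a);

    -- Equivalent LATERAL form: the subquery becomes the inner side of a
    -- parameterized nested loop, so the planner may place a Memoize node
    -- above it if it expects enough repeated t1.a values.
    EXPLAIN (COSTS OFF)
    SELECT *
    FROM t1,
         LATERAL (SELECT max(t2.b) AS mb FROM t2 WHERE t2.a = t1.a) s
    WHERE t1.b = s.mb;

Whether Memoize is actually chosen for the second form still depends on the ndistinct estimate for t1.a and on costing; for the SubPlan form that estimate is not even available when the subplan is created, which is the problem described above.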
[ { "msg_contents": "Hi!\n\nI’ve noticed two “surprising” (to me) behaviors related to\nthe “ON ERROR” clause of the new JSON query functions in 17beta1.\n\n1. JSON parsing errors are not subject to ON ERROR\n Apparently, the functions expect JSONB so that a cast is implied\n when providing TEXT. However, the errors during that cast are\n not subject to the ON ERROR clause.\n\n 17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n ERROR: invalid input syntax for type json\n DETAIL: Token \"invalid\" is invalid.\n CONTEXT: JSON data, line 1: invalid\n\n Oracle DB and Db2 (LUW) both return NULL in that case.\n\n I had a look on the list archive to see if that is intentional but\n frankly speaking these functions came a long way. In case it is\n intentional it might be worth adding a note to the docs.\n\n2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n\n 17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n a\n ----\n []\n (1 row) \n\n As NULL ON EMPTY is implied, it should give the same result as\n explicitly adding NULL ON EMPTY:\n\n 17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON ERROR) a;\n a\n ---\n\n (1 row)\n\n Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n on the other hand returns NULL for both queries.\n\n I don’t think that PostgreSQL should follow Oracle DB's suit here\n but again, in case this is intentional it should be made explicit\n in the docs.\n\n-markus\n\n\n\n", "msg_date": "Tue, 28 May 2024 10:19:50 +0200", "msg_from": "Markus Winand <[email protected]>", "msg_from_op": true, "msg_subject": "ON ERROR in json_query and the like" }, { "msg_contents": "út 28. 5. 2024 v 11:29 odesílatel Markus Winand <[email protected]>\nnapsal:\n\n> Hi!\n>\n> I’ve noticed two “surprising” (to me) behaviors related to\n> the “ON ERROR” clause of the new JSON query functions in 17beta1.\n>\n> 1. JSON parsing errors are not subject to ON ERROR\n> Apparently, the functions expect JSONB so that a cast is implied\n> when providing TEXT. However, the errors during that cast are\n> not subject to the ON ERROR clause.\n>\n> 17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n> ERROR: invalid input syntax for type json\n> DETAIL: Token \"invalid\" is invalid.\n> CONTEXT: JSON data, line 1: invalid\n>\n> Oracle DB and Db2 (LUW) both return NULL in that case.\n>\n> I had a look on the list archive to see if that is intentional but\n> frankly speaking these functions came a long way. In case it is\n> intentional it might be worth adding a note to the docs.\n>\n\nI remember a talk about this subject years ago. Originally the JSON_QUERY\nwas designed in similar like Oracle, and casting to jsonb was done inside.\nIf I remember this behave depends on the fact, so old SQL/JSON has not json\ntype and it was based just on processing of plain text. But Postgres has\nJSON, and JSONB and then was more logical to use these types. And because\nthe JSON_QUERY uses these types, and the casting is done before the\nexecution of the function, then the clause ON ERROR cannot be handled.\nMoreover, until soft errors Postgres didn't allow handling input errors in\ncommon functions.\n\nI think so this difference should be mentioned in documentation.\n\nRegards\n\nPavel\n\n\n> 2. 
EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n>\n> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n> a\n> ----\n> []\n> (1 row)\n>\n> As NULL ON EMPTY is implied, it should give the same result as\n> explicitly adding NULL ON EMPTY:\n>\n> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON\n> ERROR) a;\n> a\n> ---\n>\n> (1 row)\n>\n> Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n> on the other hand returns NULL for both queries.\n>\n> I don’t think that PostgreSQL should follow Oracle DB's suit here\n> but again, in case this is intentional it should be made explicit\n> in the docs.\n>\n> -markus\n>\n>\n>\n>\n\nút 28. 5. 2024 v 11:29 odesílatel Markus Winand <[email protected]> napsal:Hi!\n\nI’ve noticed two “surprising” (to me) behaviors related to\nthe “ON ERROR” clause of the new JSON query functions in 17beta1.\n\n1. JSON parsing errors are not subject to ON ERROR\n   Apparently, the functions expect JSONB so that a cast is implied\n   when providing TEXT. However, the errors during that cast are\n   not subject to the ON ERROR clause.\n\n   17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n   ERROR:  invalid input syntax for type json\n   DETAIL:  Token \"invalid\" is invalid.\n   CONTEXT:  JSON data, line 1: invalid\n\n   Oracle DB and Db2 (LUW) both return NULL in that case.\n\n   I had a look on the list archive to see if that is intentional but\n   frankly speaking these functions came a long way. In case it is\n   intentional it might be worth adding a note to the docs.I remember a talk about this subject years ago. Originally the JSON_QUERY was designed in similar like Oracle, and casting to jsonb was done inside. If I remember this behave depends on the fact, so old SQL/JSON has not json type and it was based just on processing of plain text. But Postgres has JSON, and JSONB and then was more logical to use these types. And because the JSON_QUERY uses these types, and the casting is done before the execution of the function, then the clause ON ERROR cannot be handled. Moreover, until soft errors Postgres didn't allow handling input errors in common functions.I think so this difference should be mentioned in documentation.RegardsPavel\n\n2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n\n   17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n    a\n   ----\n    []\n   (1 row) \n\n   As NULL ON EMPTY is implied, it should give the same result as\n   explicitly adding NULL ON EMPTY:\n\n   17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON ERROR) a;\n    a\n   ---\n\n   (1 row)\n\n   Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n   on the other hand returns NULL for both queries.\n\n   I don’t think that PostgreSQL should follow Oracle DB's suit here\n   but again, in case this is intentional it should be made explicit\n   in the docs.\n\n-markus", "msg_date": "Tue, 28 May 2024 11:56:26 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Tue, May 28, 2024 at 5:29 PM Markus Winand <[email protected]> wrote:\n>\n> Hi!\n>\n> I’ve noticed two “surprising” (to me) behaviors related to\n> the “ON ERROR” clause of the new JSON query functions in 17beta1.\n>\n> 1. JSON parsing errors are not subject to ON ERROR\n> Apparently, the functions expect JSONB so that a cast is implied\n> when providing TEXT. 
However, the errors during that cast are\n> not subject to the ON ERROR clause.\n>\n> 17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n> ERROR: invalid input syntax for type json\n> DETAIL: Token \"invalid\" is invalid.\n> CONTEXT: JSON data, line 1: invalid\n>\n> Oracle DB and Db2 (LUW) both return NULL in that case.\n>\n> I had a look on the list archive to see if that is intentional but\n> frankly speaking these functions came a long way. In case it is\n> intentional it might be worth adding a note to the docs.\n\nprevious versions require SQL/JSON query function's context_item to\nexplicitly cast to jsonb,\nif it is not it will error out.\n\nprevious version the following query will have a error\nselect json_value(text '\"1\"' , 'strict $[*]' DEFAULT 9 ON ERROR);\n\nnow it only requires that (context_item) casting to jsonb successfully.\nI raise this issue separately at [1]\n\n[1] https://www.postgresql.org/message-id/CACJufxGWJTa-b0WjNH15otih42PA7SF%2Be7LbkAb0gThs7ojT5Q%40mail.gmail.com\n\n> 2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n>\n> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n> a\n> ----\n> []\n> (1 row)\n>\n> As NULL ON EMPTY is implied, it should give the same result as\n> explicitly adding NULL ON EMPTY:\n>\n\nI vaguely remember, we stumbled on ON ERROR, ON EMPTY several times.\ni don't have a standard, but the doc seems not explicit enough for the\nabove example.\n\nin json_query, maybe we can rephrase like:\n--------------------\nThe ON EMPTY clause specifies the behavior if evaluating\npath_expression yields no value at all. The default when ON EMPTY is\nnot specified\nand ON ERROR not specified is to return a null value.\n\nThe ON ERROR clause specifies the behavior if an error occurs when\nevaluating path_expression, including evaluation yields no value at\nall and ON EMPTY is not specified, the operation to coerce the result\nvalue to the output type, or during the execution of ON EMPTY behavior\n(that is caused by empty result of path_expression evaluation). The\ndefault when ON ERROR is not specified is to return a null value.\n\n\n\n> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON ERROR) a;\n> a\n> ---\n>\n> (1 row)\n>\n> Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n> on the other hand returns NULL for both queries.\n>\n> I don’t think that PostgreSQL should follow Oracle DB's suit here\n> but again, in case this is intentional it should be made explicit\n> in the docs.\n\n\n`\nSELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON ERROR) a;\n`\nI think these sentences addressed the above query.\n<<<\nor during the execution of ON EMPTY behavior (that is caused by empty\nresult of path_expression evaluation).\n<<<\nAs you can see, in this context, \"execution of ON EMPTY behavior\"\nworks fine, successfully returned null,\nso `EMPTY ARRAY ON ERROR` part was ignored.\n\n\n", "msg_date": "Tue, 4 Jun 2024 13:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Tue, May 28, 2024 at 5:29 PM Markus Winand <[email protected]> wrote:\n>\n> Hi!\n>\n> I’ve noticed two “surprising” (to me) behaviors related to\n> the “ON ERROR” clause of the new JSON query functions in 17beta1.\n>\n> 1. JSON parsing errors are not subject to ON ERROR\n> Apparently, the functions expect JSONB so that a cast is implied\n> when providing TEXT. 
However, the errors during that cast are\n> not subject to the ON ERROR clause.\n>\n> 17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n> ERROR: invalid input syntax for type json\n> DETAIL: Token \"invalid\" is invalid.\n> CONTEXT: JSON data, line 1: invalid\n>\n> Oracle DB and Db2 (LUW) both return NULL in that case.\n>\n> I had a look on the list archive to see if that is intentional but\n> frankly speaking these functions came a long way. In case it is\n> intentional it might be worth adding a note to the docs.\n>\n\njson_query ( context_item, path_expression);\n\n`SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);`\nto make this return NULL, that means to catch all the errors that\nhappened while context_item evaluation.\notherwise, it would not be consistent?\n\nCurrently context_item expressions can be quite arbitrary.\nconsidering the following examples.\n\ncreate or replace function test(jsonb) returns jsonb as $$ begin raise\nexception 'abort'; end $$ language plpgsql;\ncreate or replace function test1(jsonb) returns jsonb as $$ begin\nreturn $1; end $$ language plpgsql;\nSELECT JSON_VALUE(test('1'), '$');\nSELECT JSON_VALUE(test1('1'), '$');\nSELECT JSON_VALUE((select '1'::jsonb), '$');\nSELECT JSON_VALUE((with cte(s) as (select '1') select s::jsonb from cte), '$');\nSELECT JSON_VALUE((with cte(s) as (select '1') select s::jsonb from\ncte union all select s::jsonb from cte limit 1), '$');\n\nCurrently, I don't think we can make\nSELECT JSON_VALUE(test('1'), '$' null on error);\nreturn NULL.\n\n\n", "msg_date": "Tue, 11 Jun 2024 09:58:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "\n\n> On 11.06.2024, at 03:58, jian he <[email protected]> wrote:\n> \n> On Tue, May 28, 2024 at 5:29 PM Markus Winand <[email protected]> wrote:\n>> \n>> Hi!\n>> \n>> I’ve noticed two “surprising” (to me) behaviors related to\n>> the “ON ERROR” clause of the new JSON query functions in 17beta1.\n>> \n>> 1. JSON parsing errors are not subject to ON ERROR\n>> Apparently, the functions expect JSONB so that a cast is implied\n>> when providing TEXT. However, the errors during that cast are\n>> not subject to the ON ERROR clause.\n>> \n>> 17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n>> ERROR: invalid input syntax for type json\n>> DETAIL: Token \"invalid\" is invalid.\n>> CONTEXT: JSON data, line 1: invalid\n>> \n>> Oracle DB and Db2 (LUW) both return NULL in that case.\n>> \n>> I had a look on the list archive to see if that is intentional but\n>> frankly speaking these functions came a long way. 
In case it is\n>> intentional it might be worth adding a note to the docs.\n>> \n> \n> json_query ( context_item, path_expression);\n> \n> `SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);`\n> to make this return NULL, that means to catch all the errors that\n> happened while context_item evaluation.\n> otherwise, it would not be consistent?\n> \n> Currently context_item expressions can be quite arbitrary.\n> considering the following examples.\n> \n> create or replace function test(jsonb) returns jsonb as $$ begin raise\n> exception 'abort'; end $$ language plpgsql;\n> create or replace function test1(jsonb) returns jsonb as $$ begin\n> return $1; end $$ language plpgsql;\n> SELECT JSON_VALUE(test('1'), '$');\n> SELECT JSON_VALUE(test1('1'), '$');\n> SELECT JSON_VALUE((select '1'::jsonb), '$');\n> SELECT JSON_VALUE((with cte(s) as (select '1') select s::jsonb from cte), '$');\n> SELECT JSON_VALUE((with cte(s) as (select '1') select s::jsonb from\n> cte union all select s::jsonb from cte limit 1), '$');\n> \n> Currently, I don't think we can make\n> SELECT JSON_VALUE(test('1'), '$' null on error);\n> return NULL.\n\nThis is not how it is meant. Your example is not subject to the ON ERROR\nclause because the error happens in a sub-expression. My point is that\nON ERROR includes the String to JSON conversion (the JSON parsing) that\n— in the way the standard describes these functions — inside of them.\n\nIn the standard, JSON_VALUE & co accept string types as well as the type JSON:\n\n10.14 SR 1: The declared type of the <value expression> simply contained in the <JSON input expression> immediately contained in the <JSON context item> shall be a string type or a JSON type. \n\nIt might be best to think of it as two separate functions, overloaded:\n\nJSON_VALUE(context_item JSONB, path_expression …)\nJSON_VALUE(context_item TEXT, path_expression …)\n\nNow if you do this:\ncreate function test2(text) returns text as $$ begin\nreturn $1; end $$ language plpgsql;\ncreate function test3(text) returns jsonb as $$ begin\nreturn $1::jsonb; end $$ language plpgsql;\n\nSELECT JSON_VALUE(test2('invalid'), '$' null on error);\nSELECT JSON_VALUE(test3('invalid'), '$' null on error);\n\nThe first query should return NULL, while the second should (and does) fail.\n\nThis is how I understand it.\n\n-markus\n\n", "msg_date": "Wed, 12 Jun 2024 14:52:49 +0200", "msg_from": "Markus Winand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Wednesday, June 12, 2024, Markus Winand <[email protected]> wrote:\n>\n>\n> 10.14 SR 1: The declared type of the <value expression> simply contained\n> in the <JSON input expression> immediately contained in the <JSON context\n> item> shall be a string type or a JSON type.\n\n\n> It might be best to think of it as two separate functions, overloaded:\n>\n> JSON_VALUE(context_item JSONB, path_expression …)\n> JSON_VALUE(context_item TEXT, path_expression …)\n\n\nYes, we need to document that we deviate from (fail to fully implement) the\nstandard here in that we only provide jsonb parameter functions, not text\nones.\n\nDavid J.\n\nOn Wednesday, June 12, 2024, Markus Winand <[email protected]> wrote:\n\n10.14 SR 1: The declared type of the <value expression> simply contained in the <JSON input expression> immediately contained in the <JSON context item> shall be a string type or a JSON type.  
\n\nIt might be best to think of it as two separate functions, overloaded:\n\nJSON_VALUE(context_item JSONB, path_expression …)\nJSON_VALUE(context_item TEXT, path_expression …)Yes, we need to document that we deviate from (fail to fully implement) the standard here in that we only provide jsonb parameter functions, not text ones.David J.", "msg_date": "Wed, 12 Jun 2024 06:13:52 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "\n> On 04.06.2024, at 07:00, jian he <[email protected]> wrote:\n> \n> On Tue, May 28, 2024 at 5:29 PM Markus Winand <[email protected]> wrote:\n> \n>> 2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n>> \n>> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n>> a\n>> ----\n>> []\n>> (1 row)\n>> \n>> As NULL ON EMPTY is implied, it should give the same result as\n>> explicitly adding NULL ON EMPTY:\n>> \n> \n> I vaguely remember, we stumbled on ON ERROR, ON EMPTY several times.\n> i don't have a standard, \n\nIn my understanding of the standard is that there is no distinction\nbetween an explicit and implicit ON EMPTY clause.\n\nE.g. clause 6.35 (json_query) Syntax Rule 4:\n\n • If <JSON query empty behavior> is not specified, then NULL ON EMPTY is implicit.\n\nGeneral Rule 5ai then covers the NULL ON EMPTY case:\n\n • i) If the length of SEQ2 is 0 (zero) and ONEMPTY is NULL, then let JV be the null value.\n\nNeither of these make the ON EMPTY handling dependent on the presence of ON ERROR.\n\n-markus\n\n", "msg_date": "Wed, 12 Jun 2024 15:22:15 +0200", "msg_from": "Markus Winand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Tuesday, May 28, 2024, Markus Winand <[email protected]> wrote:\n\n>\n> 2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n>\n> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n> a\n> ----\n> []\n> (1 row)\n>\n> As NULL ON EMPTY is implied, it should give the same result as\n> explicitly adding NULL ON EMPTY:\n>\n> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON\n> ERROR) a;\n> a\n> ---\n>\n> (1 row)\n>\n> Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n> on the other hand returns NULL for both queries.\n>\n> I don’t think that PostgreSQL should follow Oracle DB's suit here\n> but again, in case this is intentional it should be made explicit\n> in the docs.\n>\n\nThe docs here don’t seem to cover the on empty clause at all nor fully\ncover all options.\n\nWhere do you find the claim that the one implies the other? Is it a typo\nthat your examples says “implies null on empty” but the subject line says\n“implies error on empty”?\n\nWithout those clauses a result is either empty or an error - they are\nmutually exclusive (ignoring matches). I would not expect one clause to\nimply or affect the behavior of the other. There is no chaining. The\noriginal result is transformed to the new result specified by the clause.\n\nI’d need to figure out whether the example you show is actually producing\nempty or error; but it seems correct if the result is empty. The first\nquery ignores the error clause - the empty array row seems to be the\nrepresentation of empty here; the second one matches the empty clause and\noutputs null instead of the empty array.\n\nDavid J.\n\nOn Tuesday, May 28, 2024, Markus Winand <[email protected]> wrote:\n2. 
EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n\n   17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n    a\n   ----\n    []\n   (1 row) \n\n   As NULL ON EMPTY is implied, it should give the same result as\n   explicitly adding NULL ON EMPTY:\n\n   17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON ERROR) a;\n    a\n   ---\n\n   (1 row)\n\n   Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n   on the other hand returns NULL for both queries.\n\n   I don’t think that PostgreSQL should follow Oracle DB's suit here\n   but again, in case this is intentional it should be made explicit\n   in the docs.\nThe docs here don’t seem to cover the on empty clause at all nor fully cover all options.Where do you find the claim that the one implies the other?  Is it a typo that your examples says “implies null on empty” but the subject line says “implies error on empty”?Without those clauses a result is either empty or an error - they are mutually exclusive (ignoring matches).  I would not expect one clause to imply or affect the behavior of the other.  There is no chaining.  The original result is transformed to the new result specified by the clause.I’d need to figure out whether the example you show is actually producing empty or error; but it seems correct if the result is empty.  The first query ignores the error clause - the empty array row seems to be the representation of empty here; the second one matches the empty clause and outputs null instead of the empty array.David J.", "msg_date": "Wed, 12 Jun 2024 06:31:19 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "\n\n> On 12.06.2024, at 15:31, David G. Johnston <[email protected]> wrote:\n> \n> On Tuesday, May 28, 2024, Markus Winand <[email protected]> wrote:\n> \n> 2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n> \n> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n> a\n> ----\n> []\n> (1 row) \n> \n> As NULL ON EMPTY is implied, it should give the same result as\n> explicitly adding NULL ON EMPTY:\n> \n> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON ERROR) a;\n> a\n> ---\n> \n> (1 row)\n> \n> Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n> on the other hand returns NULL for both queries.\n> \n> I don’t think that PostgreSQL should follow Oracle DB's suit here\n> but again, in case this is intentional it should be made explicit\n> in the docs.\n> \n> The docs here don’t seem to cover the on empty clause at all nor fully cover all options.\n> \n> Where do you find the claim that the one implies the other? Is it a typo that your examples says “implies null on empty” but the subject line says “implies error on empty”?\n\nI see the confusion caused — sorry. The headline was meant to describe the observed behaviour in 17beta1, while the content refers to how the standard defines it.\n\n> Without those clauses a result is either empty or an error - they are mutually exclusive (ignoring matches). I would not expect one clause to imply or affect the behavior of the other. There is no chaining. The original result is transformed to the new result specified by the clause.\n\nAgreed, that’s why I found the 17beta1 behaviour surprising. 
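To make the empty-versus-error distinction concrete, a small sketch (the literals and the strict-mode path are illustrative choices; the results shown assume the implied NULL ON EMPTY behaviour described above):

    -- a structural error under strict mode is handled by the ON ERROR clause
    SELECT JSON_QUERY(jsonb '{}', 'strict $.missing' EMPTY ARRAY ON ERROR);
    -- expected: []

    -- an empty result is handled by ON EMPTY; with no explicit clause,
    -- NULL ON EMPTY is implied, so the ON ERROR clause does not apply
    SELECT JSON_QUERY(jsonb '[]', '$[*]' EMPTY ARRAY ON ERROR);
    -- expected: NULL

    -- only an explicit ON EMPTY clause changes the empty-result handling
    SELECT JSON_QUERY(jsonb '[]', '$[*]' EMPTY ARRAY ON EMPTY);
    -- expected: []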
\n\n> I’d need to figure out whether the example you show is actually producing empty or error; but it seems correct if the result is empty.\n\nAs I understand the standard, an empty result is not an error.\n\n> The first query ignores the error clause - the empty array row seems to be the representation of empty here; the second one matches the empty clause and outputs null instead of the empty array.\n\nBut the first should behave the same, as the standard implies NULL ON EMPTY if there is no explicit ON EMPTY clause. Oracle DB behaving differently here makes me wonder if there is something in the standard that I haven’t noticed yet...\n\n-markus\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 15:53:16 +0200", "msg_from": "Markus Winand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "Hi,\n\n(apologies for not replying to this thread sooner)\n\nOn Tue, May 28, 2024 at 6:57 PM Pavel Stehule <[email protected]> wrote:\n> út 28. 5. 2024 v 11:29 odesílatel Markus Winand <[email protected]> napsal:\n>>\n>> Hi!\n>>\n>> I’ve noticed two “surprising” (to me) behaviors related to\n>> the “ON ERROR” clause of the new JSON query functions in 17beta1.\n>>\n>> 1. JSON parsing errors are not subject to ON ERROR\n>> Apparently, the functions expect JSONB so that a cast is implied\n>> when providing TEXT. However, the errors during that cast are\n>> not subject to the ON ERROR clause.\n>>\n>> 17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n>> ERROR: invalid input syntax for type json\n>> DETAIL: Token \"invalid\" is invalid.\n>> CONTEXT: JSON data, line 1: invalid\n>>\n>> Oracle DB and Db2 (LUW) both return NULL in that case.\n>>\n>> I had a look on the list archive to see if that is intentional but\n>> frankly speaking these functions came a long way. In case it is\n>> intentional it might be worth adding a note to the docs.\n>\n>\n> I remember a talk about this subject years ago. Originally the JSON_QUERY was designed in similar like Oracle, and casting to jsonb was done inside. If I remember this behave depends on the fact, so old SQL/JSON has not json type and it was based just on processing of plain text. But Postgres has JSON, and JSONB and then was more logical to use these types. And because the JSON_QUERY uses these types, and the casting is done before the execution of the function, then the clause ON ERROR cannot be handled. Moreover, until soft errors Postgres didn't allow handling input errors in common functions.\n>\n> I think so this difference should be mentioned in documentation.\n\nAgree that the documentation needs to be clear about this. I'll update\nmy patch at [1] to add a note next to table 9.16.3. SQL/JSON Query\nFunctions.\n\n>> 2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n>>\n>> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n>> a\n>> ----\n>> []\n>> (1 row)\n>>\n>> As NULL ON EMPTY is implied, it should give the same result as\n>> explicitly adding NULL ON EMPTY:\n>>\n>> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON ERROR) a;\n>> a\n>> ---\n>>\n>> (1 row)\n>>\n>> Interestingly, Oracle DB gives the same (wrong) results. 
Db2 (LUW)\n>> on the other hand returns NULL for both queries.\n>>\n>> I don’t think that PostgreSQL should follow Oracle DB's suit here\n>> but again, in case this is intentional it should be made explicit\n>> in the docs.\n\nThis behavior is a bug and result of an unintentional change that I\nmade at some point after getting involved with this patch set. So I'm\ngoing to fix this so that the empty results of jsonpath evaluation use\nNULL ON EMPTY by default, ie, when the ON EMPTY clause is not present.\nAttached a patch to do so.\n\n--\nThanks, Amit Langote\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqGdineyHfcTEe0%3D8jjXonH3qXi4vFB%2BgRxf1L%2BxR2v_Pw%40mail.gmail.com", "msg_date": "Mon, 17 Jun 2024 15:20:16 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "Hi,\n\nOn 06/17/24 02:20, Amit Langote wrote:\n>>> Apparently, the functions expect JSONB so that a cast is implied\n>>> when providing TEXT. However, the errors during that cast are\n>>> not subject to the ON ERROR clause.\n>>>\n>>> 17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n>>> ERROR: invalid input syntax for type json\n>>> DETAIL: Token \"invalid\" is invalid.\n>>> CONTEXT: JSON data, line 1: invalid\n>>>\n>>> Oracle DB and Db2 (LUW) both return NULL in that case.\n\nI wonder, could prosupport rewriting be used to detect that the first\nargument is supplied by a cast, and rewrite the expression to apply the\ncast 'softly'? Or would that behavior be too magical?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 17 Jun 2024 08:47:16 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "\n> On 17.06.2024, at 08:20, Amit Langote <[email protected]> wrote:\n> \n> Hi,\n> \n> (apologies for not replying to this thread sooner)\n> \n> On Tue, May 28, 2024 at 6:57 PM Pavel Stehule <[email protected]> wrote:\n>> út 28. 5. 2024 v 11:29 odesílatel Markus Winand <[email protected]> napsal:\n>>> \n>>> Hi!\n>>> \n>>> I’ve noticed two “surprising” (to me) behaviors related to\n>>> the “ON ERROR” clause of the new JSON query functions in 17beta1.\n>>> \n>>> 1. JSON parsing errors are not subject to ON ERROR\n>>> Apparently, the functions expect JSONB so that a cast is implied\n>>> when providing TEXT. However, the errors during that cast are\n>>> not subject to the ON ERROR clause.\n>>> \n>>> 17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n>>> ERROR: invalid input syntax for type json\n>>> DETAIL: Token \"invalid\" is invalid.\n>>> CONTEXT: JSON data, line 1: invalid\n>>> \n>>> Oracle DB and Db2 (LUW) both return NULL in that case.\n>>> \n>>> I had a look on the list archive to see if that is intentional but\n>>> frankly speaking these functions came a long way. In case it is\n>>> intentional it might be worth adding a note to the docs.\n>> \n>> \n>> I remember a talk about this subject years ago. Originally the JSON_QUERY was designed in similar like Oracle, and casting to jsonb was done inside. If I remember this behave depends on the fact, so old SQL/JSON has not json type and it was based just on processing of plain text. But Postgres has JSON, and JSONB and then was more logical to use these types. And because the JSON_QUERY uses these types, and the casting is done before the execution of the function, then the clause ON ERROR cannot be handled. 
Moreover, until soft errors Postgres didn't allow handling input errors in common functions.\n>> \n>> I think so this difference should be mentioned in documentation.\n> \n> Agree that the documentation needs to be clear about this. I'll update\n> my patch at [1] to add a note next to table 9.16.3. SQL/JSON Query\n> Functions.\n\nConsidering another branch of this thread [1] I think the\n\"Supported Features” appendix of the docs should mention that as well.\n\nThe way I see it is that the standards defines two overloaded\nJSON_QUERY functions, of which PostgreSQL will support only one.\nIn case of valid JSON, the implied CAST makes it look as though\nthe second variant of these function was supported as well but that\nillusion totally falls apart once the JSON is not valid anymore.\n\nI think it affects the following feature IDs:\n\n - T821, Basic SQL/JSON query operators\n For JSON_VALUE, JSON_TABLE and JSON_EXISTS\n - T828, JSON_QUERY\n\nAlso, how hard would it be to add the functions that accept\ncharacter strings? Is there, besides the effort, any thing else\nagainst it? I’m asking because I believe once released it might\nnever be changed — for backward compatibility.\n\n[1] https://www.postgresql.org/message-id/CAKFQuwb50BAaj83Np%2B1O6xe3_T6DO8w2mxtFbgSbbUng%2BabrqA%40mail.gmail.com\n\n\n> \n>>> 2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n>>> \n>>> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n>>> a\n>>> ----\n>>> []\n>>> (1 row)\n>>> \n>>> As NULL ON EMPTY is implied, it should give the same result as\n>>> explicitly adding NULL ON EMPTY:\n>>> \n>>> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON ERROR) a;\n>>> a\n>>> ---\n>>> \n>>> (1 row)\n>>> \n>>> Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n>>> on the other hand returns NULL for both queries.\n>>> \n>>> I don’t think that PostgreSQL should follow Oracle DB's suit here\n>>> but again, in case this is intentional it should be made explicit\n>>> in the docs.\n> \n> This behavior is a bug and result of an unintentional change that I\n> made at some point after getting involved with this patch set. So I'm\n> going to fix this so that the empty results of jsonpath evaluation use\n> NULL ON EMPTY by default, ie, when the ON EMPTY clause is not present.\n> Attached a patch to do so.\n> \n\nTested: works.\n\nThanks :)\n\n-markus\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 15:06:26 +0200", "msg_from": "Markus Winand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "po 17. 6. 2024 v 15:07 odesílatel Markus Winand <[email protected]>\nnapsal:\n\n>\n> > On 17.06.2024, at 08:20, Amit Langote <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > (apologies for not replying to this thread sooner)\n> >\n> > On Tue, May 28, 2024 at 6:57 PM Pavel Stehule <[email protected]>\n> wrote:\n> >> út 28. 5. 2024 v 11:29 odesílatel Markus Winand <\n> [email protected]> napsal:\n> >>>\n> >>> Hi!\n> >>>\n> >>> I’ve noticed two “surprising” (to me) behaviors related to\n> >>> the “ON ERROR” clause of the new JSON query functions in 17beta1.\n> >>>\n> >>> 1. JSON parsing errors are not subject to ON ERROR\n> >>> Apparently, the functions expect JSONB so that a cast is implied\n> >>> when providing TEXT. 
However, the errors during that cast are\n> >>> not subject to the ON ERROR clause.\n> >>>\n> >>> 17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n> >>> ERROR: invalid input syntax for type json\n> >>> DETAIL: Token \"invalid\" is invalid.\n> >>> CONTEXT: JSON data, line 1: invalid\n> >>>\n> >>> Oracle DB and Db2 (LUW) both return NULL in that case.\n> >>>\n> >>> I had a look on the list archive to see if that is intentional but\n> >>> frankly speaking these functions came a long way. In case it is\n> >>> intentional it might be worth adding a note to the docs.\n> >>\n> >>\n> >> I remember a talk about this subject years ago. Originally the\n> JSON_QUERY was designed in similar like Oracle, and casting to jsonb was\n> done inside. If I remember this behave depends on the fact, so old SQL/JSON\n> has not json type and it was based just on processing of plain text. But\n> Postgres has JSON, and JSONB and then was more logical to use these types.\n> And because the JSON_QUERY uses these types, and the casting is done before\n> the execution of the function, then the clause ON ERROR cannot be handled.\n> Moreover, until soft errors Postgres didn't allow handling input errors in\n> common functions.\n> >>\n> >> I think so this difference should be mentioned in documentation.\n> >\n> > Agree that the documentation needs to be clear about this. I'll update\n> > my patch at [1] to add a note next to table 9.16.3. SQL/JSON Query\n> > Functions.\n>\n> Considering another branch of this thread [1] I think the\n> \"Supported Features” appendix of the docs should mention that as well.\n>\n> The way I see it is that the standards defines two overloaded\n> JSON_QUERY functions, of which PostgreSQL will support only one.\n> In case of valid JSON, the implied CAST makes it look as though\n> the second variant of these function was supported as well but that\n> illusion totally falls apart once the JSON is not valid anymore.\n>\n> I think it affects the following feature IDs:\n>\n> - T821, Basic SQL/JSON query operators\n> For JSON_VALUE, JSON_TABLE and JSON_EXISTS\n> - T828, JSON_QUERY\n>\n> Also, how hard would it be to add the functions that accept\n> character strings? Is there, besides the effort, any thing else\n> against it? I’m asking because I believe once released it might\n> never be changed — for backward compatibility.\n>\n\nIt is easy to add the function that accepts text, but when you have\noverloaded functions, then varchar or text is the preferred type, and then\nthe arguments will be casted to text by default instead of json. 
You can\nhave one function with argument of type \"any\", but then the\nexecution is a little bit slower (outer cast is faster than cast inside\nfunction), and again the Postgres cannot deduce used argument types from\nfunction's argument types.\n\nProbably this can be solved if we introduce a new kind of type, where the\npreferred type will be json, or jsonb.\n\nSo the problem is in the definition of implementation details about the\nmechanism of type deduction (when you use string literal or when you use\nstring expression).\n\nSo now, when you will write json_table(x1 || x2), then and x1 and x2 are of\nunknown type, then Postgres can know, so x1 and x2 will be jsonb, but when\nthere\nwill be secondary function json_table(text), then Postgres detects problem,\nand use preferred type (that is text).\n\nGenerally, Postgres supports function overloading and it is working well\nbetween text, int and numeric where constants have different syntax, but\nwhen the constant\nliteral has the same syntax, there can be problems with hidden casts to\nunwanted type, so not overloaded function is ideal. These issues can be\nsolved in the analysis stage by special code, but again it increases code\ncomplexity.\n\n\n\n\n\n\n>\n> [1]\n> https://www.postgresql.org/message-id/CAKFQuwb50BAaj83Np%2B1O6xe3_T6DO8w2mxtFbgSbbUng%2BabrqA%40mail.gmail.com\n>\n>\n> >\n> >>> 2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n> >>>\n> >>> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n> >>> a\n> >>> ----\n> >>> []\n> >>> (1 row)\n> >>>\n> >>> As NULL ON EMPTY is implied, it should give the same result as\n> >>> explicitly adding NULL ON EMPTY:\n> >>>\n> >>> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY\n> ON ERROR) a;\n> >>> a\n> >>> ---\n> >>>\n> >>> (1 row)\n> >>>\n> >>> Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n> >>> on the other hand returns NULL for both queries.\n> >>>\n> >>> I don’t think that PostgreSQL should follow Oracle DB's suit here\n> >>> but again, in case this is intentional it should be made explicit\n> >>> in the docs.\n> >\n> > This behavior is a bug and result of an unintentional change that I\n> > made at some point after getting involved with this patch set. So I'm\n> > going to fix this so that the empty results of jsonpath evaluation use\n> > NULL ON EMPTY by default, ie, when the ON EMPTY clause is not present.\n> > Attached a patch to do so.\n> >\n>\n> Tested: works.\n>\n> Thanks :)\n>\n> -markus\n>\n>\n\npo 17. 6. 2024 v 15:07 odesílatel Markus Winand <[email protected]> napsal:\n> On 17.06.2024, at 08:20, Amit Langote <[email protected]> wrote:\n> \n> Hi,\n> \n> (apologies for not replying to this thread sooner)\n> \n> On Tue, May 28, 2024 at 6:57 PM Pavel Stehule <[email protected]> wrote:\n>> út 28. 5. 2024 v 11:29 odesílatel Markus Winand <[email protected]> napsal:\n>>> \n>>> Hi!\n>>> \n>>> I’ve noticed two “surprising” (to me) behaviors related to\n>>> the “ON ERROR” clause of the new JSON query functions in 17beta1.\n>>> \n>>> 1. JSON parsing errors are not subject to ON ERROR\n>>>   Apparently, the functions expect JSONB so that a cast is implied\n>>>   when providing TEXT. 
However, the errors during that cast are\n>>>   not subject to the ON ERROR clause.\n>>> \n>>>   17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n>>>   ERROR:  invalid input syntax for type json\n>>>   DETAIL:  Token \"invalid\" is invalid.\n>>>   CONTEXT:  JSON data, line 1: invalid\n>>> \n>>>   Oracle DB and Db2 (LUW) both return NULL in that case.\n>>> \n>>>   I had a look on the list archive to see if that is intentional but\n>>>   frankly speaking these functions came a long way. In case it is\n>>>   intentional it might be worth adding a note to the docs.\n>> \n>> \n>> I remember a talk about this subject years ago. Originally the JSON_QUERY was designed in similar like Oracle, and casting to jsonb was done inside. If I remember this behave depends on the fact, so old SQL/JSON has not json type and it was based just on processing of plain text. But Postgres has JSON, and JSONB and then was more logical to use these types. And because the JSON_QUERY uses these types, and the casting is done before the execution of the function, then the clause ON ERROR cannot be handled. Moreover, until soft errors Postgres didn't allow handling input errors in common functions.\n>> \n>> I think so this difference should be mentioned in documentation.\n> \n> Agree that the documentation needs to be clear about this. I'll update\n> my patch at [1] to add a note next to table 9.16.3. SQL/JSON Query\n> Functions.\n\nConsidering another branch of this thread [1] I think the\n\"Supported Features” appendix of the docs should mention that as well.\n\nThe way I see it is that the standards defines two overloaded\nJSON_QUERY functions, of which PostgreSQL will support only one.\nIn case of valid JSON, the implied CAST makes it look as though\nthe second variant of these function was supported as well but that\nillusion totally falls apart once the JSON is not valid anymore.\n\nI think it affects the following feature IDs:\n\n  - T821, Basic SQL/JSON query operators\n     For JSON_VALUE, JSON_TABLE and JSON_EXISTS\n  - T828, JSON_QUERY\n\nAlso, how hard would it be to add the functions that accept\ncharacter strings? Is there, besides the effort, any thing else\nagainst it? I’m asking because I believe once released it might\nnever be changed — for backward compatibility.It is easy to add the function that accepts text, but when you have overloaded functions, then varchar or text is the preferred type, and thenthe arguments will be casted to text by default instead of json. You can have one function with argument of type \"any\", but then theexecution is a little bit slower (outer cast is faster than cast inside function), and again the Postgres cannot deduce used argument types from function's argument types.Probably this can be solved if we introduce a new kind of type, where the preferred type will be json, or jsonb. 
So the problem is in the definition of implementation details about the mechanism of type deduction (when you use string literal or when you use string expression).So now, when you will write json_table(x1 || x2), then and x1 and x2 are of unknown type, then Postgres can know, so x1 and x2 will be jsonb, but when therewill be secondary function json_table(text), then Postgres detects problem, and use preferred type (that is text).Generally, Postgres supports function overloading and it is working well between text, int and numeric where constants have different syntax, but when the constantliteral has the same syntax, there can be problems with hidden casts to unwanted type, so not overloaded function is ideal. These issues can be solved in the analysis stage by special code, but again it increases code complexity.  \n\n[1] https://www.postgresql.org/message-id/CAKFQuwb50BAaj83Np%2B1O6xe3_T6DO8w2mxtFbgSbbUng%2BabrqA%40mail.gmail.com\n\n\n> \n>>> 2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n>>> \n>>>   17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n>>>    a\n>>>   ----\n>>>    []\n>>>   (1 row)\n>>> \n>>>   As NULL ON EMPTY is implied, it should give the same result as\n>>>   explicitly adding NULL ON EMPTY:\n>>> \n>>>   17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON ERROR) a;\n>>>    a\n>>>   ---\n>>> \n>>>   (1 row)\n>>> \n>>>   Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n>>>   on the other hand returns NULL for both queries.\n>>> \n>>>   I don’t think that PostgreSQL should follow Oracle DB's suit here\n>>>   but again, in case this is intentional it should be made explicit\n>>>   in the docs.\n> \n> This behavior is a bug and result of an unintentional change that I\n> made at some point after getting involved with this patch set.  So I'm\n> going to fix this so that the empty results of jsonpath evaluation use\n> NULL ON EMPTY by default, ie, when the ON EMPTY clause is not present.\n> Attached a patch to do so.\n> \n\nTested: works.\n\nThanks :)\n\n-markus", "msg_date": "Mon, 17 Jun 2024 15:56:54 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Mon, Jun 17, 2024 at 10:07 PM Markus Winand <[email protected]> wrote:\n> > On 17.06.2024, at 08:20, Amit Langote <[email protected]> wrote:\n> >>> 2. EMPTY [ARRAY|OBJECT] ON ERROR implies ERROR ON EMPTY\n> >>>\n> >>> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' EMPTY ARRAY ON ERROR) a;\n> >>> a\n> >>> ----\n> >>> []\n> >>> (1 row)\n> >>>\n> >>> As NULL ON EMPTY is implied, it should give the same result as\n> >>> explicitly adding NULL ON EMPTY:\n> >>>\n> >>> 17beta1=# SELECT JSON_QUERY('[]', '$[*]' NULL ON EMPTY EMPTY ARRAY ON ERROR) a;\n> >>> a\n> >>> ---\n> >>>\n> >>> (1 row)\n> >>>\n> >>> Interestingly, Oracle DB gives the same (wrong) results. Db2 (LUW)\n> >>> on the other hand returns NULL for both queries.\n> >>>\n> >>> I don’t think that PostgreSQL should follow Oracle DB's suit here\n> >>> but again, in case this is intentional it should be made explicit\n> >>> in the docs.\n> >\n> > This behavior is a bug and result of an unintentional change that I\n> > made at some point after getting involved with this patch set. 
So I'm\n> > going to fix this so that the empty results of jsonpath evaluation use\n> > NULL ON EMPTY by default, ie, when the ON EMPTY clause is not present.\n> > Attached a patch to do so.\n> >\n>\n> Tested: works.\n\nPushed, thanks for testing.\n\nI'll work on the documentation updates that may be needed based on\nthis and nearby discussion(s).\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Wed, 19 Jun 2024 15:54:02 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Mon, Jun 17, 2024 at 9:07 PM Markus Winand <[email protected]> wrote:\n>\n>\n> I think it affects the following feature IDs:\n>\n> - T821, Basic SQL/JSON query operators\n> For JSON_VALUE, JSON_TABLE and JSON_EXISTS\n> - T828, JSON_QUERY\n>\n> Also, how hard would it be to add the functions that accept\n> character strings? Is there, besides the effort, any thing else\n> against it? I’m asking because I believe once released it might\n> never be changed — for backward compatibility.\n>\n\nwe have ExecEvalCoerceViaIOSafe, so it's doable.\nI tried, but other things break. so it's not super easy i think.\n\nbecause of eval_const_expressions_mutator, postgres will constantly\nevaluate simple const expressions to simplify some expressions.\n`SELECT JSON_QUERY('a', '$');`\npostgres will try to do the cast coercion from text 'a' to jsonb. but\nit will fail, but it's hard to send the cast failed information to\nlater code,\nin ExecInterpExpr. in ExecEvalCoerceViaIOSafe, if we cast coercion\nfailed, then this function value is zero, isnull is set to true.\n\n`SELECT JSON_QUERY('a', '$');`\nwill be equivalent to\n`SELECT JSON_QUERY(NULL, '$');`\n\nso making one of the next 2 examples to return jsonb 1 would be hard\nto implement.\nSELECT JSON_QUERY('a', '$' default '1' on empty);\nSELECT JSON_QUERY('a', '$' default '1' on error);\n\n\n--------------------------------------------------------------------------\nIf the standard says the context_item can be text string (cannot cast\nto json successfully). future version we make it happen,\nthen it should be fine?\nbecause it's like the previous version we are not fully compliant with\nstandard, now the new version is in full compliance with standard.\n\n\n", "msg_date": "Wed, 19 Jun 2024 21:32:57 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 17, 2024 at 9:47 PM Chapman Flack <[email protected]> wrote:\n> On 06/17/24 02:20, Amit Langote wrote:\n> >>> Apparently, the functions expect JSONB so that a cast is implied\n> >>> when providing TEXT. However, the errors during that cast are\n> >>> not subject to the ON ERROR clause.\n> >>>\n> >>> 17beta1=# SELECT JSON_QUERY('invalid', '$' NULL ON ERROR);\n> >>> ERROR: invalid input syntax for type json\n> >>> DETAIL: Token \"invalid\" is invalid.\n> >>> CONTEXT: JSON data, line 1: invalid\n> >>>\n> >>> Oracle DB and Db2 (LUW) both return NULL in that case.\n>\n> I wonder, could prosupport rewriting be used to detect that the first\n> argument is supplied by a cast, and rewrite the expression to apply the\n> cast 'softly'? 
Or would that behavior be too magical?\n\nI don't think prosupport rewriting can be used, because JSON_QUERY().\n\nWe could possibly use \"runtime coercion\" for context_item so that the\ncoercion errors can be \"caught\", which is how we coerce the jsonpath\nresult to the RETURNING type.\n\nFor now, I'm inclined to simply document the limitation that errors\nwhen coercing string arguments to json are always thrown.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Thu, 20 Jun 2024 16:55:19 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 17, 2024 at 10:07 PM Markus Winand <[email protected]> wrote:\n> > On 17.06.2024, at 08:20, Amit Langote <[email protected]> wrote:\n> > Agree that the documentation needs to be clear about this. I'll update\n> > my patch at [1] to add a note next to table 9.16.3. SQL/JSON Query\n> > Functions.\n>\n> Considering another branch of this thread [1] I think the\n> \"Supported Features” appendix of the docs should mention that as well.\n>\n> The way I see it is that the standards defines two overloaded\n> JSON_QUERY functions, of which PostgreSQL will support only one.\n> In case of valid JSON, the implied CAST makes it look as though\n> the second variant of these function was supported as well but that\n> illusion totally falls apart once the JSON is not valid anymore.\n>\n> I think it affects the following feature IDs:\n>\n> - T821, Basic SQL/JSON query operators\n> For JSON_VALUE, JSON_TABLE and JSON_EXISTS\n> - T828, JSON_QUERY\n>\n> Also, how hard would it be to add the functions that accept\n> character strings? Is there, besides the effort, any thing else\n> against it? I’m asking because I believe once released it might\n> never be changed — for backward compatibility.\n\nHmm, I'm starting to think that adding the implied cast to json wasn't\nsuch a great idea after all, because it might mislead the users to\nthink that JSON parsing is transparent (respects ON ERROR), which is\nwhat you are saying, IIUC.\n\nI'm inclined to push the attached patch which puts back the\nrestriction to allow only jsonb arguments, asking users to add an\nexplicit cast if necessary.\n\n--\nThanks, Amit Langote", "msg_date": "Thu, 20 Jun 2024 18:19:09 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Thu, Jun 20, 2024 at 2:19 AM Amit Langote <[email protected]>\nwrote:\n\n> Hi,\n>\n> On Mon, Jun 17, 2024 at 10:07 PM Markus Winand <[email protected]>\n> wrote:\n> > > On 17.06.2024, at 08:20, Amit Langote <[email protected]> wrote:\n> > > Agree that the documentation needs to be clear about this. I'll update\n> > > my patch at [1] to add a note next to table 9.16.3. 
SQL/JSON Query\n> > > Functions.\n> >\n> > Considering another branch of this thread [1] I think the\n> > \"Supported Features” appendix of the docs should mention that as well.\n> >\n> > The way I see it is that the standards defines two overloaded\n> > JSON_QUERY functions, of which PostgreSQL will support only one.\n> > In case of valid JSON, the implied CAST makes it look as though\n> > the second variant of these function was supported as well but that\n> > illusion totally falls apart once the JSON is not valid anymore.\n> >\n> > I think it affects the following feature IDs:\n> >\n> > - T821, Basic SQL/JSON query operators\n> > For JSON_VALUE, JSON_TABLE and JSON_EXISTS\n> > - T828, JSON_QUERY\n> >\n> > Also, how hard would it be to add the functions that accept\n> > character strings? Is there, besides the effort, any thing else\n> > against it? I’m asking because I believe once released it might\n> > never be changed — for backward compatibility.\n>\n> Hmm, I'm starting to think that adding the implied cast to json wasn't\n> such a great idea after all, because it might mislead the users to\n> think that JSON parsing is transparent (respects ON ERROR), which is\n> what you are saying, IIUC.\n>\n>\nActually, the implied cast is exactly the correct thing to do here - the\nissue is that we aren't using the newly added soft errors infrastructure\n[1] to catch the result of that cast and instead produce whatever output on\nerror tells us to produce. This seems to be in the realm of doability so\nwe should try in the interest of being standard conforming. I'd even argue\nto make this an open item unless and until the attempt is agreed upon to\nhave failed (or it succeeds of course).\n\nDavid J.\n\n[1] d9f7f5d32f201bec61fef8104aafcb77cecb4dcb\n\nOn Thu, Jun 20, 2024 at 2:19 AM Amit Langote <[email protected]> wrote:Hi,\n\nOn Mon, Jun 17, 2024 at 10:07 PM Markus Winand <[email protected]> wrote:\n> > On 17.06.2024, at 08:20, Amit Langote <[email protected]> wrote:\n> > Agree that the documentation needs to be clear about this. I'll update\n> > my patch at [1] to add a note next to table 9.16.3. SQL/JSON Query\n> > Functions.\n>\n> Considering another branch of this thread [1] I think the\n> \"Supported Features” appendix of the docs should mention that as well.\n>\n> The way I see it is that the standards defines two overloaded\n> JSON_QUERY functions, of which PostgreSQL will support only one.\n> In case of valid JSON, the implied CAST makes it look as though\n> the second variant of these function was supported as well but that\n> illusion totally falls apart once the JSON is not valid anymore.\n>\n> I think it affects the following feature IDs:\n>\n>   - T821, Basic SQL/JSON query operators\n>      For JSON_VALUE, JSON_TABLE and JSON_EXISTS\n>   - T828, JSON_QUERY\n>\n> Also, how hard would it be to add the functions that accept\n> character strings? Is there, besides the effort, any thing else\n> against it? 
I’m asking because I believe once released it might\n> never be changed — for backward compatibility.\n\nHmm, I'm starting to think that adding the implied cast to json wasn't\nsuch a great idea after all, because it might mislead the users to\nthink that JSON parsing is transparent (respects ON ERROR), which is\nwhat you are saying, IIUC.Actually, the implied cast is exactly the correct thing to do here - the issue is that we aren't using the newly added soft errors infrastructure [1] to catch the result of that cast and instead produce whatever output on error tells us to produce.  This seems to be in the realm of doability so we should try in the interest of being standard conforming.  I'd even argue to make this an open item unless and until the attempt is agreed upon to have failed (or it succeeds of course).David J.[1] d9f7f5d32f201bec61fef8104aafcb77cecb4dcb", "msg_date": "Thu, 20 Jun 2024 09:11:16 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Fri, Jun 21, 2024 at 1:11 AM David G. Johnston\n<[email protected]> wrote:\n> On Thu, Jun 20, 2024 at 2:19 AM Amit Langote <[email protected]> wrote:\n>> On Mon, Jun 17, 2024 at 10:07 PM Markus Winand <[email protected]> wrote:\n>> > > On 17.06.2024, at 08:20, Amit Langote <[email protected]> wrote:\n>> > > Agree that the documentation needs to be clear about this. I'll update\n>> > > my patch at [1] to add a note next to table 9.16.3. SQL/JSON Query\n>> > > Functions.\n>> >\n>> > Considering another branch of this thread [1] I think the\n>> > \"Supported Features” appendix of the docs should mention that as well.\n>> >\n>> > The way I see it is that the standards defines two overloaded\n>> > JSON_QUERY functions, of which PostgreSQL will support only one.\n>> > In case of valid JSON, the implied CAST makes it look as though\n>> > the second variant of these function was supported as well but that\n>> > illusion totally falls apart once the JSON is not valid anymore.\n>> >\n>> > I think it affects the following feature IDs:\n>> >\n>> > - T821, Basic SQL/JSON query operators\n>> > For JSON_VALUE, JSON_TABLE and JSON_EXISTS\n>> > - T828, JSON_QUERY\n>> >\n>> > Also, how hard would it be to add the functions that accept\n>> > character strings? Is there, besides the effort, any thing else\n>> > against it? I’m asking because I believe once released it might\n>> > never be changed — for backward compatibility.\n>>\n>> Hmm, I'm starting to think that adding the implied cast to json wasn't\n>> such a great idea after all, because it might mislead the users to\n>> think that JSON parsing is transparent (respects ON ERROR), which is\n>> what you are saying, IIUC.\n>>\n>\n> Actually, the implied cast is exactly the correct thing to do here - the issue is that we aren't using the newly added soft errors infrastructure [1] to catch the result of that cast and instead produce whatever output on error tells us to produce. This seems to be in the realm of doability so we should try in the interest of being standard conforming.\n\nSoft error handling *was* used for catching cast errors in the very\nearly versions of this patch (long before I got involved and the\ninfrastructure you mention got added). It was taken out after Pavel\nsaid [1] that he didn't like producing NULL instead of throwing an\nerror. 
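(As an aside, applications that need non-erroring behaviour for untrusted text input can already guard the cast themselves; a rough sketch, with made-up sample values:

    SELECT CASE WHEN src IS JSON
                THEN JSON_QUERY(src::jsonb, '$[0]' NULL ON ERROR)
           END AS result
    FROM (VALUES ('[1, 2]'), ('not json')) AS v(src);

The CASE arm is only evaluated when src parses as JSON, so the invalid row simply yields NULL instead of raising the cast error.)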
Not sure if Pavel's around but it would be good to know why he\ndidn't like it at the time.\n\nI can look into making that work again, but that is not going to make beta2.\n\n> I'd even argue to make this an open item unless and until the attempt is agreed upon to have failed (or it succeeds of course).\n\nOK, adding an open item.\n\n-- \nThanks, Amit Langote\n[1] https://www.postgresql.org/message-id/CAFj8pRCnzO2cnHi5ebXciV%3DtuGVvAQOW9uPU%2BDQV1GkL31R%3D-g%40mail.gmail.com\n\n\n", "msg_date": "Fri, 21 Jun 2024 09:22:36 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Thu, Jun 20, 2024 at 5:22 PM Amit Langote <[email protected]>\nwrote:\n\n>\n> Soft error handling *was* used for catching cast errors in the very\n> early versions of this patch (long before I got involved and the\n> infrastructure you mention got added). It was taken out after Pavel\n> said [1] that he didn't like producing NULL instead of throwing an\n> error. Not sure if Pavel's around but it would be good to know why he\n> didn't like it at the time.\n>\n>\nI'm personally in the \"make it error\" camp but \"make it conform to the\nstandard\" is a stronger membership (in general).\n\nI see this note in your linked thread:\n\n> By the standard, it is implementation-defined whether JSON parsing errors\n> should be caught by ON ERROR clause.\n\nAbsent someone contradicting that claim I retract my position here and am\nfine with failing if these \"functions\" are supplied with something that\ncannot be cast to json. I'd document them like functions that accept json\nwith the implications that any casting to json happens before the function\nis called and thus its arguments do not apply to that step.\n\nGiven that we have \"expression IS JSON\" the ability to hack together a case\nexpression to get non-erroring behavior exists.\n\nDavid J.\n\nOn Thu, Jun 20, 2024 at 5:22 PM Amit Langote <[email protected]> wrote:\nSoft error handling *was* used for catching cast errors in the very\nearly versions of this patch (long before I got involved and the\ninfrastructure you mention got added).  It was taken out after Pavel\nsaid [1] that he didn't like producing NULL instead of throwing an\nerror.  Not sure if Pavel's around but it would be good to know why he\ndidn't like it at the time.I'm personally in the \"make it error\" camp but \"make it conform to the standard\" is a stronger membership (in general).I see this note in your linked thread:> By the standard, it is implementation-defined whether JSON parsing errors> should be caught by ON ERROR clause.Absent someone contradicting that claim I retract my position here and am fine with failing if these \"functions\" are supplied with something that cannot be cast to json.  I'd document them like functions that accept json with the implications that any casting to json happens before the function is called and thus its arguments do not apply to that step.Given that we have \"expression IS JSON\" the ability to hack together a case expression to get non-erroring behavior exists.David J.", "msg_date": "Thu, 20 Jun 2024 18:00:48 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Fri, Jun 21, 2024 at 10:01 AM David G. 
Johnston\n<[email protected]> wrote:\n> On Thu, Jun 20, 2024 at 5:22 PM Amit Langote <[email protected]> wrote:\n>>\n>>\n>> Soft error handling *was* used for catching cast errors in the very\n>> early versions of this patch (long before I got involved and the\n>> infrastructure you mention got added). It was taken out after Pavel\n>> said [1] that he didn't like producing NULL instead of throwing an\n>> error. Not sure if Pavel's around but it would be good to know why he\n>> didn't like it at the time.\n>>\n>\n> I'm personally in the \"make it error\" camp but \"make it conform to the standard\" is a stronger membership (in general).\n>\n> I see this note in your linked thread:\n>\n> > By the standard, it is implementation-defined whether JSON parsing errors\n> > should be caught by ON ERROR clause.\n>\n> Absent someone contradicting that claim I retract my position here and am fine with failing if these \"functions\" are supplied with something that cannot be cast to json. I'd document them like functions that accept json with the implications that any casting to json happens before the function is called and thus its arguments do not apply to that step.\n\nThanks for that clarification.\n\nSo, there are the following options:\n\n1. Disallow anything but jsonb for context_item (the patch I posted yesterday)\n\n2. Continue allowing context_item to be non-json character or utf-8\nencoded bytea strings, but document that any parsing errors do not\nrespect the ON ERROR clause.\n\n3. Go ahead and fix implicit casts to jsonb so that any parsing errors\nrespect ON ERROR (no patch written yet).\n\nDavid's vote seems to be 2, which is my inclination too. Markus' vote\nseems to be either 1 or 3. Anyone else?\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 21 Jun 2024 13:00:46 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "pá 21. 6. 2024 v 2:22 odesílatel Amit Langote <[email protected]>\nnapsal:\n\n> On Fri, Jun 21, 2024 at 1:11 AM David G. Johnston\n> <[email protected]> wrote:\n> > On Thu, Jun 20, 2024 at 2:19 AM Amit Langote <[email protected]>\n> wrote:\n> >> On Mon, Jun 17, 2024 at 10:07 PM Markus Winand <[email protected]>\n> wrote:\n> >> > > On 17.06.2024, at 08:20, Amit Langote <[email protected]>\n> wrote:\n> >> > > Agree that the documentation needs to be clear about this. I'll\n> update\n> >> > > my patch at [1] to add a note next to table 9.16.3. SQL/JSON Query\n> >> > > Functions.\n> >> >\n> >> > Considering another branch of this thread [1] I think the\n> >> > \"Supported Features” appendix of the docs should mention that as well.\n> >> >\n> >> > The way I see it is that the standards defines two overloaded\n> >> > JSON_QUERY functions, of which PostgreSQL will support only one.\n> >> > In case of valid JSON, the implied CAST makes it look as though\n> >> > the second variant of these function was supported as well but that\n> >> > illusion totally falls apart once the JSON is not valid anymore.\n> >> >\n> >> > I think it affects the following feature IDs:\n> >> >\n> >> > - T821, Basic SQL/JSON query operators\n> >> > For JSON_VALUE, JSON_TABLE and JSON_EXISTS\n> >> > - T828, JSON_QUERY\n> >> >\n> >> > Also, how hard would it be to add the functions that accept\n> >> > character strings? Is there, besides the effort, any thing else\n> >> > against it? 
I’m asking because I believe once released it might\n> >> > never be changed — for backward compatibility.\n> >>\n> >> Hmm, I'm starting to think that adding the implied cast to json wasn't\n> >> such a great idea after all, because it might mislead the users to\n> >> think that JSON parsing is transparent (respects ON ERROR), which is\n> >> what you are saying, IIUC.\n> >>\n> >\n> > Actually, the implied cast is exactly the correct thing to do here - the\n> issue is that we aren't using the newly added soft errors infrastructure\n> [1] to catch the result of that cast and instead produce whatever output on\n> error tells us to produce. This seems to be in the realm of doability so\n> we should try in the interest of being standard conforming.\n>\n> Soft error handling *was* used for catching cast errors in the very\n> early versions of this patch (long before I got involved and the\n> infrastructure you mention got added). It was taken out after Pavel\n> said [1] that he didn't like producing NULL instead of throwing an\n> error. Not sure if Pavel's around but it would be good to know why he\n> didn't like it at the time.\n>\n> I can look into making that work again, but that is not going to make\n> beta2.\n>\n> > I'd even argue to make this an open item unless and until the attempt\n> is agreed upon to have failed (or it succeeds of course).\n>\n> OK, adding an open item.\n>\n\nAt this time, when I wrote this mail, I didn't exactly notice the standard,\nso broken format should be handled there too. In this time, there was no\nsupport for soft errors ever in Postgres, so handling broken formats was\ninconsistent.\n\nStandard describes format errors, but exactly doesn't describe if this is\nerror like missing key or broken json format. Maybe wrongly, but\nintuitively for me, these errors are of different kinds and broken input\ndata is a different case than missing key (but fully valid json). I didn't\nfind the exact sentence in standard when I searched it (but it was four\nyears ago).\n\nMy position in this case is not extra strong. The original patch was\nwritten and tested to be compatible with Oracle (what is a strong argument\nand feature). On second hand, some things required subtransactioning what\nwas wrong (soft errors were introduced later). The compatibility with\nOracle is a strong argument, but Oracle by itself is not fully compatible\nwith standard, and some cases are special (in Oracle) because empty string\nin Oracle is NULL, and then it is handled differently. In this time I had\nmotivation to reduce the patch to \"safe\" minimum to be possible to accept\nit by committers. The patch was written in 2017 (I think). Handling broken\nformat (input format) was one issue that I thought could be solved later.\n\nThe main reason for my mail is fact, so Postgres and Oracle have DIFFERENT\ncorrect format of JSON!\n\n'{a:10}' is correct on Oracle, but not correct on Postgres. And with\ndefault ON ERROR NULL (what is default), then the Oracle returns 10, and\nPostgres NULL. I thought this can be very messy and better to just raise an\nexception.\n\nRegards\n\nPavel\n\n\n\n\n\n\n> --\n> Thanks, Amit Langote\n> [1]\n> https://www.postgresql.org/message-id/CAFj8pRCnzO2cnHi5ebXciV%3DtuGVvAQOW9uPU%2BDQV1GkL31R%3D-g%40mail.gmail.com\n>\n\npá 21. 6. 2024 v 2:22 odesílatel Amit Langote <[email protected]> napsal:On Fri, Jun 21, 2024 at 1:11 AM David G. 
Johnston\n<[email protected]> wrote:\n> On Thu, Jun 20, 2024 at 2:19 AM Amit Langote <[email protected]> wrote:\n>> On Mon, Jun 17, 2024 at 10:07 PM Markus Winand <[email protected]> wrote:\n>> > > On 17.06.2024, at 08:20, Amit Langote <[email protected]> wrote:\n>> > > Agree that the documentation needs to be clear about this. I'll update\n>> > > my patch at [1] to add a note next to table 9.16.3. SQL/JSON Query\n>> > > Functions.\n>> >\n>> > Considering another branch of this thread [1] I think the\n>> > \"Supported Features” appendix of the docs should mention that as well.\n>> >\n>> > The way I see it is that the standards defines two overloaded\n>> > JSON_QUERY functions, of which PostgreSQL will support only one.\n>> > In case of valid JSON, the implied CAST makes it look as though\n>> > the second variant of these function was supported as well but that\n>> > illusion totally falls apart once the JSON is not valid anymore.\n>> >\n>> > I think it affects the following feature IDs:\n>> >\n>> >   - T821, Basic SQL/JSON query operators\n>> >      For JSON_VALUE, JSON_TABLE and JSON_EXISTS\n>> >   - T828, JSON_QUERY\n>> >\n>> > Also, how hard would it be to add the functions that accept\n>> > character strings? Is there, besides the effort, any thing else\n>> > against it? I’m asking because I believe once released it might\n>> > never be changed — for backward compatibility.\n>>\n>> Hmm, I'm starting to think that adding the implied cast to json wasn't\n>> such a great idea after all, because it might mislead the users to\n>> think that JSON parsing is transparent (respects ON ERROR), which is\n>> what you are saying, IIUC.\n>>\n>\n> Actually, the implied cast is exactly the correct thing to do here - the issue is that we aren't using the newly added soft errors infrastructure [1] to catch the result of that cast and instead produce whatever output on error tells us to produce.  This seems to be in the realm of doability so we should try in the interest of being standard conforming.\n\nSoft error handling *was* used for catching cast errors in the very\nearly versions of this patch (long before I got involved and the\ninfrastructure you mention got added).  It was taken out after Pavel\nsaid [1] that he didn't like producing NULL instead of throwing an\nerror.  Not sure if Pavel's around but it would be good to know why he\ndidn't like it at the time.\n\nI can look into making that work again, but that is not going to make beta2.\n\n>  I'd even argue to make this an open item unless and until the attempt is agreed upon to have failed (or it succeeds of course).\n\nOK, adding an open item.At this time, when I wrote this mail, I didn't exactly notice the standard, so broken format should be handled there too.  In this time, there was no support for soft errors ever in Postgres, so handling broken formats was inconsistent. Standard describes format errors, but exactly doesn't describe if this is error like missing key or broken json format. Maybe wrongly, but intuitively for me, these errors are of different kinds and broken input data is a different case than missing key (but fully valid json). I didn't find the exact sentence in standard when I searched it (but it was four years ago). My position in this case is not extra strong. The original patch was written and tested to be compatible with Oracle (what is a strong argument and feature). On second hand, some things required subtransactioning what was wrong (soft errors were introduced later). 
The compatibility with Oracle is a strong argument, but Oracle by itself is not fully compatible with standard, and some cases are special (in Oracle) because empty string in Oracle is NULL, and then it is handled differently. In this time I had motivation to reduce the patch to \"safe\" minimum to be possible to accept it by committers. The patch was written in 2017 (I think). Handling broken format (input format) was one issue that I thought could be solved later. The main reason for my mail is fact, so Postgres and Oracle have DIFFERENT correct format of JSON!'{a:10}' is correct on Oracle, but not correct on Postgres. And with default ON ERROR NULL (what is default), then the Oracle returns 10, and Postgres NULL. I thought this can be very messy and better to just raise an exception. RegardsPavel\n\n-- \nThanks, Amit Langote\n[1] https://www.postgresql.org/message-id/CAFj8pRCnzO2cnHi5ebXciV%3DtuGVvAQOW9uPU%2BDQV1GkL31R%3D-g%40mail.gmail.com", "msg_date": "Fri, 21 Jun 2024 06:22:42 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "pá 21. 6. 2024 v 6:01 odesílatel Amit Langote <[email protected]>\nnapsal:\n\n> On Fri, Jun 21, 2024 at 10:01 AM David G. Johnston\n> <[email protected]> wrote:\n> > On Thu, Jun 20, 2024 at 5:22 PM Amit Langote <[email protected]>\n> wrote:\n> >>\n> >>\n> >> Soft error handling *was* used for catching cast errors in the very\n> >> early versions of this patch (long before I got involved and the\n> >> infrastructure you mention got added). It was taken out after Pavel\n> >> said [1] that he didn't like producing NULL instead of throwing an\n> >> error. Not sure if Pavel's around but it would be good to know why he\n> >> didn't like it at the time.\n> >>\n> >\n> > I'm personally in the \"make it error\" camp but \"make it conform to the\n> standard\" is a stronger membership (in general).\n> >\n> > I see this note in your linked thread:\n> >\n> > > By the standard, it is implementation-defined whether JSON parsing\n> errors\n> > > should be caught by ON ERROR clause.\n> >\n> > Absent someone contradicting that claim I retract my position here and\n> am fine with failing if these \"functions\" are supplied with something that\n> cannot be cast to json. I'd document them like functions that accept json\n> with the implications that any casting to json happens before the function\n> is called and thus its arguments do not apply to that step.\n>\n> Thanks for that clarification.\n>\n> So, there are the following options:\n>\n> 1. Disallow anything but jsonb for context_item (the patch I posted\n> yesterday)\n>\n> 2. Continue allowing context_item to be non-json character or utf-8\n> encoded bytea strings, but document that any parsing errors do not\n> respect the ON ERROR clause.\n>\n> 3. Go ahead and fix implicit casts to jsonb so that any parsing errors\n> respect ON ERROR (no patch written yet).\n>\n> David's vote seems to be 2, which is my inclination too. Markus' vote\n> seems to be either 1 or 3. Anyone else?\n>\n\n@3 can be possibly messy (although be near Oracle or standard). 
I don't\nthink it is safe - one example '{a:10}' is valid for Oracle, but not for\nPostgres, and using @3 impacts different results (better to raise an\nexception).\n\nThe effect of @1 and @2 is similar - @1 is better so the user needs to\nexplicitly cast, so maybe it is cleaner, so the cast should not be handled,\n@2 is more user friendly, because it accepts unknown string literal. From a\ndeveloper perspective I prefer @1, from a user perspective I prefer @2.\nMaybe @2 is a good compromise.\n\nRegards\n\nPavel\n\n\n>\n> --\n> Thanks, Amit Langote\n>\n\npá 21. 6. 2024 v 6:01 odesílatel Amit Langote <[email protected]> napsal:On Fri, Jun 21, 2024 at 10:01 AM David G. Johnston\n<[email protected]> wrote:\n> On Thu, Jun 20, 2024 at 5:22 PM Amit Langote <[email protected]> wrote:\n>>\n>>\n>> Soft error handling *was* used for catching cast errors in the very\n>> early versions of this patch (long before I got involved and the\n>> infrastructure you mention got added).  It was taken out after Pavel\n>> said [1] that he didn't like producing NULL instead of throwing an\n>> error.  Not sure if Pavel's around but it would be good to know why he\n>> didn't like it at the time.\n>>\n>\n> I'm personally in the \"make it error\" camp but \"make it conform to the standard\" is a stronger membership (in general).\n>\n> I see this note in your linked thread:\n>\n> > By the standard, it is implementation-defined whether JSON parsing errors\n> > should be caught by ON ERROR clause.\n>\n> Absent someone contradicting that claim I retract my position here and am fine with failing if these \"functions\" are supplied with something that cannot be cast to json.  I'd document them like functions that accept json with the implications that any casting to json happens before the function is called and thus its arguments do not apply to that step.\n\nThanks for that clarification.\n\nSo, there are the following options:\n\n1. Disallow anything but jsonb for context_item (the patch I posted yesterday)\n\n2. Continue allowing context_item to be non-json character or utf-8\nencoded bytea strings, but document that any parsing errors do not\nrespect the ON ERROR clause.\n\n3. Go ahead and fix implicit casts to jsonb so that any parsing errors\nrespect ON ERROR (no patch written yet).\n\nDavid's vote seems to be 2, which is my inclination too.  Markus' vote\nseems to be either 1 or 3.  Anyone else?@3 can be possibly messy (although be near Oracle or standard). I don't think it is safe - one example '{a:10}' is valid for Oracle, but not for Postgres, and using @3 impacts different results (better to raise an exception).The effect of @1 and @2 is similar - @1 is better so the user needs to explicitly cast, so maybe it is cleaner, so the cast should not be handled, @2 is more user friendly, because it accepts unknown string literal. From a developer perspective I prefer @1, from a user perspective I prefer @2. Maybe @2 is a good compromise.RegardsPavel \n\n-- \nThanks, Amit Langote", "msg_date": "Fri, 21 Jun 2024 06:39:48 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Thursday, June 20, 2024, Pavel Stehule <[email protected]> wrote:\n\n>\n>\n> pá 21. 6. 2024 v 6:01 odesílatel Amit Langote <[email protected]>\n> napsal:\n>\n>> On Fri, Jun 21, 2024 at 10:01 AM David G. 
Johnston\n>> <[email protected]> wrote:\n>>\n>> > > By the standard, it is implementation-defined whether JSON parsing\n>> errors\n>> > > should be caught by ON ERROR clause.\n>> >\n>> > Absent someone contradicting that claim I retract my position here and\n>> am fine with failing if these \"functions\" are supplied with something that\n>> cannot be cast to json. I'd document them like functions that accept json\n>> with the implications that any casting to json happens before the function\n>> is called and thus its arguments do not apply to that step.\n>>\n>> Thanks for that clarification.\n>>\n>> So, there are the following options:\n>>\n>> 1. Disallow anything but jsonb for context_item (the patch I posted\n>> yesterday)\n>>\n>> 2. Continue allowing context_item to be non-json character or utf-8\n>> encoded bytea strings, but document that any parsing errors do not\n>> respect the ON ERROR clause.\n>>\n>> 3. Go ahead and fix implicit casts to jsonb so that any parsing errors\n>> respect ON ERROR (no patch written yet).\n>>\n>> David's vote seems to be 2, which is my inclination too. Markus' vote\n>> seems to be either 1 or 3. Anyone else?\n>>\n>\n> @3 can be possibly messy (although be near Oracle or standard). I don't\n> think it is safe - one example '{a:10}' is valid for Oracle, but not for\n> Postgres, and using @3 impacts different results (better to raise an\n> exception).\n>\n> The effect of @1 and @2 is similar - @1 is better so the user needs to\n> explicitly cast, so maybe it is cleaner, so the cast should not be handled,\n> @2 is more user friendly, because it accepts unknown string literal. From a\n> developer perspective I prefer @1, from a user perspective I prefer @2.\n> Maybe @2 is a good compromise.\n>\n\n2 also has the benefit of being standard conforming while 1 does not.\n\n3 is also conforming and I wouldn’t object to it had we already done it\nthat way.\n\nBut since 2 is conforming too and implemented, and we are in beta, I'm\nthinking we need to go with this option.\n\nDavid J.\n\nOn Thursday, June 20, 2024, Pavel Stehule <[email protected]> wrote:pá 21. 6. 2024 v 6:01 odesílatel Amit Langote <[email protected]> napsal:On Fri, Jun 21, 2024 at 10:01 AM David G. Johnston\n<[email protected]> wrote:\n> > By the standard, it is implementation-defined whether JSON parsing errors\n> > should be caught by ON ERROR clause.\n>\n> Absent someone contradicting that claim I retract my position here and am fine with failing if these \"functions\" are supplied with something that cannot be cast to json.  I'd document them like functions that accept json with the implications that any casting to json happens before the function is called and thus its arguments do not apply to that step.\n\nThanks for that clarification.\n\nSo, there are the following options:\n\n1. Disallow anything but jsonb for context_item (the patch I posted yesterday)\n\n2. Continue allowing context_item to be non-json character or utf-8\nencoded bytea strings, but document that any parsing errors do not\nrespect the ON ERROR clause.\n\n3. Go ahead and fix implicit casts to jsonb so that any parsing errors\nrespect ON ERROR (no patch written yet).\n\nDavid's vote seems to be 2, which is my inclination too.  Markus' vote\nseems to be either 1 or 3.  Anyone else?@3 can be possibly messy (although be near Oracle or standard). 
I don't think it is safe - one example '{a:10}' is valid for Oracle, but not for Postgres, and using @3 impacts different results (better to raise an exception).The effect of @1 and @2 is similar - @1 is better so the user needs to explicitly cast, so maybe it is cleaner, so the cast should not be handled, @2 is more user friendly, because it accepts unknown string literal. From a developer perspective I prefer @1, from a user perspective I prefer @2. Maybe @2 is a good compromise.2 also has the benefit of being standard conforming while 1 does not.3 is also conforming and I wouldn’t object to it had we already done it that way.But since 2 is conforming too and implemented, and we are in beta, I'm thinking we need to go with this option.David J.", "msg_date": "Thu, 20 Jun 2024 21:46:43 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "\n> On 21.06.2024, at 03:00, David G. Johnston <[email protected]> wrote:\n> \n> On Thu, Jun 20, 2024 at 5:22 PM Amit Langote <[email protected]> wrote:\n> \n> Soft error handling *was* used for catching cast errors in the very\n> early versions of this patch (long before I got involved and the\n> infrastructure you mention got added). It was taken out after Pavel\n> said [1] that he didn't like producing NULL instead of throwing an\n> error. Not sure if Pavel's around but it would be good to know why he\n> didn't like it at the time.\n> \n> \n> I'm personally in the \"make it error\" camp but \"make it conform to the standard\" is a stronger membership (in general).\n> \n> I see this note in your linked thread:\n> \n> > By the standard, it is implementation-defined whether JSON parsing errors\n> > should be caught by ON ERROR clause.\n> \n> Absent someone contradicting that claim I retract my position here and am fine with failing if these \"functions\" are supplied with something that cannot be cast to json. I'd document them like functions that accept json with the implications that any casting to json happens before the function is called and thus its arguments do not apply to that step.\n\nThat claim was also made in 2020, before the current (2023)\nSQL standard was released — yet it might have been the same.\n\nMy understanding of the 2023 standard is that ON ERROR \ncovers invalid JSON because the conversion from a character\nstring to JSON is done deeply nested inside the JSON_QUERY &\nCo functions.\n\n9.47 Processing <JSON API common syntax> Function GR 3\ntriggers\n9.46, “SQL/JSON path language: syntax and semantics”\nWhere GR 11 says:\n————\nGR 11) The result of evaluating a <JSON path wff> is a completion condition, and, if that completion condition is successful completion (00000), then an SQL/JSON sequence. For conciseness, the result will be stated either as an exception condition or as an SQL/JSON sequence (in the latter case, the completion condition successful completion (00000) is implicit). 
Unsuccessful completion conditions are not automatically raised and do not terminate application of the General Rules in this Subclause.\n a) If <JSON path context variable> JPCV is specified, then\n Case:\n • i) If PARSED is True, then the result of evaluating JPCV is JT.\n • ii) If the declared type of JT is JSON, then the result of evaluating JPCV is JT.\n • iii) Otherwise:\n • 1) The General Rules of Subclause 9.42, “Parsing JSON text”, are applied with JT as JSON TEXT, an implementation-defined (IV185) <JSON key uniqueness constraint> as UNIQUENESS CONSTRAINT, and FO as FORMAT OPTION; let ST be the STATUS and let CISJI be the SQL/JSON ITEM returned from the application of those General Rules.\n • 2) Case:\n • A) If ST is not successful completion (00000), then the result of evaluating JPCV is ST.\n • B) Otherwise, the result of evaluating JPCV is CISJI.\n————\n\nIn case of an exception, it is passed along to clause 9.44 Converting an SQL/JSON sequence to an SQL/JSON item where GR 5b ultimately says (the exception is in TEMPST in the meanwhile):\n\n——\n • b) If TEMPST is an exception condition, then Case:\n i) If ONERROR is ERROR, then let OUTST be TEMPST.\n ii) Otherwise, let OUTST be successful completion (00000). Case:\n • 1) If ONERROR is NULL, then let JV be the null value.\n • 2) If ONERROR is EMPTY ARRAY, then let JV be an SQL/JSON array that has no SQL/JSON elements.\n • 3) If ONERROR is EMPTY OBJECT, then let JV be an SQL/JSON object that has no SQL/JSON members.\n——\n\nLet me know if I’m missing something here.\n\nThe whole idea that a cast is implied outside of JSON_QUERY & co\nmight be covered by a clause that generally allows implementations\nto cast as they like (don’t have the ref at hand, but I think\nsuch a clause is somewhere). On the other hand, the 2023 standard\ndoesn’t even cover an **explicit** cast from character strings to\nJSON as per 6.13 SR 7 (that’ where the matrix of source- and\ndestination types is given for cast).\n\nSo my bet is this:\n\n* I’m pretty sure JSON parsing errors being subject to ON ERROR\n is conforming.\n That’s also “backed” by the Oracle and Db2 (LUW) implementations.\n\n* Implying a CAST might be ok, but I have doubts.\n\n* I don’t see how failing without being subject to ON ERRROR\n (as it is now in 17beta1) could possibly covered by the standard.\n But as we all know: the standard is confusing. If somebody thinks\n differently, references would be greatly appreciated.\n \n-markus\n\n", "msg_date": "Fri, 21 Jun 2024 07:20:17 +0200", "msg_from": "Markus Winand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "\n\n> On 21.06.2024, at 06:46, David G. Johnston <[email protected]> wrote:\n>> \n>> On Thursday, June 20, 2024, Pavel Stehule <[email protected]> wrote:\n>> \n>> \n>> pá 21. 6. 2024 v 6:01 odesílatel Amit Langote <[email protected]> napsal:\n>> On Fri, Jun 21, 2024 at 10:01 AM David G. Johnston\n>> <[email protected]> wrote:\n>> \n>> > > By the standard, it is implementation-defined whether JSON parsing errors\n>> > > should be caught by ON ERROR clause.\n>> >\n>> > Absent someone contradicting that claim I retract my position here and am fine with failing if these \"functions\" are supplied with something that cannot be cast to json. 
I'd document them like functions that accept json with the implications that any casting to json happens before the function is called and thus its arguments do not apply to that step.\n>> \n>> Thanks for that clarification.\n>> \n>> So, there are the following options:\n>> \n>> 1. Disallow anything but jsonb for context_item (the patch I posted yesterday)\n>> \n>> 2. Continue allowing context_item to be non-json character or utf-8\n>> encoded bytea strings, but document that any parsing errors do not\n>> respect the ON ERROR clause.\n>> \n>> 3. Go ahead and fix implicit casts to jsonb so that any parsing errors\n>> respect ON ERROR (no patch written yet).\n>> \n>> David's vote seems to be 2, which is my inclination too. Markus' vote\n>> seems to be either 1 or 3. Anyone else?\n\nWith a very strong preference of 3.\n\n>> \n>> @3 can be possibly messy (although be near Oracle or standard). I don't think it is safe - one example '{a:10}' is valid for Oracle, but not for Postgres, and using @3 impacts different results (better to raise an exception).\n\nThe question of what is valid JSON is a different question, I guess. My original report is about something that is invalid everywhere. Having that in line would be a start. Also I believe Oracle’s habit to accept unquoted object keys is not covered by the standard (unless defined as a JSON format and also explicitly using the corresponding FORMAT clause).\n\n>> The effect of @1 and @2 is similar - @1 is better so the user needs to explicitly cast, so maybe it is cleaner, so the cast should not be handled, @2 is more user friendly, because it accepts unknown string literal. From a developer perspective I prefer @1, from a user perspective I prefer @2. Maybe @2 is a good compromise.\n> \n> 2 also has the benefit of being standard conforming while 1 does not.\n\nWhy do you think so? Do you have any references or is this just based on previous statements in this discussion?\n\n-markus\n\n", "msg_date": "Fri, 21 Jun 2024 07:28:22 +0200", "msg_from": "Markus Winand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Thursday, June 20, 2024, Markus Winand <[email protected]> wrote:\n\n>\n>\n> > On 21.06.2024, at 06:46, David G. Johnston <[email protected]>\n> wrote:\n> >>\n>\n> >\n> > 2 also has the benefit of being standard conforming while 1 does not.\n>\n> Why do you think so? 
Do you have any references or is this just based on\n> previous statements in this discussion?\n>\n>\nHearsay.\n\n\nhttps://www.postgresql.org/message-id/CAFj8pRCnzO2cnHi5ebXciV%3DtuGVvAQOW9uPU%2BDQV1GkL31R%3D-g%40mail.gmail.com\n\n> 4) If ALREADY PARSED is False, then it is implementation-defined whether\nthe\n> following rules are applied:\n> a) The General Rules of Subclause 9.36, \"Parsing JSON text\", are applied\nwith\n> JT as JSON TEXT, an implementation-defined <JSON key uniqueness\nconstraint>\n> as UNIQUENESS CONSTRAINT, and FO as FORMAT OPTION; let ST be the STATUS\nand\n> let CISJI be the SQL/JSON ITEM returned from the application of those\n> General Rules.\n> b) If ST is not successful completion, then ST is returned as the STATUS\nof\n> this application of these General Rules, and no further General Rules of\n> this Subclause are applied.\n\nBut maybe I’m mis-interpreting that snippet and Nikita’s related commentary\nregarding have chosen between options for this implementation-defined\nfeature.\n\nDavid j.\n\nOn Thursday, June 20, 2024, Markus Winand <[email protected]> wrote:\n\n> On 21.06.2024, at 06:46, David G. Johnston <[email protected]> wrote:\n>> \n> \n> 2 also has the benefit of being standard conforming while 1 does not.\n\nWhy do you think so? Do you have any references or is this just based on previous statements in this discussion?Hearsay. https://www.postgresql.org/message-id/CAFj8pRCnzO2cnHi5ebXciV%3DtuGVvAQOW9uPU%2BDQV1GkL31R%3D-g%40mail.gmail.com> 4) If ALREADY PARSED is False, then it is implementation-defined whether the> following rules are applied:> a) The General Rules of Subclause 9.36, \"Parsing JSON text\", are applied with> JT as JSON TEXT, an implementation-defined <JSON key uniqueness constraint>> as UNIQUENESS CONSTRAINT, and FO as FORMAT OPTION; let ST be the STATUS and> let CISJI be the SQL/JSON ITEM returned from the application of those> General Rules.> b) If ST is not successful completion, then ST is returned as the STATUS of> this application of these General Rules, and no further General Rules of> this Subclause are applied.But maybe I’m mis-interpreting that snippet and Nikita’s related commentary regarding have chosen between options for this implementation-defined feature.David j.", "msg_date": "Thu, 20 Jun 2024 22:38:24 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "\n> On 21.06.2024, at 07:38, David G. Johnston <[email protected]> wrote:\n> \n> On Thursday, June 20, 2024, Markus Winand <[email protected]> wrote:\n> \n> \n> > On 21.06.2024, at 06:46, David G. Johnston <[email protected]> wrote:\n> >> \n> \n> > \n> > 2 also has the benefit of being standard conforming while 1 does not.\n> \n> Why do you think so? 
Do you have any references or is this just based on previous statements in this discussion?\n> \n> \n> Hearsay.\n> \n> https://www.postgresql.org/message-id/CAFj8pRCnzO2cnHi5ebXciV%3DtuGVvAQOW9uPU%2BDQV1GkL31R%3D-g%40mail.gmail.com\n> \n> > 4) If ALREADY PARSED is False, then it is implementation-defined whether the\n> > following rules are applied:\n> > a) The General Rules of Subclause 9.36, \"Parsing JSON text\", are applied with\n> > JT as JSON TEXT, an implementation-defined <JSON key uniqueness constraint>\n> > as UNIQUENESS CONSTRAINT, and FO as FORMAT OPTION; let ST be the STATUS and\n> > let CISJI be the SQL/JSON ITEM returned from the application of those\n> > General Rules.\n> > b) If ST is not successful completion, then ST is returned as the STATUS of\n> > this application of these General Rules, and no further General Rules of\n> > this Subclause are applied.\n> \n> But maybe I’m mis-interpreting that snippet and Nikita’s related commentary regarding have chosen between options for this implementation-defined feature.\n\nAh, here we go. Nowadays this is called IA050, “Whether a JSON context item that is not of the JSON data type is parsed.” (Likewise IA054 “Whether a JSON parameter is parsed.”)\n\nSo updating the three options:\n\n> 1. Disallow anything but jsonb for context_item (the patch I posted yesterday)\n\n* Non-conforming\n* patch available\n\n> 2. Continue allowing context_item to be non-json character or utf-8\n> encoded bytea strings, but document that any parsing errors do not\n> respect the ON ERROR clause.\n\n* Conforming by choosing IA050 to implement GR4: raise errors independent of the ON ERROR clause.\n* currently committed.\n\n> 3. Go ahead and fix implicit casts to jsonb so that any parsing errors\n> respect ON ERROR (no patch written yet).\n\n* Conforming by choosing IA050 not to implement GR4: Parsing happens later, considering the ON ERROR clause.\n* no patch available, not trivial\n\nI guess I’m the only one in favour of 3 ;) My remaining arguments are that Oracle and Db2 (LUW) do it that way and also that it is IMHO what users would expect. However, as 2 is also conforming (how could I miss that?), proper documentation is a very tempting option.\n\n-markus\nps: Does anyone know a dialect that implements GR4?\n\n", "msg_date": "Fri, 21 Jun 2024 12:59:12 +0200", "msg_from": "Markus Winand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "Hi,\n\nThanks all for chiming in.\n\nOn Fri, Jun 21, 2024 at 8:00 PM Markus Winand <[email protected]> wrote:\n> So updating the three options:\n> > 1. Disallow anything but jsonb for context_item (the patch I posted yesterday)\n>\n> * Non-conforming\n> * patch available\n>\n> > 2. Continue allowing context_item to be non-json character or utf-8\n> > encoded bytea strings, but document that any parsing errors do not\n> > respect the ON ERROR clause.\n>\n> * Conforming by choosing IA050 to implement GR4: raise errors independent of the ON ERROR clause.\n> * currently committed.\n>\n> > 3. Go ahead and fix implicit casts to jsonb so that any parsing errors\n> > respect ON ERROR (no patch written yet).\n>\n> * Conforming by choosing IA050 not to implement GR4: Parsing happens later, considering the ON ERROR clause.\n> * no patch available, not trivial\n>\n> I guess I’m the only one in favour of 3 ;) My remaining arguments are that Oracle and Db2 (LUW) do it that way and also that it is IMHO what users would expect. 
However, as 2 is also conforming (how could I miss that?), proper documentation is a very tempting option.\n\nSo, we should go with 2 for v17, because while 3 may be very\nappealing, there's a risk that it might not get done in the time\nremaining for v17.\n\nI'll post the documentation patch on Monday.\n\n\n--\nThanks, Amit Langote\n\n\n", "msg_date": "Sat, 22 Jun 2024 17:43:05 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "Hi,\n\nOn Sat, Jun 22, 2024 at 5:43 PM Amit Langote <[email protected]> wrote:\n> On Fri, Jun 21, 2024 at 8:00 PM Markus Winand <[email protected]> wrote:\n> > So updating the three options:\n> > > 1. Disallow anything but jsonb for context_item (the patch I posted yesterday)\n> >\n> > * Non-conforming\n> > * patch available\n> >\n> > > 2. Continue allowing context_item to be non-json character or utf-8\n> > > encoded bytea strings, but document that any parsing errors do not\n> > > respect the ON ERROR clause.\n> >\n> > * Conforming by choosing IA050 to implement GR4: raise errors independent of the ON ERROR clause.\n> > * currently committed.\n> >\n> > > 3. Go ahead and fix implicit casts to jsonb so that any parsing errors\n> > > respect ON ERROR (no patch written yet).\n> >\n> > * Conforming by choosing IA050 not to implement GR4: Parsing happens later, considering the ON ERROR clause.\n> > * no patch available, not trivial\n> >\n> > I guess I’m the only one in favour of 3 ;) My remaining arguments are that Oracle and Db2 (LUW) do it that way and also that it is IMHO what users would expect. However, as 2 is also conforming (how could I miss that?), proper documentation is a very tempting option.\n>\n> So, we should go with 2 for v17, because while 3 may be very\n> appealing, there's a risk that it might not get done in the time\n> remaining for v17.\n>\n> I'll post the documentation patch on Monday.\n\nHere's that patch, which adds this note after table 9.16.3. SQL/JSON\nQuery Functions:\n\n+ <note>\n+ <para>\n+ The <replaceable>context_item</replaceable> expression is converted to\n+ <type>jsonb</type> by an implicit cast if the expression is not already of\n+ type <type>jsonb</type>. Note, however, that any parsing errors that occur\n+ during that conversion are thrown unconditionally, that is, are not\n+ handled according to the (specified or implicit) <literal>ON\nERROR</literal>\n+ clause.\n+ </para>\n+ </note>\n\nPeter, sorry about the last-minute ask, but do you have any\nthoughts/advice on conformance as discussed above?\n\n--\nThanks, Amit Langote", "msg_date": "Wed, 26 Jun 2024 21:10:09 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" }, { "msg_contents": "On Wed, Jun 26, 2024 at 9:10 PM Amit Langote <[email protected]> wrote:\n> On Sat, Jun 22, 2024 at 5:43 PM Amit Langote <[email protected]> wrote:\n> > On Fri, Jun 21, 2024 at 8:00 PM Markus Winand <[email protected]> wrote:\n> > > So updating the three options:\n> > > > 1. Disallow anything but jsonb for context_item (the patch I posted yesterday)\n> > >\n> > > * Non-conforming\n> > > * patch available\n> > >\n> > > > 2. 
Continue allowing context_item to be non-json character or utf-8\n> > > > encoded bytea strings, but document that any parsing errors do not\n> > > > respect the ON ERROR clause.\n> > >\n> > > * Conforming by choosing IA050 to implement GR4: raise errors independent of the ON ERROR clause.\n> > > * currently committed.\n> > >\n> > > > 3. Go ahead and fix implicit casts to jsonb so that any parsing errors\n> > > > respect ON ERROR (no patch written yet).\n> > >\n> > > * Conforming by choosing IA050 not to implement GR4: Parsing happens later, considering the ON ERROR clause.\n> > > * no patch available, not trivial\n> > >\n> > > I guess I’m the only one in favour of 3 ;) My remaining arguments are that Oracle and Db2 (LUW) do it that way and also that it is IMHO what users would expect. However, as 2 is also conforming (how could I miss that?), proper documentation is a very tempting option.\n> >\n> > So, we should go with 2 for v17, because while 3 may be very\n> > appealing, there's a risk that it might not get done in the time\n> > remaining for v17.\n> >\n> > I'll post the documentation patch on Monday.\n>\n> Here's that patch, which adds this note after table 9.16.3. SQL/JSON\n> Query Functions:\n>\n> + <note>\n> + <para>\n> + The <replaceable>context_item</replaceable> expression is converted to\n> + <type>jsonb</type> by an implicit cast if the expression is not already of\n> + type <type>jsonb</type>. Note, however, that any parsing errors that occur\n> + during that conversion are thrown unconditionally, that is, are not\n> + handled according to the (specified or implicit) <literal>ON\n> ERROR</literal>\n> + clause.\n> + </para>\n> + </note>\n\nI have pushed this.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 28 Jun 2024 09:49:45 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ON ERROR in json_query and the like" } ]
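A minimal SQL sketch of the behaviour settled on in the thread above (option 2, as now documented): a parsing error raised by the implicit cast of a non-jsonb context_item is thrown regardless of the ON ERROR clause, while the IS JSON predicate mentioned upthread can be used to hand-roll a non-erroring variant. The sample documents and the t(doc) alias are made up for illustration.

-- Valid JSON supplied as a string literal: the implicit cast to jsonb
-- succeeds and ON ERROR never comes into play.
SELECT JSON_QUERY('{"a": [1, 2, 3]}', '$.a' NULL ON ERROR);

-- Invalid JSON: the failure happens in the implicit cast, before
-- JSON_QUERY itself runs, so NULL ON ERROR is not honoured and the
-- statement raises a JSON parsing error.
SELECT JSON_QUERY('{a: 1}', '$.a' NULL ON ERROR);

-- Non-erroring behaviour for string input can still be built by hand
-- with IS JSON, as suggested upthread:
SELECT CASE
         WHEN t.doc IS JSON
           THEN JSON_QUERY(t.doc::jsonb, '$.a' NULL ON ERROR)
       END AS result
FROM (VALUES ('{"a": [1, 2, 3]}'), ('{a: 1}')) AS t(doc);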
[ { "msg_contents": "I'm getting build failures when building with meson and llvm enabled, \nlike this:\n\n[1/112] Generating src/backend/jit/llvm/llvmjit_types.bc with a custom \ncommand\nFAILED: src/backend/jit/llvm/llvmjit_types.bc\n/usr/local/bin/ccache /usr/local/Cellar/llvm/18.1.6/bin/clang -c -o \nsrc/backend/jit/llvm/llvmjit_types.bc \n../src/backend/jit/llvm/llvmjit_types.c -flto=thin -emit-llvm -MD -MQ \nsrc/backend/jit/llvm/llvmjit_types.bc -MF \nsrc/backend/jit/llvm/llvmjit_types.c.bc.d -O2 -Wno-ignored-attributes \n-Wno-empty-body -fno-strict-aliasing -fwrapv -I./src/include \n-I./src/backend/utils/misc -I../src/include\nIn file included from ../src/backend/jit/llvm/llvmjit_types.c:27:\nIn file included from ../src/include/postgres.h:45:\n../src/include/c.h:75:10: fatal error: 'libintl.h' file not found\n 75 | #include <libintl.h>\n | ^~~~~~~~~~~\n1 error generated.\n\n\nThe reason is that libintl.h is at /usr/local/include/libintl.h, but \nthat is not in the include path for this command. I have \n-I/usr/local/include in CPPFLAGS in the environment, which is why the \nnormal compilation commands pick it up, but this is not used by this \ncustom command.\n\nWit this small patch I can make it work:\n\ndiff --git a/src/backend/jit/llvm/meson.build \nb/src/backend/jit/llvm/meson.build\nindex 41c759f73c5..4a4232661ba 100644\n--- a/src/backend/jit/llvm/meson.build\n+++ b/src/backend/jit/llvm/meson.build\n@@ -63,6 +63,7 @@ bitcode_cflags = ['-fno-strict-aliasing', '-fwrapv']\n if llvm.version().version_compare('=15.0')\n bitcode_cflags += ['-Xclang', '-no-opaque-pointers']\n endif\n+bitcode_cflags += get_option('c_args')\n bitcode_cflags += cppflags\n\n # XXX: Worth improving on the logic to find directories here\n\n\nIs that correct?\n\n\n", "msg_date": "Tue, 28 May 2024 08:17:34 -0700", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "small fix for llvm build" }, { "msg_contents": "On 28.05.24 17:17, Peter Eisentraut wrote:\n> I'm getting build failures when building with meson and llvm enabled, \n> like this:\n> \n> [1/112] Generating src/backend/jit/llvm/llvmjit_types.bc with a custom \n> command\n> FAILED: src/backend/jit/llvm/llvmjit_types.bc\n> /usr/local/bin/ccache /usr/local/Cellar/llvm/18.1.6/bin/clang -c -o \n> src/backend/jit/llvm/llvmjit_types.bc \n> ../src/backend/jit/llvm/llvmjit_types.c -flto=thin -emit-llvm -MD -MQ \n> src/backend/jit/llvm/llvmjit_types.bc -MF \n> src/backend/jit/llvm/llvmjit_types.c.bc.d -O2 -Wno-ignored-attributes \n> -Wno-empty-body -fno-strict-aliasing -fwrapv -I./src/include \n> -I./src/backend/utils/misc -I../src/include\n> In file included from ../src/backend/jit/llvm/llvmjit_types.c:27:\n> In file included from ../src/include/postgres.h:45:\n> ../src/include/c.h:75:10: fatal error: 'libintl.h' file not found\n>    75 | #include <libintl.h>\n>       |          ^~~~~~~~~~~\n> 1 error generated.\n> \n> \n> The reason is that libintl.h is at /usr/local/include/libintl.h, but \n> that is not in the include path for this command.  
I have \n> -I/usr/local/include in CPPFLAGS in the environment, which is why the \n> normal compilation commands pick it up, but this is not used by this \n> custom command.\n> \n> Wit this small patch I can make it work:\n> \n> diff --git a/src/backend/jit/llvm/meson.build \n> b/src/backend/jit/llvm/meson.build\n> index 41c759f73c5..4a4232661ba 100644\n> --- a/src/backend/jit/llvm/meson.build\n> +++ b/src/backend/jit/llvm/meson.build\n> @@ -63,6 +63,7 @@ bitcode_cflags = ['-fno-strict-aliasing', '-fwrapv']\n>  if llvm.version().version_compare('=15.0')\n>    bitcode_cflags += ['-Xclang', '-no-opaque-pointers']\n>  endif\n> +bitcode_cflags += get_option('c_args')\n>  bitcode_cflags += cppflags\n> \n>  # XXX: Worth improving on the logic to find directories here\n\nI have committed this change.\n\n\n\n", "msg_date": "Thu, 6 Jun 2024 22:48:36 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: small fix for llvm build" } ]
[ { "msg_contents": "Hi.\n\nThe function *perform_rewind* has an odd undefined behavior.\nThe function memcmp/ <https://cplusplus.com/reference/cstring/memcmp/>,\ncompares bytes to bytes.\n\nIMO, I think that pg_rewind can have a security issue,\nif two files are exactly the same, they are considered different.\nBecause use of structs with padding values is unspecified.\n\nFix by explicitly initializing with memset to avoid this.\n\nbest regards,\nRanier Vilela", "msg_date": "Tue, 28 May 2024 16:02:37 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Avoid an odd undefined behavior with memcmp\n (src/bin/pg_rewind/pg_rewind.c)" }, { "msg_contents": "\nHi Ranier,\n\n\n\n> IMO, I think that pg_rewind can have a security issue,\n> if two files are exactly the same, they are considered different.\n> Because use of structs with padding values is unspecified.\nLogically you are right. But I don't understand what scenario\nwould require memcmp to compare ControlFileData.\nIn general, we read ControlFileData from a pg_control file\nand then use members of ControlFileData directly.\nSo the two ControlFileData are not directly compared by byte.\n\n> Fix by explicitly initializing with memset to avoid this.\nAnd, even if there are scenarios that use memcmp comparisons,\nyour modifications are not complete.\nThere are three calls to the digestControlFile in the main()\nof pg_rewind.c, and as your said(if right), these should do\nmemory initialization every time.\n\n\n\n\n\n--\nBest Regards,\n\nLong\n", "msg_date": "Thu, 30 May 2024 09:41:29 +0800 (CST)", "msg_from": "\"Long Song\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re:Avoid an odd undefined behavior with memcmp\n (src/bin/pg_rewind/pg_rewind.c)" }, { "msg_contents": "On Wed, 29 May 2024 at 07:02, Ranier Vilela <[email protected]> wrote:\n> The function *perform_rewind* has an odd undefined behavior.\n> The function memcmp/, compares bytes to bytes.\n>\n> IMO, I think that pg_rewind can have a security issue,\n> if two files are exactly the same, they are considered different.\n> Because use of structs with padding values is unspecified.\n>\n> Fix by explicitly initializing with memset to avoid this.\n\nIt's unclear to me why you think this makes it any safer. If you look\nat the guts of digestControlFile(), you'll see that the memory you've\njust carefully memset to zero is subsequently overwritten by a\nmemcpy(). Do you not notice that? or do you think memcpy() somehow\nhas some ability to skip over padding bytes and leave them set to the\nvalue they were set to previously? It doesn't.\n\nIf your patch fixes the Coverity warning, then that's just a\ndemonstration of how smart the tool is. A warning does not mean\nthere's a problem.\n\nThe location where we should ensure the memory is zeroed is when\nwriting the control file. That's done in InitControlFile(). If you\nlook in there you'll see \"memset(ControlFile, 0,\nsizeof(ControlFileData));\"\n\nIt would be great if you add a bit more thinking between noticing a\nCoverity warning and raising it on the mailing list. I'm not sure how\nmuch time you dwell on these warnings before raising them here, but\ncertainly, if we wanted a direct tie between a Coverity warning and\nthis mailing list, we'd script it. But we don't, so we won't. So many\nof your reports appear to rely on someone on the list doing that\nthinking for you. That does not scale well when in the majority of the\nreports there's no actual problem. 
Please remember that false reports\nwastes people's time.\n\nIf you're keen to do some other small hobby projects around here, then\nI bet there'd be a lot of suggestions for things that could be\nimproved. You could raise a thread to ask for suggestions for places\nto start. I don't think your goal should be that Coverity runs on the\nPostgreSQL source warning free. If that was a worthy goal, it would\nhave happened a long time ago.\n\nDavid\n\n\n", "msg_date": "Thu, 30 May 2024 16:02:58 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid an odd undefined behavior with memcmp\n (src/bin/pg_rewind/pg_rewind.c)" }, { "msg_contents": "Em qua., 29 de mai. de 2024 às 22:41, Long Song <[email protected]>\nescreveu:\n\n>\n> Hi Ranier,\n>\n>\n>\n> > IMO, I think that pg_rewind can have a security issue,\n> > if two files are exactly the same, they are considered different.\n> > Because use of structs with padding values is unspecified.\n> Logically you are right. But I don't understand what scenario\n> would require memcmp to compare ControlFileData.\n> In general, we read ControlFileData from a pg_control file\n> and then use members of ControlFileData directly.\n> So the two ControlFileData are not directly compared by byte.\n>\nActually in pg_rewind there is a comparison using memcmp.\n\n\n>\n> > Fix by explicitly initializing with memset to avoid this.\n> And, even if there are scenarios that use memcmp comparisons,\n> your modifications are not complete.\n> There are three calls to the digestControlFile in the main()\n> of pg_rewind.c, and as your said(if right), these should do\n> memory initialization every time.\n>\nIn fact, initializing structures with memset does not solve anything.\nOnce the entire structure is populated again by a call to memcpy shortly\nthereafter.\nMy concern now is that when the structure is saved to disk,\nwhat are the padding fields like?\n\nBut enough noise.\nThanks for taking a look.\n\nbest regards,\nRanier Vilela\n\nEm qua., 29 de mai. de 2024 às 22:41, Long Song <[email protected]> escreveu:\nHi Ranier,\n\n\n\n> IMO, I think that pg_rewind can have a security issue,\n> if two files are exactly the same, they are considered different.\n> Because use of structs with padding values is unspecified.\nLogically you are right. But I don't understand what scenario\nwould require memcmp to compare ControlFileData.\nIn general, we read ControlFileData from a pg_control file\nand then use members of ControlFileData directly.\nSo the two ControlFileData are not directly compared by byte.Actually in pg_rewind there is a comparison using memcmp. \n\n> Fix by explicitly initializing with memset to avoid this.\nAnd, even if there are scenarios that use memcmp comparisons,\nyour modifications are not complete.\nThere are three calls to the digestControlFile in the main()\nof pg_rewind.c, and as your said(if right), these should do\nmemory initialization every time.In fact, initializing structures with memset does not solve anything.Once the entire structure is populated again by a call to memcpy shortly thereafter.My concern now is that when the structure is saved to disk, what are the padding fields like? But enough noise.Thanks for taking a look.best regards,Ranier Vilela", "msg_date": "Sun, 2 Jun 2024 18:14:33 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid an odd undefined behavior with memcmp\n (src/bin/pg_rewind/pg_rewind.c)" } ]
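A small standalone C illustration (not PostgreSQL code) of the padding point discussed in the thread above: two structs whose members compare equal can still differ under memcmp() because the padding bytes are indeterminate, unless the whole object was zeroed before the members were filled in, which is what InitControlFile() already guarantees for the control file via memset(). The Demo struct below is invented for the example.

#include <stdio.h>
#include <string.h>

typedef struct
{
	char	flag;		/* 1 byte, usually followed by padding */
	long	value;
} Demo;

int
main(void)
{
	Demo	a, b;

	/*
	 * Assign members only: the padding bytes keep whatever happened to be on
	 * the stack, so the result of this comparison is not guaranteed either way.
	 */
	a.flag = 1;
	a.value = 42;
	b.flag = 1;
	b.value = 42;
	printf("members only : equal? %d\n", memcmp(&a, &b, sizeof(Demo)) == 0);

	/*
	 * Zero the whole object first (as InitControlFile() does for the control
	 * file) and in practice the padding stays zero, so comparing the two
	 * structs as byte images gives the expected answer.
	 */
	memset(&a, 0, sizeof(Demo));
	memset(&b, 0, sizeof(Demo));
	a.flag = 1;
	a.value = 42;
	b.flag = 1;
	b.value = 42;
	printf("memset() first: equal? %d\n", memcmp(&a, &b, sizeof(Demo)) == 0);

	return 0;
}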
[ { "msg_contents": "I recently noticed that at least with PostgreSQL 16, it is not possible to\nbuild using MSVC if both OpenSSL and gssapi (MIT Kerberos) are enabled.\nBoth work if the other isn't included though..\n\nI briefly tried to test with PG17 to see if it has the same issue, but it\nseems like gssapi has the same problem I recently found with zlib (\nhttps://www.postgresql.org/message-id/CA%2BOCxozrPZx57ue8rmhq6CD1Jic5uqKh80%3DvTpZurSKESn-dkw%40mail.gmail.com\n).\n\nI've yet to find time to look into this - reporting anyway rather than\nsitting on it until I get round to it...\n\nBuild FAILED.\n\n\"C:\\Users\\dpage\\Downloads\\postgresql-16.3\\pgsql.sln\" (default target) (1) ->\n\"C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj\" (default\ntarget) (9) ->\n(ClCompile target) ->\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): warning C4228:\nnonstandard extension used: qualifiers after comma in declarator list are\nignored [C:\\User\ns\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): warning C4228:\nnonstandard extension used: qualifiers after comma in declarator list are\nignored [C:\\User\ns\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): warning C4228:\nnonstandard extension used: qualifiers after comma in declarator list are\nignored [C:\\User\ns\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n\nC:\\Users\\dpage\\Downloads\\postgresql-16.3\\src\\backend\\utils\\sort\\tuplesort.c(2000,1):\nwarning C4724: potential mod by 0 [C:\\Users\\dpage\\Downloads\\postgresql-1\n6.3\\postgres.vcxproj]\n\n\n\"C:\\Users\\dpage\\Downloads\\postgresql-16.3\\pgsql.sln\" (default target) (1) ->\n\"C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj\" (default\ntarget) (9) ->\n(ClCompile target) ->\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(181,9): error C2059: syntax\nerror: '(' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(188,9): error C2059: syntax\nerror: '<parameter-list>'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(193,5): error C2059: syntax\nerror: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(194,1): error C2059: syntax\nerror: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(198,5): error C2061: syntax\nerror: identifier 'GENERAL_NAME'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.v\ncxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(199,1): error C2059: syntax\nerror: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2143: syntax\nerror: missing ')' before '*'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxp\nroj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2143: syntax\nerror: missing '{' before '*'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxp\nroj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2143: syntax\nerror: missing ';' before '*'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxp\nroj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2059: syntax\nerror: ')' 
[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2373: 'a':\nredefinition; different type modifiers\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgr\nes.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2054: expected\n'(' to follow 'ptr'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2146: syntax\nerror: missing ')' before identifier 'cmp'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\\npostgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2061: syntax\nerror: identifier 'cmp'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2059: syntax\nerror: ';' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2449: found\n'{' at file scope (missing function header?)\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\n\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2059: syntax\nerror: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2146: syntax\nerror: missing ')' before identifier 'fr'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\p\nostgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2061: syntax\nerror: identifier 'fr'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2236:\nunexpected token 'struct'. Did you forget a ';'?\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\p\nostgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2143: syntax\nerror: missing ')' before '*'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxp\nroj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2143: syntax\nerror: missing '{' before '*'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxp\nroj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2143: syntax\nerror: missing ';' before '*'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxp\nroj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2059: syntax\nerror: ')' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2373: 'a':\nredefinition; different type modifiers\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgr\nes.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2054: expected\n'(' to follow 'ptr'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2146: syntax\nerror: missing ')' before identifier 'cmp'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\\npostgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2061: syntax\nerror: identifier 'cmp'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2059: syntax\nerror: ';' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2449: found\n'{' at file scope (missing function 
header?)\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\n\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2059: syntax\nerror: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2146: syntax\nerror: missing ')' before identifier 'fr'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\p\nostgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2061: syntax\nerror: identifier 'fr'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(295,5): error C2059: syntax\nerror: '(' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(296,1): error C2059: syntax\nerror: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(313,5): error C2061: syntax\nerror: identifier 'DIST_POINT_NAME'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgre\ns.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(317,1): error C2059: syntax\nerror: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(522,5): error C2061: syntax\nerror: identifier 'GENERAL_NAME'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.v\ncxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(525,1): error C2059: syntax\nerror: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2143: syntax\nerror: missing ')' before '*'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxp\nroj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2143: syntax\nerror: missing '{' before '*'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxp\nroj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2143: syntax\nerror: missing ';' before '*'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxp\nroj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2059: syntax\nerror: ')' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2373: 'a':\nredefinition; different type modifiers\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgr\nes.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2054: expected\n'(' to follow 'ptr'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2146: syntax\nerror: missing ')' before identifier 'cmp'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\\npostgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2061: syntax\nerror: identifier 'cmp'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2059: syntax\nerror: ';' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2449: found\n'{' at file scope (missing function header?)\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\n\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2059: syntax\nerror: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error 
C2146: syntax\nerror: missing ')' before identifier 'fr'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\p\nostgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2061: syntax\nerror: identifier 'fr'\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\n C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C1003: error\ncount exceeds 100; stopping compilation\n[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\post\ngres.vcxproj]\n\n 4 Warning(s)\n 53 Error(s)\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nI recently noticed that at least with PostgreSQL 16, it is not possible to build using MSVC if both OpenSSL and gssapi (MIT Kerberos) are enabled. Both work if the other isn't included though..I briefly tried to test with PG17 to see if it has the same issue, but it seems like gssapi has the same problem I recently found with zlib (https://www.postgresql.org/message-id/CA%2BOCxozrPZx57ue8rmhq6CD1Jic5uqKh80%3DvTpZurSKESn-dkw%40mail.gmail.com).I've yet to find time to look into this - reporting anyway rather than sitting on it until I get round to it...Build FAILED.\"C:\\Users\\dpage\\Downloads\\postgresql-16.3\\pgsql.sln\" (default target) (1) ->\"C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj\" (default target) (9) ->(ClCompile target) ->  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): warning C4228: nonstandard extension used: qualifiers after comma in declarator list are ignored [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): warning C4228: nonstandard extension used: qualifiers after comma in declarator list are ignored [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): warning C4228: nonstandard extension used: qualifiers after comma in declarator list are ignored [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\Users\\dpage\\Downloads\\postgresql-16.3\\src\\backend\\utils\\sort\\tuplesort.c(2000,1): warning C4724: potential mod by 0 [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]\"C:\\Users\\dpage\\Downloads\\postgresql-16.3\\pgsql.sln\" (default target) (1) ->\"C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj\" (default target) (9) ->(ClCompile target) ->  C:\\build64\\openssl\\include\\openssl\\x509v3.h(181,9): error C2059: syntax error: '(' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(188,9): error C2059: syntax error: '<parameter-list>' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(193,5): error C2059: syntax error: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(194,1): error C2059: syntax error: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(198,5): error C2061: syntax error: identifier 'GENERAL_NAME' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(199,1): error C2059: syntax error: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2143: syntax error: missing ')' before '*' 
[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2143: syntax error: missing '{' before '*' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2143: syntax error: missing ';' before '*' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2059: syntax error: ')' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2373: 'a': redefinition; different type modifiers [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2054: expected '(' to follow 'ptr' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2146: syntax error: missing ')' before identifier 'cmp' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2061: syntax error: identifier 'cmp' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2059: syntax error: ';' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2449: found '{' at file scope (missing function header?) [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2059: syntax error: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2146: syntax error: missing ')' before identifier 'fr' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(201,1): error C2061: syntax error: identifier 'fr' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2236: unexpected token 'struct'. Did you forget a ';'? 
[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2143: syntax error: missing ')' before '*' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2143: syntax error: missing '{' before '*' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2143: syntax error: missing ';' before '*' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2059: syntax error: ')' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2373: 'a': redefinition; different type modifiers [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2054: expected '(' to follow 'ptr' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2146: syntax error: missing ')' before identifier 'cmp' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2061: syntax error: identifier 'cmp' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2059: syntax error: ';' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2449: found '{' at file scope (missing function header?) [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2059: syntax error: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2146: syntax error: missing ')' before identifier 'fr' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(227,1): error C2061: syntax error: identifier 'fr' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(295,5): error C2059: syntax error: '(' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(296,1): error C2059: syntax error: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(313,5): error C2061: syntax error: identifier 'DIST_POINT_NAME' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(317,1): error C2059: syntax error: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(522,5): error C2061: syntax error: identifier 'GENERAL_NAME' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(525,1): error C2059: syntax error: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2143: syntax error: missing ')' before '*' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2143: syntax error: missing '{' before '*' 
[C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2143: syntax error: missing ';' before '*' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2059: syntax error: ')' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2373: 'a': redefinition; different type modifiers [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2054: expected '(' to follow 'ptr' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2146: syntax error: missing ')' before identifier 'cmp' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2061: syntax error: identifier 'cmp' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2059: syntax error: ';' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2449: found '{' at file scope (missing function header?) [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2059: syntax error: '}' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2146: syntax error: missing ')' before identifier 'fr' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C2061: syntax error: identifier 'fr' [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]  C:\\build64\\openssl\\include\\openssl\\x509v3.h(527,1): error C1003: error count exceeds 100; stopping compilation [C:\\Users\\dpage\\Downloads\\postgresql-16.3\\postgres.vcxproj]    4 Warning(s)    53 Error(s)-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 28 May 2024 12:13:17 -0700", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Windows: openssl & gssapi dislike each other" }, { "msg_contents": "I was able to reproduce the gssapi & openssl error on windows. I tried\non PG16 with msvc build system and on PG17 with meson build system.\nThe error was reproducible when enabling both openssl and gssapi from\nthe configurations. Turns out that it was due to the conflicting\nmacros.\n\n\n\"be-secure-openssl.c\" tries to prevent this conflict here [1]. But the\nerror again appears when gssapi is enabled. The file\n\"be-secure-openssl.c\" fails to compile because it has a similar\nscenario as explained here [2]. The header libpq.h is indirectly\nincluding libpq-be.h which has a wrong order of including openssl\nheaders. Header \"gssapi.h\" indirectly includes \"wincrypt.h\" and\nopenssl header should be defined after gssapi includes.\n\nNow this can either be solved by just just undefine the macro defined\nby wincrypt.h as done here [3]\n```\n#ifdef X509_NAME\n#undef X509_NAME\n#endif\n```\n\nOr we should rearrange our headers. 
Openssl header should be at the\nbottom (after the gssapi includes).\n\n\nI am attaching the patch here in which I rearranged the openssl header\nin libpq-be.h\n\n\n[1]: https://github.com/postgres/postgres/blob/8ba34c698d19450ccae9a5aea59a6d0bc8b75c0e/src/backend/libpq/be-secure-openssl.c#L46\n[2]: https://github.com/openssl/openssl/issues/10307#issuecomment-964155382\n[3]: https://github.com/postgres/postgres/blob/00ac25a3c365004821e819653c3307acd3294818/contrib/sslinfo/sslinfo.c#L29\n\n\nThanks\nImran Zaheer\nBitnine", "msg_date": "Sat, 8 Jun 2024 19:22:39 +0900", "msg_from": "Imran Zaheer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows: openssl & gssapi dislike each other" }, { "msg_contents": "\nOn 2024-06-08 Sa 06:22, Imran Zaheer wrote:\n> I was able to reproduce the gssapi & openssl error on windows. I tried\n> on PG16 with msvc build system and on PG17 with meson build system.\n> The error was reproducible when enabling both openssl and gssapi from\n> the configurations. Turns out that it was due to the conflicting\n> macros.\n>\n>\n> \"be-secure-openssl.c\" tries to prevent this conflict here [1]. But the\n> error again appears when gssapi is enabled. The file\n> \"be-secure-openssl.c\" fails to compile because it has a similar\n> scenario as explained here [2]. The header libpq.h is indirectly\n> including libpq-be.h which has a wrong order of including openssl\n> headers. Header \"gssapi.h\" indirectly includes \"wincrypt.h\" and\n> openssl header should be defined after gssapi includes.\n>\n> Now this can either be solved by just just undefine the macro defined\n> by wincrypt.h as done here [3]\n> ```\n> #ifdef X509_NAME\n> #undef X509_NAME\n> #endif\n> ```\n>\n> Or we should rearrange our headers. Openssl header should be at the\n> bottom (after the gssapi includes).\n>\n>\n> I am attaching the patch here in which I rearranged the openssl header\n> in libpq-be.h\n>\n>\n> [1]: https://github.com/postgres/postgres/blob/8ba34c698d19450ccae9a5aea59a6d0bc8b75c0e/src/backend/libpq/be-secure-openssl.c#L46\n> [2]: https://github.com/openssl/openssl/issues/10307#issuecomment-964155382\n> [3]: https://github.com/postgres/postgres/blob/00ac25a3c365004821e819653c3307acd3294818/contrib/sslinfo/sslinfo.c#L29\n>\n>\n\nLet's be consistent and use the #undef from [3]. I did find the comment \nin sslinfo.c slightly confusing until I understood that this was a \n#define clashing with a typedef.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 8 Jun 2024 16:40:28 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows: openssl & gssapi dislike each other" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-06-08 Sa 06:22, Imran Zaheer wrote:\n>> Now this can either be solved by just just undefine the macro defined\n>> by wincrypt.h as done here [3]\n>> Or we should rearrange our headers. Openssl header should be at the\n>> bottom (after the gssapi includes).\n\n> Let's be consistent and use the #undef from [3].\n\n+1. 
Depending on header order is not great, especially when you have\nto make it depend on an order that is directly contradictory to\nproject conventions [0].\n\n\t\t\tregards, tom lane\n\n[0] https://wiki.postgresql.org/wiki/Committing_checklist#Policies\n\n\n", "msg_date": "Sat, 08 Jun 2024 18:21:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows: openssl & gssapi dislike each other" }, { "msg_contents": "Hi\n\nI am submitting two new patches. We can undefine the macro at two locations\n\n1). As be-secure-openssl.c [1] was the actual\nfile where the conflict happened so I undefined the macro here before\nthe ssl includes. I changed the comment a little to make it understandable.\nI am also attaching the error generated with ninja build.\n\nOR\n\n2). Right after the gssapi includes in libpq-be.h\n\n\nThanks\nImran Zaheer\nBitnine\n\n[1]: https://github.com/postgres/postgres/blob/00ac25a3c365004821e819653c3307acd3294818/src/backend/libpq/be-secure-openssl.c#L46\n\nOn Sun, Jun 9, 2024 at 7:21 AM Tom Lane <[email protected]> wrote:\n>\n> Andrew Dunstan <[email protected]> writes:\n> > On 2024-06-08 Sa 06:22, Imran Zaheer wrote:\n> >> Now this can either be solved by just just undefine the macro defined\n> >> by wincrypt.h as done here [3]\n> >> Or we should rearrange our headers. Openssl header should be at the\n> >> bottom (after the gssapi includes).\n>\n> > Let's be consistent and use the #undef from [3].\n>\n> +1. Depending on header order is not great, especially when you have\n> to make it depend on an order that is directly contradictory to\n> project conventions [0].\n>\n> regards, tom lane\n>\n> [0] https://wiki.postgresql.org/wiki/Committing_checklist#Policies", "msg_date": "Sun, 9 Jun 2024 16:28:59 +0900", "msg_from": "Imran Zaheer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows: openssl & gssapi dislike each other" }, { "msg_contents": "Hi\n\nOn Sun, 9 Jun 2024 at 08:29, Imran Zaheer <[email protected]> wrote:\n\n> Hi\n>\n> I am submitting two new patches. We can undefine the macro at two locations\n>\n> 1). As be-secure-openssl.c [1] was the actual\n> file where the conflict happened so I undefined the macro here before\n> the ssl includes. I changed the comment a little to make it understandable.\n> I am also attaching the error generated with ninja build.\n>\n> OR\n>\n> 2). Right after the gssapi includes in libpq-be.h\n>\n\nThank you for working on this. I can confirm the undef version compiles and\npasses tests with 16.3.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Sun, 9 Jun 2024 at 08:29, Imran Zaheer <[email protected]> wrote:Hi\n\nI am submitting two new patches. We can undefine the macro at two locations\n\n1). As be-secure-openssl.c [1] was the actual\nfile where the conflict happened so I undefined the macro here before\nthe ssl includes. I changed the comment a little to make it understandable.\nI am also attaching the error generated with ninja build.\n\nOR\n\n2). Right after the gssapi includes in libpq-be.hThank you for working on this. I can confirm the undef version compiles and passes tests with 16.3. 
-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 11 Jun 2024 10:19:17 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Windows: openssl & gssapi dislike each other" }, { "msg_contents": "On 2024-06-11 Tu 05:19, Dave Page wrote:\n> Hi\n>\n> On Sun, 9 Jun 2024 at 08:29, Imran Zaheer <[email protected]> wrote:\n>\n> Hi\n>\n> I am submitting two new patches. We can undefine the macro at two\n> locations\n>\n> 1). As be-secure-openssl.c [1] was the actual\n> file where the conflict happened so I undefined the macro here before\n> the ssl includes. I changed the comment a little to make it\n> understandable.\n> I am also attaching the error generated with ninja build.\n>\n> OR\n>\n> 2). Right after the gssapi includes in libpq-be.h\n>\n>\n> Thank you for working on this. I can confirm the undef version \n> compiles and passes tests with 16.3.\n>\n\nThanks for testing.\n\nI think I prefer approach 2, which should also allow us to remove the \n#undef in sslinfo.c so we only need to do this in one place.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-06-11 Tu 05:19, Dave Page\n wrote:\n\n\n\n\nHi\n\n\nOn Sun, 9 Jun 2024 at 08:29,\n Imran Zaheer <[email protected]>\n wrote:\n\nHi\n\n I am submitting two new patches. We can undefine the macro\n at two locations\n\n 1). As be-secure-openssl.c [1] was the actual\n file where the conflict happened so I undefined the macro\n here before\n the ssl includes. I changed the comment a little to make it\n understandable.\n I am also attaching the error generated with ninja build.\n\n OR\n\n 2). Right after the gssapi includes in libpq-be.h\n\n\n\nThank you for working on this. I can confirm the undef\n version compiles and passes tests with 16.3.\n \n\n\n\n\n\n\nThanks for testing. \n\nI think I prefer approach 2, which should also allow us to remove\n the #undef in sslinfo.c so we only need to do this in one place.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 11 Jun 2024 07:22:18 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows: openssl & gssapi dislike each other" }, { "msg_contents": "On Tue, 11 Jun 2024 at 12:22, Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2024-06-11 Tu 05:19, Dave Page wrote:\n>\n> Hi\n>\n> On Sun, 9 Jun 2024 at 08:29, Imran Zaheer <[email protected]> wrote:\n>\n>> Hi\n>>\n>> I am submitting two new patches. We can undefine the macro at two\n>> locations\n>>\n>> 1). As be-secure-openssl.c [1] was the actual\n>> file where the conflict happened so I undefined the macro here before\n>> the ssl includes. I changed the comment a little to make it\n>> understandable.\n>> I am also attaching the error generated with ninja build.\n>>\n>> OR\n>>\n>> 2). Right after the gssapi includes in libpq-be.h\n>>\n>\n> Thank you for working on this. 
I can confirm the undef version compiles\n> and passes tests with 16.3.\n>\n>\n>\n> Thanks for testing.\n>\n> I think I prefer approach 2, which should also allow us to remove the\n> #undef in sslinfo.c so we only need to do this in one place.\n>\nOK, well that version compiles and passes tests as well :-)\n\nThanks!\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Tue, 11 Jun 2024 at 12:22, Andrew Dunstan <[email protected]> wrote:\n\n\n\nOn 2024-06-11 Tu 05:19, Dave Page\n wrote:\n\n\n\nHi\n\n\nOn Sun, 9 Jun 2024 at 08:29,\n Imran Zaheer <[email protected]>\n wrote:\n\nHi\n\n I am submitting two new patches. We can undefine the macro\n at two locations\n\n 1). As be-secure-openssl.c [1] was the actual\n file where the conflict happened so I undefined the macro\n here before\n the ssl includes. I changed the comment a little to make it\n understandable.\n I am also attaching the error generated with ninja build.\n\n OR\n\n 2). Right after the gssapi includes in libpq-be.h\n\n\n\nThank you for working on this. I can confirm the undef\n version compiles and passes tests with 16.3.\n \n\n\n\n\n\n\nThanks for testing. \n\nI think I prefer approach 2, which should also allow us to remove\n the #undef in sslinfo.c so we only need to do this in one place.OK, well that version compiles and passes tests as well :-)Thanks! -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 11 Jun 2024 16:33:55 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Windows: openssl & gssapi dislike each other" }, { "msg_contents": "Hi\n\nI removed the macro from the sslinfo.c as suggested by Andrew. Then I\nwas thinking maybe we can undo some other similar code.\n\nI rearranged the headers to their previous position in\nbe-secure-openssl.c and in fe-secure-openssl.c. I was able to compile\nwith gssapi and openssl enabled. You can look into the original commits. [1,\n2]\nIs it ok if we undo the changes from these commits?\n\nI am attaching two new patches.\nOne with macro guards removed from ssinfo.c.\nSecond patch will additionally rearrange headers for\nbe-secure-openssl.c and in fe-secure-openssl.c to their previous\nposition.\n\nThanks\nImran Zaheer\nBitnine\n\n[1]: https://github.com/postgres/postgres/commit/1241fcbd7e649414f09f9858ba73e63975dcff64\n[2]: https://github.com/postgres/postgres/commit/568620dfd6912351b4127435eca5309f823abde8\n\nOn Wed, Jun 12, 2024 at 12:34 AM Dave Page <[email protected]> wrote:\n>\n>\n>\n> On Tue, 11 Jun 2024 at 12:22, Andrew Dunstan <[email protected]> wrote:\n>>\n>>\n>> On 2024-06-11 Tu 05:19, Dave Page wrote:\n>>\n>> Hi\n>>\n>> On Sun, 9 Jun 2024 at 08:29, Imran Zaheer <[email protected]> wrote:\n>>>\n>>> Hi\n>>>\n>>> I am submitting two new patches. We can undefine the macro at two locations\n>>>\n>>> 1). As be-secure-openssl.c [1] was the actual\n>>> file where the conflict happened so I undefined the macro here before\n>>> the ssl includes. I changed the comment a little to make it understandable.\n>>> I am also attaching the error generated with ninja build.\n>>>\n>>> OR\n>>>\n>>> 2). Right after the gssapi includes in libpq-be.h\n>>\n>>\n>> Thank you for working on this. 
I can confirm the undef version compiles and passes tests with 16.3.\n>>\n>>\n>>\n>> Thanks for testing.\n>>\n>> I think I prefer approach 2, which should also allow us to remove the #undef in sslinfo.c so we only need to do this in one place.\n>\n> OK, well that version compiles and passes tests as well :-)\n>\n> Thanks!\n>\n> --\n> Dave Page\n> pgAdmin: https://www.pgadmin.org\n> PostgreSQL: https://www.postgresql.org\n> EDB: https://www.enterprisedb.com\n>", "msg_date": "Thu, 13 Jun 2024 00:12:51 +0900", "msg_from": "Imran Zaheer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows: openssl & gssapi dislike each other" }, { "msg_contents": "Hi,\n\n\nOn 2024-06-13 00:12:51 +0900, Imran Zaheer wrote:\n> I removed the macro from the sslinfo.c as suggested by Andrew. Then I\n> was thinking maybe we can undo some other similar code.\n\nWhat precisely do you mean by that? Just getting rid of the \"ordered include\"\nof openssl headers in {fe,be}-secure-openssl.h?\n\n\n> I rearranged the headers to their previous position in\n> be-secure-openssl.c and in fe-secure-openssl.c. I was able to compile\n> with gssapi and openssl enabled. You can look into the original commits. [1,\n> 2]\n> Is it ok if we undo the changes from these commits?\n>\n> I am attaching two new patches.\n> One with macro guards removed from ssinfo.c.\n> Second patch will additionally rearrange headers for\n> be-secure-openssl.c and in fe-secure-openssl.c to their previous\n> position.\n\nOne thing that concerns me with this is that there are other includes of\ngssapi/gssapi.h (e.g. in , which haven't been changed here. ISTM we ought to do apply\nthe same change to all of those, otherwise we're just waiting for the problem\nto re-appear.\n\nI wonder if we should add a src/include/libpq/pg-gssapi.h or such, which could\nwrap the entire ifdeferry for gss support. Something like\n\n\n#ifdef ENABLE_GSS\n\n#if defined(HAVE_GSSAPI_H)\n#include <gssapi.h>\n#include <gssapi_ext.h>\n#else\n#include <gssapi/gssapi.h>\n#include <gssapi/gssapi_ext.h>\n#endif\n\n/*\n * On Windows, <wincrypt.h> includes a #define for X509_NAME, which breaks our\n * ability to use OpenSSL's version of that symbol if <wincrypt.h> is pulled\n * in after <openssl/ssl.h> ... and, at least on some builds, it is. We\n * can't reliably fix that by re-ordering #includes, because libpq/libpq-be.h\n * #includes <openssl/ssl.h>. Instead, just zap the #define again here.\n */\n#ifdef X509_NAME\n#undef X509_NAME\n#endif\n\n#endif /* ENABLE_GSS */\n\nWhich'd allow the various places using gss (libpq-be.h, be-gssapi-common.h,\nlibpq-int.h) to just include pg-gssapi.h and get all of the above without\nredundancy?\n\n\nAnother thing that concerns me about this approach is that it seems to assume\nthat the only source of such conflicting includes is gssapi. What if some\nother header pulls in wincrypt.h? But I can't really see a way out of that...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Jul 2024 10:32:04 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows: openssl & gssapi dislike each other" }, { "msg_contents": "On Tue, Jul 9, 2024 at 2:32 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n>\n> On 2024-06-13 00:12:51 +0900, Imran Zaheer wrote:\n> > I removed the macro from the sslinfo.c as suggested by Andrew. Then I\n> > was thinking maybe we can undo some other similar code.\n>\n> What precisely do you mean by that? 
Just getting rid of the \"ordered include\"\n> of openssl headers in {fe,be}-secure-openssl.h?\n>\nHi\n\nI reordered the includes in {fe,be}-secure-openssl.h as they were also placed\nthere to resolve similar errors and also were contradicting the\nproject conventions [1].\nBut looks like it's better to not touch those as they were for future proofing.\n\n>\n> > I rearranged the headers to their previous position in\n> > be-secure-openssl.c and in fe-secure-openssl.c. I was able to compile\n> > with gssapi and openssl enabled. You can look into the original commits. [1,\n> > 2]\n> > Is it ok if we undo the changes from these commits?\n> >\n> > I am attaching two new patches.\n> > One with macro guards removed from ssinfo.c.\n> > Second patch will additionally rearrange headers for\n> > be-secure-openssl.c and in fe-secure-openssl.c to their previous\n> > position.\n>\n> One thing that concerns me with this is that there are other includes of\n> gssapi/gssapi.h (e.g. in , which haven't been changed here. ISTM we ought to do apply\n> the same change to all of those, otherwise we're just waiting for the problem\n> to re-appear.\n>\n\nYes this should be better.\n\n> I wonder if we should add a src/include/libpq/pg-gssapi.h or such, which could\n> wrap the entire ifdeferry for gss support. Something like\n>\n>\n> #ifdef ENABLE_GSS\n>\n> #if defined(HAVE_GSSAPI_H)\n> #include <gssapi.h>\n> #include <gssapi_ext.h>\n> #else\n> #include <gssapi/gssapi.h>\n> #include <gssapi/gssapi_ext.h>\n> #endif\n>\n> /*\n> * On Windows, <wincrypt.h> includes a #define for X509_NAME, which breaks our\n> * ability to use OpenSSL's version of that symbol if <wincrypt.h> is pulled\n> * in after <openssl/ssl.h> ... and, at least on some builds, it is. We\n> * can't reliably fix that by re-ordering #includes, because libpq/libpq-be.h\n> * #includes <openssl/ssl.h>. Instead, just zap the #define again here.\n> */\n> #ifdef X509_NAME\n> #undef X509_NAME\n> #endif\n>\n> #endif /* ENABLE_GSS */\n>\n> Which'd allow the various places using gss (libpq-be.h, be-gssapi-common.h,\n> libpq-int.h) to just include pg-gssapi.h and get all of the above without\n> redundancy?\n>\n>\n> Another thing that concerns me about this approach is that it seems to assume\n> that the only source of such conflicting includes is gssapi. What if some\n> other header pulls in wincrypt.h? But I can't really see a way out of that...\n>\n> Greetings,\n>\n> Andres Freund\n\nCreating src/include/libpq/pg-gssapi.h can be another great way of\nhandling these includes. I compiled successfully but couldn't do\nproper testing as there is something wrong with my windows env.\n\nAnd you are right, the approach we are going with right now only\nassumes that it's due to the\ngssapi as the bug also appeared when building with gssapi (openssl &\ngssapi build). What if openssl clashes with\nsome other lib too which indirectly includes wincrypt.h\n\nFor now maybe we can do the future proofing for gssapi & openssl includes\nand do testing if openssl clashes with some other lib too.\n\nThanks\nImran Zaheer\n\n[1]: https://wiki.postgresql.org/wiki/Committing_checklist#Policies\n(3rd last point)\n\n\n", "msg_date": "Thu, 11 Jul 2024 02:06:40 +0900", "msg_from": "Imran Zaheer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows: openssl & gssapi dislike each other" } ]
[ { "msg_contents": "In installation.sgml it says\n\n\"\"\"\nAlternatively, PostgreSQL can be built using Meson. This is currently \nexperimental.\n\"\"\"\n\nDo we want to alter this statement for PG17, considering that this is \nnow the only way to build for Windows using MSVC?\n\n(A joke response is that the Windows port itself is experimental, so it \ndoesn't matter much that the build system for it is as well.)\n\nI would still call Meson use on non-Windows platforms experimental at \nthis time.\n\nSo maybe something like\n\n\"Alternatively, PostgreSQL can be built using Meson. This is the only \noption for building PostgreSQL in Windows using Visual Something[*]. \nFor other platforms, using Meson is currently experimental.\"\n\n[*] What is the correct name for this?\n\n\n", "msg_date": "Tue, 28 May 2024 23:41:06 -0700", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "meson \"experimental\"?" }, { "msg_contents": "Hi.\n\n> \"Alternatively, PostgreSQL can be built using Meson. This is the only\n> option for building PostgreSQL in Windows using Visual Something[*].\n> For other platforms, using Meson is currently experimental.\"\n\n+1 good catch\n\n> [*] What is the correct name for this?\n\nI believe in this section it should be \"Visual Studio\" as we specify\nelsewhere [1][2]. In [2] we name specific required versions. Maybe we\nshould reference this section.\n\nWhile on it, in [2] section 17.7.5 is named \"Visual\". I don't think\nsuch a product exists and find it confusing. Maybe we should rename\nthe section to \"Visual Studio\".\n\nAlso I don't see any mention of the minimum required version of Ninja.\nI think we should add it too, or maybe reference the corresponding\nsection I couldn't find in \"17.1 Requirements\"\n\n[1]: https://www.postgresql.org/docs/devel/install-meson.html\n[2]: https://www.postgresql.org/docs/devel/installation-platform-notes.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 30 May 2024 13:07:02 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "Hi,\n\n> > [*] What is the correct name for this?\n>\n> I believe in this section it should be \"Visual Studio\" as we specify\n> elsewhere [1][2]. In [2] we name specific required versions. Maybe we\n> should reference this section.\n>\n> While on it, in [2] section 17.7.5 is named \"Visual\". I don't think\n> such a product exists and find it confusing. Maybe we should rename\n> the section to \"Visual Studio\".\n\nHere are corresponding patches.\n\n> Also I don't see any mention of the minimum required version of Ninja.\n> I think we should add it too, or maybe reference the corresponding\n> section I couldn't find in \"17.1 Requirements\"\n>\n> [1]: https://www.postgresql.org/docs/devel/install-meson.html\n> [2]: https://www.postgresql.org/docs/devel/installation-platform-notes.html\n\nBy a quick look on the buildfarm we seem to use Ninja >= 1.11.1.\nHowever since Meson can use both Ninja and VS as a backend I'm not\ncertain which section would be most appropriate for naming the minimal\nrequired version of Ninja.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 30 May 2024 13:32:18 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" 
}, { "msg_contents": "On Thu, May 30, 2024 at 6:32 AM Aleksander Alekseev <\[email protected]> wrote:\n\n>\n>\n> By a quick look on the buildfarm we seem to use Ninja >= 1.11.1.\n> However since Meson can use both Ninja and VS as a backend I'm not\n> certain which section would be most appropriate for naming the minimal\n> required version of Ninja.\n>\n>\nWhen I tried building with the VS backend it blew up, I don't recall the\ndetails. I think we should just use ninja everywhere. That keeps things\nsimple. On Windows I just install python and then do \"pip install meson\nninja\"\n\ncheers\n\nandrew\n\nOn Thu, May 30, 2024 at 6:32 AM Aleksander Alekseev <[email protected]> wrote:\nBy a quick look on the buildfarm we seem to use Ninja >= 1.11.1.\nHowever since Meson can use both Ninja and VS as a backend I'm not\ncertain which section would be most appropriate for naming the minimal\nrequired version of Ninja.\nWhen I tried building with the VS backend it blew up, I don't recall the details. I think we should just use ninja everywhere. That keeps things simple. On Windows I just install python and then do \"pip install meson ninja\"cheersandrew", "msg_date": "Thu, 30 May 2024 11:03:33 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "Hi,\n\n>> By a quick look on the buildfarm we seem to use Ninja >= 1.11.1.\n>> However since Meson can use both Ninja and VS as a backend I'm not\n>> certain which section would be most appropriate for naming the minimal\n>> required version of Ninja.\n>\n> When I tried building with the VS backend it blew up, I don't recall the details. I think we should just use ninja everywhere. That keeps things simple. On Windows I just install python and then do \"pip install meson ninja\"\n\nIf we know that it doesn't work I suggest removing mention of\n--backend option from [1] until it will, in order to avoid any\nconfusion.\n\n[1]: https://www.postgresql.org/docs/devel/install-meson.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 31 May 2024 13:34:17 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "Hi, \n\nOn May 30, 2024 8:03:33 AM PDT, Andrew Dunstan <[email protected]> wrote:\n>On Thu, May 30, 2024 at 6:32 AM Aleksander Alekseev <\n>[email protected]> wrote:\n>\n>>\n>>\n>> By a quick look on the buildfarm we seem to use Ninja >= 1.11.1.\n>> However since Meson can use both Ninja and VS as a backend I'm not\n>> certain which section would be most appropriate for naming the minimal\n>> required version of Ninja.\n>>\n>>\n>When I tried building with the VS backend it blew up, I don't recall the\n>details. I think we should just use ninja everywhere. That keeps things\n>simple. \n\nVS should work, and if not, we should fix it. It's slow, so I'd not use it for scheduled builds, but for people developing using visual studio. \n\nAndres \n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 31 May 2024 08:55:29 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "Hi,\n\n> VS should work, and if not, we should fix it. It's slow, so I'd not use it for scheduled builds, but for people developing using visual studio.\n\nSince no one complained, should we assume that there are actually no\nsuch people? 
If this is the case then VS arguably doesn't give any\nvalue that would make it worth maintaining.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 3 Jun 2024 10:10:55 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "On Wed, May 29, 2024 at 2:41 AM Peter Eisentraut <[email protected]> wrote:\n> \"Alternatively, PostgreSQL can be built using Meson. This is the only\n> option for building PostgreSQL in Windows using Visual Something[*].\n> For other platforms, using Meson is currently experimental.\"\n\nIs it, though? I feel like we're beyond the experimental stage now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Jun 2024 11:24:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "On Tue, 4 Jun 2024 at 16:25, Robert Haas <[email protected]> wrote:\n\n> On Wed, May 29, 2024 at 2:41 AM Peter Eisentraut <[email protected]>\n> wrote:\n> > \"Alternatively, PostgreSQL can be built using Meson. This is the only\n> > option for building PostgreSQL in Windows using Visual Something[*].\n> > For other platforms, using Meson is currently experimental.\"\n>\n> Is it, though? I feel like we're beyond the experimental stage now.\n>\n\nI clearly missed the discussion in which it was decided to remove the old\nMSVC++ support from the tree (and am disappointed about that as I would\nhave objected loudly). Having recently started testing Meson on Windows\nwhen I realised that change had been made, and having to update builds for\npgAdmin and the Windows installers, I think it's clear it's far more\nexperimental on that platform than it is on Linux at least.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Tue, 4 Jun 2024 at 16:25, Robert Haas <[email protected]> wrote:On Wed, May 29, 2024 at 2:41 AM Peter Eisentraut <[email protected]> wrote:\n> \"Alternatively, PostgreSQL can be built using Meson.  This is the only\n> option for building PostgreSQL in Windows using Visual Something[*].\n> For other platforms, using Meson is currently experimental.\"\n\nIs it, though? I feel like we're beyond the experimental stage now.I clearly missed the discussion in which it was decided to remove the old MSVC++ support from the tree (and am disappointed about that as I would have objected loudly). Having recently started testing Meson on Windows when I realised that change had been made, and having to update builds for pgAdmin and the Windows installers, I think it's clear it's far more experimental on that platform than it is on Linux at least. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 4 Jun 2024 16:28:22 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "On 04/06/2024 18:28, Dave Page wrote:\n> I clearly missed the discussion in which it was decided to remove the \n> old MSVC++ support from the tree (and am disappointed about that as I \n> would have objected loudly). 
Having recently started testing Meson on \n> Windows when I realised that change had been made, and having to update \n> builds for pgAdmin and the Windows installers, I think it's clear it's \n> far more experimental on that platform than it is on Linux at least.\n\nWhat kind of issues did you run into? Have they been fixed since?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 4 Jun 2024 19:36:24 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "On 04.06.24 17:24, Robert Haas wrote:\n> On Wed, May 29, 2024 at 2:41 AM Peter Eisentraut <[email protected]> wrote:\n>> \"Alternatively, PostgreSQL can be built using Meson. This is the only\n>> option for building PostgreSQL in Windows using Visual Something[*].\n>> For other platforms, using Meson is currently experimental.\"\n> \n> Is it, though? I feel like we're beyond the experimental stage now.\n\nExperimental is probably too harsh a word now.\n\nBut it doesn't have feature parity with configure/make yet (for example, \nno .bc files), so I wouldn't recommend it for production or packaging \nbuilds.\n\nThen, there are issues like [0]. If it's experimental, then this is \nlike, meh, we'll fix it later. If not, then it's a bug.\n\nMore generally, I don't think we've really done a comprehensive check of \nhow popular extensions build against pgxs-from-meson. Packagers that \nmake their production builds using meson now might be signing up for a \nlong tail of the unknown.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/[email protected]\n\n\n", "msg_date": "Tue, 4 Jun 2024 18:56:34 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "On Tue, Jun 4, 2024 at 12:56 PM Peter Eisentraut <[email protected]> wrote:\n> Experimental is probably too harsh a word now.\n>\n> But it doesn't have feature parity with configure/make yet (for example,\n> no .bc files), so I wouldn't recommend it for production or packaging\n> builds.\n\nThat's unfortunate. :-(\n\n> Then, there are issues like [0]. If it's experimental, then this is\n> like, meh, we'll fix it later. If not, then it's a bug.\n\nThis feels like a case where I'd have to read the entire thread to\nunderstand what the issue is.\n\n> More generally, I don't think we've really done a comprehensive check of\n> how popular extensions build against pgxs-from-meson. Packagers that\n> make their production builds using meson now might be signing up for a\n> long tail of the unknown.\n\nThat's a fair point, and I don't know what to do about it, but it\nseems like something that needs to be addressed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Jun 2024 13:40:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "Hi\n\nOn Tue, 4 Jun 2024 at 17:36, Heikki Linnakangas <[email protected]> wrote:\n\n> On 04/06/2024 18:28, Dave Page wrote:\n> > I clearly missed the discussion in which it was decided to remove the\n> > old MSVC++ support from the tree (and am disappointed about that as I\n> > would have objected loudly). 
Having recently started testing Meson on\n> > Windows when I realised that change had been made, and having to update\n> > builds for pgAdmin and the Windows installers, I think it's clear it's\n> > far more experimental on that platform than it is on Linux at least.\n>\n> What kind of issues did you run into? Have they been fixed since?\n>\n\nzlib detection (\nhttps://www.postgresql.org/message-id/CA+OCxozrPZx57ue8rmhq6CD1Jic5uqKh80=vTpZurSKESn-dkw@mail.gmail.com)\nfor which Nazir worked up an as-yet-uncommitted patch.\n\nuuid detection, which Andrew fixed.\n\nI also ran into an issue in which the build fails if both gssapi and\nopenssl are enabled, but it turns out that's a code issue that affects at\nleast v16 with the old build system as well.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Tue, 4 Jun 2024 at 17:36, Heikki Linnakangas <[email protected]> wrote:On 04/06/2024 18:28, Dave Page wrote:\n> I clearly missed the discussion in which it was decided to remove the \n> old MSVC++ support from the tree (and am disappointed about that as I \n> would have objected loudly). Having recently started testing Meson on \n> Windows when I realised that change had been made, and having to update \n> builds for pgAdmin and the Windows installers, I think it's clear it's \n> far more experimental on that platform than it is on Linux at least.\n\nWhat kind of issues did you run into? Have they been fixed since?zlib detection (https://www.postgresql.org/message-id/CA+OCxozrPZx57ue8rmhq6CD1Jic5uqKh80=vTpZurSKESn-dkw@mail.gmail.com) for which Nazir worked up an as-yet-uncommitted patch.uuid detection, which Andrew fixed.I also ran into an issue in which the build fails if both gssapi and openssl are enabled, but it turns out that's a code issue that affects at least v16 with the old build system as well.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Wed, 5 Jun 2024 10:30:54 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "On Thu, May 30, 2024 at 01:32:18PM +0300, Aleksander Alekseev wrote:\n> > > [*] What is the correct name for this?\n> >\n> > I believe in this section it should be \"Visual Studio\" as we specify\n> > elsewhere [1][2]. In [2] we name specific required versions. Maybe we\n> > should reference this section.\n> >\n> > While on it, in [2] section 17.7.5 is named \"Visual\". I don't think\n> > such a product exists and find it confusing. Maybe we should rename\n> > the section to \"Visual Studio\".\n\nAgreed. I've pushed your patch for that:\n\n> Here are corresponding patches.\n\nThe <sect2> ID is new in v17, so I also renamed it like\ns/installation-notes-visual/installation-notes-visual-studio/.\n\n(I didn't examine or push your other patch, which was about $SUBJECT.)\n\n\n", "msg_date": "Fri, 2 Aug 2024 12:52:28 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" }, { "msg_contents": "Hi,\n\n> Agreed. I've pushed your patch for that:\n>\n> > Here are corresponding patches.\n>\n> The <sect2> ID is new in v17, so I also renamed it like\n> s/installation-notes-visual/installation-notes-visual-studio/.\n>\n> (I didn't examine or push your other patch, which was about $SUBJECT.)\n\nThanks. 
Re-attaching 0001 and adding it to the nearest CF to make it\nvisible on cfbot.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 5 Aug 2024 16:48:28 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson \"experimental\"?" } ]
[ { "msg_contents": "On a particular query, I start an alarm (say for 5 sec) using RegisterTimeout , and when the alarm rings, I log something.\nThis works fine.\nBut if I run a query with a syntax error between the time duration, then the alarm never rings.\nIs there some code within Postgres that resets/removes the signals in case a query hits any error?\nTimeoutId timer = RegisterTimeout(USER_TIMEOUT,interval_handler);\n enable_timeout_after(timer, 5 * 1000);\n\nThanks,\nIshan.\n\n\n-- \nThe information contained in this electronic communication is intended \nsolely for the individual(s) or entity to which it is addressed. It may \ncontain proprietary, confidential and/or legally privileged information. \nAny review, retransmission, dissemination, printing, copying or other use \nof, or taking any action in reliance on the contents of this information by \nperson(s) or entities other than the intended recipient is strictly \nprohibited and may be unlawful. If you have received this communication in \nerror, please notify us by responding to this email or telephone and \nimmediately and permanently delete all copies of this message and any \nattachments from your system(s). The contents of this message do not \nnecessarily represent the views or policies of BITS Pilani.\n\n\n\n\n\n\n\n\n\nOn a particular query, I start an alarm (say for 5 sec) using RegisterTimeout , and when the alarm rings, I log something.\nThis works fine.\nBut if I run a query with a syntax error between the time duration, then the alarm never rings.\nIs there some code within Postgres that resets/removes the signals in case a query hits any error?\nTimeoutId timer = RegisterTimeout(USER_TIMEOUT,interval_handler);\n    enable_timeout_after(timer, 5 * 1000);\n \nThanks,\nIshan.", "msg_date": "Wed, 29 May 2024 10:45:09 +0000", "msg_from": "\"ISHAN CHHANGANI .\" <[email protected]>", "msg_from_op": true, "msg_subject": "Timeout gets unset on a syntax error." } ]
[ { "msg_contents": "Hello hackers,\n\nAs a recent buildfarm test failure [1] shows:\n[14:33:02.374](0.333s) ok 23 - update works with dropped subscriber column\n### Stopping node \"publisher\" using mode fast\n# Running: pg_ctl -D \n/home/bf/bf-build/adder/HEAD/pgsql.build/testrun/subscription/001_rep_changes/data/t_001_rep_changes_publisher_data/pgdata \n-m fast stop\nwaiting for server to shut down.. ... ... ... .. failed\npg_ctl: server does not shut down\n# pg_ctl stop failed: 256\n# Postmaster PID for node \"publisher\" is 2222549\n[14:39:04.375](362.001s) Bail out!  pg_ctl stop failed\n\n001_rep_changes_publisher.log\n2024-05-16 14:33:02.907 UTC [2238704][client backend][4/22:0] LOG: statement: DELETE FROM tab_rep\n2024-05-16 14:33:02.925 UTC [2238704][client backend][:0] LOG: disconnection: session time: 0:00:00.078 user=bf \ndatabase=postgres host=[local]\n2024-05-16 14:33:02.939 UTC [2222549][postmaster][:0] LOG:  received fast shutdown request\n2024-05-16 14:33:03.000 UTC [2222549][postmaster][:0] LOG:  aborting any active transactions\n2024-05-16 14:33:03.049 UTC [2222549][postmaster][:0] LOG: background worker \"logical replication launcher\" (PID \n2223110) exited with exit code 1\n2024-05-16 14:33:03.062 UTC [2222901][checkpointer][:0] LOG: shutting down\n2024-05-16 14:39:04.377 UTC [2222549][postmaster][:0] LOG:  received immediate shutdown request\n2024-05-16 14:39:04.382 UTC [2222549][postmaster][:0] LOG:  database system is shut down\n\nthe publisher node may hang on stopping.\n\nI reproduced the failure (with aggressive autovacuum) locally and\ndiscovered that it happens because:\n1) checkpointer calls WalSndInitStopping() (which sends\n  PROCSIG_WALSND_INIT_STOPPING to walsender), and then spins inside\n  WalSndWaitStopping() indefinitely, because:\n2) walsender receives the signal, sets got_STOPPING = true, but can't exit\nWalSndLoop():\n3) it never sets got_SIGUSR2 (to get into WalSndDone()) in\n  XLogSendLogical():\n4) it never sets WalSndCaughtUp to true:\n5) logical_decoding_ctx->reader->EndRecPtr can't reach flushPtr in\n  XLogSendLogical():\n6) EndRecPtr doesn't advance in XLogNextRecord():\n7) XLogDecodeNextRecord() fails do decode a record that crosses a page\n  boundary:\n8) ReadPageInternal() (commented \"Wait for the next page to become\n  available\") constantly returns XLREAD_FAIL:\n9) state->routine.page_read()/logical_read_xlog_page() constantly returns\n  -1:\n10) flushptr = WalSndWaitForWal() stops advancing, because\n  got_STOPPING == true (see 2).\n\nThat is, walsender doesn't let itself to catch up, if it gets the stop\nsignal when it's lagging behind and decoding a record requires reading\nthe next wal page.\n\nPlease look at the reproducing test (based on 001_rep_changes.pl) attached.\nIf fails for me as below:\n# 17\nBailout called.  
Further testing stopped:  pg_ctl stop failed\nFAILED--Further testing stopped: pg_ctl stop failed\nmake: *** [Makefile:21: check] Ошибка 255\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-05-16%2014%3A22%3A38\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-04-24%2014%3A38%3A35 (apparently the same)\n\nBest regards,\nAlexander", "msg_date": "Wed, 29 May 2024 14:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Wed, 29 May 2024 at 16:30, Alexander Lakhin <[email protected]> wrote:\n>\n> Hello hackers,\n>\n> As a recent buildfarm test failure [1] shows:\n> [14:33:02.374](0.333s) ok 23 - update works with dropped subscriber column\n> ### Stopping node \"publisher\" using mode fast\n> # Running: pg_ctl -D\n> /home/bf/bf-build/adder/HEAD/pgsql.build/testrun/subscription/001_rep_changes/data/t_001_rep_changes_publisher_data/pgdata\n> -m fast stop\n> waiting for server to shut down.. ... ... ... .. failed\n> pg_ctl: server does not shut down\n> # pg_ctl stop failed: 256\n> # Postmaster PID for node \"publisher\" is 2222549\n> [14:39:04.375](362.001s) Bail out! pg_ctl stop failed\n>\n> 001_rep_changes_publisher.log\n> 2024-05-16 14:33:02.907 UTC [2238704][client backend][4/22:0] LOG: statement: DELETE FROM tab_rep\n> 2024-05-16 14:33:02.925 UTC [2238704][client backend][:0] LOG: disconnection: session time: 0:00:00.078 user=bf\n> database=postgres host=[local]\n> 2024-05-16 14:33:02.939 UTC [2222549][postmaster][:0] LOG: received fast shutdown request\n> 2024-05-16 14:33:03.000 UTC [2222549][postmaster][:0] LOG: aborting any active transactions\n> 2024-05-16 14:33:03.049 UTC [2222549][postmaster][:0] LOG: background worker \"logical replication launcher\" (PID\n> 2223110) exited with exit code 1\n> 2024-05-16 14:33:03.062 UTC [2222901][checkpointer][:0] LOG: shutting down\n> 2024-05-16 14:39:04.377 UTC [2222549][postmaster][:0] LOG: received immediate shutdown request\n> 2024-05-16 14:39:04.382 UTC [2222549][postmaster][:0] LOG: database system is shut down\n>\n> the publisher node may hang on stopping.\n>\n> I reproduced the failure (with aggressive autovacuum) locally and\n> discovered that it happens because:\n> 1) checkpointer calls WalSndInitStopping() (which sends\n> PROCSIG_WALSND_INIT_STOPPING to walsender), and then spins inside\n> WalSndWaitStopping() indefinitely, because:\n> 2) walsender receives the signal, sets got_STOPPING = true, but can't exit\n> WalSndLoop():\n> 3) it never sets got_SIGUSR2 (to get into WalSndDone()) in\n> XLogSendLogical():\n> 4) it never sets WalSndCaughtUp to true:\n> 5) logical_decoding_ctx->reader->EndRecPtr can't reach flushPtr in\n> XLogSendLogical():\n> 6) EndRecPtr doesn't advance in XLogNextRecord():\n> 7) XLogDecodeNextRecord() fails do decode a record that crosses a page\n> boundary:\n> 8) ReadPageInternal() (commented \"Wait for the next page to become\n> available\") constantly returns XLREAD_FAIL:\n> 9) state->routine.page_read()/logical_read_xlog_page() constantly returns\n> -1:\n> 10) flushptr = WalSndWaitForWal() stops advancing, because\n> got_STOPPING == true (see 2).\n>\n> That is, walsender doesn't let itself to catch up, if it gets the stop\n> signal when it's lagging behind and decoding a record requires reading\n> the next wal page.\n>\n> Please look at the reproducing test (based on 001_rep_changes.pl) attached.\n> If fails for me as below:\n> # 
17\n> Bailout called. Further testing stopped: pg_ctl stop failed\n> FAILED--Further testing stopped: pg_ctl stop failed\n> make: *** [Makefile:21: check] Ошибка 255\n\nThank you, Alexander, for sharing the script. I was able to reproduce\nthe issue using the provided script. Furthermore, while investigating\nits origins, I discovered that this problem persists across all\nbranches up to PG10 (the script needs slight adjustments to run it on\nolder versions). It's worth noting that this issue isn't a result of\nrecent version changes.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 29 May 2024 21:26:12 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Thu, May 30, 2024 at 2:09 AM vignesh C <[email protected]> wrote:\n>\n> On Wed, 29 May 2024 at 16:30, Alexander Lakhin <[email protected]> wrote:\n> >\n> > Hello hackers,\n> >\n> > As a recent buildfarm test failure [1] shows:\n> > [14:33:02.374](0.333s) ok 23 - update works with dropped subscriber column\n> > ### Stopping node \"publisher\" using mode fast\n> > # Running: pg_ctl -D\n> > /home/bf/bf-build/adder/HEAD/pgsql.build/testrun/subscription/001_rep_changes/data/t_001_rep_changes_publisher_data/pgdata\n> > -m fast stop\n> > waiting for server to shut down.. ... ... ... .. failed\n> > pg_ctl: server does not shut down\n> > # pg_ctl stop failed: 256\n> > # Postmaster PID for node \"publisher\" is 2222549\n> > [14:39:04.375](362.001s) Bail out! pg_ctl stop failed\n> >\n> > 001_rep_changes_publisher.log\n> > 2024-05-16 14:33:02.907 UTC [2238704][client backend][4/22:0] LOG: statement: DELETE FROM tab_rep\n> > 2024-05-16 14:33:02.925 UTC [2238704][client backend][:0] LOG: disconnection: session time: 0:00:00.078 user=bf\n> > database=postgres host=[local]\n> > 2024-05-16 14:33:02.939 UTC [2222549][postmaster][:0] LOG: received fast shutdown request\n> > 2024-05-16 14:33:03.000 UTC [2222549][postmaster][:0] LOG: aborting any active transactions\n> > 2024-05-16 14:33:03.049 UTC [2222549][postmaster][:0] LOG: background worker \"logical replication launcher\" (PID\n> > 2223110) exited with exit code 1\n> > 2024-05-16 14:33:03.062 UTC [2222901][checkpointer][:0] LOG: shutting down\n> > 2024-05-16 14:39:04.377 UTC [2222549][postmaster][:0] LOG: received immediate shutdown request\n> > 2024-05-16 14:39:04.382 UTC [2222549][postmaster][:0] LOG: database system is shut down\n> >\n> > the publisher node may hang on stopping.\n> >\n> > I reproduced the failure (with aggressive autovacuum) locally and\n> > discovered that it happens because:\n> > 1) checkpointer calls WalSndInitStopping() (which sends\n> > PROCSIG_WALSND_INIT_STOPPING to walsender), and then spins inside\n> > WalSndWaitStopping() indefinitely, because:\n> > 2) walsender receives the signal, sets got_STOPPING = true, but can't exit\n> > WalSndLoop():\n> > 3) it never sets got_SIGUSR2 (to get into WalSndDone()) in\n> > XLogSendLogical():\n> > 4) it never sets WalSndCaughtUp to true:\n> > 5) logical_decoding_ctx->reader->EndRecPtr can't reach flushPtr in\n> > XLogSendLogical():\n> > 6) EndRecPtr doesn't advance in XLogNextRecord():\n> > 7) XLogDecodeNextRecord() fails do decode a record that crosses a page\n> > boundary:\n> > 8) ReadPageInternal() (commented \"Wait for the next page to become\n> > available\") constantly returns XLREAD_FAIL:\n> > 9) state->routine.page_read()/logical_read_xlog_page() constantly returns\n> > -1:\n> > 10) flushptr = 
WalSndWaitForWal() stops advancing, because\n> > got_STOPPING == true (see 2).\n> >\n> > That is, walsender doesn't let itself to catch up, if it gets the stop\n> > signal when it's lagging behind and decoding a record requires reading\n> > the next wal page.\n> >\n> > Please look at the reproducing test (based on 001_rep_changes.pl) attached.\n> > If fails for me as below:\n> > # 17\n> > Bailout called. Further testing stopped: pg_ctl stop failed\n> > FAILED--Further testing stopped: pg_ctl stop failed\n> > make: *** [Makefile:21: check] Ошибка 255\n>\n> Thank you, Alexander, for sharing the script. I was able to reproduce\n> the issue using the provided script. Furthermore, while investigating\n> its origins, I discovered that this problem persists across all\n> branches up to PG10 (the script needs slight adjustments to run it on\n> older versions). It's worth noting that this issue isn't a result of\n> recent version changes.\n>\n\nHi,\n\nFWIW using the provided scripting I was also able to reproduce the\nproblem on HEAD but for me, it was more rare. -- the script passed ok\n3 times all 100 iterations; it eventually failed on the 4th run on the\n75th iteration.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 30 May 2024 14:48:11 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Wed, May 29, 2024 at 9:00 PM Alexander Lakhin <[email protected]> wrote:\n>\n> Hello hackers,\n>\n> As a recent buildfarm test failure [1] shows:\n> [14:33:02.374](0.333s) ok 23 - update works with dropped subscriber column\n> ### Stopping node \"publisher\" using mode fast\n> # Running: pg_ctl -D\n> /home/bf/bf-build/adder/HEAD/pgsql.build/testrun/subscription/001_rep_changes/data/t_001_rep_changes_publisher_data/pgdata\n> -m fast stop\n> waiting for server to shut down.. ... ... ... .. failed\n> pg_ctl: server does not shut down\n> # pg_ctl stop failed: 256\n> # Postmaster PID for node \"publisher\" is 2222549\n> [14:39:04.375](362.001s) Bail out! 
pg_ctl stop failed\n>\n> 001_rep_changes_publisher.log\n> 2024-05-16 14:33:02.907 UTC [2238704][client backend][4/22:0] LOG: statement: DELETE FROM tab_rep\n> 2024-05-16 14:33:02.925 UTC [2238704][client backend][:0] LOG: disconnection: session time: 0:00:00.078 user=bf\n> database=postgres host=[local]\n> 2024-05-16 14:33:02.939 UTC [2222549][postmaster][:0] LOG: received fast shutdown request\n> 2024-05-16 14:33:03.000 UTC [2222549][postmaster][:0] LOG: aborting any active transactions\n> 2024-05-16 14:33:03.049 UTC [2222549][postmaster][:0] LOG: background worker \"logical replication launcher\" (PID\n> 2223110) exited with exit code 1\n> 2024-05-16 14:33:03.062 UTC [2222901][checkpointer][:0] LOG: shutting down\n> 2024-05-16 14:39:04.377 UTC [2222549][postmaster][:0] LOG: received immediate shutdown request\n> 2024-05-16 14:39:04.382 UTC [2222549][postmaster][:0] LOG: database system is shut down\n>\n> the publisher node may hang on stopping.\n>\n> I reproduced the failure (with aggressive autovacuum) locally and\n> discovered that it happens because:\n> 1) checkpointer calls WalSndInitStopping() (which sends\n> PROCSIG_WALSND_INIT_STOPPING to walsender), and then spins inside\n> WalSndWaitStopping() indefinitely, because:\n> 2) walsender receives the signal, sets got_STOPPING = true, but can't exit\n> WalSndLoop():\n> 3) it never sets got_SIGUSR2 (to get into WalSndDone()) in\n> XLogSendLogical():\n> 4) it never sets WalSndCaughtUp to true:\n> 5) logical_decoding_ctx->reader->EndRecPtr can't reach flushPtr in\n> XLogSendLogical():\n> 6) EndRecPtr doesn't advance in XLogNextRecord():\n> 7) XLogDecodeNextRecord() fails do decode a record that crosses a page\n> boundary:\n> 8) ReadPageInternal() (commented \"Wait for the next page to become\n> available\") constantly returns XLREAD_FAIL:\n> 9) state->routine.page_read()/logical_read_xlog_page() constantly returns\n> -1:\n> 10) flushptr = WalSndWaitForWal() stops advancing, because\n> got_STOPPING == true (see 2).\n>\n> That is, walsender doesn't let itself to catch up, if it gets the stop\n> signal when it's lagging behind and decoding a record requires reading\n> the next wal page.\n>\n> Please look at the reproducing test (based on 001_rep_changes.pl) attached.\n> If fails for me as below:\n> # 17\n> Bailout called. Further testing stopped: pg_ctl stop failed\n> FAILED--Further testing stopped: pg_ctl stop failed\n> make: *** [Makefile:21: check] Ошибка 255\n>\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-05-16%2014%3A22%3A38\n> [2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-04-24%2014%3A38%3A35 (apparently the same)\n>\n\nHi Alexander,\n\nFYI, by injecting a lot of logging, I’ve confirmed your findings that\nfor the failing scenario, the ‘got_SIGUSR2’ flag never gets set to\ntrue, meaning the WalSndLoop() cannot finish. Furthermore, I agree\nwith your step 8 finding that when it fails the ReadPageInternal\nfunction call (the one in XLogDecodeNextRecord with the comment \"Wait\nfor the next page to become available\") constantly returns -1.\n\nI will continue digging next week...\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 31 May 2024 18:27:42 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "Hi, I have reproduced this multiple times now.\n\nI confirmed the initial post/steps from Alexander. i.e. 
The test\nscript provided [1] gets itself into a state where function\nReadPageInternal (called by XLogDecodeNextRecord and commented \"Wait\nfor the next page to become available\") constantly returns\nXLREAD_FAIL. Ultimately the test times out because WalSndLoop() loops\nforever, since it never calls WalSndDone() to exit the walsender\nprocess.\n\n~~~\n\nI've made a patch to inject lots of logging, and when the test script\nfails a cycle of function failures can be seen. I don't know how to\nfix it yet, so I'm attaching my log results, hoping the information\nmay be useful for anyone familiar with this area of the code.\n\n~~~\n\nAttachment #1 \"v1-0001-DEBUG-LOGGING.patch\" -- Patch to inject some\nlogging. Be careful if you apply this because the resulting log files\ncan be huge (e.g. 3G)\n\nAttachment #2 \"bad8_logs_last500lines.txt\" -- This is the last 500\nlines of a 3G logfile from a failing test run.\n\nAttachment #3 \"bad8_logs_last500lines-simple.txt\" -- Same log file as\nabove, but it's a simplified extract in which I showed the CYCLES of\nfailure more clearly.\n\nAttachment #4 \"bad8_digram\"-- Same execution patch information as from\nthe log files, but in diagram form (just to help me visualise the\nlogic more easily).\n\n~~~\n\nJust so you know, the test script does not always cause the problem.\nSometimes it happens after just 20 script iterations. Or, sometimes it\ntakes a very long time and multiple runs (e.g. 400-500 script\niterations). Either way, when the problem eventually occurs the CYCLES\nof the ReadPageInternal() failures always have the the same pattern\nshown in these attached logs.\n\n======\n[1] OP - https://www.postgresql.org/message-id/f15d665f-4cd1-4894-037c-afdbe369287e%40gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 6 Jun 2024 12:49:45 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "At Thu, 6 Jun 2024 12:49:45 +1000, Peter Smith <[email protected]> wrote in \n> Hi, I have reproduced this multiple times now.\n> \n> I confirmed the initial post/steps from Alexander. i.e. The test\n> script provided [1] gets itself into a state where function\n> ReadPageInternal (called by XLogDecodeNextRecord and commented \"Wait\n> for the next page to become available\") constantly returns\n> XLREAD_FAIL. Ultimately the test times out because WalSndLoop() loops\n> forever, since it never calls WalSndDone() to exit the walsender\n> process.\n\nThanks for the repro; I believe I understand what's happening here.\n\nDuring server shutdown, the latter half of the last continuation\nrecord may fail to be flushed. This is similar to what is described in\nthe commit message of commit ff9f111bce. While shutting down,\nWalSndLoop() waits for XLogSendLogical() to consume WAL up to\nflushPtr, but in this case, the last record cannot complete without\nthe continuation part starting from flushPtr, which is\nmissing. However, in such cases, xlogreader.missingContrecPtr is set\nto the beginning of the missing part, but something similar to \n\nSo, I believe the attached small patch fixes the behavior. I haven't\ncome up with a good test script for this issue. 
Something like\n026_overwrite_contrecord.pl might work, but this situation seems a bit\nmore complex than what it handles.\n\nVersions back to 10 should suffer from the same issue and the same\npatch will be applicable without significant changes.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 06 Jun 2024 15:19:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Thu, 6 Jun 2024 at 11:49, Kyotaro Horiguchi <[email protected]> wrote:\n>\n> At Thu, 6 Jun 2024 12:49:45 +1000, Peter Smith <[email protected]> wrote in\n> > Hi, I have reproduced this multiple times now.\n> >\n> > I confirmed the initial post/steps from Alexander. i.e. The test\n> > script provided [1] gets itself into a state where function\n> > ReadPageInternal (called by XLogDecodeNextRecord and commented \"Wait\n> > for the next page to become available\") constantly returns\n> > XLREAD_FAIL. Ultimately the test times out because WalSndLoop() loops\n> > forever, since it never calls WalSndDone() to exit the walsender\n> > process.\n>\n> Thanks for the repro; I believe I understand what's happening here.\n>\n> During server shutdown, the latter half of the last continuation\n> record may fail to be flushed. This is similar to what is described in\n> the commit message of commit ff9f111bce. While shutting down,\n> WalSndLoop() waits for XLogSendLogical() to consume WAL up to\n> flushPtr, but in this case, the last record cannot complete without\n> the continuation part starting from flushPtr, which is\n> missing. However, in such cases, xlogreader.missingContrecPtr is set\n> to the beginning of the missing part, but something similar to\n>\n> So, I believe the attached small patch fixes the behavior. I haven't\n> come up with a good test script for this issue. Something like\n> 026_overwrite_contrecord.pl might work, but this situation seems a bit\n> more complex than what it handles.\n>\n> Versions back to 10 should suffer from the same issue and the same\n> patch will be applicable without significant changes.\n\nI tested the changes for PG 12 to master as we do not support prior versions.\nThe patch applied successfully for master and PG 16. I ran the test\nprovided in [1] multiple times and it ran successfully each time.\nThe patch did not apply on PG 15. I did a similar change for PG 15 and\ncreated a patch. I ran the test multiple times and it was successful\nevery time.\nThe patch did not apply on PG 14 to PG 12. I did a similar change in\neach branch. But the tests did not pass in each branch.\n\nI have attached a patch which applies successfully on the PG 15 branch.\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Mon, 10 Jun 2024 15:10:39 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Mon, 10 Jun 2024 at 15:10, Shlok Kyal <[email protected]> wrote:\n>\n> On Thu, 6 Jun 2024 at 11:49, Kyotaro Horiguchi <[email protected]> wrote:\n> >\n> > At Thu, 6 Jun 2024 12:49:45 +1000, Peter Smith <[email protected]> wrote in\n> > > Hi, I have reproduced this multiple times now.\n> > >\n> > > I confirmed the initial post/steps from Alexander. i.e. 
The test\n> > > script provided [1] gets itself into a state where function\n> > > ReadPageInternal (called by XLogDecodeNextRecord and commented \"Wait\n> > > for the next page to become available\") constantly returns\n> > > XLREAD_FAIL. Ultimately the test times out because WalSndLoop() loops\n> > > forever, since it never calls WalSndDone() to exit the walsender\n> > > process.\n> >\n> > Thanks for the repro; I believe I understand what's happening here.\n> >\n> > During server shutdown, the latter half of the last continuation\n> > record may fail to be flushed. This is similar to what is described in\n> > the commit message of commit ff9f111bce. While shutting down,\n> > WalSndLoop() waits for XLogSendLogical() to consume WAL up to\n> > flushPtr, but in this case, the last record cannot complete without\n> > the continuation part starting from flushPtr, which is\n> > missing. However, in such cases, xlogreader.missingContrecPtr is set\n> > to the beginning of the missing part, but something similar to\n> >\n> > So, I believe the attached small patch fixes the behavior. I haven't\n> > come up with a good test script for this issue. Something like\n> > 026_overwrite_contrecord.pl might work, but this situation seems a bit\n> > more complex than what it handles.\n> >\n> > Versions back to 10 should suffer from the same issue and the same\n> > patch will be applicable without significant changes.\n>\n> I tested the changes for PG 12 to master as we do not support prior versions.\n> The patch applied successfully for master and PG 16. I ran the test\n> provided in [1] multiple times and it ran successfully each time.\n> The patch did not apply on PG 15. I did a similar change for PG 15 and\n> created a patch. I ran the test multiple times and it was successful\n> every time.\n> The patch did not apply on PG 14 to PG 12. I did a similar change in\n> each branch. But the tests did not pass in each branch.\nI, by mistake, applied wrong changes in PG 14 to PG 12. I tested again\nfor all versions and the test ran successfully for all of them till\nPG12.\nI have also attached the patch which applies for PG14 to PG12.\n\nSorry for the inconvenience.\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Mon, 10 Jun 2024 19:25:12 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Thu, Jun 06, 2024 at 03:19:20PM +0900, Kyotaro Horiguchi wrote:\n> During server shutdown, the latter half of the last continuation\n> record may fail to be flushed. This is similar to what is described in\n> the commit message of commit ff9f111bce. While shutting down,\n> WalSndLoop() waits for XLogSendLogical() to consume WAL up to\n> flushPtr, but in this case, the last record cannot complete without\n> the continuation part starting from flushPtr, which is\n> missing. However, in such cases, xlogreader.missingContrecPtr is set\n> to the beginning of the missing part, but something similar to \n\n- /* If EndRecPtr is still past our flushPtr, it means we caught up. */\n- if (logical_decoding_ctx->reader->EndRecPtr >= flushPtr)\n+ /*\n+ * If EndRecPtr is still past our flushPtr, it means we caught up. When\n+ * the server is shutting down, the latter part of a continuation record\n+ * may be missing. 
If got_STOPPING is true, assume we are caught up if the\n+ * last record is missing its continuation part at flushPtr.\n+ */\n+ if (logical_decoding_ctx->reader->EndRecPtr >= flushPtr ||\n+ (got_STOPPING &&\n+ logical_decoding_ctx->reader->missingContrecPtr == flushPtr))\n\nFWIW, I don't have a better idea than what you are proposing here. We\njust cannot receive more data past the page boundary in a shutdown\nsequence in this context, so checking after the missingContrecPtr\nis a good compromise to ensure that we don't remain stuck when\nshutting down a logical WAL sender. I'm surprised that we did not\nhear about that more often on the lists, or perhaps we did but just\ndiscarded it?\n\nThis is going to take some time to check across all the branches down\nto v12 that this is stable, because this area of the code tends to\nchange slightly every year.. Well, that's my job.\n\n> So, I believe the attached small patch fixes the behavior. I haven't\n> come up with a good test script for this issue. Something like\n> 026_overwrite_contrecord.pl might work, but this situation seems a bit\n> more complex than what it handles.\n\nHmm. Indeed you will not be able to reuse the same trick with the end\nof a segment. Still you should be able to get a rather stable test by\nusing the same tricks as 039_end_of_wal.pl to spawn a record across\nmultiple pages, no? \n--\nMichael", "msg_date": "Tue, 11 Jun 2024 09:27:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Thu, Jun 6, 2024 at 11:49 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Thu, 6 Jun 2024 12:49:45 +1000, Peter Smith <[email protected]> wrote in\n> > Hi, I have reproduced this multiple times now.\n> >\n> > I confirmed the initial post/steps from Alexander. i.e. The test\n> > script provided [1] gets itself into a state where function\n> > ReadPageInternal (called by XLogDecodeNextRecord and commented \"Wait\n> > for the next page to become available\") constantly returns\n> > XLREAD_FAIL. Ultimately the test times out because WalSndLoop() loops\n> > forever, since it never calls WalSndDone() to exit the walsender\n> > process.\n>\n> Thanks for the repro; I believe I understand what's happening here.\n>\n> During server shutdown, the latter half of the last continuation\n> record may fail to be flushed. This is similar to what is described in\n> the commit message of commit ff9f111bce. While shutting down,\n> WalSndLoop() waits for XLogSendLogical() to consume WAL up to\n> flushPtr, but in this case, the last record cannot complete without\n> the continuation part starting from flushPtr, which is\n> missing.\n\nSorry, it is not clear to me why we failed to flush the last\ncontinuation record in logical walsender? I see that we try to flush\nthe WAL after receiving got_STOPPING in WalSndWaitForWal(), why is\nthat not sufficient?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 11 Jun 2024 11:32:12 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "At Tue, 11 Jun 2024 11:32:12 +0530, Amit Kapila <[email protected]> wrote in \n> Sorry, it is not clear to me why we failed to flush the last\n> continuation record in logical walsender? 
I see that we try to flush\n> the WAL after receiving got_STOPPING in WalSndWaitForWal(), why is\n> that not sufficient?\n\nIt seems that, it uses XLogBackgroundFlush(), which does not guarantee\nflushing WAL until the end.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 11 Jun 2024 16:04:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "At Tue, 11 Jun 2024 09:27:20 +0900, Michael Paquier <[email protected]> wrote in \n> On Thu, Jun 06, 2024 at 03:19:20PM +0900, Kyotaro Horiguchi wrote:\n> > So, I believe the attached small patch fixes the behavior. I haven't\n> > come up with a good test script for this issue. Something like\n> > 026_overwrite_contrecord.pl might work, but this situation seems a bit\n> > more complex than what it handles.\n> \n> Hmm. Indeed you will not be able to reuse the same trick with the end\n> of a segment. Still you should be able to get a rather stable test by\n> using the same tricks as 039_end_of_wal.pl to spawn a record across\n> multiple pages, no? \n\nWith the trick, we could write a page-spanning record, but I'm not\nsure we can control the behavior of XLogBackgroundFlush().\n\nAs Amit suggested, we have the option to create a variant of the\nfunction that guarantees flushing WAL until the end.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 11 Jun 2024 16:15:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Tue, Jun 11, 2024 at 12:34 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Tue, 11 Jun 2024 11:32:12 +0530, Amit Kapila <[email protected]> wrote in\n> > Sorry, it is not clear to me why we failed to flush the last\n> > continuation record in logical walsender? I see that we try to flush\n> > the WAL after receiving got_STOPPING in WalSndWaitForWal(), why is\n> > that not sufficient?\n>\n> It seems that, it uses XLogBackgroundFlush(), which does not guarantee\n> flushing WAL until the end.\n>\n\nWhat would it take to ensure the same? I am trying to explore this\npath because currently logical WALSender sends any outstanding logs up\nto the shutdown checkpoint record (i.e., the latest record) and waits\nfor them to be replicated to the standby before exit. Please take a\nlook at the comments where we call WalSndDone(). The fix you are\nproposing will break that guarantee.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 11 Jun 2024 14:27:28 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "At Tue, 11 Jun 2024 14:27:28 +0530, Amit Kapila <[email protected]> wrote in \r\n> On Tue, Jun 11, 2024 at 12:34 PM Kyotaro Horiguchi\r\n> <[email protected]> wrote:\r\n> >\r\n> > At Tue, 11 Jun 2024 11:32:12 +0530, Amit Kapila <[email protected]> wrote in\r\n> > > Sorry, it is not clear to me why we failed to flush the last\r\n> > > continuation record in logical walsender? 
I see that we try to flush\r\n> > > the WAL after receiving got_STOPPING in WalSndWaitForWal(), why is\r\n> > > that not sufficient?\r\n> >\r\n> > It seems that, it uses XLogBackgroundFlush(), which does not guarantee\r\n> > flushing WAL until the end.\r\n> >\r\n> \r\n> What would it take to ensure the same? I am trying to explore this\r\n> path because currently logical WALSender sends any outstanding logs up\r\n> to the shutdown checkpoint record (i.e., the latest record) and waits\r\n> for them to be replicated to the standby before exit. Please take a\r\n> look at the comments where we call WalSndDone(). The fix you are\r\n> proposing will break that guarantee.\r\n\r\nShutdown checkpoint is performed after the walsender completed\r\ntermination since 086221cf6b, aiming to prevent walsenders from\r\ngenerating competing WAL (by, for example, CREATE_REPLICATION_SLOT)\r\nrecords with the shutdown checkpoint. Thus, it seems that the\r\nwalsender cannot see the shutdown record, and a certain amount of\r\nbytes before it, as the walsender appears to have relied on the\r\ncheckpoint flushing its record, rather than on XLogBackgroundFlush().\r\n\r\nIf we approve of the walsender being terminated before the shutdown\r\ncheckpoint, we need to \"fix\" the comment, then provide a function to\r\nensure the synchronization of WAL records.\r\n\r\nI'll consider this direction for a while.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Wed, 12 Jun 2024 10:13:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Wed, Jun 12, 2024 at 6:43 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Tue, 11 Jun 2024 14:27:28 +0530, Amit Kapila <[email protected]> wrote in\n> > On Tue, Jun 11, 2024 at 12:34 PM Kyotaro Horiguchi\n> > <[email protected]> wrote:\n> > >\n> > > At Tue, 11 Jun 2024 11:32:12 +0530, Amit Kapila <[email protected]> wrote in\n> > > > Sorry, it is not clear to me why we failed to flush the last\n> > > > continuation record in logical walsender? I see that we try to flush\n> > > > the WAL after receiving got_STOPPING in WalSndWaitForWal(), why is\n> > > > that not sufficient?\n> > >\n> > > It seems that, it uses XLogBackgroundFlush(), which does not guarantee\n> > > flushing WAL until the end.\n> > >\n> >\n> > What would it take to ensure the same? I am trying to explore this\n> > path because currently logical WALSender sends any outstanding logs up\n> > to the shutdown checkpoint record (i.e., the latest record) and waits\n> > for them to be replicated to the standby before exit. Please take a\n> > look at the comments where we call WalSndDone(). The fix you are\n> > proposing will break that guarantee.\n>\n> Shutdown checkpoint is performed after the walsender completed\n> termination since 086221cf6b,\n>\n\nYeah, but the commit you quoted later reverted by commit 703f148e98\nand committed again as c6c3334364.\n\n> aiming to prevent walsenders from\n> generating competing WAL (by, for example, CREATE_REPLICATION_SLOT)\n> records with the shutdown checkpoint. Thus, it seems that the\n> walsender cannot see the shutdown record,\n>\n\nThis is true of logical walsender. 
The physical walsender do send\nshutdown checkpoint record before getting terminated.\n\n> and a certain amount of\n> bytes before it, as the walsender appears to have relied on the\n> checkpoint flushing its record, rather than on XLogBackgroundFlush().\n>\n> If we approve of the walsender being terminated before the shutdown\n> checkpoint, we need to \"fix\" the comment, then provide a function to\n> ensure the synchronization of WAL records.\n>\n\nWhich comment do you want to fix?\n\n> I'll consider this direction for a while.\n>\n\nOkay, thanks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 Jun 2024 09:29:03 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "At Thu, 13 Jun 2024 09:29:03 +0530, Amit Kapila <[email protected]> wrote in \n> Yeah, but the commit you quoted later reverted by commit 703f148e98\n> and committed again as c6c3334364.\n\nYeah, right..\n\n> > aiming to prevent walsenders from\n> > generating competing WAL (by, for example, CREATE_REPLICATION_SLOT)\n> > records with the shutdown checkpoint. Thus, it seems that the\n> > walsender cannot see the shutdown record,\n> >\n> \n> This is true of logical walsender. The physical walsender do send\n> shutdown checkpoint record before getting terminated.\n\nYes, I know. They differ in their blocking mechanisms.\n\n> > and a certain amount of\n> > bytes before it, as the walsender appears to have relied on the\n> > checkpoint flushing its record, rather than on XLogBackgroundFlush().\n> >\n> > If we approve of the walsender being terminated before the shutdown\n> > checkpoint, we need to \"fix\" the comment, then provide a function to\n> > ensure the synchronization of WAL records.\n> >\n> \n> Which comment do you want to fix?\n\nYeah. The part you seem to think I was trying to fix is actually\nfine. Instead, I have revised the comment on the modified section to\nmake its intention clearer.\n\n> > I'll consider this direction for a while.\n> >\n> \n> Okay, thanks.\n\nThe attached patch is it. It's only for the master.\n\nI decided not to create a new function because the simple code has\nonly one caller. I haven't seen the test script fail with this fix.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 18 Jun 2024 17:07:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "Dear Horiguchi-san,\n\nThanks for sharing the patch! I agree this approach (ensure WAL records are flushed)\nIs more proper than others.\n\nI have an unclear point. According to the comment atop GetInsertRecPtr(), it just\nreturns the approximated value - the position of the last full WAL page [1].\nIf there is a continuation WAL record which across a page, will it return the\nhalfway point of the WAL record (end of the first WAL page)? If so, the proposed\nfix seems not sufficient. We have to point out the exact the end of the record.\n\n[1]:\n/*\n * GetInsertRecPtr -- Returns the current insert position.\n *\n * NOTE: The value *actually* returned is the position of the last full\n * xlog page. 
It lags behind the real insert position by at most 1 page.\n * For that, we don't need to scan through WAL insertion locks, and an\n * approximation is enough for the current usage of this function.\n */\nXLogRecPtr\nGetInsertRecPtr(void)\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/ \n\n\n\n", "msg_date": "Wed, 19 Jun 2024 05:14:50 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "FYI - I applied this latest patch and re-ran the original failing test\nscript 10 times (e.g. 10 x 100 test iterations; it took 4+ hours).\n\nThere were zero failures observed in my environment.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 19 Jun 2024 17:58:50 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Wed, Jun 19, 2024 at 05:14:50AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> I have an unclear point. According to the comment atop GetInsertRecPtr(), it just\n> returns the approximated value - the position of the last full WAL page [1].\n> If there is a continuation WAL record which across a page, will it return the\n> halfway point of the WAL record (end of the first WAL page)? If so, the proposed\n> fix seems not sufficient. We have to point out the exact the end of the record.\n\nYeah, that a thing of the patch I am confused with. How are we sure\nthat this is the correct LSN to rely on? If that it the case, the\npatch does not offer an explanation about why it is better.\n\nWalSndWaitForWal() is called only in the context of page callback for a\nlogical WAL sender. Shouldn't we make the flush conditional on what's\nstored in XLogReaderState.missingContrecPtr? Aka, if we know that\nwe're in the middle of the decoding of a continuation record, we\nshould wait until we've dealt with it, no?\n\nIn short, I would imagine that WalSndWaitForWal() should know more\nabout XLogReaderState is doing. But perhaps I'm missing something.\n--\nMichael", "msg_date": "Fri, 21 Jun 2024 10:48:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "On Wed, Jun 19, 2024 at 10:44 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Horiguchi-san,\n>\n> Thanks for sharing the patch! I agree this approach (ensure WAL records are flushed)\n> Is more proper than others.\n>\n> I have an unclear point. According to the comment atop GetInsertRecPtr(), it just\n> returns the approximated value - the position of the last full WAL page [1].\n> If there is a continuation WAL record which across a page, will it return the\n> halfway point of the WAL record (end of the first WAL page)? If so, the proposed\n> fix seems not sufficient. We have to point out the exact the end of the record.\n>\n\nYou have a point but if this theory is correct why we are not able to\nreproduce the issue after this patch? Also, how to get the WAL\nlocation up to which we need to flush? 
Is XLogCtlData->logInsertResult\nthe one we are looking for?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 21 Jun 2024 11:48:22 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" }, { "msg_contents": "At Fri, 21 Jun 2024 11:48:22 +0530, Amit Kapila <[email protected]> wrote in \r\n> On Wed, Jun 19, 2024 at 10:44 AM Hayato Kuroda (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> > Dear Horiguchi-san,\r\n> >\r\n> > Thanks for sharing the patch! I agree this approach (ensure WAL records are flushed)\r\n> > Is more proper than others.\r\n> >\r\n> > I have an unclear point. According to the comment atop GetInsertRecPtr(), it just\r\n> > returns the approximated value - the position of the last full WAL page [1].\r\n> > If there is a continuation WAL record which across a page, will it return the\r\n> > halfway point of the WAL record (end of the first WAL page)? If so, the proposed\r\n> > fix seems not sufficient. We have to point out the exact the end of the record.\r\n> >\r\n> \r\n> You have a point but if this theory is correct why we are not able to\r\n> reproduce the issue after this patch? Also, how to get the WAL\r\n> location up to which we need to flush? Is XLogCtlData->logInsertResult\r\n> the one we are looking for?\r\n\r\nIt is not exposed, but of course logInsertResult is more\r\nstraightforward source for the LSN.\r\n\r\nThe reason why the patch is working well is due to the following bit\r\nof the code.\r\n\r\nxlog.c:958, in XLInsertRecord()\r\n>\t/*\r\n>\t * Update shared LogwrtRqst.Write, if we crossed page boundary.\r\n>\t */\r\n>\tif (StartPos / XLOG_BLCKSZ != EndPos / XLOG_BLCKSZ)\r\n>\t{\r\n>\t\tSpinLockAcquire(&XLogCtl->info_lck);\r\n>\t\t/* advance global request to include new block(s) */\r\n>\t\tif (XLogCtl->LogwrtRqst.Write < EndPos)\r\n>\t\t\tXLogCtl->LogwrtRqst.Write = EndPos;\r\n>\t\tSpinLockRelease(&XLogCtl->info_lck);\r\n>\t\tRefreshXLogWriteResult(LogwrtResult);\r\n>\t}\r\n\r\nThe code, which exists has existed for a long time, ensures that\r\nGetInsertRecPtr() returns the accurate end of a record when it spanns\r\nover page boundaries. This would need to be written in the new comment\r\nif we use GetInsertRecPtr().\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Tue, 25 Jun 2024 10:03:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl fails due to publisher stuck on shutdown" } ]
[ { "msg_contents": "I've built a custom type of index. I implemented an index access method,\nbut have run into roadblocks. I need to:\n\n1. See the other quals in the where clause.\n2. Extract some things from the projection.\n3. Insert some things in the projection.\n4. Push some aggregations down into the index.\n\nSo I started implementing a CustomScan. It's not trivial.\n\nI've learned that the system executes ExecInitCustomScan automatically, but\nI probably need it to do most of the stuff in ExecInitIndexScan, and then\nexecute the scan mostly the way it's done in IndexNext.\n\nBasically, I want just a normal index scan, but with the ability to do\ncustom things with the quals and the projection.\n\nSo... what's the best approach?\n\nIs there any sample code that does this? A search of github doesn't turn up\nmuch.\n\nIs there any way to do this without duplicating everything in\nnodeIndexscan.c myself?\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\nI've built a custom type of index. I implemented an index access method, but have run into roadblocks. I need to:1. See the other quals in the where clause.2. Extract some things from the projection.3. Insert some things in the projection.4. Push some aggregations down into the index.So I started implementing a CustomScan. It's not trivial.I've learned that the system executes ExecInitCustomScan automatically, but I probably need it to do most of the stuff in ExecInitIndexScan, and then execute the scan mostly the way it's done in IndexNext.Basically, I want just a normal index scan, but with the ability to do custom things with the quals and the projection.So... what's the best approach?Is there any sample code that does this? A search of github doesn't turn up much.Is there any way to do this without duplicating everything in nodeIndexscan.c myself?-- Chris Cleveland312-339-2677 mobile", "msg_date": "Wed, 29 May 2024 23:02:30 -0500", "msg_from": "Chris Cleveland <[email protected]>", "msg_from_op": true, "msg_subject": "Implementing CustomScan over an index" }, { "msg_contents": "Hi,\n\n> So I started implementing a CustomScan. It's not trivial.\n>\n> I've learned that the system executes ExecInitCustomScan automatically, but I probably need it to do most of the stuff in ExecInitIndexScan, and then execute the scan mostly the way it's done in IndexNext.\n>\n> Basically, I want just a normal index scan, but with the ability to do custom things with the quals and the projection.\n>\n> So... what's the best approach?\n>\n> Is there any sample code that does this? A search of github doesn't turn up much.\n>\n> Is there any way to do this without duplicating everything in nodeIndexscan.c myself?\n\nYes, unfortunately it is not quite trivial.\n\nThere is a \"Writing a Custom Scan Provider\" chapter in the\ndocumentation that may help [1]. TimescaleDB uses CustomScans, maybe\nusing its code as an example will help [2]. Hint: exploring `git log`\nis often helpful too.\n\nIf something in the documentation is not clear, maybe it can be\nimproved. Let us know here or (even better) provide a patch. If you\nhave a particular piece of code that doesn't do what you want, try\nuploading a minimal example on GitHub and asking here.\n\nBy a quick look I couldn't find an example of implementing a\nCustomScan in ./contrib/ or ./src/test/. 
If you can think of a usage\nexample of CustomScans, consider contributing a test case.\n\n[1]: https://www.postgresql.org/docs/current/custom-scan.html\n[2]: https://github.com/timescale/timescaledb/\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 30 May 2024 15:26:18 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Implementing CustomScan over an index" } ]
[ { "msg_contents": "Hello Andrew,\n\nWhile reviewing recent buildfarm failures, I came across this one:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-05-23%2004%3A11%3A03\n\nupgrade.crake/REL_16_STABLE/REL9_5_STABLE-ctl4.log\nwaiting for server to shut \ndown........................................................................................................................... \nfailed\npg_ctl: server does not shut down\n\nLooking at:\nhttps://github.com/PGBuildFarm/client-code/blob/05014d50e/PGBuild/Modules/TestUpgradeXversion.pm#L641\n\nI see that ctl4.log is created after updating extensions and\nREL9_5_STABLE-update_extensions.log contains:\nYou are now connected to database \"contrib_regression_redis_fdw\" as user \"buildfarm\".\nALTER EXTENSION \"hstore\" UPDATE;\nALTER EXTENSION\nYou are now connected to database \"contrib_regression_btree_gin\" as user \"buildfarm\".\nALTER EXTENSION \"btree_gin\" UPDATE;\nALTER EXTENSION\n...\nbut I see no corresponding server log file containing these commands in the\nfailure log.\n\nWhen running the same test locally, I find these in inst/upgrade_log.\n\nMaybe uploading this log file too would help to understand what is the\ncause of the failure...\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 30 May 2024 18:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "The xversion-upgrade test fails to stop server" }, { "msg_contents": "\nSent from my iPhone\n\n> On May 30, 2024, at 8:00 AM, Alexander Lakhin <[email protected]> wrote:\n> \n> Hello Andrew,\n> \n> While reviewing recent buildfarm failures, I came across this one:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-05-23%2004%3A11%3A03\n> \n> upgrade.crake/REL_16_STABLE/REL9_5_STABLE-ctl4.log\n> waiting for server to shut down........................................................................................................................... 
failed\n> pg_ctl: server does not shut down\n> \n> Looking at:\n> https://github.com/PGBuildFarm/client-code/blob/05014d50e/PGBuild/Modules/TestUpgradeXversion.pm#L641\n> \n> I see that ctl4.log is created after updating extensions and\n> REL9_5_STABLE-update_extensions.log contains:\n> You are now connected to database \"contrib_regression_redis_fdw\" as user \"buildfarm\".\n> ALTER EXTENSION \"hstore\" UPDATE;\n> ALTER EXTENSION\n> You are now connected to database \"contrib_regression_btree_gin\" as user \"buildfarm\".\n> ALTER EXTENSION \"btree_gin\" UPDATE;\n> ALTER EXTENSION\n> ...\n> but I see no corresponding server log file containing these commands in the\n> failure log.\n> \n> When running the same test locally, I find these in inst/upgrade_log.\n> \n> Maybe uploading this log file too would help to understand what is the\n> cause of the failure...\n> \n\nWill investigate after I return from pgconf\n\nCheers \n\nAndrew \n\n\n\n", "msg_date": "Thu, 30 May 2024 10:28:26 -0700", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The xversion-upgrade test fails to stop server" }, { "msg_contents": "\nOn 2024-05-30 Th 11:00, Alexander Lakhin wrote:\n> Hello Andrew,\n>\n> While reviewing recent buildfarm failures, I came across this one:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-05-23%2004%3A11%3A03 \n>\n>\n> upgrade.crake/REL_16_STABLE/REL9_5_STABLE-ctl4.log\n> waiting for server to shut \n> down........................................................................................................................... \n> failed\n> pg_ctl: server does not shut down\n>\n> Looking at:\n> https://github.com/PGBuildFarm/client-code/blob/05014d50e/PGBuild/Modules/TestUpgradeXversion.pm#L641 \n>\n>\n> I see that ctl4.log is created after updating extensions and\n> REL9_5_STABLE-update_extensions.log contains:\n> You are now connected to database \"contrib_regression_redis_fdw\" as \n> user \"buildfarm\".\n> ALTER EXTENSION \"hstore\" UPDATE;\n> ALTER EXTENSION\n> You are now connected to database \"contrib_regression_btree_gin\" as \n> user \"buildfarm\".\n> ALTER EXTENSION \"btree_gin\" UPDATE;\n> ALTER EXTENSION\n> ...\n> but I see no corresponding server log file containing these commands \n> in the\n> failure log.\n>\n> When running the same test locally, I find these in inst/upgrade_log.\n>\n> Maybe uploading this log file too would help to understand what is the\n> cause of the failure...\n>\n>\n\nYeah, I'll fix that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 2 Jun 2024 14:39:02 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The xversion-upgrade test fails to stop server" }, { "msg_contents": "02.06.2024 21:39, Andrew Dunstan wrote:\n>\n>> Maybe uploading this log file too would help to understand what is the\n>> cause of the failure...\n>>\n>>\n>\n> Yeah, I'll fix that.\n\nThank you, Andrew!\n\nCould you also take a look at:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-04-21%2014%3A09%3A53\n\nThis log contains:\ntest sto_using_select             ... 
FAILED    27556 ms\n\nbut I can't see ../snapshot_too_old/output_iso/regression.diff and\n.../snapshot_too_old/output_iso/log/postmaster.log in the log.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 2 Jun 2024 23:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The xversion-upgrade test fails to stop server" }, { "msg_contents": "\nOn 2024-06-02 Su 16:00, Alexander Lakhin wrote:\n> 02.06.2024 21:39, Andrew Dunstan wrote:\n>>\n>>> Maybe uploading this log file too would help to understand what is the\n>>> cause of the failure...\n>>>\n>>>\n>>\n>> Yeah, I'll fix that.\n>\n> Thank you, Andrew!\n>\n> Could you also take a look at:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-04-21%2014%3A09%3A53 \n>\n>\n> This log contains:\n> test sto_using_select             ... FAILED    27556 ms\n>\n> but I can't see ../snapshot_too_old/output_iso/regression.diff and\n> .../snapshot_too_old/output_iso/log/postmaster.log in the log.\n>\n>\n\nwill do.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 4 Jun 2024 07:56:03 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The xversion-upgrade test fails to stop server" }, { "msg_contents": "Hello,\n\n30.05.2024 18:00, Alexander Lakhin wrote:\n> While reviewing recent buildfarm failures, I came across this one:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-05-23%2004%3A11%3A03\n>\n> upgrade.crake/REL_16_STABLE/REL9_5_STABLE-ctl4.log\n> waiting for server to shut \n> down........................................................................................................................... \n> failed\n> pg_ctl: server does not shut down\n>\n\nI've grepped through logs of the last 167\nxversion-upgrade-REL9_5_STABLE-REL_16_STABLE/*ctl4.log on crake and got\nthe following results:\nwaiting for server to shut down........ done\nwaiting for server to shut down............................... done\nwaiting for server to shut down.............. done\nwaiting for server to shut down........ done\nwaiting for server to shut down....................... done\nwaiting for server to shut down..... done\nwaiting for server to shut down...... done\nwaiting for server to shut down............................... done\nwaiting for server to shut down.... done\nwaiting for server to shut down....................................... done\nwaiting for server to shut down....................... done\nwaiting for server to shut down............................................ done\nwaiting for server to shut down..... done\nwaiting for server to shut down....................... done\nwaiting for server to shut down.... done\nwaiting for server to shut down..... done\nwaiting for server to shut down................... done\nwaiting for server to shut down...................................... done\nwaiting for server to shut down........ done\nwaiting for server to shut down................ done\nwaiting for server to shut down............ done\nwaiting for server to shut down........... done\nwaiting for server to shut down........ done\nwaiting for server to shut down.... done\nwaiting for server to shut down.................................................. done\nwaiting for server to shut down........................................................... done\nwaiting for server to shut down..... done\nwaiting for server to shut down.............. 
done\nwaiting for server to shut down......... done\nwaiting for server to shut down........................................................... done\nwaiting for server to shut down................ done\nwaiting for server to shut down.................................... done\nwaiting for server to shut down........ done\nwaiting for server to shut down............ done\nwaiting for server to shut down................................................................... done\nwaiting for server to shut down..... done\nwaiting for server to shut down.............................................. done\nwaiting for server to shut down............. done\nwaiting for server to shut down............. done\nwaiting for server to shut down..... done\nwaiting for server to shut down.............................................. done\nwaiting for server to shut down...... done\nwaiting for server to shut down....... done\nwaiting for server to shut down....... done\nwaiting for server to shut down.......................... done\nwaiting for server to shut down............ done\nwaiting for server to shut down.................... done\nwaiting for server to shut down..... done\nwaiting for server to shut down........ done\nwaiting for server to shut down....... done\nwaiting for server to shut down....................... done\nwaiting for server to shut down........... done\nwaiting for server to shut down.............. done\nwaiting for server to shut down.................... done\nwaiting for server to shut down............. done\nwaiting for server to shut down...................................... done\nwaiting for server to shut down............................................... done\nwaiting for server to shut down........................................ done\nwaiting for server to shut down.......... done\nwaiting for server to shut down......................................... done\nwaiting for server to shut down......................................... done\nwaiting for server to shut down........ done\nwaiting for server to shut down..... done\nwaiting for server to shut down................................................................... done\nwaiting for server to shut down............................. done\nwaiting for server to shut down.......... done\nwaiting for server to shut down..... done\nwaiting for server to shut down......................... done\nwaiting for server to shut down...... done\nwaiting for server to shut down..... done\nwaiting for server to shut down......... done\nwaiting for server to shut down.......................................................... done\nwaiting for server to shut down............. done\nwaiting for server to shut down..... done\nwaiting for server to shut down........ done\nwaiting for server to shut down........................................... done\nwaiting for server to shut down.... done\nwaiting for server to shut down..... done\nwaiting for server to shut down...... done\nwaiting for server to shut down............. done\nwaiting for server to shut down.......................................................... done\nwaiting for server to shut down........................... done\nwaiting for server to shut down..... done\nwaiting for server to shut down....... done\nwaiting for server to shut down...... done\nwaiting for server to shut down......................................... done\nwaiting for server to shut down.......................... done\nwaiting for server to shut down.......................... 
done\nwaiting for server to shut down................................... done\nwaiting for server to shut down..... done\nwaiting for server to shut down.................. done\nwaiting for server to shut down....................... done\nwaiting for server to shut down................. done\nwaiting for server to shut down........................................... done\nwaiting for server to shut down........ done\nwaiting for server to shut down...................................................................... done\nwaiting for server to shut down............. done\nwaiting for server to shut down.................................................................... done\nwaiting for server to shut down................... done\nwaiting for server to shut down.............. done\nwaiting for server to shut down..................... done\nwaiting for server to shut \ndown.................................................................................................... done\nwaiting for server to shut down..... done\nwaiting for server to shut down....................... done\nwaiting for server to shut down................................ done\nwaiting for server to shut down....... done\nwaiting for server to shut down....... done\nwaiting for server to shut down.............. done\nwaiting for server to shut down................................................ done\nwaiting for server to shut down...... done\nwaiting for server to shut down..................... done\nwaiting for server to shut down...... done\nwaiting for server to shut down........ done\nwaiting for server to shut down..... done\nwaiting for server to shut down.... done\nwaiting for server to shut down......... done\nwaiting for server to shut down...... done\nwaiting for server to shut down........... done\nwaiting for server to shut down...................... done\nwaiting for server to shut down............ done\nwaiting for server to shut down................. done\nwaiting for server to shut down.... done\nwaiting for server to shut down................................... done\nwaiting for server to shut down.............................. done\nwaiting for server to shut down....................................................................................... done\nwaiting for server to shut \ndown................................................................................................... done\nwaiting for server to shut down...... done\nwaiting for server to shut down.... done\nwaiting for server to shut down........... done\nwaiting for server to shut down..... done\nwaiting for server to shut down......... done\nwaiting for server to shut down.................... done\nwaiting for server to shut down..... done\nwaiting for server to shut down....................... done\nwaiting for server to shut down................. done\nwaiting for server to shut down........................................ done\nwaiting for server to shut down...... done\nwaiting for server to shut down.... done\nwaiting for server to shut down.................................... done\nwaiting for server to shut down.............................. done\nwaiting for server to shut down.................................. done\nwaiting for server to shut down.............................................................. done\nwaiting for server to shut down.................................................. done\nwaiting for server to shut down............................................................. 
done\nwaiting for server to shut down............ done\nwaiting for server to shut down............ done\nwaiting for server to shut down...................................... done\nwaiting for server to shut down.......................................... done\nwaiting for server to shut down................................. done\nwaiting for server to shut down................... done\nwaiting for server to shut \ndown........................................................................................................................... \nfailed\nwaiting for server to shut down................................................. done\nwaiting for server to shut down............ done\nwaiting for server to shut down................................................................... done\nwaiting for server to shut down......... done\nwaiting for server to shut down......... done\nwaiting for server to shut down.................... done\nwaiting for server to shut down........ done\nwaiting for server to shut down............................... done\nwaiting for server to shut down......... done\nwaiting for server to shut down......................................................... done\nwaiting for server to shut down................................................... done\nwaiting for server to shut down............ done\nwaiting for server to shut down...... done\nwaiting for server to shut down........................................... done\nwaiting for server to shut \ndown.......................................................................................... done\nwaiting for server to shut down............................................................................... done\n\nThus, pg_ctl stopped waiting after 120 seconds timeout, but we can see\n\"allowed\" duration around 100 seconds.\n\nA similar failure have occurred today:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&amp;dt=2024-06-08%2001%3A41%3A41\nwaiting for server to shut \ndown........................................................................................................................... \nfailed\npg_ctl: server does not shut down\n\nand the last StopDb-C:4 runs on opaleye show the following timings:\nwaiting for server to shut down............. done\nwaiting for server to shut down......................... done\nwaiting for server to shut down......................... done\nwaiting for server to shut down.......................... done\nwaiting for server to shut down.............. done\nwaiting for server to shut down...................... done\nwaiting for server to shut down................................. done\nwaiting for server to shut down........................... done\nwaiting for server to shut down............ done\nwaiting for server to shut down................ done\nwaiting for server to shut down.................. done\nwaiting for server to shut down............................... done\nwaiting for server to shut down............................ done\nwaiting for server to shut down.................. done\nwaiting for server to shut down..................... done\nwaiting for server to shut down........................ done\nwaiting for server to shut down.......................... done\nwaiting for server to shut down......................................... done\nwaiting for server to shut down.................... done\nwaiting for server to shut down..................................... 
done\nwaiting for server to shut down............................................. done\nwaiting for server to shut down........................... done\nwaiting for server to shut down................... done\nwaiting for server to shut down............. done\nwaiting for server to shut down........................ done\nwaiting for server to shut down......................... done\nwaiting for server to shut down................... done\nwaiting for server to shut down................. done\nwaiting for server to shut down..................... done\nwaiting for server to shut down......................... done\nwaiting for server to shut down............................... done\nwaiting for server to shut down.................................................. done\nwaiting for server to shut down............... done\nwaiting for server to shut down................. done\nwaiting for server to shut down..................... done\nwaiting for server to shut down................ done\nwaiting for server to shut down.............. done\nwaiting for server to shut down................ done\nwaiting for server to shut down.......................... done\nwaiting for server to shut down.................. done\nwaiting for server to shut down.................... done\nwaiting for server to shut down................ done\nwaiting for server to shut down....................... done\nwaiting for server to shut down................ done\nwaiting for server to shut down...................... done\nwaiting for server to shut down............... done\nwaiting for server to shut down.............. done\nwaiting for server to shut down........................... done\nwaiting for server to shut down............. done\nwaiting for server to shut down..................... done\nwaiting for server to shut down................. done\nwaiting for server to shut down........................... done\nwaiting for server to shut down..................... done\nwaiting for server to shut down................ done\nwaiting for server to shut down........................................... done\nwaiting for server to shut down....................... done\nwaiting for server to shut down.............................................................................. done\nwaiting for server to shut \ndown........................................................................................................................... \nfailed\n\nSo maybe it would make sense to increase default PGCTLTIMEOUT for\nbuildfarm clients, say, to 180 seconds?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 8 Jun 2024 17:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The xversion-upgrade test fails to stop server" }, { "msg_contents": "\nOn 2024-06-08 Sa 10:00, Alexander Lakhin wrote:\n> Hello,\n>\n> 30.05.2024 18:00, Alexander Lakhin wrote:\n>> While reviewing recent buildfarm failures, I came across this one:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-05-23%2004%3A11%3A03 \n>>\n>>\n>> upgrade.crake/REL_16_STABLE/REL9_5_STABLE-ctl4.log\n>> waiting for server to shut \n>> down........................................................................................................................... \n>> failed\n>> pg_ctl: server does not shut down\n>>\n>\n> I've grepped through logs of the last 167\n> xversion-upgrade-REL9_5_STABLE-REL_16_STABLE/*ctl4.log on crake and got\n> the following results:\n> waiting for server to shut down........ 
done\n>\n[...]\n>\n> So maybe it would make sense to increase default PGCTLTIMEOUT for\n> buildfarm clients, say, to 180 seconds?\n\n\nSure. For now I have added it to the config on crake, but we can make it \na default.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 8 Jun 2024 16:35:24 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The xversion-upgrade test fails to stop server" }, { "msg_contents": "Hello Andrew,\n\n04.06.2024 14:56, Andrew Dunstan wrote:\n>\n>>\n>> but I can't see ../snapshot_too_old/output_iso/regression.diff and\n>> .../snapshot_too_old/output_iso/log/postmaster.log in the log.\n>>\n>>\n>\n> will do.\n>\n\nI've discovered a couple of other failures where the interesting log files\nare not saved.\n\n[1] has only inst/logfile saved and that's not enough for TAP tests in\nsrc/test/modules.\n\nI've emulated the failure (not trying to guess the real cause) with:\n--- a/src/test/modules/test_custom_rmgrs/Makefile\n+++ b/src/test/modules/test_custom_rmgrs/Makefile\n@@ -14,0 +15,4 @@ TAP_TESTS = 1\n+remove:\n+    make uninstall\n+install: remove\n\nand saw mostly the same with the buildfarm client REL_17.\n\nI've tried the following addition to run_build.pl:\n@@ -2194,6 +2194,8 @@ sub make_testmodules_install_check\n         my @logs = glob(\"$pgsql/src/test/modules/*/regression.diffs\");\n         push(@logs, \"inst/logfile\");\n         $log->add_log($_) foreach (@logs);\n+       @logs = glob(\"$pgsql/src/test/modules/*/tmp_check/log/*\");\n+       $log->add_log($_) foreach (@logs);\nand it works as expected for me.\n\nThe output of another failure ([2]) contains only:\n\\342\\226\\266 1/1 + partition_prune                          3736 ms FAIL\nbut no regression.diffs\n(Fortunately, in this particular case I found \"FATAL:  incorrect checksum\nin control file\" in inst/logfile, so that can explain the failure.)\n\nProbably, run_build.pl needs something like:\n@@ -1848,7 +1848,7 @@ sub make_install_check\n                 @checklog = run_log(\"cd $pgsql/src/test/regress && $make $chktarget\");\n         }\n         my $status = $? >> 8;\n-       my @logfiles = (\"$pgsql/src/test/regress/regression.diffs\", \"inst/logfile\");\n+       my @logfiles = (\"$pgsql/src/test/regress/regression.diffs\", \n\"$pgsql/testrun/regress-running/regress/regression.diffs\", \"inst/logfile\");\n\n>> A similar failure have occurred today:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&amp;dt=2024-06-08%2001%3A41%3A41\n>>\n>> So maybe it would make sense to increase default PGCTLTIMEOUT for\n>> buildfarm clients, say, to 180 seconds?\n>\n>\n> Sure. 
For now I have added it to the config on crake, but we can make it a default.\n\nBy the way, opaleye failed once again (with 120 seconds timeout):\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-08-13%2002%3A04%3A07\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gull&dt=2024-08-06%2014%3A56%3A39\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-08-17%2001%3A12%3A50\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 18 Aug 2024 19:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The xversion-upgrade test fails to stop server" }, { "msg_contents": "\nOn 2024-08-18 Su 12:00 PM, Alexander Lakhin wrote:\n> Hello Andrew,\n>\n> 04.06.2024 14:56, Andrew Dunstan wrote:\n>>\n>>>\n>>> but I can't see ../snapshot_too_old/output_iso/regression.diff and\n>>> .../snapshot_too_old/output_iso/log/postmaster.log in the log.\n>>>\n>>>\n>>\n>> will do.\n>>\n>\n> I've discovered a couple of other failures where the interesting log \n> files\n> are not saved.\n>\n> [1] has only inst/logfile saved and that's not enough for TAP tests in\n> src/test/modules.\n>\n> I've emulated the failure (not trying to guess the real cause) with:\n> --- a/src/test/modules/test_custom_rmgrs/Makefile\n> +++ b/src/test/modules/test_custom_rmgrs/Makefile\n> @@ -14,0 +15,4 @@ TAP_TESTS = 1\n> +remove:\n> +    make uninstall\n> +install: remove\n>\n> and saw mostly the same with the buildfarm client REL_17.\n>\n> I've tried the following addition to run_build.pl:\n> @@ -2194,6 +2194,8 @@ sub make_testmodules_install_check\n>         my @logs = glob(\"$pgsql/src/test/modules/*/regression.diffs\");\n>         push(@logs, \"inst/logfile\");\n>         $log->add_log($_) foreach (@logs);\n> +       @logs = glob(\"$pgsql/src/test/modules/*/tmp_check/log/*\");\n> +       $log->add_log($_) foreach (@logs);\n> and it works as expected for me.\n>\n> The output of another failure ([2]) contains only:\n> \\342\\226\\266 1/1 + partition_prune                          3736 ms FAIL\n> but no regression.diffs\n> (Fortunately, in this particular case I found \"FATAL:  incorrect checksum\n> in control file\" in inst/logfile, so that can explain the failure.)\n>\n> Probably, run_build.pl needs something like:\n> @@ -1848,7 +1848,7 @@ sub make_install_check\n>                 @checklog = run_log(\"cd $pgsql/src/test/regress && \n> $make $chktarget\");\n>         }\n>         my $status = $? >> 8;\n> -       my @logfiles = (\"$pgsql/src/test/regress/regression.diffs\", \n> \"inst/logfile\");\n> +       my @logfiles = (\"$pgsql/src/test/regress/regression.diffs\", \n> \"$pgsql/testrun/regress-running/regress/regression.diffs\", \n> \"inst/logfile\");\n\n\nThanks, I have added these tweaks.\n\n\n\n>\n>>> A similar failure have occurred today:\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&amp;dt=2024-06-08%2001%3A41%3A41 \n>>>\n>>>\n>>> So maybe it would make sense to increase default PGCTLTIMEOUT for\n>>> buildfarm clients, say, to 180 seconds?\n>>\n>>\n>> Sure. 
For now I have added it to the config on crake, but we can make \n>> it a default.\n>\n> By the way, opaleye failed once again (with 120 seconds timeout):\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=opaleye&dt=2024-08-13%2002%3A04%3A07 \n>\n>\n> [1] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gull&dt=2024-08-06%2014%3A56%3A39\n> [2] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-08-17%2001%3A12%3A50\n>\n\n\nYeah. In the next release the default will increase to 180.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 20 Aug 2024 08:02:54 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The xversion-upgrade test fails to stop server" } ]
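For reference, pg_ctl takes its default wait limit from the PGCTLTIMEOUT
environment variable, so an individual slow animal can be given extra headroom
even before the buildfarm client default rises to 180 seconds. A minimal
sketch, assuming the usual build-farm.conf layout where build_env entries are
exported into the run environment (build_env is an assumption here; the thread
only says the timeout was added to crake's config):

    build_env => {
        # assumes the client exports build_env into the environment seen by pg_ctl
        # allow up to three minutes for the server to stop or start
        PGCTLTIMEOUT => '180',
    },

Exporting PGCTLTIMEOUT=180 in the shell that launches the run has the same
effect for a one-off check.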
[ { "msg_contents": "Hello, everyone!\n\nI think we don't have enough information to analyze vacuum functionality.\n\nNeedless to say that the vacuum is the most important process for a \ndatabase system. It prevents problems like table and index bloating and \nemergency freezing if we have a wraparound problem. Furthermore, it \nkeeps the visibility map up to date. On the other hand, because of \nincorrectly adjusted aggressive settings of autovacuum it can consume a \nlot of computing resources that lead to all queries to the system \nrunning longer.\n\nNowadays the vacuum gathers statistical information about tables, but it \nis important not for optimizer only.\n\nBecause the vacuum is an automation process, there are a lot of settings \nthat determine their aggressive functionality to other objects of the \ndatabase. Besides, sometimes it is important to set a correct parameter \nfor the specified table, because of its dynamic changes.\n\nAn administrator of a database needs to set the settings of autovacuum \nto have a balance between the vacuum's useful action in the database \nsystem on the one hand, and the overhead of its workload on the other. \nHowever, it is not enough for him to decide on vacuum functionality \nthrough statistical information about the number of vacuum passes \nthrough tables and operational data from progress_vacuum, because it is \navailable only during vacuum operation and does not provide a strategic \noverview over the considered period.\n\nTo sum up, an automation vacuum has a strategic behavior because the \nfrequency of its functionality and resource consumption depends on the \nworkload of the database. Its workload on the database is minimal for an \nappend-only table and it is a maximum for the table with a \nhigh-frequency updating. Furthermore, there is a high dependence of the \nvacuum load on the number and volume of indexes. Because of the absence \nof the visibility map for indexes, the vacuum scans the index \ncompletely, and the worst situation when it needs to do it during a \nbloating index situation in a small table.\n\n\nI suggest gathering information about vacuum resource consumption for \nprocessing indexes and tables and storing it in the table and index \nrelationships (for example, PgStat_StatTabEntry structure like it has \nrealized for usual statistics). It will allow us to determine how well \nthe vacuum is configured and evaluate the effect of overhead on the \nsystem at the strategic level, the vacuum has gathered this information \nalready, but this valuable information doesn't store it.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 30 May 2024 10:33:51 -0700", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum statistics" }, { "msg_contents": "On 30.05.2024 10:33, Alena Rybakina wrote:\n>\n> I suggest gathering information about vacuum resource consumption for \n> processing indexes and tables and storing it in the table and index \n> relationships (for example, PgStat_StatTabEntry structure like it has \n> realized for usual statistics). 
It will allow us to determine how well \n> the vacuum is configured and evaluate the effect of overhead on the \n> system at the strategic level, the vacuum has gathered this \n> information already, but this valuable information doesn't store it.\n>\nMy colleagues and I have prepared a patch that can help to solve this \nproblem.\n\nWe are open to feedback.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 30 May 2024 11:26:38 -0700", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi,\n\nTh, 30/05/2024 at 10:33 -0700, Alena Rybakina wrote:\n> I suggest gathering information about vacuum resource consumption for\n> processing indexes and tables and storing it in the table and index \n> relationships (for example, PgStat_StatTabEntry structure like it has\n> realized for usual statistics). It will allow us to determine how\n> well \n> the vacuum is configured and evaluate the effect of overhead on the \n> system at the strategic level, the vacuum has gathered this\n> information \n> already, but this valuable information doesn't store it.\n> \nIt seems a little bit unclear to me, so let me explain a little the\npoint of a proposition.\n\nAs the vacuum process is a backend it has a workload instrumentation.\nWe have all the basic counters available such as a number of blocks\nread, hit and written, time spent on I/O, WAL stats and so on.. Also,\nwe can easily get some statistics specific to vacuum activity i.e.\nnumber of tuples removed, number of blocks removed, number of VM marks\nset and, of course the most important metric - time spent on vacuum\noperation.\n\nAll those statistics must be stored by the Cumulative Statistics System\non per-relation basis. I mean individual cumulative counters for every\ntable and every index in the database.\n\nSuch counters will provide us a clear view about vacuum workload on\nindividual objects of the database, providing means to measure the\nefficiency of performed vacuum fine tuning.\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 30 May 2024 22:19:26 +0300", "msg_from": "Andrei Zubkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On Thu, May 30, 2024 at 11:57 PM Alena Rybakina\n<[email protected]> wrote:\n>\n> On 30.05.2024 10:33, Alena Rybakina wrote:\n> >\n> > I suggest gathering information about vacuum resource consumption for\n> > processing indexes and tables and storing it in the table and index\n> > relationships (for example, PgStat_StatTabEntry structure like it has\n> > realized for usual statistics). 
It will allow us to determine how well\n> > the vacuum is configured and evaluate the effect of overhead on the\n> > system at the strategic level, the vacuum has gathered this\n> > information already, but this valuable information doesn't store it.\n> >\n> My colleagues and I have prepared a patch that can help to solve this\n> problem.\n>\n> We are open to feedback.\n\nI was reading through the patch here are some initial comments.\n\n--\n+typedef struct LVExtStatCounters\n+{\n+ TimestampTz time;\n+ PGRUsage ru;\n+ WalUsage walusage;\n+ BufferUsage bufusage;\n+ int64 VacuumPageMiss;\n+ int64 VacuumPageHit;\n+ int64 VacuumPageDirty;\n+ double VacuumDelayTime;\n+ PgStat_Counter blocks_fetched;\n+ PgStat_Counter blocks_hit;\n+} LVExtStatCounters;\n\n\nI noticed that you are storing both pgBufferUsage and\nVacuumPage(Hit/Miss/Dirty) stats. Aren't these essentially the same?\nIt seems they both exist in the system because some code, like\nheap_vacuum_rel(), uses pgBufferUsage, while do_analyze_rel() still\nrelies on the old counters. And there is already a patch to remove\nthose old counters.\n\n\n--\n+static Datum\n+pg_stats_vacuum(FunctionCallInfo fcinfo, ExtVacReportType type, int ncolumns)\n+{\n\nI don't think you need this last parameter (ncolumns) we can anyway\nfetch that from tupledesc, so adding an additional parameter\njust for checking doesn't look good to me.\n\n--\n+ /* Tricky turn here: enforce pgstat to think that our database us dbid */\n+\n+ MyDatabaseId = dbid;\n\ntypo\n/think that our database us dbid/think that our database has dbid\n\nAlso, remove the blank line between the comment and the next code\nblock that is related to that comment.\n\n\n--\n VacuumPageDirty = 0;\n+ VacuumDelayTime = 0.;\n\nThere is an extra \".\" after 0\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 12:16:30 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi! Thank you for your interest in this topic!\n\nOn 07.06.2024 09:46, Dilip Kumar wrote:\n> On Thu, May 30, 2024 at 11:57 PM Alena Rybakina\n> <[email protected]> wrote:\n>> On 30.05.2024 10:33, Alena Rybakina wrote:\n>>> I suggest gathering information about vacuum resource consumption for\n>>> processing indexes and tables and storing it in the table and index\n>>> relationships (for example, PgStat_StatTabEntry structure like it has\n>>> realized for usual statistics). It will allow us to determine how well\n>>> the vacuum is configured and evaluate the effect of overhead on the\n>>> system at the strategic level, the vacuum has gathered this\n>>> information already, but this valuable information doesn't store it.\n>>>\n>> My colleagues and I have prepared a patch that can help to solve this\n>> problem.\n>>\n>> We are open to feedback.\n> I was reading through the patch here are some initial comments.\n>\n> --\n> +typedef struct LVExtStatCounters\n> +{\n> + TimestampTz time;\n> + PGRUsage ru;\n> + WalUsage walusage;\n> + BufferUsage bufusage;\n> + int64 VacuumPageMiss;\n> + int64 VacuumPageHit;\n> + int64 VacuumPageDirty;\n> + double VacuumDelayTime;\n> + PgStat_Counter blocks_fetched;\n> + PgStat_Counter blocks_hit;\n> +} LVExtStatCounters;\n>\n>\n> I noticed that you are storing both pgBufferUsage and\n> VacuumPage(Hit/Miss/Dirty) stats. 
Aren't these essentially the same?\n> It seems they both exist in the system because some code, like\n> heap_vacuum_rel(), uses pgBufferUsage, while do_analyze_rel() still\n> relies on the old counters. And there is already a patch to remove\n> those old counters.\nI agree with you and I have fixed it.\n>\n> --\n> +static Datum\n> +pg_stats_vacuum(FunctionCallInfo fcinfo, ExtVacReportType type, int ncolumns)\n> +{\n>\n> I don't think you need this last parameter (ncolumns) we can anyway\n> fetch that from tupledesc, so adding an additional parameter\n> just for checking doesn't look good to me.\nTo be honest,I'm notsureifncolumns shouldbe deletedat \nall,becausethepg_stats_vacuum \nfunctionisusedtodisplaythreedifferenttypesof \nstatistics:fortables,indexes, \nanddatabases.Weusethisparametertopassinformationaboutthe numberof \nparameters(orhowmany statisticsweexpect)dependingonthe typeof \nstatistics.For example,table \nvacuumstatisticscontain27parameters,whileindexesanddatabasescontain19and15parameters, \nrespectively.Youcanseethatthe pg_stats_vacuum functioncontainsan \nAssertthatchecksthatthe expectednumberof tupledesc \nparametersmatchestheactualnumber.\n\nAssert(tupdesc->natts == ncolumns);\n\n\nPerhapsIcanconvertitto alocalparameteranddetermineitsvaluealreadyinthe \nfunction,for example:\n\npg_stats_vacuum(FunctionCallInfo fcinfo, ExtVacReportType type, int \nncolumns)\n{\n\nint columns = 0;\n\nswitch (type)\n\n{\n\ncase PGSTAT_EXTVAC_HEAP:\n\nncolumns = EXTVACHEAPSTAT_COLUMNS;\n\nbreak;\n\ncase PGSTAT_EXTVAC_INDEX:\n\nncolumns = EXTVACINDEXSTAT_COLUMNS;\n\nbreak;\n\ncase PGSTAT_EXTVAC_DB:\n\nncolumns = EXTVACDBSTAT_COLUMNS;\n\nbreak;\n\n}\n\n...\n\n}\n\nWhat do you think?\n\n> --\n> + /* Tricky turn here: enforce pgstat to think that our database us dbid */\n> +\n> + MyDatabaseId = dbid;\n>\n> typo\n> /think that our database us dbid/think that our database has dbid\n>\n> Also, remove the blank line between the comment and the next code\n> block that is related to that comment.\n>\n>\n> --\n> VacuumPageDirty = 0;\n> + VacuumDelayTime = 0.;\n>\n> There is an extra \".\" after 0\n>\n>\nThank you, I fixed it.\n\n\nIn additionto thesechanges,Ifixedthe \nproblemwithdisplayingvacuumstatisticsfordatabases:Ifoundan \nerrorindefiningthe pg_stats_vacuum_database systemview.In \naddition,Irewrotethe testsandconvertedthemintoa regressiontest.In \naddition,Ihave dividedthe testtotestthe functionalityof the outputof \nvacuumstatisticsintotwotests:oneofthemchecksthe functionalityof \ntablesanddatabases,andthe other-indexes.Thisis causedby aproblemwiththe \nvacuumfunctionalitywhenthe tablecontainsan \nindex.Youcanfindmoreinformationaboutthishere:[0]and[1].\n\nI attached the diff to this letter.\n\n[0] \nhttps://www.postgresql.org/message-id/d1ca3a1d-7ead-41a7-bfd0-5b66ad97b1cd%40yandex.ru\n\n[1] \nhttps://www.postgresql.org/message-id/CAH2-Wznv94Q_Td8OS8bAN7fYLpfU6CGgjn6Xau5eJ_sDxEGeBA%40mail.gmail.com\n\n\nIam currentlyworkingondividingthispatchintothreepartstosimplifythe \nreviewprocess:oneofthemwillcontaincodeforcollectingvacuumstatisticsontables,the \nsecondonindexesandthe lastondatabases.I alsowritethe documentation.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sat, 8 Jun 2024 09:30:47 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi!\n\nOn 11.06.2024 16:09, Alena Rybakina wrote:\n> On 08.06.2024 09:30, Alena Rybakina 
wrote:\n>>\n>> Iam currentlyworkingondividingthispatchintothreepartstosimplifythe \n>> reviewprocess:oneofthemwillcontaincodeforcollectingvacuumstatisticsontables,the \n>> secondonindexesandthe lastondatabases.\n>>\nI have divided the patch into three: the first patch containscodeforthe \nfunctionalityof collecting and storage for tables, the second one for \nindexes and the last one for databases.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\nHi!\n\nOn 11.06.2024 16:09, Alena Rybakina\n wrote:\n\n\n\nOn 08.06.2024 09:30, Alena Rybakina\n wrote:\n\nI am currently working on dividing this patch into three parts to simplify the review process: one of them will contain code for collecting vacuum statistics on tables, the second on indexes and the last on databases. \n\n\n I have divided the patch into three:\n the first patch contains code for the functionality of collecting and storage for\n tables, the second one for indexes and the last one for databases.\n -- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 12 Jun 2024 09:37:35 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi!\n\nOn 11.06.2024 16:09, Alena Rybakina wrote:\n> On 08.06.2024 09:30, Alena Rybakina wrote:\n>>\n>> Iam currentlyworkingondividingthispatchintothreepartstosimplifythe \n>> reviewprocess:oneofthemwillcontaincodeforcollectingvacuumstatisticsontables,the \n>> secondonindexesandthe lastondatabases.\n>>\nI have divided the patch into three: the first patch containscodeforthe \nfunctionalityof collecting and storage for tables, the second one for \nindexes and the last one for databases.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 12 Jun 2024 09:38:30 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "I have written the documentary and attached the patch.\n\nOn 08.06.2024 09:30, Alena Rybakina wrote:\n>\n> Iam currentlyworkingondividingthispatchintothreepartstosimplifythe \n> reviewprocess:oneofthemwillcontaincodeforcollectingvacuumstatisticsontables,the \n> secondonindexesandthe lastondatabases.I alsowritethe documentation.\n>\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 17 Jun 2024 13:09:51 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On Fri, May 31, 2024 at 4:19 AM Andrei Zubkov <[email protected]> wrote:\n>\n> Hi,\n>\n> Th, 30/05/2024 at 10:33 -0700, Alena Rybakina wrote:\n> > I suggest gathering information about vacuum resource consumption for\n> > processing indexes and tables and storing it in the table and index\n> > relationships (for example, PgStat_StatTabEntry structure like it has\n> > realized for usual statistics). 
It will allow us to determine how\n> > well\n> > the vacuum is configured and evaluate the effect of overhead on the\n> > system at the strategic level, the vacuum has gathered this\n> > information\n> > already, but this valuable information doesn't store it.\n> >\n> It seems a little bit unclear to me, so let me explain a little the\n> point of a proposition.\n>\n> As the vacuum process is a backend it has a workload instrumentation.\n> We have all the basic counters available such as a number of blocks\n> read, hit and written, time spent on I/O, WAL stats and so on.. Also,\n> we can easily get some statistics specific to vacuum activity i.e.\n> number of tuples removed, number of blocks removed, number of VM marks\n> set and, of course the most important metric - time spent on vacuum\n> operation.\n\nI've not reviewed the patch closely but it sounds helpful for users. I\nwould like to add a statistic, the high-water mark of memory usage of\ndead tuple TIDs. Since the amount of memory used by TidStore is hard\nto predict, I think showing the high-water mark would help users to\npredict how much memory they set to maintenance_work_mem.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 27 Jun 2024 10:39:45 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hello!\n\nOn Thu, 27/06/2024 at 10:39 +0900, Masahiko Sawada:\n> On Fri, May 31, 2024 at 4:19 AM Andrei Zubkov <[email protected]>\n> wrote:\n> > As the vacuum process is a backend it has a workload\n> > instrumentation.\n> > We have all the basic counters available such as a number of blocks\n> > read, hit and written, time spent on I/O, WAL stats and so on..\n> > Also,\n> > we can easily get some statistics specific to vacuum activity i.e.\n> > number of tuples removed, number of blocks removed, number of VM\n> > marks\n> > set and, of course the most important metric - time spent on vacuum\n> > operation.\n> \n> I've not reviewed the patch closely but it sounds helpful for users.\n> I\n> would like to add a statistic, the high-water mark of memory usage of\n> dead tuple TIDs. Since the amount of memory used by TidStore is hard\n> to predict, I think showing the high-water mark would help users to\n> predict how much memory they set to maintenance_work_mem.\n> \nThank you for your interest on this patch. I've understand your idea.\nThe obvious goal of it is to avoid expensive index multi processing\nduring vacuum of the heap. Provided statistics in the patch contain the\nindex_vacuum_count counter for each table which can be compared to the\npg_stat_all_tables.vacuum_count to detect specific relation index\nmulti-passes. Previous setting of maintenance_work_mem is known. Usage\nof TidStore should be proportional to the amount of dead-tuples vacuum\nworkload on the table, so as the first evaluation we can take the\nnumber of index passes per one heap pass as a maintenance_work_mem\nmultiplier.\n\nBut there is a better way. Once we detected the index multiprocessing\nwe can lower the vacuum workload for the heap pass making vacuum a\nlittle bit more aggressive for this particular relation. I mean, in\nsuch case increasing maintenance_work_mem is not only decision.\n\nSuggested high-water mark statistic can't be used as cumulative\nstatistic - any high-water mark statistic as maximim-like statistic is\nvalid for certain time period thus should be reset on some kind of\nschedule. 
Without resets it should reach 100% once under the heavy load\nand stay there forever.\n\nSaid that such high-water mark seems a little bit unclear and\ncomplicated for the DBA. It seems redundant to me right now. I can see\nthe main value of such statistic is to avoid too large\nmaintenance_work_mem setting. But I can't see really dramatic\nconsequences of that. Maybe I've miss something..\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 27 Jun 2024 11:52:01 +0300", "msg_from": "Andrei Zubkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hello, everyone!\n\n\nThank you for your interesting patch with extended information \nstatistics about autovacuum.\n\nDo you consider not to create new table in pg_catalog but to save \nstatistics in existing table? I mean pg_class or \npg_stat_progress_analyze, pg_stat_progress_vacuum?\n\nP.S. If I sent this mail twice, I'm sorry :)\n\n\nRegards\n\nIlia Evdokimov,\n\nTantor Labs.\n\n\n", "msg_date": "Sat, 10 Aug 2024 21:14:27 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "\nHi, Ilia!\n\n> Do you consider not to create new table in pg_catalog but to save \n> statistics in existing table? I mean pg_class or \n> pg_stat_progress_analyze, pg_stat_progress_vacuum?\n> \nThank you for your interest on our patch!\n\n*_progress views is not our case. They hold online statistics while\nvacuum is in progress. Once work is done on a table the entry is gone\nfrom those views. Idea of this patch is the opposite - it doesn't\nprovide online statistics but it accumulates statistics about rosources\nconsumed by all vacuum passes over all relations. It's much closer to\nthe pg_stat_all_tables than pg_stat_progress_vacuum.\n\nIt seems pg_class is not the right place because it is not a statistic\nview - it holds the current relation state and haven't anything about\nthe relation workload.\n\nMaybe the pg_stat_all_tables is the right place but I have several\nthoughts about why it is not:\n- Some statistics provided by this patch is really vacuum specific. I\ndon't think we want them in the relation statistics view.\n- Postgres is extreamly extensible. 
I'm sure someday there will be\ntable AMs that does not need the vacuum at all.\n\nRight now vacuum specific workload views seems optimal choice to me.\n\nRegards,\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sat, 10 Aug 2024 22:37:25 +0300", "msg_from": "Andrei Zubkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On Wed, 12 Jun 2024 at 11:38, Alena Rybakina <[email protected]> wrote:\n>\n> Hi!\n>\n> On 11.06.2024 16:09, Alena Rybakina wrote:\n>\n> On 08.06.2024 09:30, Alena Rybakina wrote:\n>\n> I am currently working on dividing this patch into three parts to simplify the review process: one of them will contain code for collecting vacuum statistics on tables, the second on indexes and the last on databases.\n>\n> I have divided the patch into three: the first patch contains code for the functionality of collecting and storage for tables, the second one for indexes and the last one for databases.\n>\n> --\n> Regards,\n> Alena Rybakina\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\nHi!\nFew suggestions on this patch-set\n\n1)\n> +{ oid => '4701',\n> + descr => 'pg_stats_vacuum_tables return stats values',\n> + proname => 'pg_stats_vacuum_tables', provolatile => 's', prorettype => 'record',proisstrict => 'f',\n> + proretset => 't',\n> + proargtypes => 'oid oid',\n> + proallargtypes =>\n\nDuring development, OIDs should be picked up from range 8000-9999.\nSame for pg_stats_vacuum_database & pg_stats_vacuum_indexes\n\nAlso, why are these function naming schemes like\npg_stats_vacuum_*something*, not pg_stat_vacuum_*something*, like\npg_stat_replication etc?\n\n2) In 0003:\n> + proargnames => '{dboid,dboid,db_blks_read,db_blks_hit,total_blks_dirtied,total_blks_written,wal_records,wal_fpi,wal_bytes,blk_read_time,blk_write_time,delay_time,system_time,user_time,total_time,interrupts}',\n\nRepeated dboid arg name is strange. Is it done this way to make\npg_stats_vacuum function call in more unified fashion? I don't see any\nother place within postgresql core with similar approach, so I doubt\nit is correct.\n\n3) 0001 patch vacuum_tables_statistics test creates\nstatistic_vacuum_database1, but does not use it. 0003 do.\nAlso I'm not sure if these additional checks on the second database\nadds much value. Can you justify this please?\n\nOther places look more or less fine to me.\nHowever, I'll maybe post some additional nit-picky comments on the\nnext patch version.\n\n-- \nBest regards,\nKirill Reshke\n\n\n", "msg_date": "Sun, 11 Aug 2024 01:57:35 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi! Thank you for inquiring about our topic!\n\nOn 10.08.2024 23:57, Kirill Reshke wrote:\n> Hi!\n> Few suggestions on this patch-set\n>\n> 1)\n>> +{ oid => '4701',\n>> + descr => 'pg_stats_vacuum_tables return stats values',\n>> + proname => 'pg_stats_vacuum_tables', provolatile => 's', prorettype => 'record',proisstrict => 'f',\n>> + proretset => 't',\n>> + proargtypes => 'oid oid',\n>> + proallargtypes =>\n> During development, OIDs should be picked up from range 8000-9999.\n> Same for pg_stats_vacuum_database & pg_stats_vacuum_indexes\n>\n> Also, why are these function naming schemes like\n> pg_stats_vacuum_*something*, not pg_stat_vacuum_*something*, like\n> pg_stat_replication etc?\nTo be honest, when I named it, I missed this aspect. 
I thought about the \nplural vacuum statistics we show, so I named them. I fixed it.\n> 2) In 0003:\n>> + proargnames => '{dboid,dboid,db_blks_read,db_blks_hit,total_blks_dirtied,total_blks_written,wal_records,wal_fpi,wal_bytes,blk_read_time,blk_write_time,delay_time,system_time,user_time,total_time,interrupts}',\n> Repeated dboid arg name is strange. Is it done this way to make\n> pg_stats_vacuum function call in more unified fashion? I don't see any\n> other place within postgresql core with similar approach, so I doubt\n> it is correct.\nBoth parameters are required for input and output. We are trying to find \nstatistics for a specific database if the database oid was specified by \nthe user or display statistics for all objects, but we need to display \nwhich database these statistics are for. I corrected the name of the \nfirst parameter.\n> 3) 0001 patch vacuum_tables_statistics test creates\n> statistic_vacuum_database1, but does not use it. 0003 do.\n> Also I'm not sure if these additional checks on the second database\n> adds much value. Can you justify this please?\n\nThe statistic_vacuum_database1 needs us to check the visible of \nstatistics from another database (statistic_vacuum_database) as they are \nafter the manipulation with tables in another database, and after \ndeleting the vestat table . In the latter case, we need to be sure that \nall the table statistics are not visible to us.\n\nSo, I agree that it should be added only in the latest version of the \npatch, where we add vacuum statistics for databases. I fixed it.\n\n> Other places look more or less fine to me.\n> However, I'll maybe post some additional nit-picky comments on the\n> next patch version.\nWe are glad any feedback and review, so feel free to do it)\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 11 Aug 2024 16:58:54 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On 10.8.24 22:37, Andrei Zubkov wrote:\n\n> Hi, Ilia!\n>\n>> Do you consider not to create new table in pg_catalog but to save\n>> statistics in existing table? I mean pg_class or\n>> pg_stat_progress_analyze, pg_stat_progress_vacuum?\n>>\n> Thank you for your interest on our patch!\n>\n> *_progress views is not our case. They hold online statistics while\n> vacuum is in progress. Once work is done on a table the entry is gone\n> from those views. Idea of this patch is the opposite - it doesn't\n> provide online statistics but it accumulates statistics about rosources\n> consumed by all vacuum passes over all relations. It's much closer to\n> the pg_stat_all_tables than pg_stat_progress_vacuum.\n>\n> It seems pg_class is not the right place because it is not a statistic\n> view - it holds the current relation state and haven't anything about\n> the relation workload.\n>\n> Maybe the pg_stat_all_tables is the right place but I have several\n> thoughts about why it is not:\n> - Some statistics provided by this patch is really vacuum specific. I\n> don't think we want them in the relation statistics view.\n> - Postgres is extreamly extensible. I'm sure someday there will be\n> table AMs that does not need the vacuum at all.\n>\n> Right now vacuum specific workload views seems optimal choice to me.\n>\n> Regards,\n\n\nAgreed. They are not god places to store such statistics.\n\n\nI have some suggestions:\n\n 1. 
pgstatfuncs.c in functions tuplestore_put_for_database() and\n tuplestore_put_for_relation you can remove 'nulls' array if you're\n sure that columns cannot be NULL.\n 2. These functions are almost the same and I would think of writing one\n function depending of type 'ExtVacReportType'\n\n\n\n\n\n\nOn 10.8.24 22:37, Andrei Zubkov wrote:\n\n\nHi, Ilia!\n\n\n\nDo you consider not to create new table in pg_catalog but to save \nstatistics in existing table? I mean pg_class or \npg_stat_progress_analyze, pg_stat_progress_vacuum?\n\n\n\nThank you for your interest on our patch!\n\n*_progress views is not our case. They hold online statistics while\nvacuum is in progress. Once work is done on a table the entry is gone\nfrom those views. Idea of this patch is the opposite - it doesn't\nprovide online statistics but it accumulates statistics about rosources\nconsumed by all vacuum passes over all relations. It's much closer to\nthe pg_stat_all_tables than pg_stat_progress_vacuum.\n\nIt seems pg_class is not the right place because it is not a statistic\nview - it holds the current relation state and haven't anything about\nthe relation workload.\n\nMaybe the pg_stat_all_tables is the right place but I have several\nthoughts about why it is not:\n- Some statistics provided by this patch is really vacuum specific. I\ndon't think we want them in the relation statistics view.\n- Postgres is extreamly extensible. I'm sure someday there will be\ntable AMs that does not need the vacuum at all.\n\nRight now vacuum specific workload views seems optimal choice to me.\n\nRegards,\n\n\n\n\nAgreed. They are not god places to store such statistics.\n\n\nI have some suggestions:\n\npgstatfuncs.c in functions tuplestore_put_for_database() and\n tuplestore_put_for_relation you can remove 'nulls' array if\n you're sure that columns cannot be NULL. \n\nThese functions are almost the same and I would think of\n writing one function depending of type 'ExtVacReportType'", "msg_date": "Tue, 13 Aug 2024 16:18:48 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "And I have one suggestion for pg_stat_vacuum_database: I suppose we \nshould add database's name column after 'dboid' column because it is \ndifficult to read statistics without database's name. We could call it \n'datname' just like in 'pg_stat_database' view.\n\nRegards,\n\nIlia Evdokimov,\nTantor Labs LCC.\n\n\n\n", "msg_date": "Tue, 13 Aug 2024 16:37:41 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi!\n\nOn 13.08.2024 16:18, Ilia Evdokimov wrote:\n>\n> On 10.8.24 22:37, Andrei Zubkov wrote:\n>\n>> Hi, Ilia!\n>>\n>>> Do you consider not to create new table in pg_catalog but to save\n>>> statistics in existing table? I mean pg_class or\n>>> pg_stat_progress_analyze, pg_stat_progress_vacuum?\n>>>\n>> Thank you for your interest on our patch!\n>>\n>> *_progress views is not our case. They hold online statistics while\n>> vacuum is in progress. Once work is done on a table the entry is gone\n>> from those views. Idea of this patch is the opposite - it doesn't\n>> provide online statistics but it accumulates statistics about rosources\n>> consumed by all vacuum passes over all relations. 
It's much closer to\n>> the pg_stat_all_tables than pg_stat_progress_vacuum.\n>>\n>> It seems pg_class is not the right place because it is not a statistic\n>> view - it holds the current relation state and haven't anything about\n>> the relation workload.\n>>\n>> Maybe the pg_stat_all_tables is the right place but I have several\n>> thoughts about why it is not:\n>> - Some statistics provided by this patch is really vacuum specific. I\n>> don't think we want them in the relation statistics view.\n>> - Postgres is extreamly extensible. I'm sure someday there will be\n>> table AMs that does not need the vacuum at all.\n>>\n>> Right now vacuum specific workload views seems optimal choice to me.\n>>\n>> Regards,\n>\n>\n> Agreed. They are not god places to store such statistics.\n>\n>\n> I have some suggestions:\n>\n> 1. pgstatfuncs.c in functions tuplestore_put_for_database() and\n> tuplestore_put_for_relation you can remove 'nulls' array if you're\n> sure that columns cannot be NULL.\n>\nWe need to use this for tuplestore_putvalues function. With this \nfunction, we fill the table with the values of the statistics.\n>\n> 1.\n>\n>\n> 2. These functions are almost the same and I would think of writing\n> one function depending of type 'ExtVacReportType'\n>\nI'm not sure that I fully understand what you mean. Can you explain it \nmore clearly, please?\n\nOn 13.08.2024 16:37, Ilia Evdokimov wrote:\n> And I have one suggestion for pg_stat_vacuum_database: I suppose we \n> should add database's name column after 'dboid' column because it is \n> difficult to read statistics without database's name. We could call it \n> 'datname' just like in 'pg_stat_database' view.\n>\nThank you. Fixed.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 15 Aug 2024 11:49:36 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On 15.8.24 11:49, Alena Rybakina wrote:\n>>\n>> I have some suggestions:\n>>\n>> 1. pgstatfuncs.c in functions tuplestore_put_for_database() and\n>> tuplestore_put_for_relation you can remove 'nulls' array if\n>> you're sure that columns cannot be NULL.\n>>\n> We need to use this for tuplestore_putvalues function. With this \n> function, we fill the table with the values of the statistics.\n\nAh, right! I'm sorry.\n\n\n>> 1.\n>>\n>>\n>>\n>> 2. These functions are almost the same and I would think of writing\n>> one function depending of type 'ExtVacReportType'\n>>\n> I'm not sure that I fully understand what you mean. Can you explain it \n> more clearly, please?\n\n\nAh, I didn't notice that the size of all three tables is different. \nTherefore, it won't be possible to write one function instead of two to \navoid code duplication. My mistake.\n\n\n\n\n\n\n\n\n\n\nOn 15.8.24 11:49, Alena Rybakina wrote:\n\n\nI have some suggestions:\n\npgstatfuncs.c in functions tuplestore_put_for_database()\n and tuplestore_put_for_relation you can remove 'nulls' array\n if you're sure that columns cannot be NULL. \n\n\n\n We need to use this for tuplestore_putvalues function. With this\n function, we fill the table with the values of the statistics.\n\nAh, right! I'm sorry.\n\n\n\n\n\n\n \n\nThese functions are almost the same and I would think of\n writing one function depending of type 'ExtVacReportType'\n\n\n I'm not sure that I fully understand what you mean. 
Can you\n explain it more clearly, please?\n\n\n\nAh, I didn't notice that the size of all three tables is\n different. Therefore, it won't be possible to write one function\n instead of two to avoid code duplication. My mistake.", "msg_date": "Thu, 15 Aug 2024 12:50:58 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On Thu, Aug 15, 2024 at 4:49 PM Alena Rybakina\n<[email protected]> wrote:\n>\n> Hi!\n\n\nI've applied all the v5 patches.\n0002 and 0003 have white space errors.\n\n+ <para>\n+ Number of times blocks of this index were already found\n+ in the buffer cache by vacuum operations, so that a read was\nnot necessary\n+ (this only includes hits in the\n+ &project; buffer cache, not the operating system's file system cache)\n+ </para></entry>\n\n+ Number of times blocks of this table were already found\n+ in the buffer cache by vacuum operations, so that a read was\nnot necessary\n+ (this only includes hits in the\n+ &project; buffer cache, not the operating system's file system cache)\n+ </para></entry>\n\n\"&project;\"\nrepresents a sgml file placeholder name as \"project\" and puts all the\ncontent of \"project.sgml\" to system-views.sgml.\nbut you don't have \"project.sgml\". you may check\ndoc/src/sgml/filelist.sgml or doc/src/sgml/ref/allfiles.sgml\nfor usage of \"&place_holder;\".\nso you can change it to \"project\", otherwise doc cannot build.\n\n\nsrc/backend/commands/dbcommands.c\nwe have:\n /*\n * If built with appropriate switch, whine when regression-testing\n * conventions for database names are violated. But don't complain during\n * initdb.\n */\n#ifdef ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n if (IsUnderPostmaster && strstr(dbname, \"regression\") == NULL)\n elog(WARNING, \"databases created by regression test cases\nshould have names including \\\"regression\\\"\");\n#endif\nso in src/test/regress/sql/vacuum_tables_and_db_statistics.sql you\nneed to change dbname:\nCREATE DATABASE statistic_vacuum_database;\nCREATE DATABASE statistic_vacuum_database1;\n\n\n+ <para>\n+ The view <structname>pg_stat_vacuum_indexes</structname> will contain\n+ one row for each index in the current database (including TOAST\n+ table indexes), showing statistics about vacuuming that specific index.\n+ </para>\nTOAST should\n<acronym>TOAST</acronym>\n\n\n\n+ /* Build a tuple descriptor for our result type */\n+ if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)\n+ elog(ERROR, \"return type must be a row type\");\nmaybe change to\n ereport(ERROR,\n (errcode(ERRCODE_DATATYPE_MISMATCH),\n errmsg(\"return type must be a row type\")));\nLater I found out \"InitMaterializedSRF(fcinfo, 0);\" already did all\nthe work. 
much of the code can be gotten rid of.\nplease check attached.\n\n\n>>>>\n#define EXTVACHEAPSTAT_COLUMNS 27\n#define EXTVACIDXSTAT_COLUMNS 19\n#define EXTVACDBSTAT_COLUMNS 15\n#define EXTVACSTAT_COLUMNS Max(EXTVACHEAPSTAT_COLUMNS, EXTVACIDXSTAT_COLUMNS)\n\nstatic Oid CurrentDatabaseId = InvalidOid;\n>>>>\nwe already defined MyDatabaseId in src/include/miscadmin.h,\nWhy do we need \"static Oid CurrentDatabaseId = InvalidOid;\"?\nalso src/backend/utils/adt/pgstatfuncs.c already included \"miscadmin.h\".\n\n\n\n\nthe following code one function has 2 return statements?\n------------------------------------------------------------------------\n/*\n * Get the vacuum statistics for the heap tables.\n */\nDatum\npg_stat_vacuum_tables(PG_FUNCTION_ARGS)\n{\n return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_HEAP, EXTVACHEAPSTAT_COLUMNS);\n\n PG_RETURN_NULL();\n}\n\n/*\n * Get the vacuum statistics for the indexes.\n */\nDatum\npg_stat_vacuum_indexes(PG_FUNCTION_ARGS)\n{\n return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_INDEX, EXTVACIDXSTAT_COLUMNS);\n\n PG_RETURN_NULL();\n}\n\n/*\n * Get the vacuum statistics for the database.\n */\nDatum\npg_stat_vacuum_database(PG_FUNCTION_ARGS)\n{\n return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_DB, EXTVACDBSTAT_COLUMNS);\n\n PG_RETURN_NULL();\n}\n------------------------------------------------------------------------\nin pg_stats_vacuum:\n if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n {\n Oid relid = PG_GETARG_OID(1);\n\n /* Load table statistics for specified database. */\n if (OidIsValid(relid))\n {\n tabentry = fetch_dbstat_tabentry(dbid, relid);\n if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n /* Table don't exists or isn't an heap relation. */\n PG_RETURN_NULL();\n\n tuplestore_put_for_relation(relid, tupstore, tupdesc,\ntabentry, ncolumns);\n }\n else\n {\n ...\n }\n}\nI don't understand the ELSE branch. the IF branch means the input\ndboid, heap/index oid is correct.\nthe ELSE branch means table reloid is invalid = 0.\nI am not sure 100% what the ELSE Branch means.\nAlso there are no comments explaining why.\nexperiments seem to show that when reloid is 0, it will print out all\nthe vacuum statistics\nfor all the tables in the current database. If so, then only super\nusers can call pg_stats_vacuum?\nbut the table owner should be able to call pg_stats_vacuum for that\nspecific table.\n\n\n\n\n/* Type of ExtVacReport */\ntypedef enum ExtVacReportType\n{\n PGSTAT_EXTVAC_INVALID = 0,\n PGSTAT_EXTVAC_HEAP = 1,\n PGSTAT_EXTVAC_INDEX = 2,\n PGSTAT_EXTVAC_DB = 3,\n} ExtVacReportType;\ngenerally \"HEAP\" means table and index, maybe \"PGSTAT_EXTVAC_HEAP\" would be term", "msg_date": "Fri, 16 Aug 2024 19:12:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "in pg_stats_vacuum\n if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n {\n Oid relid = PG_GETARG_OID(1);\n\n /* Load table statistics for specified database. */\n if (OidIsValid(relid))\n {\n tabentry = fetch_dbstat_tabentry(dbid, relid);\n if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n /* Table don't exists or isn't an heap relation. 
*/\n PG_RETURN_NULL();\n\n tuplestore_put_for_relation(relid, rsinfo, tabentry);\n }\n else\n {\n }\n\n\nSo for functions pg_stat_vacuum_indexes and pg_stat_vacuum_tables,\nit seems you didn't check \"relid\" 's relkind,\nyou may need to use get_rel_relkind.\n\n\n", "msg_date": "Mon, 19 Aug 2024 17:32:48 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Are you certain that all tables are included in `pg_stat_vacuum_tables`? \nI'm asking because of the following:\n\n\nSELECT count(*) FROM pg_stat_all_tables ;\n  count\n-------\n    108\n(1 row)\n\nSELECT count(*) FROM pg_stat_vacuum_tables ;\n  count\n-------\n     20\n(1 row)\n\n-- \nRegards,\nIlia Evdokimov,\nTantor Labs LCC.\n\n\n\n", "msg_date": "Mon, 19 Aug 2024 19:28:00 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi! Thank you very much for your review! Sorry for my late response I \nwas overwhelmed by tasks.\n\nOn 16.08.2024 14:12, jian he wrote:\n> On Thu, Aug 15, 2024 at 4:49 PM Alena Rybakina\n> <[email protected]> wrote:\n>> Hi!\n>\n> I've applied all the v5 patches.\n> 0002 and 0003 have white space errors.\n>\n> + <para>\n> + Number of times blocks of this index were already found\n> + in the buffer cache by vacuum operations, so that a read was\n> not necessary\n> + (this only includes hits in the\n> + &project; buffer cache, not the operating system's file system cache)\n> + </para></entry>\n>\n> + Number of times blocks of this table were already found\n> + in the buffer cache by vacuum operations, so that a read was\n> not necessary\n> + (this only includes hits in the\n> + &project; buffer cache, not the operating system's file system cache)\n> + </para></entry>\n>\n> \"&project;\"\n> represents a sgml file placeholder name as \"project\" and puts all the\n> content of \"project.sgml\" to system-views.sgml.\n> but you don't have \"project.sgml\". you may check\n> doc/src/sgml/filelist.sgml or doc/src/sgml/ref/allfiles.sgml\n> for usage of \"&place_holder;\".\n> so you can change it to \"project\", otherwise doc cannot build.\n>\n>\n> src/backend/commands/dbcommands.c\n> we have:\n> /*\n> * If built with appropriate switch, whine when regression-testing\n> * conventions for database names are violated. 
But don't complain during\n> * initdb.\n> */\n> #ifdef ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n> if (IsUnderPostmaster && strstr(dbname, \"regression\") == NULL)\n> elog(WARNING, \"databases created by regression test cases\n> should have names including \\\"regression\\\"\");\n> #endif\n> so in src/test/regress/sql/vacuum_tables_and_db_statistics.sql you\n> need to change dbname:\n> CREATE DATABASE statistic_vacuum_database;\n> CREATE DATABASE statistic_vacuum_database1;\n>\n>\n> + <para>\n> + The view <structname>pg_stat_vacuum_indexes</structname> will contain\n> + one row for each index in the current database (including TOAST\n> + table indexes), showing statistics about vacuuming that specific index.\n> + </para>\n> TOAST should\n> <acronym>TOAST</acronym>\n>\n>\n>\n> + /* Build a tuple descriptor for our result type */\n> + if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)\n> + elog(ERROR, \"return type must be a row type\");\n> maybe change to\n> ereport(ERROR,\n> (errcode(ERRCODE_DATATYPE_MISMATCH),\n> errmsg(\"return type must be a row type\")));\n> Later I found out \"InitMaterializedSRF(fcinfo, 0);\" already did all\n> the work. much of the code can be gotten rid of.\n> please check attached.\nI agree with your suggestions for improving the code. I will add this in \nthe next version of the patch.\n>\n> #define EXTVACHEAPSTAT_COLUMNS 27\n> #define EXTVACIDXSTAT_COLUMNS 19\n> #define EXTVACDBSTAT_COLUMNS 15\n> #define EXTVACSTAT_COLUMNS Max(EXTVACHEAPSTAT_COLUMNS, EXTVACIDXSTAT_COLUMNS)\n>\n> static Oid CurrentDatabaseId = InvalidOid;\n> we already defined MyDatabaseId in src/include/miscadmin.h,\n> Why do we need \"static Oid CurrentDatabaseId = InvalidOid;\"?\n> also src/backend/utils/adt/pgstatfuncs.c already included \"miscadmin.h\".\nHmm, Tom Lane added \"misc admin.h\", or I didn't notice something. Could \nyou point this out, please?\n\nWe used the Current Database Id to output statistics on tables from \nanother database, so we need to replace it with a different default \nvalue. But I want to rewrite this patch to display table statistics only \nfor the current database, that is, this part will be removed in the \nfuture. In my opinion, it would be more correct.\n> the following code one function has 2 return statements?\n> ------------------------------------------------------------------------\n> /*\n> * Get the vacuum statistics for the heap tables.\n> */\n> Datum\n> pg_stat_vacuum_tables(PG_FUNCTION_ARGS)\n> {\n> return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_HEAP, EXTVACHEAPSTAT_COLUMNS);\n>\n> PG_RETURN_NULL();\n> }\n>\n> /*\n> * Get the vacuum statistics for the indexes.\n> */\n> Datum\n> pg_stat_vacuum_indexes(PG_FUNCTION_ARGS)\n> {\n> return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_INDEX, EXTVACIDXSTAT_COLUMNS);\n>\n> PG_RETURN_NULL();\n> }\n>\n> /*\n> * Get the vacuum statistics for the database.\n> */\n> Datum\n> pg_stat_vacuum_database(PG_FUNCTION_ARGS)\n> {\n> return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_DB, EXTVACDBSTAT_COLUMNS);\n>\n> PG_RETURN_NULL();\n> }\nYou are right - the second return is superfluous. I'll fix it.\n> ------------------------------------------------------------------------\n> in pg_stats_vacuum:\n> if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n> {\n> Oid relid = PG_GETARG_OID(1);\n>\n> /* Load table statistics for specified database. 
*/\n> if (OidIsValid(relid))\n> {\n> tabentry = fetch_dbstat_tabentry(dbid, relid);\n> if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n> /* Table don't exists or isn't an heap relation. */\n> PG_RETURN_NULL();\n>\n> tuplestore_put_for_relation(relid, tupstore, tupdesc,\n> tabentry, ncolumns);\n> }\n> else\n> {\n> ...\n> }\n> }\n> I don't understand the ELSE branch. the IF branch means the input\n> dboid, heap/index oid is correct.\n> the ELSE branch means table reloid is invalid = 0.\n> I am not sure 100% what the ELSE Branch means.\n> Also there are no comments explaining why.\n> experiments seem to show that when reloid is 0, it will print out all\n> the vacuum statistics\n> for all the tables in the current database. If so, then only super\n> users can call pg_stats_vacuum?\n> but the table owner should be able to call pg_stats_vacuum for that\n> specific table.\nIf any reloid has not been set by the user, we output statistics for all \nobjects - tables or indexes.In this part of the code, we find all the \nsuitable objects from the snapshot, if they belong to the index or table \ntype of objects.\n> /* Type of ExtVacReport */\n> typedef enum ExtVacReportType\n> {\n> PGSTAT_EXTVAC_INVALID = 0,\n> PGSTAT_EXTVAC_HEAP = 1,\n> PGSTAT_EXTVAC_INDEX = 2,\n> PGSTAT_EXTVAC_DB = 3,\n> } ExtVacReportType;\n> generally \"HEAP\" means table and index, maybe \"PGSTAT_EXTVAC_HEAP\" would be term\n\nNo, Heap means something like a table in a relationship database, or its \nalternative name is Heap.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\nHi! Thank you very much for your review! Sorry for my late\n response I was overwhelmed by tasks.\n\nOn 16.08.2024 14:12, jian he wrote:\n\n\nOn Thu, Aug 15, 2024 at 4:49 PM Alena Rybakina\n<[email protected]> wrote:\n\n\n\nHi!\n\n\n\n\nI've applied all the v5 patches.\n0002 and 0003 have white space errors.\n\n+ <para>\n+ Number of times blocks of this index were already found\n+ in the buffer cache by vacuum operations, so that a read was\nnot necessary\n+ (this only includes hits in the\n+ &project; buffer cache, not the operating system's file system cache)\n+ </para></entry>\n\n+ Number of times blocks of this table were already found\n+ in the buffer cache by vacuum operations, so that a read was\nnot necessary\n+ (this only includes hits in the\n+ &project; buffer cache, not the operating system's file system cache)\n+ </para></entry>\n\n\"&project;\"\nrepresents a sgml file placeholder name as \"project\" and puts all the\ncontent of \"project.sgml\" to system-views.sgml.\nbut you don't have \"project.sgml\". you may check\ndoc/src/sgml/filelist.sgml or doc/src/sgml/ref/allfiles.sgml\nfor usage of \"&place_holder;\".\nso you can change it to \"project\", otherwise doc cannot build.\n\n\nsrc/backend/commands/dbcommands.c\nwe have:\n /*\n * If built with appropriate switch, whine when regression-testing\n * conventions for database names are violated. 
But don't complain during\n * initdb.\n */\n#ifdef ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n if (IsUnderPostmaster && strstr(dbname, \"regression\") == NULL)\n elog(WARNING, \"databases created by regression test cases\nshould have names including \\\"regression\\\"\");\n#endif\nso in src/test/regress/sql/vacuum_tables_and_db_statistics.sql you\nneed to change dbname:\nCREATE DATABASE statistic_vacuum_database;\nCREATE DATABASE statistic_vacuum_database1;\n\n\n+ <para>\n+ The view <structname>pg_stat_vacuum_indexes</structname> will contain\n+ one row for each index in the current database (including TOAST\n+ table indexes), showing statistics about vacuuming that specific index.\n+ </para>\nTOAST should\n<acronym>TOAST</acronym>\n\n\n\n+ /* Build a tuple descriptor for our result type */\n+ if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)\n+ elog(ERROR, \"return type must be a row type\");\nmaybe change to\n ereport(ERROR,\n (errcode(ERRCODE_DATATYPE_MISMATCH),\n errmsg(\"return type must be a row type\")));\nLater I found out \"InitMaterializedSRF(fcinfo, 0);\" already did all\nthe work. much of the code can be gotten rid of.\nplease check attached.\n\n\n I agree with your suggestions for improving the code. I will add\n this in the next version of the patch.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n#define EXTVACHEAPSTAT_COLUMNS 27\n#define EXTVACIDXSTAT_COLUMNS 19\n#define EXTVACDBSTAT_COLUMNS 15\n#define EXTVACSTAT_COLUMNS Max(EXTVACHEAPSTAT_COLUMNS, EXTVACIDXSTAT_COLUMNS)\n\nstatic Oid CurrentDatabaseId = InvalidOid;\n\n\n\n\n\n\n\n\n\n\n\nwe already defined MyDatabaseId in src/include/miscadmin.h,\nWhy do we need \"static Oid CurrentDatabaseId = InvalidOid;\"?\nalso src/backend/utils/adt/pgstatfuncs.c already included \"miscadmin.h\".\n\n Hmm, Tom Lane added \"misc admin.h\", or I didn't notice something.\n Could you point this out, please?\n\n We used the Current Database Id to output statistics on tables from\n another database, so we need to replace it with a different default\n value. But I want to rewrite this patch to display table statistics\n only for the current database, that is, this part will be removed in\n the future. In my opinion, it would be more correct.\n \nthe following code one function has 2 return statements?\n\n\n\n------------------------------------------------------------------------\n/*\n * Get the vacuum statistics for the heap tables.\n */\nDatum\npg_stat_vacuum_tables(PG_FUNCTION_ARGS)\n{\n return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_HEAP, EXTVACHEAPSTAT_COLUMNS);\n\n PG_RETURN_NULL();\n}\n\n/*\n * Get the vacuum statistics for the indexes.\n */\nDatum\npg_stat_vacuum_indexes(PG_FUNCTION_ARGS)\n{\n return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_INDEX, EXTVACIDXSTAT_COLUMNS);\n\n PG_RETURN_NULL();\n}\n\n/*\n * Get the vacuum statistics for the database.\n */\nDatum\npg_stat_vacuum_database(PG_FUNCTION_ARGS)\n{\n return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_DB, EXTVACDBSTAT_COLUMNS);\n\n PG_RETURN_NULL();\n}\n\n You are right - the second return is superfluous. I'll fix it.\n \n\n------------------------------------------------------------------------\nin pg_stats_vacuum:\n if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n {\n Oid relid = PG_GETARG_OID(1);\n\n /* Load table statistics for specified database. */\n if (OidIsValid(relid))\n {\n tabentry = fetch_dbstat_tabentry(dbid, relid);\n if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n /* Table don't exists or isn't an heap relation. 
*/\n PG_RETURN_NULL();\n\n tuplestore_put_for_relation(relid, tupstore, tupdesc,\ntabentry, ncolumns);\n }\n else\n {\n ...\n }\n}\nI don't understand the ELSE branch. the IF branch means the input\ndboid, heap/index oid is correct.\nthe ELSE branch means table reloid is invalid = 0.\nI am not sure 100% what the ELSE Branch means.\nAlso there are no comments explaining why.\nexperiments seem to show that when reloid is 0, it will print out all\nthe vacuum statistics\nfor all the tables in the current database. If so, then only super\nusers can call pg_stats_vacuum?\nbut the table owner should be able to call pg_stats_vacuum for that\nspecific table.\n\n\n If any reloid has not been set by the user, we output statistics for\n all objects - tables or indexes.In this part of the code, we find\n all the suitable objects from the snapshot, if they belong to the\n index or table type of objects.\n\n\n\n/* Type of ExtVacReport */\ntypedef enum ExtVacReportType\n{\n PGSTAT_EXTVAC_INVALID = 0,\n PGSTAT_EXTVAC_HEAP = 1,\n PGSTAT_EXTVAC_INDEX = 2,\n PGSTAT_EXTVAC_DB = 3,\n} ExtVacReportType;\ngenerally \"HEAP\" means table and index, maybe \"PGSTAT_EXTVAC_HEAP\" would be term\n\n\nNo, Heap means something like a table in a relationship database,\n or its alternative name is Heap.\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 21 Aug 2024 01:35:00 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "We check it there: \"tabentry->vacuum_ext.type != type\". Or were you \ntalking about something else?\n\nOn 19.08.2024 12:32, jian he wrote:\n> in pg_stats_vacuum\n> if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n> {\n> Oid relid = PG_GETARG_OID(1);\n>\n> /* Load table statistics for specified database. */\n> if (OidIsValid(relid))\n> {\n> tabentry = fetch_dbstat_tabentry(dbid, relid);\n> if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n> /* Table don't exists or isn't an heap relation. */\n> PG_RETURN_NULL();\n>\n> tuplestore_put_for_relation(relid, rsinfo, tabentry);\n> }\n> else\n> {\n> }\n>\n>\n> So for functions pg_stat_vacuum_indexes and pg_stat_vacuum_tables,\n> it seems you didn't check \"relid\" 's relkind,\n> you may need to use get_rel_relkind.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\nWe check it there: \"tabentry->vacuum_ext.type != type\". Or were you talking about something else?\n\nOn 19.08.2024 12:32, jian he wrote:\n\n\nin pg_stats_vacuum\n if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n {\n Oid relid = PG_GETARG_OID(1);\n\n /* Load table statistics for specified database. */\n if (OidIsValid(relid))\n {\n tabentry = fetch_dbstat_tabentry(dbid, relid);\n if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n /* Table don't exists or isn't an heap relation. 
*/\n PG_RETURN_NULL();\n\n tuplestore_put_for_relation(relid, rsinfo, tabentry);\n }\n else\n {\n }\n\n\nSo for functions pg_stat_vacuum_indexes and pg_stat_vacuum_tables,\nit seems you didn't check \"relid\" 's relkind,\nyou may need to use get_rel_relkind.\n\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 21 Aug 2024 01:37:16 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "I think you've counted the above system tables from the database, but \nI'll double-check it. Thank you for your review!\n\nOn 19.08.2024 19:28, Ilia Evdokimov wrote:\n> Are you certain that all tables are included in \n> `pg_stat_vacuum_tables`? I'm asking because of the following:\n>\n>\n> SELECT count(*) FROM pg_stat_all_tables ;\n>  count\n> -------\n>    108\n> (1 row)\n>\n> SELECT count(*) FROM pg_stat_vacuum_tables ;\n>  count\n> -------\n>     20\n> (1 row)\n>\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Wed, 21 Aug 2024 01:39:15 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On Wed, Aug 21, 2024 at 6:37 AM Alena Rybakina\n<[email protected]> wrote:\n>\n> We check it there: \"tabentry->vacuum_ext.type != type\". Or were you talking about something else?\n>\n> On 19.08.2024 12:32, jian he wrote:\n>\n> in pg_stats_vacuum\n> if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n> {\n> Oid relid = PG_GETARG_OID(1);\n>\n> /* Load table statistics for specified database. */\n> if (OidIsValid(relid))\n> {\n> tabentry = fetch_dbstat_tabentry(dbid, relid);\n> if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n> /* Table don't exists or isn't an heap relation. */\n> PG_RETURN_NULL();\n>\n> tuplestore_put_for_relation(relid, rsinfo, tabentry);\n> }\n> else\n> {\n> }\n>\n>\n> So for functions pg_stat_vacuum_indexes and pg_stat_vacuum_tables,\n> it seems you didn't check \"relid\" 's relkind,\n> you may need to use get_rel_relkind.\n>\n> --\n\nhi.\nI mentioned some points at [1],\nPlease check the attached patchset to address these issues.\n\nthere are four occurrences of \"CurrentDatabaseId\", i am still confused\nwith usage of CurrentDatabaseId.\n\nalso please don't top-post, otherwise the archive, like [2] is not\neasier to read for future readers.\ngenerally you quote first, then reply.\n\n[1] https://postgr.es/m/CACJufxHb_YGCp=pVH6DZcpk9yML+SueffPeaRbX2LzXZVahd_w@mail.gmail.com\n[2] https://postgr.es/m/[email protected]", "msg_date": "Thu, 22 Aug 2024 10:47:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On Thu, 22 Aug 2024 at 07:48, jian he <[email protected]> wrote:\n>\n> On Wed, Aug 21, 2024 at 6:37 AM Alena Rybakina\n> <[email protected]> wrote:\n> >\n> > We check it there: \"tabentry->vacuum_ext.type != type\". Or were you talking about something else?\n> >\n> > On 19.08.2024 12:32, jian he wrote:\n> >\n> > in pg_stats_vacuum\n> > if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n> > {\n> > Oid relid = PG_GETARG_OID(1);\n> >\n> > /* Load table statistics for specified database. 
*/\n> > if (OidIsValid(relid))\n> > {\n> > tabentry = fetch_dbstat_tabentry(dbid, relid);\n> > if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n> > /* Table don't exists or isn't an heap relation. */\n> > PG_RETURN_NULL();\n> >\n> > tuplestore_put_for_relation(relid, rsinfo, tabentry);\n> > }\n> > else\n> > {\n> > }\n> >\n> >\n> > So for functions pg_stat_vacuum_indexes and pg_stat_vacuum_tables,\n> > it seems you didn't check \"relid\" 's relkind,\n> > you may need to use get_rel_relkind.\n> >\n> > --\n>\n> hi.\n> I mentioned some points at [1],\n> Please check the attached patchset to address these issues.\n>\n> there are four occurrences of \"CurrentDatabaseId\", i am still confused\n> with usage of CurrentDatabaseId.\n>\n> also please don't top-post, otherwise the archive, like [2] is not\n> easier to read for future readers.\n> generally you quote first, then reply.\n>\n> [1] https://postgr.es/m/CACJufxHb_YGCp=pVH6DZcpk9yML+SueffPeaRbX2LzXZVahd_w@mail.gmail.com\n> [2] https://postgr.es/m/[email protected]\n\nHi, your points are valid.\nRegarding 0003, I also wanted to object database naming in a\nregression test during my review but for some reason didn't.Now, as\nsoon as we already need to change it, I suggest we also change\nregression_statistic_vacuum_db1 to something less generic. Maybe\nregression_statistic_vacuum_db_unaffected.\n\n\n\n-- \nBest regards,\nKirill Reshke\n\n\n", "msg_date": "Thu, 22 Aug 2024 09:29:58 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On Wed, Aug 21, 2024 at 1:39 AM Alena Rybakina <[email protected]>\nwrote:\n>\n> I think you've counted the above system tables from the database, but\n> I'll double-check it. Thank you for your review!\n>\n> On 19.08.2024 19:28, Ilia Evdokimov wrote:\n> > Are you certain that all tables are included in\n> > `pg_stat_vacuum_tables`? I'm asking because of the following:\n> >\n> >\n> > SELECT count(*) FROM pg_stat_all_tables ;\n> > count\n> > -------\n> > 108\n> > (1 row)\n> >\n> > SELECT count(*) FROM pg_stat_vacuum_tables ;\n> > count\n> > -------\n> > 20\n> > (1 row)\n> >\n\nI'd like to do some review a well.\n\n+ MyDatabaseId = dbid;\n+\n+ PG_TRY();\n+ {\n+ tabentry = pgstat_fetch_stat_tabentry(relid);\n+ MyDatabaseId = storedMyDatabaseId;\n+ }\n+ PG_CATCH();\n+ {\n+ MyDatabaseId = storedMyDatabaseId;\n+ }\n+ PG_END_TRY();\n\nI think this is generally wrong to change MyDatabaseId, especially if you\nhave to wrap it with PG_TRY()/PG_CATCH(). I think, instead we need proper\nAPI changes, i.e. make pgstat_fetch_stat_tabentry() and others take dboid\nas an argument.\n\n+/*\n+ * Get the vacuum statistics for the heap tables.\n+ */\n+Datum\n+pg_stat_vacuum_tables(PG_FUNCTION_ARGS)\n+{\n+ return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_HEAP,\nEXTVACHEAPSTAT_COLUMNS);\n+\n+ PG_RETURN_NULL();\n+}\n\nThe PG_RETURN_NULL() is unneeded after another return statement. However,\ndoes pg_stats_vacuum() need to return anything? What about making its\nreturn type void?\n\n@@ -874,4 +874,38 @@ pgstat_get_custom_snapshot_data(PgStat_Kind kind)\n return pgStatLocal.snapshot.custom_data[idx];\n }\n\n+/* hash table for statistics snapshots entry */\n+typedef struct PgStat_SnapshotEntry\n+{\n+ PgStat_HashKey key;\n+ char status; /* for simplehash use */\n+ void *data; /* the stats data itself */\n+} PgStat_SnapshotEntry;\n\nIt would be nice to preserve encapsulation and don't expose pgstat_snapshot\nhash in the headers. 
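As a purely hypothetical illustration (neither this helper nor its signature exists in the patch or in core), the simplehash entry type could stay private to pgstat.c and only an iteration hook be exported:

/* hypothetical: keeps PgStat_SnapshotEntry out of the shared headers */
typedef void (*pgstat_snapshot_cb) (Oid objoid, const void *entry_data, void *arg);

extern void pgstat_snapshot_foreach(PgStat_Kind kind,
                                    pgstat_snapshot_cb callback, void *arg);

The caller would then supply a callback that emits one tuplestore row per entry instead of scanning the snapshot hash itself.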
I see there is only one usage of it outside of\npgstat.c: pg_stats_vacuum().\n\n+ Oid storedMyDatabaseId = MyDatabaseId;\n+\n+ pgstat_update_snapshot(PGSTAT_KIND_RELATION);\n+ MyDatabaseId = storedMyDatabaseId;\n\nThis manipulation with storedMyDatabaseId looks pretty useless. It seems\nto be intended to change MyDatabaseId, while I'm not fan of this as I\nmentioned above.\n\n+static PgStat_StatTabEntry *\n+fetch_dbstat_tabentry(Oid dbid, Oid relid)\n+{\n+ Oid storedMyDatabaseId = MyDatabaseId;\n+ PgStat_StatTabEntry *tabentry = NULL;\n+\n+ if (OidIsValid(CurrentDatabaseId) && CurrentDatabaseId == dbid)\n+ /* Quick path when we read data from the same database */\n+ return pgstat_fetch_stat_tabentry(relid);\n+\n+ pgstat_clear_snapshot();\n\nIt looks scary to reset the whole snapshot each time we access another\ndatabase. Need to also mention that the CurrentDatabaseId machinery isn't\nimplemented.\n\nNew functions\npg_stat_vacuum_tables(), pg_stat_vacuum_indexes(), pg_stat_vacuum_database()\nare SRFs. When zero Oid is passed they report all the objects. However,\nit seems they aren't intended to be used directly. Instead, there are\nviews with the same names. These views always call them with particular\nOids, therefore SRFs always return one row. Then why bother with SRF?\nThey could return plain records instead.\n\nAlso, as I mentioned above patchset makes a lot of trouble accessing\nstatistics of relations of another database. But that seems to be useless\ngiven corresponding views allow to see only relations of the current\ndatabase. Even if you call functions directly, what is the value of this\ninformation given that you don't know the relation oids in another\ndatabase? So, I think if we will give up and limit access to the relations\nof the current database patch will become simpler and clearer.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\nOn Wed, Aug 21, 2024 at 1:39 AM Alena Rybakina <[email protected]> wrote:>> I think you've counted the above system tables from the database, but> I'll double-check it. Thank you for your review!>> On 19.08.2024 19:28, Ilia Evdokimov wrote:> > Are you certain that all tables are included in> > `pg_stat_vacuum_tables`? I'm asking because of the following:> >> >> > SELECT count(*) FROM pg_stat_all_tables ;> >  count> > -------> >    108> > (1 row)> >> > SELECT count(*) FROM pg_stat_vacuum_tables ;> >  count> > -------> >     20> > (1 row)> >I'd like to do some review a well.+   MyDatabaseId = dbid;++   PG_TRY();+   {+       tabentry = pgstat_fetch_stat_tabentry(relid);+       MyDatabaseId = storedMyDatabaseId;+   }+   PG_CATCH();+   {+       MyDatabaseId = storedMyDatabaseId;+   }+   PG_END_TRY();I think this is generally wrong to change MyDatabaseId, especially if you have to wrap it with PG_TRY()/PG_CATCH().  I think, instead we need proper API changes, i.e. make pgstat_fetch_stat_tabentry() and others take dboid as an argument.+/*+ * Get the vacuum statistics for the heap tables.+ */+Datum+pg_stat_vacuum_tables(PG_FUNCTION_ARGS)+{+   return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_HEAP, EXTVACHEAPSTAT_COLUMNS);++   PG_RETURN_NULL();+}The PG_RETURN_NULL() is unneeded after another return statement.  However, does pg_stats_vacuum() need to return anything?  
What about making its return type void?@@ -874,4 +874,38 @@ pgstat_get_custom_snapshot_data(PgStat_Kind kind)   return pgStatLocal.snapshot.custom_data[idx]; } +/* hash table for statistics snapshots entry */+typedef struct PgStat_SnapshotEntry+{+  PgStat_HashKey key;+  char     status;        /* for simplehash use */+  void     *data;         /* the stats data itself */+} PgStat_SnapshotEntry;It would be nice to preserve encapsulation and don't expose pgstat_snapshot hash in the headers.  I see there is only one usage of it outside of pgstat.c: pg_stats_vacuum().  +        Oid                  storedMyDatabaseId = MyDatabaseId;++        pgstat_update_snapshot(PGSTAT_KIND_RELATION);+        MyDatabaseId = storedMyDatabaseId;This manipulation with storedMyDatabaseId looks pretty useless.  It seems to be intended to change MyDatabaseId, while I'm not fan of this as I mentioned above.+static PgStat_StatTabEntry *+fetch_dbstat_tabentry(Oid dbid, Oid relid)+{+  Oid                  storedMyDatabaseId = MyDatabaseId;+  PgStat_StatTabEntry  *tabentry = NULL;++  if (OidIsValid(CurrentDatabaseId) && CurrentDatabaseId == dbid)+     /* Quick path when we read data from the same database */+     return pgstat_fetch_stat_tabentry(relid);++  pgstat_clear_snapshot();It looks scary to reset the whole snapshot each time we access another database.  Need to also mention that the CurrentDatabaseId machinery isn't implemented.New functions pg_stat_vacuum_tables(), pg_stat_vacuum_indexes(), pg_stat_vacuum_database() are SRFs.  When zero Oid is passed they report all the objects.  However, it seems they aren't intended to be used directly.  Instead, there are views with the same names.  These views always call them with particular Oids, therefore SRFs always return one row.  Then why bother with SRF?  They could return plain records instead.Also, as I mentioned above patchset makes a lot of trouble accessing statistics of relations of another database.  But that seems to be useless given corresponding views allow to see only relations of the current database.  Even if you call functions directly, what is the value of this information given that you don't know the relation oids in another database?  So, I think if we will give up and limit access to the relations of the current database patch will become simpler and clearer.------Regards,Alexander KorotkovSupabase", "msg_date": "Fri, 23 Aug 2024 04:07:37 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi!\n\nOn 23.08.2024 04:07, Alexander Korotkov wrote:\n> On Wed, Aug 21, 2024 at 1:39 AM Alena Rybakina \n> <[email protected]> wrote:\n> >\n> > I think you've counted the above system tables from the database, but\n> > I'll double-check it. Thank you for your review!\n> >\n> > On 19.08.2024 19:28, Ilia Evdokimov wrote:\n> > > Are you certain that all tables are included in\n> > > `pg_stat_vacuum_tables`? 
I'm asking because of the following:\n> > >\n> > >\n> > > SELECT count(*) FROM pg_stat_all_tables ;\n> > >  count\n> > > -------\n> > >    108\n> > > (1 row)\n> > >\n> > > SELECT count(*) FROM pg_stat_vacuum_tables ;\n> > >  count\n> > > -------\n> > >     20\n> > > (1 row)\n> > >\n>\n> I'd like to do some review a well.\nThank you very much for your review and contribution to this thread!\n>\n> +   MyDatabaseId = dbid;\n> +\n> +   PG_TRY();\n> +   {\n> +       tabentry = pgstat_fetch_stat_tabentry(relid);\n> +       MyDatabaseId = storedMyDatabaseId;\n> +   }\n> +   PG_CATCH();\n> +   {\n> +       MyDatabaseId = storedMyDatabaseId;\n> +   }\n> +   PG_END_TRY();\n>\n> I think this is generally wrong to change MyDatabaseId, especially if \n> you have to wrap it with PG_TRY()/PG_CATCH().  I think, instead we \n> need proper API changes, i.e. make pgstat_fetch_stat_tabentry() and \n> others take dboid as an argument.\nI fixed it by deleting this part pf the code. We can display statistics \nonly for current database.\n>\n> +/*\n> + * Get the vacuum statistics for the heap tables.\n> + */\n> +Datum\n> +pg_stat_vacuum_tables(PG_FUNCTION_ARGS)\n> +{\n> +   return pg_stats_vacuum(fcinfo, PGSTAT_EXTVAC_HEAP, \n> EXTVACHEAPSTAT_COLUMNS);\n> +\n> +   PG_RETURN_NULL();\n> +}\n>\n> The PG_RETURN_NULL() is unneeded after another return statement.  \n> However, does pg_stats_vacuum() need to return anything?  What about \n> making its return type void?\nI think you are right, we can not return anything. Fixed.\n>\n> @@ -874,4 +874,38 @@ pgstat_get_custom_snapshot_data(PgStat_Kind kind)\n>    return pgStatLocal.snapshot.custom_data[idx];\n>  }\n>\n> +/* hash table for statistics snapshots entry */\n> +typedef struct PgStat_SnapshotEntry\n> +{\n> +  PgStat_HashKey key;\n> +  char     status;        /* for simplehash use */\n> +  void     *data;         /* the stats data itself */\n> +} PgStat_SnapshotEntry;\n>\n> It would be nice to preserve encapsulation and don't \n> expose pgstat_snapshot hash in the headers.  I see there is only one \n> usage of it outside of pgstat.c: pg_stats_vacuum().\nFixed.\n>\n> +        Oid  storedMyDatabaseId = MyDatabaseId;\n> +\n> +        pgstat_update_snapshot(PGSTAT_KIND_RELATION);\n> +        MyDatabaseId = storedMyDatabaseId;\n>\n> This manipulation with storedMyDatabaseId looks pretty useless. It \n> seems to be intended to change MyDatabaseId, while I'm not fan of this \n> as I mentioned above.\nFixed.\n>\n> +static PgStat_StatTabEntry *\n> +fetch_dbstat_tabentry(Oid dbid, Oid relid)\n> +{\n> +  Oid                  storedMyDatabaseId = MyDatabaseId;\n> +  PgStat_StatTabEntry  *tabentry = NULL;\n> +\n> +  if (OidIsValid(CurrentDatabaseId) && CurrentDatabaseId == dbid)\n> +     /* Quick path when we read data from the same database */\n> +     return pgstat_fetch_stat_tabentry(relid);\n> +\n> +  pgstat_clear_snapshot();\n>\n> It looks scary to reset the whole snapshot each time we access another \n> database.  Need to also mention that the CurrentDatabaseId machinery \n> isn't implemented.\nFixed.\n>\n> New functions \n> pg_stat_vacuum_tables(), pg_stat_vacuum_indexes(), pg_stat_vacuum_database() \n> are SRFs.  When zero Oid is passed they report all the objects.  \n> However, it seems they aren't intended to be used directly.  Instead, \n> there are views with the same names. These views always call them with \n> particular Oids, therefore SRFs always return one row.  Then why \n> bother with SRF?  
They could return plain records instead.\n\nI didn't understand correctly - did you mean that we don't need SRF if \nwe need to display statistics for a specific object?\n\nOtherwise, we need this when we display information on all database \nobjects (tables or indexes):\n\nwhile ((entry = ScanStatSnapshot(pgStatLocal.snapshot.stats, &hashiter)) \n!= NULL)\n{\n     CHECK_FOR_INTERRUPTS();\n\n     tabentry = (PgStat_StatTabEntry *) entry->data;\n\n     if (tabentry != NULL && tabentry->vacuum_ext.type == type)\n         tuplestore_put_for_relation(relid, rsinfo, tabentry);\n}\n\nI know we can construct a HeapTuple object containing a TupleDesc, \nvalues, and nulls for a particular object, but I'm not sure we can \naugment it while looping through multiple objects.\n\n/* Initialise attributes information in the tuple descriptor */\n\n  tupdesc = CreateTemplateTupleDesc(PG_STAT_GET_SUBSCRIPTION_STATS_COLS);\n\n...\n\nPG_RETURN_DATUM(HeapTupleGetDatum(heap_form_tuple(tupdesc, values, nulls)));\n\n\nIf I missed something or misunderstood, can you explain in more detail?\n\n>\n> Also, as I mentioned above patchset makes a lot of trouble accessing \n> statistics of relations of another database.  But that seems to be \n> useless given corresponding views allow to see only relations of the \n> current database.  Even if you call functions directly, what is the \n> value of this information given that you don't know the relation oids \n> in another database?  So, I think if we will give up and limit access \n> to the relations of the current database patch will become simpler and \n> clearer.\n>\nI agree with that and have fixed it already.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 25 Aug 2024 18:59:46 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On 22.08.2024 05:47, jian he wrote:\n> On Wed, Aug 21, 2024 at 6:37 AM Alena Rybakina\n> <[email protected]> wrote:\n>> We check it there: \"tabentry->vacuum_ext.type != type\". Or were you talking about something else?\n>>\n>> On 19.08.2024 12:32, jian he wrote:\n>>\n>> in pg_stats_vacuum\n>> if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n>> {\n>> Oid relid = PG_GETARG_OID(1);\n>>\n>> /* Load table statistics for specified database. */\n>> if (OidIsValid(relid))\n>> {\n>> tabentry = fetch_dbstat_tabentry(dbid, relid);\n>> if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n>> /* Table don't exists or isn't an heap relation. */\n>> PG_RETURN_NULL();\n>>\n>> tuplestore_put_for_relation(relid, rsinfo, tabentry);\n>> }\n>> else\n>> {\n>> }\n>>\n>>\n>> So for functions pg_stat_vacuum_indexes and pg_stat_vacuum_tables,\n>> it seems you didn't check \"relid\" 's relkind,\n>> you may need to use get_rel_relkind.\n>>\n>> --\n> hi.\n> I mentioned some points at [1],\n> Please check the attached patchset to address these issues.\n\nThank you for your work! I checked the patches and added your suggested \nchanges to the new version of the patch here [0]. 
In my opinion, nothing \nwas missing, but please take a look.\n\n[0] \nhttps://www.postgresql.org/message-id/c4e4e305-7119-4183-b49a-d7092f4efba3%40postgrespro.ru\n\n>\n> there are four occurrences of \"CurrentDatabaseId\", i am still confused\n> with usage of CurrentDatabaseId.\n\nIt needed to be used because of scanning objects from the other \ndatabase, so we change the id of dbid temporary to achieve it.\n\nYou should snow that every part of this code was deleted.Now we can \ncheck information about tables and indexes from the current database.\n\n> also please don't top-post, otherwise the archive, like [2] is not\n> easier to read for future readers.\n> generally you quote first, then reply.\n>\n> [1]https://postgr.es/m/CACJufxHb_YGCp=pVH6DZcpk9yML+SueffPeaRbX2LzXZVahd_w@mail.gmail.com\n> [2]https://postgr.es/m/[email protected]\nOk, no problem.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\nOn 22.08.2024 05:47, jian he wrote:\n\n\nOn Wed, Aug 21, 2024 at 6:37 AM Alena Rybakina\n<[email protected]> wrote:\n\n\n\nWe check it there: \"tabentry->vacuum_ext.type != type\". Or were you talking about something else?\n\nOn 19.08.2024 12:32, jian he wrote:\n\nin pg_stats_vacuum\n if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n {\n Oid relid = PG_GETARG_OID(1);\n\n /* Load table statistics for specified database. */\n if (OidIsValid(relid))\n {\n tabentry = fetch_dbstat_tabentry(dbid, relid);\n if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n /* Table don't exists or isn't an heap relation. */\n PG_RETURN_NULL();\n\n tuplestore_put_for_relation(relid, rsinfo, tabentry);\n }\n else\n {\n }\n\n\nSo for functions pg_stat_vacuum_indexes and pg_stat_vacuum_tables,\nit seems you didn't check \"relid\" 's relkind,\nyou may need to use get_rel_relkind.\n\n--\n\n\n\nhi.\nI mentioned some points at [1],\nPlease check the attached patchset to address these issues.\n\nThank you for your work! I checked the patches and added your\n suggested changes to the new version of the patch here [0]. In my\n opinion, nothing was missing, but please take a look.\n[0]\nhttps://www.postgresql.org/message-id/c4e4e305-7119-4183-b49a-d7092f4efba3%40postgrespro.ru\n\n\n\n\nthere are four occurrences of \"CurrentDatabaseId\", i am still confused\nwith usage of CurrentDatabaseId.\n\nIt needed to be used because of scanning objects from the other\n database, so we change the id of dbid temporary to achieve it.\n\nYou should snow that every\n part of this code was deleted. Now we can check information about tables and indexes from the current database.\n\n\n\nalso please don't top-post, otherwise the archive, like [2] is not\neasier to read for future readers.\ngenerally you quote first, then reply.\n\n[1] https://postgr.es/m/CACJufxHb_YGCp=pVH6DZcpk9yML+SueffPeaRbX2LzXZVahd_w@mail.gmail.com\n[2] https://postgr.es/m/[email protected]\n\n Ok, no problem.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 25 Aug 2024 19:06:51 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On 22.08.2024 07:29, Kirill Reshke wrote:\n> On Thu, 22 Aug 2024 at 07:48, jian he<[email protected]> wrote:\n>> On Wed, Aug 21, 2024 at 6:37 AM Alena Rybakina\n>> <[email protected]> wrote:\n>>> We check it there: \"tabentry->vacuum_ext.type != type\". 
Or were you talking about something else?\n>>>\n>>> On 19.08.2024 12:32, jian he wrote:\n>>>\n>>> in pg_stats_vacuum\n>>> if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n>>> {\n>>> Oid relid = PG_GETARG_OID(1);\n>>>\n>>> /* Load table statistics for specified database. */\n>>> if (OidIsValid(relid))\n>>> {\n>>> tabentry = fetch_dbstat_tabentry(dbid, relid);\n>>> if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n>>> /* Table don't exists or isn't an heap relation. */\n>>> PG_RETURN_NULL();\n>>>\n>>> tuplestore_put_for_relation(relid, rsinfo, tabentry);\n>>> }\n>>> else\n>>> {\n>>> }\n>>>\n>>>\n>>> So for functions pg_stat_vacuum_indexes and pg_stat_vacuum_tables,\n>>> it seems you didn't check \"relid\" 's relkind,\n>>> you may need to use get_rel_relkind.\n>>>\n>>> --\n>> hi.\n>> I mentioned some points at [1],\n>> Please check the attached patchset to address these issues.\n>>\n>> there are four occurrences of \"CurrentDatabaseId\", i am still confused\n>> with usage of CurrentDatabaseId.\n>>\n>> also please don't top-post, otherwise the archive, like [2] is not\n>> easier to read for future readers.\n>> generally you quote first, then reply.\n>>\n>> [1]https://postgr.es/m/CACJufxHb_YGCp=pVH6DZcpk9yML+SueffPeaRbX2LzXZVahd_w@mail.gmail.com\n>> [2]https://postgr.es/m/[email protected]\n> Hi, your points are valid.\n> Regarding 0003, I also wanted to object database naming in a\n> regression test during my review but for some reason didn't.Now, as\n> soon as we already need to change it, I suggest we also change\n> regression_statistic_vacuum_db1 to something less generic. Maybe\n> regression_statistic_vacuum_db_unaffected.\n>\nHi! I fixed it in the new version of the patch [0]. Feel free to review it!\n\nTo be honest, I still doubt that the current database names \n(regression_statistic_vacuum_db and regression_statistic_vacuum_db1) can \nbe easily generated, but if you insist on renaming, I will do it.\n\n[0] \nhttps://www.postgresql.org/message-id/c4e4e305-7119-4183-b49a-d7092f4efba3%40postgrespro.ru\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\nOn 22.08.2024 07:29, Kirill Reshke\n wrote:\n\n\nOn Thu, 22 Aug 2024 at 07:48, jian he <[email protected]> wrote:\n\n\n\nOn Wed, Aug 21, 2024 at 6:37 AM Alena Rybakina\n<[email protected]> wrote:\n\n\n\nWe check it there: \"tabentry->vacuum_ext.type != type\". Or were you talking about something else?\n\nOn 19.08.2024 12:32, jian he wrote:\n\nin pg_stats_vacuum\n if (type == PGSTAT_EXTVAC_INDEX || type == PGSTAT_EXTVAC_HEAP)\n {\n Oid relid = PG_GETARG_OID(1);\n\n /* Load table statistics for specified database. */\n if (OidIsValid(relid))\n {\n tabentry = fetch_dbstat_tabentry(dbid, relid);\n if (tabentry == NULL || tabentry->vacuum_ext.type != type)\n /* Table don't exists or isn't an heap relation. 
*/\n PG_RETURN_NULL();\n\n tuplestore_put_for_relation(relid, rsinfo, tabentry);\n }\n else\n {\n }\n\n\nSo for functions pg_stat_vacuum_indexes and pg_stat_vacuum_tables,\nit seems you didn't check \"relid\" 's relkind,\nyou may need to use get_rel_relkind.\n\n--\n\n\n\nhi.\nI mentioned some points at [1],\nPlease check the attached patchset to address these issues.\n\nthere are four occurrences of \"CurrentDatabaseId\", i am still confused\nwith usage of CurrentDatabaseId.\n\nalso please don't top-post, otherwise the archive, like [2] is not\neasier to read for future readers.\ngenerally you quote first, then reply.\n\n[1] https://postgr.es/m/CACJufxHb_YGCp=pVH6DZcpk9yML+SueffPeaRbX2LzXZVahd_w@mail.gmail.com\n[2] https://postgr.es/m/[email protected]\n\n\n\nHi, your points are valid.\nRegarding 0003, I also wanted to object database naming in a\nregression test during my review but for some reason didn't.Now, as\nsoon as we already need to change it, I suggest we also change\nregression_statistic_vacuum_db1 to something less generic. Maybe\nregression_statistic_vacuum_db_unaffected.\n\n\n\nHi! I fixed it in the new version of the patch [0]. Feel free to\n review it!\n\nTo be honest, I still doubt that the current database names (regression_statistic_vacuum_db and regression_statistic_vacuum_db1) can be easily\n generated, but if you insist on renaming, I will do it.\n\n[0]\nhttps://www.postgresql.org/message-id/c4e4e305-7119-4183-b49a-d7092f4efba3%40postgrespro.ru\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 25 Aug 2024 19:12:40 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Just in case, I have attached a diff file to show the changes for the \nlatest version attached here [0] to make the review process easier.\n\n[0] \nhttps://www.postgresql.org/message-id/c4e4e305-7119-4183-b49a-d7092f4efba3%40postgrespro.ru\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 26 Aug 2024 14:55:13 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi, all!\n\nI noticed that the pgstat_accumulate_extvac_stats function may be \ndeclared as static in the pgstat_relation.c file rather than in the \npgstat.h file.\n\nI fixed part of the code with interrupt counters. I believe that it is \nnot worth taking into account the number of interrupts if its level is \ngreater than ERROR, for example PANIC. 
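A rough sketch of that check — the variable and counter names here are hypothetical, not the actual patch hunk:

    /* count only interruptions the server survives; levels above ERROR are skipped */
    if (errlevel <= ERROR)
        tabentry->vacuum_ext.interrupts++;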
Our server will no longer be \navailable to us and statistics data will not help us.\n\nI have attached the new version of the code and the diff files \n(minor-vacuum.no-cbot).", "msg_date": "Wed, 4 Sep 2024 20:23:00 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On Thu, Sep 5, 2024 at 1:23 AM Alena Rybakina <[email protected]> wrote:\n>\n> Hi, all!\n>\n> I have attached the new version of the code and the diff files\n> (minor-vacuum.no-cbot).\n>\n\nhi.\n\nstill have white space issue when using \"git apply\",\nyou may need to use \"git diff --check\" to find out where.\n\n\n /* ----------\ndiff --git a/src/test/regress/expected/opr_sanity.out\nb/src/test/regress/expected/opr_sanity.out\nindex 5d72b970b03..7026de157e4 100644\n--- a/src/test/regress/expected/opr_sanity.out\n+++ b/src/test/regress/expected/opr_sanity.out\n@@ -32,11 +32,12 @@ WHERE p1.prolang = 0 OR p1.prorettype = 0 OR\n prokind NOT IN ('f', 'a', 'w', 'p') OR\n provolatile NOT IN ('i', 's', 'v') OR\n proparallel NOT IN ('s', 'r', 'u');\n- oid | proname\n-------+------------------------\n+ oid | proname\n+------+-------------------------\n 8001 | pg_stat_vacuum_tables\n 8002 | pg_stat_vacuum_indexes\n-(2 rows)\n+ 8003 | pg_stat_vacuum_database\n+(3 rows)\n\n\nlooking at src/test/regress/sql/opr_sanity.sql:\n\n-- **************** pg_proc ****************\n-- Look for illegal values in pg_proc fields.\n\nSELECT p1.oid, p1.proname\nFROM pg_proc as p1\nWHERE p1.prolang = 0 OR p1.prorettype = 0 OR\n p1.pronargs < 0 OR\n p1.pronargdefaults < 0 OR\n p1.pronargdefaults > p1.pronargs OR\n array_lower(p1.proargtypes, 1) != 0 OR\n array_upper(p1.proargtypes, 1) != p1.pronargs-1 OR\n 0::oid = ANY (p1.proargtypes) OR\n procost <= 0 OR\n CASE WHEN proretset THEN prorows <= 0 ELSE prorows != 0 END OR\n prokind NOT IN ('f', 'a', 'w', 'p') OR\n provolatile NOT IN ('i', 's', 'v') OR\n proparallel NOT IN ('s', 'r', 'u');\n\nthat means\n oid | proname\n------+-------------------------\n 8001 | pg_stat_vacuum_tables\n 8002 | pg_stat_vacuum_indexes\n 8003 | pg_stat_vacuum_database\n\n\nThese above functions, pg_proc.prorows should > 0 when\npg_proc.proretset is true.\nI think that's the opr_sanity test's intention.\nso you may need to change pg_proc.dat.\n\nBTW the doc says:\nprorows float4, Estimated number of result rows (zero if not proretset)\n\n\n\nsegmentation fault cases:\nselect * from pg_stat_vacuum_indexes(0);\nselect * from pg_stat_vacuum_tables(0);\n\n\n+ else if (type == PGSTAT_EXTVAC_DB)\n+ {\n+ PgStatShared_Database *dbentry;\n+ PgStat_EntryRef *entry_ref;\n+ Oid dbid = PG_GETARG_OID(0);\n+\n+ if (OidIsValid(dbid))\n+ {\n+ entry_ref = pgstat_get_entry_ref_locked(PGSTAT_KIND_DATABASE,\n+ dbid, InvalidOid, false);\n+ dbentry = (PgStatShared_Database *) entry_ref->shared_stats;\n+\n+ if (dbentry == NULL)\n+ /* Table doesn't exist or isn't a heap relation */\n+ return;\n+\n+ tuplestore_put_for_database(dbid, rsinfo, dbentry);\n+ pgstat_unlock_entry(entry_ref);\n+ }\n+ }\ndidn't error out when dbid is invalid?\n\n\n\npg_stat_vacuum_tables\npg_stat_vacuum_indexes\npg_stat_vacuum_database\nthese functions didn't verify the only input argument oid's kind.\nfor example:\n\ncreate table s(a int primary key) with (autovacuum_enabled = off);\ncreate view sv as select * from s;\nvacuum s;\nselect * from pg_stat_vacuum_tables('sv'::regclass::oid);\nselect * from pg_stat_vacuum_indexes('sv'::regclass::oid);\nselect * from 
pg_stat_vacuum_database('sv'::regclass::oid);\n\nabove all these 3 examples should error out? because sv is view.\n\nin src/backend/catalog/system_views.sql\nfor view creation of pg_stat_vacuum_indexes\nyou can change to\n\nWHERE\n db.datname = current_database() AND\n rel.oid = stats.relid AND\n ns.oid = rel.relnamespace\nAND rel.relkind = 'i':\n\n\n\npg_stat_vacuum_tables in in src/backend/catalog/system_views.sql\nyou can change to\n\nWHERE\n db.datname = current_database() AND\n rel.oid = stats.relid AND\n ns.oid = rel.relnamespace\nAND rel.relkind = 'r':\n\n\n", "msg_date": "Thu, 5 Sep 2024 20:47:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi! Thank you for your review!\n\nOn 05.09.2024 15:47, jian he wrote:\n> On Thu, Sep 5, 2024 at 1:23 AM Alena Rybakina<[email protected]> wrote:\n>> Hi, all!\n>>\n>> I have attached the new version of the code and the diff files\n>> (minor-vacuum.no-cbot).\n>>\n> hi.\n>\n> still have white space issue when using \"git apply\",\n> you may need to use \"git diff --check\" to find out where.\n>\n>\n> /* ----------\n> diff --git a/src/test/regress/expected/opr_sanity.out\n> b/src/test/regress/expected/opr_sanity.out\n> index 5d72b970b03..7026de157e4 100644\n> --- a/src/test/regress/expected/opr_sanity.out\n> +++ b/src/test/regress/expected/opr_sanity.out\n> @@ -32,11 +32,12 @@ WHERE p1.prolang = 0 OR p1.prorettype = 0 OR\n> prokind NOT IN ('f', 'a', 'w', 'p') OR\n> provolatile NOT IN ('i', 's', 'v') OR\n> proparallel NOT IN ('s', 'r', 'u');\n> - oid | proname\n> -------+------------------------\n> + oid | proname\n> +------+-------------------------\n> 8001 | pg_stat_vacuum_tables\n> 8002 | pg_stat_vacuum_indexes\n> -(2 rows)\n> + 8003 | pg_stat_vacuum_database\n> +(3 rows)\n>\n>\n> looking at src/test/regress/sql/opr_sanity.sql:\n>\n> -- **************** pg_proc ****************\n> -- Look for illegal values in pg_proc fields.\n>\n> SELECT p1.oid, p1.proname\n> FROM pg_proc as p1\n> WHERE p1.prolang = 0 OR p1.prorettype = 0 OR\n> p1.pronargs < 0 OR\n> p1.pronargdefaults < 0 OR\n> p1.pronargdefaults > p1.pronargs OR\n> array_lower(p1.proargtypes, 1) != 0 OR\n> array_upper(p1.proargtypes, 1) != p1.pronargs-1 OR\n> 0::oid = ANY (p1.proargtypes) OR\n> procost <= 0 OR\n> CASE WHEN proretset THEN prorows <= 0 ELSE prorows != 0 END OR\n> prokind NOT IN ('f', 'a', 'w', 'p') OR\n> provolatile NOT IN ('i', 's', 'v') OR\n> proparallel NOT IN ('s', 'r', 'u');\n>\n> that means\n> oid | proname\n> ------+-------------------------\n> 8001 | pg_stat_vacuum_tables\n> 8002 | pg_stat_vacuum_indexes\n> 8003 | pg_stat_vacuum_database\n>\n>\n> These above functions, pg_proc.prorows should > 0 when\n> pg_proc.proretset is true.\n> I think that's the opr_sanity test's intention.\n> so you may need to change pg_proc.dat.\n>\n> BTW the doc says:\n> prorows float4, Estimated number of result rows (zero if not proretset)\n>\nI agree with you and have fixed it.\n> segmentation fault cases:\n> select * from pg_stat_vacuum_indexes(0);\n> select * from pg_stat_vacuum_tables(0);\n>\n>\n> + else if (type == PGSTAT_EXTVAC_DB)\n> + {\n> + PgStatShared_Database *dbentry;\n> + PgStat_EntryRef *entry_ref;\n> + Oid dbid = PG_GETARG_OID(0);\n> +\n> + if (OidIsValid(dbid))\n> + {\n> + entry_ref = pgstat_get_entry_ref_locked(PGSTAT_KIND_DATABASE,\n> + dbid, InvalidOid, false);\n> + dbentry = (PgStatShared_Database *) entry_ref->shared_stats;\n> +\n> + if (dbentry == NULL)\n> + /* Table doesn't exist or 
isn't a heap relation */\n> + return;\n> +\n> + tuplestore_put_for_database(dbid, rsinfo, dbentry);\n> + pgstat_unlock_entry(entry_ref);\n> + }\n> + }\n> didn't error out when dbid is invalid?\n>\nIt is caused by the empty statistic snapshot. I have fixed that by \nupdating the snapshot (pgstat_update_snapshot(PGSTAT_KIND_RELATION) \nfunction).\n\nI also added the test to check it.\n\n> pg_stat_vacuum_tables\n> pg_stat_vacuum_indexes\n> pg_stat_vacuum_database\n> these functions didn't verify the only input argument oid's kind.\n> for example:\n>\n> create table s(a int primary key) with (autovacuum_enabled = off);\n> create view sv as select * from s;\n> vacuum s;\n> select * from pg_stat_vacuum_tables('sv'::regclass::oid);\n> select * from pg_stat_vacuum_indexes('sv'::regclass::oid);\n> select * from pg_stat_vacuum_database('sv'::regclass::oid);\n>\n> above all these 3 examples should error out? because sv is view.\n\nI don't think so. I noticed that if we try to find the object from the \nsystem table with the different type the Postgres returns empty rows. I \nthink we should do the same.\n\n> in src/backend/catalog/system_views.sql\n> for view creation of pg_stat_vacuum_indexes\n> you can change to\n>\n> WHERE\n> db.datname = current_database() AND\n> rel.oid = stats.relid AND\n> ns.oid = rel.relnamespace\n> AND rel.relkind = 'i':\n>\n>\n>\n> pg_stat_vacuum_tables in in src/backend/catalog/system_views.sql\n> you can change to\n>\n> WHERE\n> db.datname = current_database() AND\n> rel.oid = stats.relid AND\n> ns.oid = rel.relnamespace\n> AND rel.relkind = 'r':\n>\nI agree with your proposal and fixed it like that.", "msg_date": "Fri, 6 Sep 2024 00:00:27 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi,\n\nOn Thu, Sep 5, 2024 at 2:01 PM Alena Rybakina <[email protected]> wrote:\n>\n> Hi! Thank you for your review!\n>\n> On 05.09.2024 15:47, jian he wrote:\n>\n> On Thu, Sep 5, 2024 at 1:23 AM Alena Rybakina <[email protected]> wrote:\n>\n> Hi, all!\n>\n> I have attached the new version of the code and the diff files\n> (minor-vacuum.no-cbot).\n\nThank you for updating the patches. I've reviewed the 0001 patch and\nhave two comments.\n\nI think we can split the 0001 patch into two parts: adding\npg_stat_vacuum_tables system views that shows the vacuum statistics\nthat we are currently collecting such as scanned_pages and\nremoved_pages, and another one is to add new statistics to collect\nsuch as vacrel->set_all_visible_pages and visibility map updates.\n\nI'm concerned that a pg_stat_vacuum_tables view has some duplicated\nstatistics that we already collect in different ways. For instance,\ntotal_blks_{read,hit,dirtied,written} are already tracked at\nsystem-level by pg_stat_io, and per-relation block I/O statistics can\nbe collected using pg_stat_statements. Having duplicated statistics\nconsumes more memory for pgstat and could confuse users if these\nstatistics are not consistent. 
I think it would be better to avoid\ncollecting duplicated statistics in different places.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Sep 2024 11:15:39 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On Fri, Sep 27, 2024 at 2:16 PM Masahiko Sawada <[email protected]> wrote:\n>\n> Hi,\n>\n> On Thu, Sep 5, 2024 at 2:01 PM Alena Rybakina <[email protected]> wrote:\n> >\n> > Hi! Thank you for your review!\n> >\n> > On 05.09.2024 15:47, jian he wrote:\n> >\n> > On Thu, Sep 5, 2024 at 1:23 AM Alena Rybakina <[email protected]> wrote:\n> >\n> > Hi, all!\n> >\n> > I have attached the new version of the code and the diff files\n> > (minor-vacuum.no-cbot).\n>\n> Thank you for updating the patches. I've reviewed the 0001 patch and\n> have two comments.\n\nI took a very brief look at this and was wondering if it was worth\nhaving a way to make the per-table vacuum statistics opt-in (like a\ntable storage parameter) in order to decrease the shared memory\nfootprint of storing the stats.\n\n- Melanie\n\n\n", "msg_date": "Fri, 27 Sep 2024 15:19:31 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi,\n\nOn Fri, 2024-09-27 at 11:15 -0700, Masahiko Sawada wrote:\n> I'm concerned that a pg_stat_vacuum_tables view has some duplicated\n> statistics that we already collect in different ways. For instance,\n> total_blks_{read,hit,dirtied,written} are already tracked at\n> system-level by pg_stat_io,\n\npg_stat_vacuum_tables.total_blks_{read,hit,dirtied,written} tracks\nblocks used by vacuum in different ways while vacuuming this particular\ntable while pg_stat_io tracks blocks used by vacuum on the cluster\nlevel.\n\n> and per-relation block I/O statistics can\n> be collected using pg_stat_statements.\n\nThis is impossible. pg_stat_statements tracks block statistics on a \nstatement level. One statement could touch many tables and many\nindexes, and all used database blocks will be counted by the\npg_stat_statements counters on a statement-level. Autovacuum statistics\nwon't be accounted by the pg_stat_statements. After all,\npg_stat_statements won't hold the statements statistics forever. Under\npressure of new statements the statement eviction can happen and\nstatistics will be lost.\n\nAll of the above is addressed by relation-level vacuum statistics held\nin the Cumulative Statistics System proposed by this patch.\n-- \nregards, Andrei Zubkov\nPostgres Professional\n\n\n\n", "msg_date": "Fri, 27 Sep 2024 22:25:14 +0300", "msg_from": "Andrei Zubkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "On Fri, Sep 27, 2024 at 12:19 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Fri, Sep 27, 2024 at 2:16 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Thu, Sep 5, 2024 at 2:01 PM Alena Rybakina <[email protected]> wrote:\n> > >\n> > > Hi! Thank you for your review!\n> > >\n> > > On 05.09.2024 15:47, jian he wrote:\n> > >\n> > > On Thu, Sep 5, 2024 at 1:23 AM Alena Rybakina <[email protected]> wrote:\n> > >\n> > > Hi, all!\n> > >\n> > > I have attached the new version of the code and the diff files\n> > > (minor-vacuum.no-cbot).\n> >\n> > Thank you for updating the patches. 
I've reviewed the 0001 patch and\n> > have two comments.\n>\n> I took a very brief look at this and was wondering if it was worth\n> having a way to make the per-table vacuum statistics opt-in (like a\n> table storage parameter) in order to decrease the shared memory\n> footprint of storing the stats.\n\nI'm not sure how users can select tables that enable vacuum statistics\nas I think they basically want to have statistics for all tables, but\nI see your point. Since the size of PgStat_TableCounts approximately\ntripled by this patch (112 bytes to 320 bytes), it might be worth\nconsidering ways to reduce the number of entries or reducing the size\nof vacuum statistics.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Sep 2024 13:13:15 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Hi! Thank you for your interesting for this patch!\n>> I took a very brief look at this and was wondering if it was worth\n>> having a way to make the per-table vacuum statistics opt-in (like a\n>> table storage parameter) in order to decrease the shared memory\n>> footprint of storing the stats.\n> I'm not sure how users can select tables that enable vacuum statistics\n> as I think they basically want to have statistics for all tables, but\n> I see your point. Since the size of PgStat_TableCounts approximately\n> tripled by this patch (112 bytes to 320 bytes), it might be worth\n> considering ways to reduce the number of entries or reducing the size\n> of vacuum statistics.\n\nThe main purpose of these statistics is to see abnormal behavior of \nvacuum in relation to a table or the database as a whole.\n\nFor example, there may be a situation where vacuum has started to run \nmore often and spends a lot of resources on processing a certain index, \nbut the size of the index does not change significantly. Moreover, the \ntable in which this index is located can be much smaller in size. This \nmay be because the index is bloated and needs to be reindexed.\n\nThis is exactly what vacuum statistics can show - we will see that \ncompared to other objects, vacuum processed more blocks and spent more \ntime on this index.\n\nPerhaps the vacuum parameters for the index should be set more \naggressively to avoid this in the future.\n\nI suppose that if we turn off statistics collection for a certain \nobject, we can miss it. In addition, the user may not enable the \nparameter for the object in time, because he will forget about it.\n\nAs for the second option, now I cannot say which statistics can be \nremoved, to be honest. So far, they all seem necessary.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional\n\n\n\n\n\n\n Hi! Thank you for your interesting for this patch!\n\n\nI took a very brief look at this and was wondering if it was worth\nhaving a way to make the per-table vacuum statistics opt-in (like a\ntable storage parameter) in order to decrease the shared memory\nfootprint of storing the stats.\n\n\n\nI'm not sure how users can select tables that enable vacuum statistics\nas I think they basically want to have statistics for all tables, but\nI see your point. 
Since the size of PgStat_TableCounts approximately\ntripled by this patch (112 bytes to 320 bytes), it might be worth\nconsidering ways to reduce the number of entries or reducing the size\nof vacuum statistics.\n\n\nThe main purpose of these statistics is to\n see abnormal behavior of vacuum in relation to a table or\n the database as a whole. \n\nFor example, there may be a situation where\n vacuum has started to run more often and spends a lot of\n resources on processing a certain index, but the size of the\n index does not change significantly. Moreover, the table\n in which this index is located can be much smaller in size. This may be because the index is bloated and\n needs to be reindexed.  \nThis is exactly what vacuum statistics can\n show - we will see that compared to other objects, vacuum\n processed more blocks and spent more time on this index.\nPerhaps\n the vacuum parameters for the index should be set more\n aggressively to avoid this in the future.\n I suppose that if we turn off statistics\n collection for a certain object, we can miss it.\nIn addition, the\n user may not enable the parameter for the object in time,\n because he will forget about it. \nAs for the second option, now I cannot say\n which statistics can be removed, to be honest.\nSo far, they all seem\n necessary.\n\n\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional", "msg_date": "Sun, 29 Sep 2024 00:22:28 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" } ]
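
A short sketch of the kind of query the preceding thread has in mind for spotting an index that vacuum works disproportionately hard on. The pg_stat_vacuum_indexes view and the relid/relname/total_blks_* columns come from the proposed patch, not from released PostgreSQL, so the exact names here are assumptions:

    -- hypothetical: depends on the patch's pg_stat_vacuum_indexes view
    SELECT relname,
           total_blks_read + total_blks_written AS vacuum_blks,
           pg_relation_size(relid) AS index_bytes
      FROM pg_stat_vacuum_indexes
     ORDER BY vacuum_blks DESC
     LIMIT 10;

An index that stays near the top of such a list while its size barely changes is the bloat symptom described in the thread, and a candidate for reindexing or more aggressive vacuum settings.
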
[ { "msg_contents": "I'd like to resurrect a subset of my proposal in [1], specifically that:\n\n The FOREIGN KEY constraint syntax gains a [ USING INDEX index_name ] clause\n optionally following the referenced column list.\n\n The index specified by this clause is used to support the foreign key\n constraint, and it must be a non-deferrable unique or primary key index on the\n referenced table compatible with the referenced columns.\n\nI believe that it may be independently valuable to have some syntax available to\ninfluence which index is used to ensure uniqueness of the foreign columns in a\nforeign key constraint. Currently, this index is identified implicitly from the\nREFERENCEd columns when the constraint is created. This causes the following to\nimperfectly round trip through a pg_dump and restore:\n\n CREATE TABLE foo (\n id INTEGER NOT NULL GENERATED BY DEFAULT AS IDENTITY\n );\n\n CREATE UNIQUE INDEX foo_key2 ON foo(id);\n CREATE UNIQUE INDEX foo_key1 ON foo(id);\n\n CREATE TABLE bar (\n foo_id INTEGER NOT NULL CONSTRAINT bar_fkey REFERENCES foo(id)\n );\n\nUsing this query to identify the unique index backing the bar_fkey constraint:\n\n SELECT objid, refobjid::regclass FROM pg_depend\n WHERE objid = (\n SELECT oid FROM pg_constraint WHERE conname = 'bar_fkey'\n ) AND refobjsubid = 0;\n\nThen after the DDL is applied, the foreign key constraint depends on foo_key2:\n\n objid | refobjid\n -------+----------\n 17152 | foo_key2\n\nBut following a pg_dump and restore, the foreign key's unique index dependency\nhas changed to foo_key1:\n\n objid | refobjid\n -------+----------\n 17167 | foo_key1\n\nThis discrepancy appears to be caused by this confluence of circumstances:\n\n1. The unique index backing the foreign side of a foreign key constraint is\n searched for in OID order:\n\n static Oid\n transformFkeyCheckAttrs(Relation pkrel,\n int numattrs, int16 *attnums,\n Oid *opclasses) /*\noutput parameter */\n {\n ...\n indexoidlist = RelationGetIndexList(pkrel);\n\n foreach(indexoidscan, indexoidlist)\n {\n ...\n }\n\n2. The indexes appear in the pg_dump output before the FOREIGN KEY constraint,\n and they appear in lexicographic, rather than OID, order.\n\nWhile, in this minimal reproduction, the two indexes are interchangeable, there\nare situations that may reasonably occur in the course of ordinary use in which\nthey aren't. For example, a redundant unique index with different storage\nparameters may exist during the migration of an application schema. If the\nincorrect index is then selected to be a dependency of a foreign key constraint\nfollowing a pg_dump and restore, it will likely cause subsequent steps in the\nmigration to fail.\n\nNote that this proposal deals with indexes rather than constraints because this\nis, internally, what PostgreSQL uses. Specifically, PostgreSQL doesn’t actually\nrequire there to be a unique constraint on the foreign columns of a foreign key\nconstraint; a unique index alone is sufficient. 
However, I believe that this\nproposal would be essentially the same if it were changed to a USING CONSTRAINT\nclause, since it is already possible to explicitly specify the underlying index\nfor a unique or primary key constraint.\n\nIf I submitted a patch implementing this functionality, would there be\nany interest in it?\n\n[1]: https://www.postgresql.org/message-id/flat/CA%2BCLzG8HZUk8Gb9BKN88fgdSEqHx%3D2iq5aDdvbz7JqSFjA2WxA%40mail.gmail.com\n\n\n", "msg_date": "Thu, 30 May 2024 19:06:51 -0700", "msg_from": "Kaiting Chen <[email protected]>", "msg_from_op": true, "msg_subject": "Explicit specification of index ensuring uniqueness of foreign\n columns" }, { "msg_contents": "Kaiting Chen <[email protected]> writes:\n> I'd like to resurrect a subset of my proposal in [1], specifically that:\n> The FOREIGN KEY constraint syntax gains a [ USING INDEX index_name ] clause\n> optionally following the referenced column list.\n> ...\n> While, in this minimal reproduction, the two indexes are interchangeable, there\n> are situations that may reasonably occur in the course of ordinary use in which\n> they aren't. For example, a redundant unique index with different storage\n> parameters may exist during the migration of an application schema.\n\nI agree that there's a hazard there, but I question if the case is\nsufficiently real-world to justify the costs of introducing a\nnon-SQL-standard clause in foreign key constraints.\n\nOne such cost is that pg_dump output would become less able to be\nloaded into other DBMSes, or even into older PG versions.\n\nI also wonder if this wouldn't just trade one fragility for another.\nSpecifically, I am not sure that we guarantee that the names of\nindexes underlying constraints remain the same across dump/reload.\nIf they don't, the USING INDEX clause might fail unnecessarily.\n\nAs against that, I'm not sure I've ever seen a real-world case with\nintentionally-duplicate unique indexes.\n\nSo on the whole I'm unconvinced that this is worth changing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 31 May 2024 11:46:15 -0700", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explicit specification of index ensuring uniqueness of foreign\n columns" }, { "msg_contents": "On Friday, May 31, 2024, Tom Lane <[email protected]> wrote:\n\n> Kaiting Chen <[email protected]> writes:\n> > I'd like to resurrect a subset of my proposal in [1], specifically that:\n> > The FOREIGN KEY constraint syntax gains a [ USING INDEX index_name ]\n> clause\n> > optionally following the referenced column list.\n> > ...\n> > While, in this minimal reproduction, the two indexes are\n> interchangeable, there\n> > are situations that may reasonably occur in the course of ordinary use\n> in which\n> > they aren't. 
For example, a redundant unique index with different storage\n> > parameters may exist during the migration of an application schema.\n>\n> I agree that there's a hazard there, but I question if the case is\n> sufficiently real-world to justify the costs of introducing a\n> non-SQL-standard clause in foreign key constraints.\n>\n> One such cost is that pg_dump output would become less able to be\n> loaded into other DBMSes, or even into older PG versions.\n>\n> I also wonder if this wouldn't just trade one fragility for another.\n> Specifically, I am not sure that we guarantee that the names of\n> indexes underlying constraints remain the same across dump/reload.\n> If they don't, the USING INDEX clause might fail unnecessarily.\n>\n> As against that, I'm not sure I've ever seen a real-world case with\n> intentionally-duplicate unique indexes.\n>\n> So on the whole I'm unconvinced that this is worth changing.\n\n\nSeems like most of those issues could be avoided if we only supply “alter\ntable” syntax (or a function…). i.e., give the dba a tool to modify their\nsystem when our default choices fail them. But continue on with the\ndefaults as they exist today.\n\nDavid J.\n\nOn Friday, May 31, 2024, Tom Lane <[email protected]> wrote:Kaiting Chen <[email protected]> writes:\n> I'd like to resurrect a subset of my proposal in [1], specifically that:\n>   The FOREIGN KEY constraint syntax gains a [ USING INDEX index_name ] clause\n>   optionally following the referenced column list.\n> ...\n> While, in this minimal reproduction, the two indexes are interchangeable, there\n> are situations that may reasonably occur in the course of ordinary use in which\n> they aren't. For example, a redundant unique index with different storage\n> parameters may exist during the migration of an application schema.\n\nI agree that there's a hazard there, but I question if the case is\nsufficiently real-world to justify the costs of introducing a\nnon-SQL-standard clause in foreign key constraints.\n\nOne such cost is that pg_dump output would become less able to be\nloaded into other DBMSes, or even into older PG versions.\n\nI also wonder if this wouldn't just trade one fragility for another.\nSpecifically, I am not sure that we guarantee that the names of\nindexes underlying constraints remain the same across dump/reload.\nIf they don't, the USING INDEX clause might fail unnecessarily.\n\nAs against that, I'm not sure I've ever seen a real-world case with\nintentionally-duplicate unique indexes.\n\nSo on the whole I'm unconvinced that this is worth changing.Seems like most of those issues could be avoided if we only supply “alter table” syntax (or a function…).  i.e., give the dba a tool to modify their system when our default choices fail them.  But continue on with the defaults as they exist today.David J.", "msg_date": "Fri, 31 May 2024 13:28:30 -0600", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explicit specification of index ensuring uniqueness of foreign\n columns" } ]
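
For illustration, this is roughly what the syntax proposed at the top of the thread would look like; the USING INDEX clause on a foreign key does not exist in released PostgreSQL, so this is the proposal's hypothetical form rather than working DDL:

    -- hypothetical syntax from the proposal; not accepted by current PostgreSQL
    CREATE TABLE bar (
        foo_id INTEGER NOT NULL
            CONSTRAINT bar_fkey REFERENCES foo(id) USING INDEX foo_key2
    );

With something like this, the pg_depend query shown in the first message would report foo_key2 both before and after a dump/restore, instead of whichever compatible unique index happens to be found first.
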
[ { "msg_contents": "Hello,\n\nI have installed PostgreSQL 15 and PostgreSQL 14 side by side and want to\nupgrade from 14 to 15. For upgrading purposes, I am using {postgresql-15-setup\ncheck_upgrade}. However, in my case, the installed 14 version is not\ncompatible with the latest 15.7.\n\nAfter the installation and cluster initialization of PostgreSQL 14 and 15,\nwhen I run the following command {postgresql-15-setup check_upgrade}, it\nreturns the following message:\n\"Performing upgrade check: Upgrade failed. Removing the new cluster. Please\nre-initdb the new cluster. failed \"\n\n\nAfter the failure the postgresql15 cluster removed forcefully due to the\nfollowing code written in postgresql-15-setup script file\n\n{\nif [ $script_result -eq 0 ]; then\n echo $\"OK\"\n else\n # Clean up after failure\n echo \"Upgrade failed. Removing the new cluster. Please re-initdb\nthe new cluster.\"\n\n* rm -rf \"$PGDATA\"* echo $\"failed\"\nfi\n}\n\nMy concern here is whether forcefully deleting the user cluster without\nobtaining permission from the user is the right approach.\n\n\n\nRegards,\nZaid Shabbir\nAGEDB\n\nHello,I have installed PostgreSQL 15 and PostgreSQL 14 side by side and want to upgrade from 14 to 15. For upgrading purposes, I am using {postgresql-15-setup check_upgrade}. However, in my case, the installed 14 version is not compatible with the latest 15.7.After the installation and cluster initialization of PostgreSQL 14 and 15, when I run the following command {postgresql-15-setup check_upgrade}, it returns the following message:\"Performing upgrade check: Upgrade failed. Removing the new cluster. Please re-initdb the new cluster. failed \"After the failure the postgresql15 cluster removed forcefully due to the following code written in postgresql-15-setup script file{if [ $script_result -eq 0 ]; then        echo $\"OK\"    else        # Clean up after failure        echo \"Upgrade failed. Removing the new cluster. Please re-initdb the new cluster.\"        rm -rf \"$PGDATA\"        echo $\"failed\"fi}My concern here is whether forcefully deleting the user cluster without obtaining permission from the user is the right approach.Regards,Zaid ShabbirAGEDB", "msg_date": "Fri, 31 May 2024 07:48:16 +0500", "msg_from": "Zaid Shabbir <[email protected]>", "msg_from_op": true, "msg_subject": "Cluster forcefully removal without user input" } ]
[ { "msg_contents": "Hi,\nIs it possible to switch on/off a background worker in runtime?\n\nworker.bgw_flags = BGWORKER_SHMEM_ACCESS;\nworker.bgw_start_time = BgWorkerStart_PostmasterStart;\n\nI want to switch off the worker based on some flag value, etc, either from the main process or the worker itself.\n\nAre there any already existing examples?\n\nThanks,\nIshan.\n\n-- \nThe information contained in this electronic communication is intended \nsolely for the individual(s) or entity to which it is addressed. It may \ncontain proprietary, confidential and/or legally privileged information. \nAny review, retransmission, dissemination, printing, copying or other use \nof, or taking any action in reliance on the contents of this information by \nperson(s) or entities other than the intended recipient is strictly \nprohibited and may be unlawful. If you have received this communication in \nerror, please notify us by responding to this email or telephone and \nimmediately and permanently delete all copies of this message and any \nattachments from your system(s). The contents of this message do not \nnecessarily represent the views or policies of BITS Pilani.\n\n\n\n\n\n\n\n\n\nHi,\nIs it possible to switch on/off a background worker in runtime?\n\n\nworker.bgw_flags\n=\nBGWORKER_SHMEM_ACCESS;\nworker.bgw_start_time\n=\nBgWorkerStart_PostmasterStart;\n \nI want to switch off the worker based on some flag value, etc, either from the main process or the worker itself.\n\nAre there any already existing examples?\n\nThanks,\nIshan.", "msg_date": "Fri, 31 May 2024 09:28:25 +0000", "msg_from": "\"ISHAN CHHANGANI .\" <[email protected]>", "msg_from_op": true, "msg_subject": "Switch background worker on/off in runtime." }, { "msg_contents": "Hi ISHAN\n\n\n\nOn Fri, May 31, 2024 at 2:28 PM ISHAN CHHANGANI . <\[email protected]> wrote:\n\n> Hi,\n>\n> Is it possible to switch on/off a background worker in runtime?\n>\nAs per my understanding there is no such way to do it on runtime. But you\ncan kill it by using the following command\n\nselect pg_terminate_backend(pid of bgworker);\n\nRegards\nKashif Zeeshan\nBitnine Global\n\n>\n> worker.bgw_flags = BGWORKER_SHMEM_ACCESS;\n>\n> worker.bgw_start_time = BgWorkerStart_PostmasterStart;\n>\n>\n>\n> I want to switch off the worker based on some flag value, etc, either from\n> the main process or the worker itself.\n>\n>\n> Are there any already existing examples?\n>\n> Thanks,\n>\n> Ishan.\n>\n> The information contained in this electronic communication is intended\n> solely for the individual(s) or entity to which it is addressed. It may\n> contain proprietary, confidential and/or legally privileged information.\n> Any review, retransmission, dissemination, printing, copying or other use\n> of, or taking any action in reliance on the contents of this information by\n> person(s) or entities other than the intended recipient is strictly\n> prohibited and may be unlawful. If you have received this communication in\n> error, please notify us by responding to this email or telephone and\n> immediately and permanently delete all copies of this message and any\n> attachments from your system(s). The contents of this message do not\n> necessarily represent the views or policies of BITS Pilani.\n>\n\nHi ISHANOn Fri, May 31, 2024 at 2:28 PM ISHAN CHHANGANI . <[email protected]> wrote:\n\n\nHi,\nIs it possible to switch on/off a background worker in runtime?As per my understanding there is no such way to do it on runtime. 
But you can kill it by using the following commandselect pg_terminate_backend(pid of bgworker);RegardsKashif ZeeshanBitnine Global \n\n\nworker.bgw_flags\n=\nBGWORKER_SHMEM_ACCESS;\nworker.bgw_start_time\n=\nBgWorkerStart_PostmasterStart;\n \nI want to switch off the worker based on some flag value, etc, either from the main process or the worker itself.\n\nAre there any already existing examples?\n\nThanks,\nIshan.\n\n\n\nThe information contained in this electronic communication is intended solely for the individual(s) or entity to which it is addressed. It may contain proprietary, confidential and/or legally privileged information. Any review, retransmission, dissemination, printing, copying or other use of, or taking any action in reliance on the contents of this information by person(s) or entities other than the intended recipient is strictly prohibited and may be unlawful. If you have received this communication in error, please notify us by responding to this email or telephone and immediately and permanently delete all copies of this message and any attachments from your system(s). The contents of this message do not necessarily represent the views or policies of BITS Pilani.", "msg_date": "Fri, 31 May 2024 15:27:23 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Switch background worker on/off in runtime." }, { "msg_contents": "On 31.05.24 11:28, ISHAN CHHANGANI . wrote:\n> Is it possible to switch on/off a background worker in runtime?\n> \n> worker.bgw_flags =BGWORKER_SHMEM_ACCESS;\n> \n> worker.bgw_start_time =BgWorkerStart_PostmasterStart;\n> \n> I want to switch off the worker based on some flag value, etc, either \n> from the main process or the worker itself.\n> \n> Are there any already existing examples?\n\nI think this depends more on your exact use case. For example, the \nlogical replication background workers have sophisticated logic to do \nthis. There is a launcher, which is itself a background worker, which \nlaunches other per-subscription workers. And there are commands to \ndisable subscriptions, which would among other things stop their \ncorresponding background workers. That logic is specific to the needs \nof the logic replication system. You might get some ideas from that. \nBut in general it will need a bit of custom logic.\n\n\n\n", "msg_date": "Sun, 2 Jun 2024 23:22:07 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Switch background worker on/off in runtime." } ]
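
A minimal sketch of the "kill it" suggestion above, using pg_stat_activity to locate the worker's PID instead of looking it up by hand; 'my worker' is a placeholder and is assumed to match whatever bgw_type/bgw_name the extension registered:

    -- placeholder name; adjust to the worker's registered bgw_type
    SELECT pg_terminate_backend(pid)
      FROM pg_stat_activity
     WHERE backend_type = 'my worker';

Whether the worker stays down afterwards depends on its bgw_restart_time: unless it was registered with BGW_NEVER_RESTART, the postmaster may simply start it again, which is why launcher-style logic of the kind Peter describes is usually needed for a real on/off switch.
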
[ { "msg_contents": "Hi Tristan,\nUsing make I can run only selected tests under src/test/regress using\nTESTS=... make check-tests. I didn't find any way to do that with meson.\nmeson test --suite regress runs all the regression tests.\n\nWe talked this off-list at the conference. It seems we have to somehow\navoid passing pg_regress --schedule argument and instead pass the list of\ntests. Any idea how to do that?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nHi Tristan,Using make I can run only selected tests under src/test/regress using TESTS=... make check-tests. I didn't find any way to do that with meson. meson test --suite regress runs all the regression tests.We talked this off-list at the conference. It seems we have to somehow avoid passing pg_regress --schedule argument and instead pass the list of tests. Any idea how to do that?-- Best Wishes,Ashutosh Bapat", "msg_date": "Fri, 31 May 2024 10:01:48 -0700", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "meson and check-tests" }, { "msg_contents": "On Fri May 31, 2024 at 12:02 PM CDT, Ashutosh Bapat wrote:\n> Hi Tristan,\n> Using make I can run only selected tests under src/test/regress using\n> TESTS=... make check-tests. I didn't find any way to do that with meson.\n> meson test --suite regress runs all the regression tests.\n>\n> We talked this off-list at the conference. It seems we have to somehow\n> avoid passing pg_regress --schedule argument and instead pass the list of\n> tests. Any idea how to do that?\n\nI think there are 2 solutions to this.\n\n1. Avoid passing --schedule by default, which doesn't sound like a great \n solution.\n\n2. Teach pg_regress to ignore the --schedule option if specific tests \n are passed instead.\n\n3. Add a --no-schedule option to pg_regress which would override the \n previously added --schedule option.\n\nI personally prefer 2 or 3.\n\n2: meson test -C build regress/regress --test-args my_specific_test\n3: meson test -C build regress/regress --test-args \"--no-schedule my_specific_test\"\n\nDoes anyone have an opinion?\n\n-- \nTristan Partin\nhttps://tristan.partin.io\n\n\n", "msg_date": "Sat, 01 Jun 2024 15:47:51 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "On Sun, Jun 2, 2024 at 4:48 AM Tristan Partin <[email protected]> wrote:\n>\n> On Fri May 31, 2024 at 12:02 PM CDT, Ashutosh Bapat wrote:\n> > Hi Tristan,\n> > Using make I can run only selected tests under src/test/regress using\n> > TESTS=... make check-tests. I didn't find any way to do that with meson.\n> > meson test --suite regress runs all the regression tests.\n> >\n> > We talked this off-list at the conference. It seems we have to somehow\n> > avoid passing pg_regress --schedule argument and instead pass the list of\n> > tests. Any idea how to do that?\n>\n> I think there are 2 solutions to this.\n>\n> 1. Avoid passing --schedule by default, which doesn't sound like a great\n> solution.\n>\n> 2. Teach pg_regress to ignore the --schedule option if specific tests\n> are passed instead.\n>\n> 3. 
Add a --no-schedule option to pg_regress which would override the\n> previously added --schedule option.\n>\n> I personally prefer 2 or 3.\n>\n> 2: meson test -C build regress/regress --test-args my_specific_test\n> 3: meson test -C build regress/regress --test-args \"--no-schedule my_specific_test\"\n>\n> Does anyone have an opinion?\n>\n\nif each individual sql file can be tested separately, will\n`test: test_setup`?\nbe aslo executed as a prerequisite?\n\n\n\nin\n# ----------\n# The first group of parallel tests\n# ----------\ntest: boolean char name varchar text int2 int4 int8 oid float4 float8\nbit numeric txid uuid enum money rangetypes pg_lsn regproc\n\n# ----------\n# The second group of parallel tests\n# multirangetypes depends on rangetypes\n# multirangetypes shouldn't run concurrently with type_sanity\n# ----------\ntest: strings md5 numerology point lseg line box path polygon circle\ndate time timetz timestamp timestamptz interval inet macaddr macaddr8\nmultirangetypes\n\n\nIf we can test each individual sql file via meson, that would be great.\nbut based on the above comments, we need to preserve the specified order.\nlike:\nTEST=rangetypes, multirangetypes\n\nwill first run rangetypes then multirangetypes.\n\n\ncan we tag/name each src/test/regress/parallel_schedule groups\nlike:\ngroup1:test: boolean char name varchar text int2 int4 int8 oid float4\nfloat8 bit numeric txid uuid enum money rangetypes pg_lsn regproc\ngroup2:test: strings md5 numerology point lseg line box path polygon\ncircle date time timetz timestamp timestamptz interval inet macaddr\nmacaddr8 multirangetypes\n\nThen, we can test each individual group.\nTEST=group1, group2.\n\n\n", "msg_date": "Sun, 2 Jun 2024 12:05:40 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> On Fri May 31, 2024 at 12:02 PM CDT, Ashutosh Bapat wrote:\n>> We talked this off-list at the conference. It seems we have to somehow\n>> avoid passing pg_regress --schedule argument and instead pass the list of\n>> tests. Any idea how to do that?\n\n> I think there are 2 solutions to this.\n> 1. Avoid passing --schedule by default, which doesn't sound like a great \n> solution.\n> 2. Teach pg_regress to ignore the --schedule option if specific tests \n> are passed instead.\n> 3. Add a --no-schedule option to pg_regress which would override the \n> previously added --schedule option.\n> I personally prefer 2 or 3.\n\nJust to refresh peoples' memory of what the Makefiles do:\nsrc/test/regress/GNUmakefile has\n\ncheck: all\n\t$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)\n\ncheck-tests: all | temp-install\n\t$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)\n\n(and parallel cases for installcheck etc). 
AFAICS, meson.build has\nno equivalent to the EXTRA_TESTS add-on, nor does it have behavior\nequivalent to check-tests' substitution of $(TESTS) for --schedule.\nBut I suggest that those behaviors have stood for a long time and\nso the appropriate thing to do is duplicate them as best we can,\nnot invent something different.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 01 Jun 2024 22:25:11 -0700", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "\nOn 2024-06-02 Su 01:25, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n>> On Fri May 31, 2024 at 12:02 PM CDT, Ashutosh Bapat wrote:\n>>> We talked this off-list at the conference. It seems we have to somehow\n>>> avoid passing pg_regress --schedule argument and instead pass the list of\n>>> tests. Any idea how to do that?\n>> I think there are 2 solutions to this.\n>> 1. Avoid passing --schedule by default, which doesn't sound like a great\n>> solution.\n>> 2. Teach pg_regress to ignore the --schedule option if specific tests\n>> are passed instead.\n>> 3. Add a --no-schedule option to pg_regress which would override the\n>> previously added --schedule option.\n>> I personally prefer 2 or 3.\n> Just to refresh peoples' memory of what the Makefiles do:\n> src/test/regress/GNUmakefile has\n>\n> check: all\n> \t$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)\n>\n> check-tests: all | temp-install\n> \t$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)\n>\n> (and parallel cases for installcheck etc). AFAICS, meson.build has\n> no equivalent to the EXTRA_TESTS add-on, nor does it have behavior\n> equivalent to check-tests' substitution of $(TESTS) for --schedule.\n> But I suggest that those behaviors have stood for a long time and\n> so the appropriate thing to do is duplicate them as best we can,\n> not invent something different.\n>\n> \t\t\t\n\n\n+1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 2 Jun 2024 15:47:29 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "Hi,\n\nOn Sun, 2 Jun 2024 at 07:06, jian he <[email protected]> wrote:\n>\n> On Sun, Jun 2, 2024 at 4:48 AM Tristan Partin <[email protected]> wrote:\n> >\n> > On Fri May 31, 2024 at 12:02 PM CDT, Ashutosh Bapat wrote:\n> > > Hi Tristan,\n> > > Using make I can run only selected tests under src/test/regress using\n> > > TESTS=... make check-tests. I didn't find any way to do that with meson.\n> > > meson test --suite regress runs all the regression tests.\n> > >\n> > > We talked this off-list at the conference. It seems we have to somehow\n> > > avoid passing pg_regress --schedule argument and instead pass the list of\n> > > tests. Any idea how to do that?\n> >\n> > I think there are 2 solutions to this.\n> >\n> > 1. Avoid passing --schedule by default, which doesn't sound like a great\n> > solution.\n> >\n> > 2. Teach pg_regress to ignore the --schedule option if specific tests\n> > are passed instead.\n> >\n> > 3. 
Add a --no-schedule option to pg_regress which would override the\n> > previously added --schedule option.\n> >\n> > I personally prefer 2 or 3.\n> >\n> > 2: meson test -C build regress/regress --test-args my_specific_test\n> > 3: meson test -C build regress/regress --test-args \"--no-schedule my_specific_test\"\n> >\n> > Does anyone have an opinion?\n> >\n>\n> if each individual sql file can be tested separately, will\n> `test: test_setup`?\n> be aslo executed as a prerequisite?\n\nYes, it is required to run at least two setup tests because regress\ntests require a database to be created:\n\n1- The initdb executable is needed, and it is installed by the\n'tmp_install' test for the tests.\n\n2- Although the initdb executable exists, it is not enough by itself.\nRegress tests copies its database contents from the template\ndirectory, instead of running initdb for each test [1] (This is the\ndefault behaviour in the meson builds' regress tests). This template\ndirectory is created by the 'initdb_cache' test.\n\n[1] 252dcb3239\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 3 Jun 2024 08:56:25 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "On Sun Jun 2, 2024 at 12:25 AM CDT, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > On Fri May 31, 2024 at 12:02 PM CDT, Ashutosh Bapat wrote:\n> >> We talked this off-list at the conference. It seems we have to somehow\n> >> avoid passing pg_regress --schedule argument and instead pass the list of\n> >> tests. Any idea how to do that?\n>\n> > I think there are 2 solutions to this.\n> > 1. Avoid passing --schedule by default, which doesn't sound like a great \n> > solution.\n> > 2. Teach pg_regress to ignore the --schedule option if specific tests \n> > are passed instead.\n> > 3. Add a --no-schedule option to pg_regress which would override the \n> > previously added --schedule option.\n> > I personally prefer 2 or 3.\n>\n> Just to refresh peoples' memory of what the Makefiles do:\n> src/test/regress/GNUmakefile has\n>\n> check: all\n> \t$(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)\n>\n> check-tests: all | temp-install\n> \t$(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)\n>\n> (and parallel cases for installcheck etc). AFAICS, meson.build has\n> no equivalent to the EXTRA_TESTS add-on, nor does it have behavior\n> equivalent to check-tests' substitution of $(TESTS) for --schedule.\n> But I suggest that those behaviors have stood for a long time and\n> so the appropriate thing to do is duplicate them as best we can,\n> not invent something different.\n\nIn theory, this makes sense. In practice, this is hard to emulate. We \ncould make the check-tests a Meson run_target() instead of another \ntest(), which would end up running the same tests more than once.\n\nWe could also add the same functionality proposed in my email to the \nMakefile, so they mirror each other. 
check-tests would then just become \na legacy target name.\n\n-- \nTristan Partin\nhttps://tristan.partin.io\n\n\n", "msg_date": "Mon, 03 Jun 2024 11:34:23 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "On Mon, Jun 3, 2024 at 10:04 PM Tristan Partin <[email protected]> wrote:\n\n> On Sun Jun 2, 2024 at 12:25 AM CDT, Tom Lane wrote:\n> > \"Tristan Partin\" <[email protected]> writes:\n> > > On Fri May 31, 2024 at 12:02 PM CDT, Ashutosh Bapat wrote:\n> > >> We talked this off-list at the conference. It seems we have to somehow\n> > >> avoid passing pg_regress --schedule argument and instead pass the\n> list of\n> > >> tests. Any idea how to do that?\n> >\n> > > I think there are 2 solutions to this.\n> > > 1. Avoid passing --schedule by default, which doesn't sound like a\n> great\n> > > solution.\n> > > 2. Teach pg_regress to ignore the --schedule option if specific tests\n> > > are passed instead.\n> > > 3. Add a --no-schedule option to pg_regress which would override the\n> > > previously added --schedule option.\n> > > I personally prefer 2 or 3.\n>\n\n\n\n\n> >\n> > Just to refresh peoples' memory of what the Makefiles do:\n> > src/test/regress/GNUmakefile has\n> >\n> > check: all\n> > $(pg_regress_check) $(REGRESS_OPTS)\n> --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)\n> >\n> > check-tests: all | temp-install\n> > $(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS)\n> $(EXTRA_TESTS)\n\n>\n> > (and parallel cases for installcheck etc). AFAICS, meson.build has\n> > no equivalent to the EXTRA_TESTS add-on, nor does it have behavior\n> > equivalent to check-tests' substitution of $(TESTS) for --schedule.\n> > But I suggest that those behaviors have stood for a long time and\n> > so the appropriate thing to do is duplicate them as best we can,\n> > not invent something different.\n>\n> In theory, this makes sense. In practice, this is hard to emulate. We\n> could make the check-tests a Meson run_target() instead of another\n> test(), which would end up running the same tests more than once.\n>\n\nmeson has changed the way we run individual perl tests and that looks\nbetter. So I am fine if meson provides a better way to do what `make\ncheck-tests` does. But changing pg_regress seems a wrong choice or even\nmaking changes to the current make system. Instead we should make meson\npass the right arguments to pg_regress. In this case, it should not pass\n--schedule when we need `make check-tests` like functionality.\n\nJust adding check-tests as new target won't help we need some way to\nspecify \"which tests\" to run. Thus by default this target should not run\nany tests? I don't understand meson well. So I might be completely wrong?\n\nHow about the following options?\n1. TESTS=\"...\" meson test --suite regress - would run the specified tests\nfrom regress\n\n2. Make `meson test --suite regress / regress/partition_join` run\npartition_join.sql test. I am not how to specify multiple tests in this\ncommand. May be `meson test --suite regress /\nregress/test_setup,partition_join` will do that. 
make check-tests allows\none to run multiple tests like TESTS=\"test_setup partition_join\" make\ncheck-tests.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Mon, Jun 3, 2024 at 10:04 PM Tristan Partin <[email protected]> wrote:On Sun Jun 2, 2024 at 12:25 AM CDT, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > On Fri May 31, 2024 at 12:02 PM CDT, Ashutosh Bapat wrote:\n> >> We talked this off-list at the conference. It seems we have to somehow\n> >> avoid passing pg_regress --schedule argument and instead pass the list of\n> >> tests. Any idea how to do that?\n>\n> > I think there are 2 solutions to this.\n> > 1. Avoid passing --schedule by default, which doesn't sound like a great \n> >    solution.\n> > 2. Teach pg_regress to ignore the --schedule option if specific tests \n> >    are passed instead.\n> > 3. Add a --no-schedule option to pg_regress which would override the \n> >    previously added --schedule option.\n> > I personally prefer 2 or 3. \n>\n> Just to refresh peoples' memory of what the Makefiles do:\n> src/test/regress/GNUmakefile has\n>\n> check: all\n>       $(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)\n>\n> check-tests: all | temp-install\n>       $(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS) \n>\n> (and parallel cases for installcheck etc).  AFAICS, meson.build has\n> no equivalent to the EXTRA_TESTS add-on, nor does it have behavior\n> equivalent to check-tests' substitution of $(TESTS) for --schedule.\n> But I suggest that those behaviors have stood for a long time and\n> so the appropriate thing to do is duplicate them as best we can,\n> not invent something different.\n\nIn theory, this makes sense. In practice, this is hard to emulate. We \ncould make the check-tests a Meson run_target() instead of another \ntest(), which would end up running the same tests more than once.meson has changed the way we run individual perl tests and that looks better. So I am fine if meson provides a better way to do what `make check-tests` does. But changing pg_regress seems a wrong choice or even making changes to the current make system. Instead we should make meson pass the right arguments to pg_regress. In this case, it should not pass --schedule when we need `make check-tests` like functionality.Just adding check-tests as new target won't help we need some way to specify \"which tests\" to run. Thus by default this target should not run any tests? I don't understand meson well. So I might be completely wrong?How about the following options?1. TESTS=\"...\" meson test --suite regress - would run the specified tests from regress2. Make `meson test --suite regress / regress/partition_join` run partition_join.sql test. I am not how to specify multiple tests in this command. May be `meson test --suite regress / regress/test_setup,partition_join` will do that. 
make check-tests allows one to run multiple tests like TESTS=\"test_setup partition_join\" make check-tests.-- Best Wishes,Ashutosh Bapat", "msg_date": "Wed, 5 Jun 2024 16:56:10 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "On Wed, Jun 5, 2024 at 7:26 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n>\n>\n> On Mon, Jun 3, 2024 at 10:04 PM Tristan Partin <[email protected]> wrote:\n>>\n>> On Sun Jun 2, 2024 at 12:25 AM CDT, Tom Lane wrote:\n>> > \"Tristan Partin\" <[email protected]> writes:\n>> > > On Fri May 31, 2024 at 12:02 PM CDT, Ashutosh Bapat wrote:\n>> > >> We talked this off-list at the conference. It seems we have to somehow\n>> > >> avoid passing pg_regress --schedule argument and instead pass the list of\n>> > >> tests. Any idea how to do that?\n>> >\n>> > > I think there are 2 solutions to this.\n>> > > 1. Avoid passing --schedule by default, which doesn't sound like a great\n>> > > solution.\n>> > > 2. Teach pg_regress to ignore the --schedule option if specific tests\n>> > > are passed instead.\n>> > > 3. Add a --no-schedule option to pg_regress which would override the\n>> > > previously added --schedule option.\n>> > > I personally prefer 2 or 3.\n>\n>\n>\n>\n>>\n>> >\n>> > Just to refresh peoples' memory of what the Makefiles do:\n>> > src/test/regress/GNUmakefile has\n>> >\n>> > check: all\n>> > $(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS)\n>> >\n>> > check-tests: all | temp-install\n>> > $(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS)\n>>\n>> >\n>> > (and parallel cases for installcheck etc). AFAICS, meson.build has\n>> > no equivalent to the EXTRA_TESTS add-on, nor does it have behavior\n>> > equivalent to check-tests' substitution of $(TESTS) for --schedule.\n>> > But I suggest that those behaviors have stood for a long time and\n>> > so the appropriate thing to do is duplicate them as best we can,\n>> > not invent something different.\n>>\n>> In theory, this makes sense. In practice, this is hard to emulate. We\n>> could make the check-tests a Meson run_target() instead of another\n>> test(), which would end up running the same tests more than once.\n>\n>\n> meson has changed the way we run individual perl tests and that looks better. So I am fine if meson provides a better way to do what `make check-tests` does. But changing pg_regress seems a wrong choice or even making changes to the current make system. Instead we should make meson pass the right arguments to pg_regress. In this case, it should not pass --schedule when we need `make check-tests` like functionality.\n>\n> Just adding check-tests as new target won't help we need some way to specify \"which tests\" to run. Thus by default this target should not run any tests? I don't understand meson well. So I might be completely wrong?\n>\n> How about the following options?\n> 1. TESTS=\"...\" meson test --suite regress - would run the specified tests from regress\n>\n> 2. Make `meson test --suite regress / regress/partition_join` run partition_join.sql test. I am not how to specify multiple tests in this command. May be `meson test --suite regress / regress/test_setup,partition_join` will do that. make check-tests allows one to run multiple tests like TESTS=\"test_setup partition_join\" make check-tests.\n>\n\nhi. 
I think it's a good feature for development.\nThe full regression test is still a very long time to run.\n\nsometimes when you implement a feature, you are pretty sure some\nchanges will only happen in a single sql file.\nso this can make it more faster.\n\n\n", "msg_date": "Thu, 5 Sep 2024 11:29:43 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "Hi,\n\nI’ve been working on a patch and wanted to share my approach, which\nmight be helpful for others. The patch removes the '--schedule' and\n'${schedule_file}' options from the regress/regress test command when\nthe TESTS environment variable is set. Instead, it appends the TESTS\nvariable to the end of the command.\n\nPlease note that setup suite tests (at least tmp_install and\ninitdb_cache) should be executed before running these tests. One\ndrawback is that while the Meson logs will still show the test command\nwith the '--schedule' and '${schedule_file}' options, the actual\ncommand used will be changed.\n\nSome examples after the patched build:\n\n$ meson test --suite regress -> fails\n$ TESTS=\"create_table copy jsonb\" meson test --suite regress -> fails\n### run required setup suite tests\n$ meson test tmp_install\n$ meson test initdb_cache\n###\n$ meson test --suite regress -> passes (12s)\n$ TESTS=\"copy\" meson test --suite regress -> passes (0.35s)\n$ TESTS=\"copy jsonb\" meson test --suite regress -> passes (0.52s)\n$ TESTS='select_into' meson test --suite regress -> fails\n$ TESTS='test_setup select_into' meson test --suite regress -> passes (0.52s)\n$ TESTS='rangetypes multirangetypes' meson test --suite regress -> fails\n$ TESTS='test_setup multirangetypes rangetypes' meson test --suite\nregres -> fails\n$ TESTS='test_setup rangetypes multirangetypes' meson test --suite\nregress -> passes (0.91s)\n\nAny feedback would be appreciated.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Fri, 20 Sep 2024 13:25:01 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "On Fri, Sep 20, 2024 at 6:25 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> I’ve been working on a patch and wanted to share my approach, which\n> might be helpful for others. The patch removes the '--schedule' and\n> '${schedule_file}' options from the regress/regress test command when\n> the TESTS environment variable is set. Instead, it appends the TESTS\n> variable to the end of the command.\n>\n> Please note that setup suite tests (at least tmp_install and\n> initdb_cache) should be executed before running these tests. 
One\n> drawback is that while the Meson logs will still show the test command\n> with the '--schedule' and '${schedule_file}' options, the actual\n> command used will be changed.\n>\n> Some examples after the patched build:\n>\n> $ meson test --suite regress -> fails\n> $ TESTS=\"create_table copy jsonb\" meson test --suite regress -> fails\n> ### run required setup suite tests\n> $ meson test tmp_install\n> $ meson test initdb_cache\n> ###\n> $ meson test --suite regress -> passes (12s)\n> $ TESTS=\"copy\" meson test --suite regress -> passes (0.35s)\n> $ TESTS=\"copy jsonb\" meson test --suite regress -> passes (0.52s)\n> $ TESTS='select_into' meson test --suite regress -> fails\n> $ TESTS='test_setup select_into' meson test --suite regress -> passes (0.52s)\n> $ TESTS='rangetypes multirangetypes' meson test --suite regress -> fails\n> $ TESTS='test_setup multirangetypes rangetypes' meson test --suite\n> regres -> fails\n> $ TESTS='test_setup rangetypes multirangetypes' meson test --suite\n> regress -> passes (0.91s)\n>\n> Any feedback would be appreciated.\n>\n\nhi. Thanks for your work!\nI do find some issues.\n\n\nTESTS=\"copy jsonb jsonb\" meson test --suite regress\none will fail. not sure this is expected?\n\nin [1] you mentioned \"setup\", but that \"setup\" is more or less like\n\"meson test --suite setup --suite regress\"\nbut originally, I thought was about \"src/test/regress/sql/test_setup.sql\".\nfor example, now you cannot run src/test/regress/sql/stats_ext.sql\nwithout first running test_setup.sql, because some functions (like fipshash)\nlive in test_setup.sql.\n\nso\nTESTS=\"copy jsonb stats_ext\" meson test --suite regress\nwill fail.\n\nto make it work we need change it to\nTESTS=\"test_setup copy jsonb stats_ext\" meson test --suite regress\n\nMany tests depend on test_setup.sql, maybe we can implicitly prepend it.\nAnother dependency issue. alter_table depending on create_index.\n\nTESTS=\"test_setup alter_table\" meson test --suite regress\nwill fail.\nTESTS=\"test_setup create_index alter_table\" meson test --suite regress\nwill work.\n\n\n[1] https://www.postgresql.org/message-id/CAN55FZ3t%2BeDgKtsDoyi0UYwzbMkKDfqJgvsbamar9CvY_6qWPw%40mail.gmail.com\n\n\n", "msg_date": "Sat, 21 Sep 2024 14:01:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "Hi,\n\nOn Sat, 21 Sept 2024 at 09:01, jian he <[email protected]> wrote:\n>\n> hi. Thanks for your work!\n\nThank you for looking into this!\n\n> I do find some issues.\n>\n>\n> TESTS=\"copy jsonb jsonb\" meson test --suite regress\n> one will fail. not sure this is expected?\n\nYes, that is expected.\n\n> in [1] you mentioned \"setup\", but that \"setup\" is more or less like\n> \"meson test --suite setup --suite regress\"\n> but originally, I thought was about \"src/test/regress/sql/test_setup.sql\".\n> for example, now you cannot run src/test/regress/sql/stats_ext.sql\n> without first running test_setup.sql, because some functions (like fipshash)\n> live in test_setup.sql.\n\nPostgres binaries are created at the build step in the make builds so\nthese binaries can be used for the tests. But in the meson builds,\nthey are created at the 'meson test --suite setup' for the testing\n('meson install' command creates binaries as well but they are not for\ntesting, they are for installing binaries to the OS). 
So, 'meson test\n--suite setup' should be run before running regression tests.\n\n> so\n> TESTS=\"copy jsonb stats_ext\" meson test --suite regress\n> will fail.\n>\n> to make it work we need change it to\n> TESTS=\"test_setup copy jsonb stats_ext\" meson test --suite regress\n>\n> Many tests depend on test_setup.sql, maybe we can implicitly prepend it.\n> Another dependency issue. alter_table depending on create_index.\n>\n> TESTS=\"test_setup alter_table\" meson test --suite regress\n> will fail.\n> TESTS=\"test_setup create_index alter_table\" meson test --suite regress\n> will work.\n\nYes, I realized that but since that is how it is done in the make\nbuilds, I didn't want to change the behaviour. Also, I think it makes\nsense to leave it to the tester. It is more flexible in that way.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 23 Sep 2024 11:46:47 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "Hi,\n\nOn Mon, 23 Sept 2024 at 11:46, Nazir Bilal Yavuz <[email protected]> wrote:\n> On Sat, 21 Sept 2024 at 09:01, jian he <[email protected]> wrote:\n> > in [1] you mentioned \"setup\", but that \"setup\" is more or less like\n> > \"meson test --suite setup --suite regress\"\n> > but originally, I thought was about \"src/test/regress/sql/test_setup.sql\".\n> > for example, now you cannot run src/test/regress/sql/stats_ext.sql\n> > without first running test_setup.sql, because some functions (like fipshash)\n> > live in test_setup.sql.\n>\n> Postgres binaries are created at the build step in the make builds so\n> these binaries can be used for the tests. But in the meson builds,\n> they are created at the 'meson test --suite setup' for the testing\n> ('meson install' command creates binaries as well but they are not for\n> testing, they are for installing binaries to the OS). So, 'meson test\n> --suite setup' should be run before running regression tests.\n\nThe above sentence lacks some information. It appears that if binaries\nare not created beforehand (only running configure, not make), they\nare generated during tests in the make builds. This also applies to\nmeson builds when the meson test command is run, as meson executes\nsetup suite tests first, which creates the binaries. However, if we\nspecify a different test suite, like regress in this case, the setup\nsuite tests are not executed, and the binaries are not created,\npreventing the tests from running. I am not sure how to configure\nmeson builds to run setup suite tests if they are not executed\nbeforehand.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 23 Sep 2024 12:25:58 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "On Mon, Sep 23, 2024 at 2:16 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> On Sat, 21 Sept 2024 at 09:01, jian he <[email protected]> wrote:\n> >\n> > hi. Thanks for your work!\n>\n> Thank you for looking into this!\n>\n> > I do find some issues.\n> >\n> >\n> > TESTS=\"copy jsonb jsonb\" meson test --suite regress\n> > one will fail. not sure this is expected?\n>\n> Yes, that is expected.\n\nAgreed. 
It's the same behaviour with `make check-tests`\nTESTS=\"copy jsonb jsonb\" make check-tests\n\n# initializing database system by copying initdb template\n# using temp instance on port 65312 with PID 880081\nok 1 - copy 51 ms\nok 2 - jsonb 143 ms\nnot ok 3 - jsonb 179 ms\n1..3\n# 1 of 3 tests failed.\n\n>\n> > so\n> > TESTS=\"copy jsonb stats_ext\" meson test --suite regress\n> > will fail.\n> >\n> > to make it work we need change it to\n> > TESTS=\"test_setup copy jsonb stats_ext\" meson test --suite regress\n> >\n> > Many tests depend on test_setup.sql, maybe we can implicitly prepend it.\n> > Another dependency issue. alter_table depending on create_index.\n> >\n> > TESTS=\"test_setup alter_table\" meson test --suite regress\n> > will fail.\n> > TESTS=\"test_setup create_index alter_table\" meson test --suite regress\n> > will work.\n>\n> Yes, I realized that but since that is how it is done in the make\n> builds, I didn't want to change the behaviour. Also, I think it makes\n> sense to leave it to the tester. It is more flexible in that way.\n\nSince meson has a setup suite already, it might have been a good idea\nto do as Jian suggested. But a. setup is also another suite and not a\nsetup step per say. b. the dependencies between regression tests are\nnot documented well or rather we don't have a way to specify which\ntest depends upon which. So we can't infer the .sql files that need to\nbe run as a setup step. Somebody could add a dependency without meson\nor make being aware of that and tests will fail again. So I think we\nhave to leave it as is. If we get to that point we should fix both\nmake as well as meson. But not as part of this exercise.\n\nIt's a bit inconvenient that we don't see whether an individual test\nfailed or succeeded on the screen; we need to open testlog.txt for the\nsame. But that's true with the regress suite generally so no\ncomplaints there.\n\nIndividual TAP tests are run using `meson test -C <build dir>\n<suite>:<test>` syntax. If we can run individual SQL tests using same\nsyntax e.g. `meson test regress:partition_join` that would make it\nconsistent with the way TAP tests are run. But we need to make sure\nthat the test later in the syntax would see the objects left behind by\nprior tests. E.g. `meson test regress:test_setup\nregress:partition_join` should see both tests passing. partition_join\nuses some tables created by test_setup, so those need to be run\nsequentially. Is that doable?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 25 Sep 2024 15:57:08 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "Hi,\n\nThanks for looking into this!\n\nOn Wed, 25 Sept 2024 at 13:27, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Mon, Sep 23, 2024 at 2:16 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > On Sat, 21 Sept 2024 at 09:01, jian he <[email protected]> wrote:\n> > >\n> > > so\n> > > TESTS=\"copy jsonb stats_ext\" meson test --suite regress\n> > > will fail.\n> > >\n> > > to make it work we need change it to\n> > > TESTS=\"test_setup copy jsonb stats_ext\" meson test --suite regress\n> > >\n> > > Many tests depend on test_setup.sql, maybe we can implicitly prepend it.\n> > > Another dependency issue. 
alter_table depending on create_index.\n> > >\n> > > TESTS=\"test_setup alter_table\" meson test --suite regress\n> > > will fail.\n> > > TESTS=\"test_setup create_index alter_table\" meson test --suite regress\n> > > will work.\n> >\n> > Yes, I realized that but since that is how it is done in the make\n> > builds, I didn't want to change the behaviour. Also, I think it makes\n> > sense to leave it to the tester. It is more flexible in that way.\n>\n> Since meson has a setup suite already, it might have been a good idea\n> to do as Jian suggested. But a. setup is also another suite and not a\n> setup step per say. b. the dependencies between regression tests are\n> not documented well or rather we don't have a way to specify which\n> test depends upon which. So we can't infer the .sql files that need to\n> be run as a setup step. Somebody could add a dependency without meson\n> or make being aware of that and tests will fail again. So I think we\n> have to leave it as is. If we get to that point we should fix both\n> make as well as meson. But not as part of this exercise.\n>\n> It's a bit inconvenient that we don't see whether an individual test\n> failed or succeeded on the screen; we need to open testlog.txt for the\n> same. But that's true with the regress suite generally so no\n> complaints there.\n\nThanks for sharing your thoughts.\n\n> Individual TAP tests are run using `meson test -C <build dir>\n> <suite>:<test>` syntax. If we can run individual SQL tests using same\n> syntax e.g. `meson test regress:partition_join` that would make it\n> consistent with the way TAP tests are run. But we need to make sure\n> that the test later in the syntax would see the objects left behind by\n> prior tests. E.g. `meson test regress:test_setup\n> regress:partition_join` should see both tests passing. partition_join\n> uses some tables created by test_setup, so those need to be run\n> sequentially. Is that doable?\n\nI think that makes sense, but it is not easily achievable right now.\nThe difference between TAP tests and regress/regress tests is that TAP\ntests are registered individually, whereas regress/regress tests are\nregistered as one (with the --schedule option). This means we need to\nregister these tests one by one (instead of passing them with the\n--schedule option) to the Meson build system in order to run them as\n'meson test <test_group>:<test>'.\n\nAdditionally, the patch I shared earlier was only for regress/regress\ntests. From what I understand from here [1], only regress/regress\ntests support 'make check-tests', so the patch seems correct. I\nexperimented with how we can implement something similar for other\ntypes of tests, including other regression, isolation, and ECPG tests.\nThe attached patch works for all types of tests but only for the Meson\nbuilds. For example you can run:\n\n$ meson test --suite setup\n$ TESTS='check check_btree' meson test amcheck/regress\n$ TESTS='ddl stream' meson test test_decoding/regress\n$ TESTS='oldest_xmin skip_snapshot_restore' meson test test_decoding/isolation\n$ TESTS='sql/prepareas compat_informix/dec_test' meson test ecpg/ecpg\n\nWhat do you think about that behaviour? 
It is different from 'make\ncheck-tests' but it looks useful to me.\n\n[1] https://www.postgresql.org/message-id/1364.1717305911%40sss.pgh.pa.us\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Wed, 25 Sep 2024 17:54:10 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "On Wed, Sep 25, 2024 at 8:24 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for looking into this!\n>\n> On Wed, 25 Sept 2024 at 13:27, Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Mon, Sep 23, 2024 at 2:16 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> > >\n> > > On Sat, 21 Sept 2024 at 09:01, jian he <[email protected]> wrote:\n> > > >\n> > > > so\n> > > > TESTS=\"copy jsonb stats_ext\" meson test --suite regress\n> > > > will fail.\n> > > >\n> > > > to make it work we need change it to\n> > > > TESTS=\"test_setup copy jsonb stats_ext\" meson test --suite regress\n> > > >\n> > > > Many tests depend on test_setup.sql, maybe we can implicitly prepend it.\n> > > > Another dependency issue. alter_table depending on create_index.\n> > > >\n> > > > TESTS=\"test_setup alter_table\" meson test --suite regress\n> > > > will fail.\n> > > > TESTS=\"test_setup create_index alter_table\" meson test --suite regress\n> > > > will work.\n> > >\n> > > Yes, I realized that but since that is how it is done in the make\n> > > builds, I didn't want to change the behaviour. Also, I think it makes\n> > > sense to leave it to the tester. It is more flexible in that way.\n> >\n> > Since meson has a setup suite already, it might have been a good idea\n> > to do as Jian suggested. But a. setup is also another suite and not a\n> > setup step per say. b. the dependencies between regression tests are\n> > not documented well or rather we don't have a way to specify which\n> > test depends upon which. So we can't infer the .sql files that need to\n> > be run as a setup step. Somebody could add a dependency without meson\n> > or make being aware of that and tests will fail again. So I think we\n> > have to leave it as is. If we get to that point we should fix both\n> > make as well as meson. But not as part of this exercise.\n> >\n> > It's a bit inconvenient that we don't see whether an individual test\n> > failed or succeeded on the screen; we need to open testlog.txt for the\n> > same. But that's true with the regress suite generally so no\n> > complaints there.\n>\n> Thanks for sharing your thoughts.\n>\n> > Individual TAP tests are run using `meson test -C <build dir>\n> > <suite>:<test>` syntax. If we can run individual SQL tests using same\n> > syntax e.g. `meson test regress:partition_join` that would make it\n> > consistent with the way TAP tests are run. But we need to make sure\n> > that the test later in the syntax would see the objects left behind by\n> > prior tests. E.g. `meson test regress:test_setup\n> > regress:partition_join` should see both tests passing. partition_join\n> > uses some tables created by test_setup, so those need to be run\n> > sequentially. Is that doable?\n>\n> I think that makes sense, but it is not easily achievable right now.\n> The difference between TAP tests and regress/regress tests is that TAP\n> tests are registered individually, whereas regress/regress tests are\n> registered as one (with the --schedule option). 
This means we need to\n> register these tests one by one (instead of passing them with the\n> --schedule option) to the Meson build system in order to run them as\n> 'meson test <test_group>:<test>'.\n\nI understand. Probably that's a shortcoming in the way our meson\nsupport is designed. I don't see a way to fix it without changing a\nlot. So I guess the current interface is good enough.\n\n>\n> Additionally, the patch I shared earlier was only for regress/regress\n> tests. From what I understand from here [1], only regress/regress\n> tests support 'make check-tests', so the patch seems correct. I\n> experimented with how we can implement something similar for other\n> types of tests, including other regression, isolation, and ECPG tests.\n> The attached patch works for all types of tests but only for the Meson\n> builds. For example you can run:\n>\n> $ meson test --suite setup\n> $ TESTS='check check_btree' meson test amcheck/regress\n> $ TESTS='ddl stream' meson test test_decoding/regress\n> $ TESTS='oldest_xmin skip_snapshot_restore' meson test test_decoding/isolation\n> $ TESTS='sql/prepareas compat_informix/dec_test' meson test ecpg/ecpg\n>\n> What do you think about that behaviour? It is different from 'make\n> check-tests' but it looks useful to me.\n\nI think that would be a good enhancement, if a particular regression\nset takes longer e.g. the one in test_decoding takes a few seconds.\nWhen we worked on PG_TEST_EXTRA, it was advised to keep feature parity\nbetween meson and make. I guess, a similar advice applies here as well\nand we will have to change make to support these options. But that\nwill be more work.\n\nLet's split the patch into two 1. supporting TESTS in meson only for\nregress/regress, 2. extending that support to other suites. The first\npatch will bring meson inline with make as far as running a subset of\nregression tests is concerned and can be committed separately. We will\nseek opinions on the second patch and commit it separately if it takes\ntime. It will be good to see the support for running a subset of\nregression in meson ASAP so that developers can save time running\nentire regression suite when not needed. The second one will be an\nadditional feature that can wait if it takes more time to add it to\nboth meson and make.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 26 Sep 2024 11:15:06 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson and check-tests" }, { "msg_contents": "Hi,\n\nOn Thu, 26 Sept 2024 at 08:45, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Wed, Sep 25, 2024 at 8:24 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > Additionally, the patch I shared earlier was only for regress/regress\n> > tests. From what I understand from here [1], only regress/regress\n> > tests support 'make check-tests', so the patch seems correct. I\n> > experimented with how we can implement something similar for other\n> > types of tests, including other regression, isolation, and ECPG tests.\n> > The attached patch works for all types of tests but only for the Meson\n> > builds. For example you can run:\n> >\n> > $ meson test --suite setup\n> > $ TESTS='check check_btree' meson test amcheck/regress\n> > $ TESTS='ddl stream' meson test test_decoding/regress\n> > $ TESTS='oldest_xmin skip_snapshot_restore' meson test test_decoding/isolation\n> > $ TESTS='sql/prepareas compat_informix/dec_test' meson test ecpg/ecpg\n> >\n> > What do you think about that behaviour? 
It is different from 'make\n> > check-tests' but it looks useful to me.\n>\n> I think that would be a good enhancement, if a particular regression\n> set takes longer e.g. the one in test_decoding takes a few seconds.\n> When we worked on PG_TEST_EXTRA, it was advised to keep feature parity\n> between meson and make. I guess, a similar advice applies here as well\n> and we will have to change make to support these options. But that\n> will be more work.\n>\n> Let's split the patch into two 1. supporting TESTS in meson only for\n> regress/regress, 2. extending that support to other suites. The first\n> patch will bring meson inline with make as far as running a subset of\n> regression tests is concerned and can be committed separately. We will\n> seek opinions on the second patch and commit it separately if it takes\n> time. It will be good to see the support for running a subset of\n> regression in meson ASAP so that developers can save time running\n> entire regression suite when not needed. The second one will be an\n> additional feature that can wait if it takes more time to add it to\n> both meson and make.\n\nI agree with you. I splitted the patch into two like you said.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 26 Sep 2024 13:43:50 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson and check-tests" } ]
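To recap the usage pattern that emerges from the discussion above (a sketch assuming the proposed TESTS support for meson is applied; the commands follow the examples given in the thread): the setup suite has to run first so that the test binaries exist, and any tests that a target test depends on must be listed explicitly by the tester, exactly as with make check-tests.

    # build the temporary installation / test binaries first
    meson test --suite setup

    # run a subset of the core regression tests; dependencies such as
    # test_setup and create_index have to be spelled out explicitly
    TESTS="test_setup create_index alter_table" meson test --suite regress

    # equivalent invocation for the make build, for comparison
    TESTS="test_setup create_index alter_table" make check-tests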
[ { "msg_contents": "Hello hackers,\n\nWhen executing the following query on master branch:\n\nCREATE EXTENSION pltcl;\nCREATE or replace PROCEDURE test_proc5(INOUT a text)\n         LANGUAGE pltcl\n         AS $$\n         set aa [concat $1 \"+\" $1]\n         return [list $aa $aa])\n         $$;\n\nCALL test_proc5('abc');\nCREATE EXTENSION\nCREATE PROCEDURE\nserver closed the connection unexpectedly\n         This probably means the server terminated abnormally\n         before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nThe connection to the server was lost. Attempting reset: Failed.\n\n\nCore was generated by `postgres: postgres postgres [loca'.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-sse2.S:142\n142     ../sysdeps/x86_64/multiarch/strlen-sse2.S: No such file or \ndirectory.\n(gdb) bt\n#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-sse2.S:142\n#1  0x00007f5f1353ba6a in utf_u2e (src=0x0) at pltcl.c:77\n#2  0x00007f5f1353c9f7 in throw_tcl_error (interp=0x55ec24bdaf70, \nproname=0x55ec24b6b140 \"test_proc5\") at pltcl.c:1373\n#3  0x00007f5f1353ed64 in pltcl_func_handler \n(fcinfo=fcinfo@entry=0x7ffdbfb407a0, \ncall_state=call_state@entry=0x7ffdbfb405d0, \npltrusted=pltrusted@entry=true) at pltcl.c:1029\n#4  0x00007f5f1353ee8d in pltcl_handler (fcinfo=0x7ffdbfb407a0, \npltrusted=pltrusted@entry=true) at pltcl.c:765\n#5  0x00007f5f1353f1ef in pltcl_call_handler (fcinfo=<optimized out>) at \npltcl.c:698\n#6  0x000055ec239ec64a in ExecuteCallStmt \n(stmt=stmt@entry=0x55ec24a9a940, params=params@entry=0x0, \natomic=atomic@entry=false, dest=dest@entry=0x55ec24a6ea18) at \nfunctioncmds.c:2285\n#7  0x000055ec23c103a7 in standard_ProcessUtility (pstmt=0x55ec24a9a9d8, \nqueryString=0x55ec24a99e68 \"CALL test_proc5('abc');\", \nreadOnlyTree=<optimized out>, context=PROCESS_UTILITY_TOPLEVEL, \nparams=0x0, queryEnv=0x0, dest=0x55ec24a6ea18,\n     qc=0x7ffdbfb40f40) at utility.c:851\n#8  0x000055ec23c1081b in ProcessUtility \n(pstmt=pstmt@entry=0x55ec24a9a9d8, queryString=<optimized out>, \nreadOnlyTree=<optimized out>, \ncontext=context@entry=PROCESS_UTILITY_TOPLEVEL, params=<optimized out>, \nqueryEnv=<optimized out>,\n     dest=0x55ec24a6ea18, qc=0x7ffdbfb40f40) at utility.c:523\n#9  0x000055ec23c0e04e in PortalRunUtility \n(portal=portal@entry=0x55ec24b18108, pstmt=0x55ec24a9a9d8, \nisTopLevel=isTopLevel@entry=true, \nsetHoldSnapshot=setHoldSnapshot@entry=true, \ndest=dest@entry=0x55ec24a6ea18, qc=qc@entry=0x7ffdbfb40f40)\n     at pquery.c:1158\n#10 0x000055ec23c0e3b7 in FillPortalStore \n(portal=portal@entry=0x55ec24b18108, isTopLevel=isTopLevel@entry=true) \nat pquery.c:1031\n#11 0x000055ec23c0e6ee in PortalRun (portal=portal@entry=0x55ec24b18108, \ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, \nrun_once=run_once@entry=true, dest=dest@entry=0x55ec24a9aec8, \naltdest=altdest@entry=0x55ec24a9aec8,\n     qc=0x7ffdbfb41130) at pquery.c:763\n#12 0x000055ec23c0acca in exec_simple_query \n(query_string=query_string@entry=0x55ec24a99e68 \"CALL \ntest_proc5('abc');\") at postgres.c:1274\n#13 0x000055ec23c0caad in PostgresMain (dbname=<optimized out>, \nusername=<optimized out>) at postgres.c:4680\n#14 0x000055ec23c0687a in BackendMain (startup_data=<optimized out>, \nstartup_data_len=<optimized out>) at backend_startup.c:105\n#15 0x000055ec23b766bf in postmaster_child_launch \n(child_type=child_type@entry=B_BACKEND, 
\nstartup_data=startup_data@entry=0x7ffdbfb41354 \"\", \nstartup_data_len=startup_data_len@entry=4, \nclient_sock=client_sock@entry=0x7ffdbfb41390)\n     at launch_backend.c:265\n#16 0x000055ec23b7ab36 in BackendStartup \n(client_sock=client_sock@entry=0x7ffdbfb41390) at postmaster.c:3593\n#17 0x000055ec23b7adb0 in ServerLoop () at postmaster.c:1674\n#18 0x000055ec23b7c20c in PostmasterMain (argc=argc@entry=3, \nargv=argv@entry=0x55ec24a54030) at postmaster.c:1372\n#19 0x000055ec23aacf9f in main (argc=3, argv=0x55ec24a54030) at main.c:197\n\n\n\n", "msg_date": "Sat, 1 Jun 2024 11:36:18 +0700", "msg_from": "\"a.kozhemyakin\" <[email protected]>", "msg_from_op": true, "msg_subject": "pltcl crashes due to a syntax error" }, { "msg_contents": "\"a.kozhemyakin\" <[email protected]> writes:\n> When executing the following query on master branch:\n\n> CREATE EXTENSION pltcl;\n> CREATE or replace PROCEDURE test_proc5(INOUT a text)\n>         LANGUAGE pltcl\n>         AS $$\n>         set aa [concat $1 \"+\" $1]\n>         return [list $aa $aa])\n>         $$;\n\n> CALL test_proc5('abc');\n> CREATE EXTENSION\n> CREATE PROCEDURE\n> server closed the connection unexpectedly\n\nReplicated here. I'll look into it later if nobody beats me\nto it. Thanks for the report!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 01 Jun 2024 11:18:59 -0700", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pltcl crashes due to a syntax error" }, { "msg_contents": "I can also reproduce in Alma Linux 8.10 with tcl-devel 8.6.8\n\ngdb says:\n\n\n(gdb) p interp\n$1 = (Tcl_Interp *) 0x288a500\n(gdb) p *interp\n$2 = {resultDontUse = 0x288a6d8 \"\", freeProcDontUse = 0x0,\n errorLineDontUse = 1}\n(gdb) p msg\nNo symbol \"msg\" in current context.\n(gdb) p emsg\n$3 = 0x27ebe48 \"list element in braces followed by \\\")\\\" instead of space\"\n\n\nInvolved PG source code is:\n\n**********************************************************************\n * throw_tcl_error - ereport an error returned from the Tcl interpreter\n **********************************************************************/\n static void\n throw_tcl_error\n<https://doxygen.postgresql.org/pltcl_8c.html#a82838fc9c33c4745329af8b74a52b27f>(Tcl_Interp\n*interp, const char *proname\n<https://doxygen.postgresql.org/pg__proc_8h.html#af3197a24ed757df3d91f9c41a3622b2c>\n)\n {\n /*\n * Caution is needed here because Tcl_GetVar could overwrite the\n * interpreter result (even though it's not really supposed to), and we\n * can't control the order of evaluation of ereport arguments. 
Hence, make\n * real sure we have our own copy of the result string before invoking\n * Tcl_GetVar.\n */\n char *emsg;\n char *econtext;\n\n emsg = pstrdup\n<https://doxygen.postgresql.org/mcxt_8c.html#a4c9fd325849ffd3be847d985ef319c41>\n(utf_u2e\n<https://doxygen.postgresql.org/pltcl_8c.html#aefb17648f789247bf5cfc5a9cd53f19c>\n(Tcl_GetStringResult(interp)));\n econtext = utf_u2e\n<https://doxygen.postgresql.org/pltcl_8c.html#aefb17648f789247bf5cfc5a9cd53f19c>(Tcl_GetVar(interp,\n\"errorInfo\", TCL_GLOBAL_ONLY));\n ereport\n<https://doxygen.postgresql.org/elog_8h.html#ae15bca8edf22ffe24b23e89348568b7c>\n(ERROR\n<https://doxygen.postgresql.org/elog_8h.html#a8fe83ac76edc595f6b98cd4a4127aed5>\n,\n (errcode\n<https://doxygen.postgresql.org/elog_8c.html#ab243d4465b39d615088449de076a217d>\n(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),\n errmsg\n<https://doxygen.postgresql.org/elog_8c.html#ac15cbe2e9b7f41f40df622ac31521987>\n(\"%s\", emsg),\n errcontext\n<https://doxygen.postgresql.org/elog_8h.html#a26722e60709391a7cdd6e1881a220336>\n(\"%s\\nin PL/Tcl function \\\"%s\\\"\",\n econtext, proname\n<https://doxygen.postgresql.org/pg__proc_8h.html#af3197a24ed757df3d91f9c41a3622b2c>\n)));\n }\n\nI understand that Tcl_GetVar should not be used any more and should be\nreplaced by Tcl_GetStringResult\n(but I know nothing about Tcl internals)\n\nFollowing patch :\ndiff postgres/src/pl/tcl/pltcl.c.orig postgres/src/pl/tcl/pltcl.c\n1373c1373,1376\n< econtext = utf_u2e(Tcl_GetVar(interp, \"errorInfo\",\nTCL_GLOBAL_ONLY));\n---\n> /*\n> * econtext = utf_u2e(Tcl_GetVar(interp, \"errorInfo\",\nTCL_GLOBAL_ONLY));\n> */\n> econtext = utf_u2e(Tcl_GetStringResult(interp));\n\ngives:\n\npierre=# CREATE OR REPLACE PROCEDURE test_proc(INOUT a text)\n AS $$\n set aa [concat $1 \"+\" $1]\n return [list $aa $aa])\n $$\n LANGUAGE pltcl;\nCREATE PROCEDURE\npierre=# CALL test_proc('abc');\n2024-06-02 14:22:45.223 CEST [61649] ERROR: list element in braces\nfollowed by \")\" instead of space\n2024-06-02 14:22:45.223 CEST [61649] CONTEXT: list element in braces\nfollowed by \")\" instead of space\nin PL/Tcl function \"test_proc\"\n2024-06-02 14:22:45.223 CEST [61649] STATEMENT: CALL test_proc('abc');\nERROR: list element in braces followed by \")\" instead of space\nCONTEXT: list element in braces followed by \")\" instead of space\nin PL/Tcl function \"test_proc\"\n\n\nPF\n\n\n\n\n\nLe sam. 1 juin 2024 à 06:36, a.kozhemyakin <[email protected]> a\nécrit :\n\n> Hello hackers,\n>\n> When executing the following query on master branch:\n>\n> CREATE EXTENSION pltcl;\n> CREATE or replace PROCEDURE test_proc5(INOUT a text)\n> LANGUAGE pltcl\n> AS $$\n> set aa [concat $1 \"+\" $1]\n> return [list $aa $aa])\n> $$;\n>\n> CALL test_proc5('abc');\n> CREATE EXTENSION\n> CREATE PROCEDURE\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n>\n> The connection to the server was lost. 
Attempting reset: Failed.\n>\n>\n> Core was generated by `postgres: postgres postgres [loca'.\n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-sse2.S:142\n> 142 ../sysdeps/x86_64/multiarch/strlen-sse2.S: No such file or\n> directory.\n> (gdb) bt\n> #0 __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-sse2.S:142\n> #1 0x00007f5f1353ba6a in utf_u2e (src=0x0) at pltcl.c:77\n> #2 0x00007f5f1353c9f7 in throw_tcl_error (interp=0x55ec24bdaf70,\n> proname=0x55ec24b6b140 \"test_proc5\") at pltcl.c:1373\n> #3 0x00007f5f1353ed64 in pltcl_func_handler\n> (fcinfo=fcinfo@entry=0x7ffdbfb407a0,\n> call_state=call_state@entry=0x7ffdbfb405d0,\n> pltrusted=pltrusted@entry=true) at pltcl.c:1029\n> #4 0x00007f5f1353ee8d in pltcl_handler (fcinfo=0x7ffdbfb407a0,\n> pltrusted=pltrusted@entry=true) at pltcl.c:765\n> #5 0x00007f5f1353f1ef in pltcl_call_handler (fcinfo=<optimized out>) at\n> pltcl.c:698\n> #6 0x000055ec239ec64a in ExecuteCallStmt\n> (stmt=stmt@entry=0x55ec24a9a940, params=params@entry=0x0,\n> atomic=atomic@entry=false, dest=dest@entry=0x55ec24a6ea18) at\n> functioncmds.c:2285\n> #7 0x000055ec23c103a7 in standard_ProcessUtility (pstmt=0x55ec24a9a9d8,\n> queryString=0x55ec24a99e68 \"CALL test_proc5('abc');\",\n> readOnlyTree=<optimized out>, context=PROCESS_UTILITY_TOPLEVEL,\n> params=0x0, queryEnv=0x0, dest=0x55ec24a6ea18,\n> qc=0x7ffdbfb40f40) at utility.c:851\n> #8 0x000055ec23c1081b in ProcessUtility\n> (pstmt=pstmt@entry=0x55ec24a9a9d8, queryString=<optimized out>,\n> readOnlyTree=<optimized out>,\n> context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=<optimized out>,\n> queryEnv=<optimized out>,\n> dest=0x55ec24a6ea18, qc=0x7ffdbfb40f40) at utility.c:523\n> #9 0x000055ec23c0e04e in PortalRunUtility\n> (portal=portal@entry=0x55ec24b18108, pstmt=0x55ec24a9a9d8,\n> isTopLevel=isTopLevel@entry=true,\n> setHoldSnapshot=setHoldSnapshot@entry=true,\n> dest=dest@entry=0x55ec24a6ea18, qc=qc@entry=0x7ffdbfb40f40)\n> at pquery.c:1158\n> #10 0x000055ec23c0e3b7 in FillPortalStore\n> (portal=portal@entry=0x55ec24b18108, isTopLevel=isTopLevel@entry=true)\n> at pquery.c:1031\n> #11 0x000055ec23c0e6ee in PortalRun (portal=portal@entry=0x55ec24b18108,\n> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\n> run_once=run_once@entry=true, dest=dest@entry=0x55ec24a9aec8,\n> altdest=altdest@entry=0x55ec24a9aec8,\n> qc=0x7ffdbfb41130) at pquery.c:763\n> #12 0x000055ec23c0acca in exec_simple_query\n> (query_string=query_string@entry=0x55ec24a99e68 \"CALL\n> test_proc5('abc');\") at postgres.c:1274\n> #13 0x000055ec23c0caad in PostgresMain (dbname=<optimized out>,\n> username=<optimized out>) at postgres.c:4680\n> #14 0x000055ec23c0687a in BackendMain (startup_data=<optimized out>,\n> startup_data_len=<optimized out>) at backend_startup.c:105\n> #15 0x000055ec23b766bf in postmaster_child_launch\n> (child_type=child_type@entry=B_BACKEND,\n> startup_data=startup_data@entry=0x7ffdbfb41354 \"\",\n> startup_data_len=startup_data_len@entry=4,\n> client_sock=client_sock@entry=0x7ffdbfb41390)\n> at launch_backend.c:265\n> #16 0x000055ec23b7ab36 in BackendStartup\n> (client_sock=client_sock@entry=0x7ffdbfb41390) at postmaster.c:3593\n> #17 0x000055ec23b7adb0 in ServerLoop () at postmaster.c:1674\n> #18 0x000055ec23b7c20c in PostmasterMain (argc=argc@entry=3,\n> argv=argv@entry=0x55ec24a54030) at postmaster.c:1372\n> #19 0x000055ec23aacf9f in main (argc=3, argv=0x55ec24a54030) at main.c:197\n>\n>\n>\n>\n\nI can also reproduce 
in Alma Linux 8.10 with tcl-devel 8.6.8gdb says:(gdb) p interp$1 = (Tcl_Interp *) 0x288a500(gdb) p *interp$2 = {resultDontUse = 0x288a6d8 \"\", freeProcDontUse = 0x0,   errorLineDontUse = 1}(gdb) p msgNo symbol \"msg\" in current context.(gdb) p emsg$3 = 0x27ebe48 \"list element in braces followed by \\\")\\\" instead of space\"Involved PG source code is:**********************************************************************\n  * throw_tcl_error - ereport an error returned from the Tcl interpreter\n  **********************************************************************/\n static void\n throw_tcl_error(Tcl_Interp *interp, const char *proname)\n {\n  /*\n  * Caution is needed here because Tcl_GetVar could overwrite the\n  * interpreter result (even though it's not really supposed to), and we\n  * can't control the order of evaluation of ereport arguments. Hence, make\n  * real sure we have our own copy of the result string before invoking\n  * Tcl_GetVar.\n  */\n  char *emsg;\n  char *econtext;\n  \n  emsg = pstrdup(utf_u2e(Tcl_GetStringResult(interp)));\n  econtext = utf_u2e(Tcl_GetVar(interp, \"errorInfo\", TCL_GLOBAL_ONLY));\n  ereport(ERROR,\n  (errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),\n  errmsg(\"%s\", emsg),\n  errcontext(\"%s\\nin PL/Tcl function \\\"%s\\\"\",\n  econtext, proname)));\n }I understand that Tcl_GetVar should not be used any more and should be replaced by Tcl_GetStringResult (but I know nothing about Tcl internals)Following patch :diff postgres/src/pl/tcl/pltcl.c.orig postgres/src/pl/tcl/pltcl.c1373c1373,1376<       econtext = utf_u2e(Tcl_GetVar(interp, \"errorInfo\", TCL_GLOBAL_ONLY));--->       /*>        * econtext = utf_u2e(Tcl_GetVar(interp, \"errorInfo\", TCL_GLOBAL_ONLY));>        */>       econtext = utf_u2e(Tcl_GetStringResult(interp));gives:pierre=# CREATE OR REPLACE PROCEDURE test_proc(INOUT a text) AS $$ set aa [concat $1 \"+\" $1] return [list $aa $aa]) $$ LANGUAGE pltcl;CREATE PROCEDUREpierre=# CALL test_proc('abc');2024-06-02 14:22:45.223 CEST [61649] ERROR:  list element in braces followed by \")\" instead of space2024-06-02 14:22:45.223 CEST [61649] CONTEXT:  list element in braces followed by \")\" instead of space\tin PL/Tcl function \"test_proc\"2024-06-02 14:22:45.223 CEST [61649] STATEMENT:  CALL test_proc('abc');ERROR:  list element in braces followed by \")\" instead of spaceCONTEXT:  list element in braces followed by \")\" instead of spacein PL/Tcl function \"test_proc\"PFLe sam. 1 juin 2024 à 06:36, a.kozhemyakin <[email protected]> a écrit :Hello hackers,\n\nWhen executing the following query on master branch:\n\nCREATE EXTENSION pltcl;\nCREATE or replace PROCEDURE test_proc5(INOUT a text)\n         LANGUAGE pltcl\n         AS $$\n         set aa [concat $1 \"+\" $1]\n         return [list $aa $aa])\n         $$;\n\nCALL test_proc5('abc');\nCREATE EXTENSION\nCREATE PROCEDURE\nserver closed the connection unexpectedly\n         This probably means the server terminated abnormally\n         before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nThe connection to the server was lost. 
Attempting reset: Failed.\n\n\nCore was generated by `postgres: postgres postgres [loca'.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-sse2.S:142\n142     ../sysdeps/x86_64/multiarch/strlen-sse2.S: No such file or \ndirectory.\n(gdb) bt\n#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-sse2.S:142\n#1  0x00007f5f1353ba6a in utf_u2e (src=0x0) at pltcl.c:77\n#2  0x00007f5f1353c9f7 in throw_tcl_error (interp=0x55ec24bdaf70, \nproname=0x55ec24b6b140 \"test_proc5\") at pltcl.c:1373\n#3  0x00007f5f1353ed64 in pltcl_func_handler \n(fcinfo=fcinfo@entry=0x7ffdbfb407a0, \ncall_state=call_state@entry=0x7ffdbfb405d0, \npltrusted=pltrusted@entry=true) at pltcl.c:1029\n#4  0x00007f5f1353ee8d in pltcl_handler (fcinfo=0x7ffdbfb407a0, \npltrusted=pltrusted@entry=true) at pltcl.c:765\n#5  0x00007f5f1353f1ef in pltcl_call_handler (fcinfo=<optimized out>) at \npltcl.c:698\n#6  0x000055ec239ec64a in ExecuteCallStmt \n(stmt=stmt@entry=0x55ec24a9a940, params=params@entry=0x0, \natomic=atomic@entry=false, dest=dest@entry=0x55ec24a6ea18) at \nfunctioncmds.c:2285\n#7  0x000055ec23c103a7 in standard_ProcessUtility (pstmt=0x55ec24a9a9d8, \nqueryString=0x55ec24a99e68 \"CALL test_proc5('abc');\", \nreadOnlyTree=<optimized out>, context=PROCESS_UTILITY_TOPLEVEL, \nparams=0x0, queryEnv=0x0, dest=0x55ec24a6ea18,\n     qc=0x7ffdbfb40f40) at utility.c:851\n#8  0x000055ec23c1081b in ProcessUtility \n(pstmt=pstmt@entry=0x55ec24a9a9d8, queryString=<optimized out>, \nreadOnlyTree=<optimized out>, \ncontext=context@entry=PROCESS_UTILITY_TOPLEVEL, params=<optimized out>, \nqueryEnv=<optimized out>,\n     dest=0x55ec24a6ea18, qc=0x7ffdbfb40f40) at utility.c:523\n#9  0x000055ec23c0e04e in PortalRunUtility \n(portal=portal@entry=0x55ec24b18108, pstmt=0x55ec24a9a9d8, \nisTopLevel=isTopLevel@entry=true, \nsetHoldSnapshot=setHoldSnapshot@entry=true, \ndest=dest@entry=0x55ec24a6ea18, qc=qc@entry=0x7ffdbfb40f40)\n     at pquery.c:1158\n#10 0x000055ec23c0e3b7 in FillPortalStore \n(portal=portal@entry=0x55ec24b18108, isTopLevel=isTopLevel@entry=true) \nat pquery.c:1031\n#11 0x000055ec23c0e6ee in PortalRun (portal=portal@entry=0x55ec24b18108, \ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, \nrun_once=run_once@entry=true, dest=dest@entry=0x55ec24a9aec8, \naltdest=altdest@entry=0x55ec24a9aec8,\n     qc=0x7ffdbfb41130) at pquery.c:763\n#12 0x000055ec23c0acca in exec_simple_query \n(query_string=query_string@entry=0x55ec24a99e68 \"CALL \ntest_proc5('abc');\") at postgres.c:1274\n#13 0x000055ec23c0caad in PostgresMain (dbname=<optimized out>, \nusername=<optimized out>) at postgres.c:4680\n#14 0x000055ec23c0687a in BackendMain (startup_data=<optimized out>, \nstartup_data_len=<optimized out>) at backend_startup.c:105\n#15 0x000055ec23b766bf in postmaster_child_launch \n(child_type=child_type@entry=B_BACKEND, \nstartup_data=startup_data@entry=0x7ffdbfb41354 \"\", \nstartup_data_len=startup_data_len@entry=4, \nclient_sock=client_sock@entry=0x7ffdbfb41390)\n     at launch_backend.c:265\n#16 0x000055ec23b7ab36 in BackendStartup \n(client_sock=client_sock@entry=0x7ffdbfb41390) at postmaster.c:3593\n#17 0x000055ec23b7adb0 in ServerLoop () at postmaster.c:1674\n#18 0x000055ec23b7c20c in PostmasterMain (argc=argc@entry=3, \nargv=argv@entry=0x55ec24a54030) at postmaster.c:1372\n#19 0x000055ec23aacf9f in main (argc=3, argv=0x55ec24a54030) at main.c:197", "msg_date": "Sun, 2 Jun 2024 14:32:55 +0200", "msg_from": "Pierre Forstmann <[email 
protected]>", "msg_from_op": false, "msg_subject": "Re: pltcl crashes due to a syntax error" }, { "msg_contents": "On 2024-06-02 14:32 +0200, Pierre Forstmann wrote:\n> I understand that Tcl_GetVar should not be used any more and should be\n> replaced by Tcl_GetStringResult\n> (but I know nothing about Tcl internals)\n> \n> Following patch :\n> diff postgres/src/pl/tcl/pltcl.c.orig postgres/src/pl/tcl/pltcl.c\n> 1373c1373,1376\n> < econtext = utf_u2e(Tcl_GetVar(interp, \"errorInfo\",\n> TCL_GLOBAL_ONLY));\n> ---\n> > /*\n> > * econtext = utf_u2e(Tcl_GetVar(interp, \"errorInfo\",\n> TCL_GLOBAL_ONLY));\n> > */\n> > econtext = utf_u2e(Tcl_GetStringResult(interp));\n> \n> gives:\n> \n> pierre=# CREATE OR REPLACE PROCEDURE test_proc(INOUT a text)\n> AS $$\n> set aa [concat $1 \"+\" $1]\n> return [list $aa $aa])\n> $$\n> LANGUAGE pltcl;\n> CREATE PROCEDURE\n> pierre=# CALL test_proc('abc');\n> 2024-06-02 14:22:45.223 CEST [61649] ERROR: list element in braces\n> followed by \")\" instead of space\n> 2024-06-02 14:22:45.223 CEST [61649] CONTEXT: list element in braces\n> followed by \")\" instead of space\n> in PL/Tcl function \"test_proc\"\n> 2024-06-02 14:22:45.223 CEST [61649] STATEMENT: CALL test_proc('abc');\n> ERROR: list element in braces followed by \")\" instead of space\n> CONTEXT: list element in braces followed by \")\" instead of space\n> in PL/Tcl function \"test_proc\"\n\nTcl_GetStringResult is already used for emsg. Setting econtext to same\nstring is rather pointless. The problem is that Tcl_ListObjGetElements\ndoes not set errorInfo if conversion fails. From the manpage:\n\n\"If listPtr is not already a list value, Tcl_ListObjGetElements will\n attempt to convert it to one; if the conversion fails, it returns\n TCL_ERROR and leaves an error message in the interpreter's result value\n if interp is not NULL.\"\n\nTcl_GetVar returns null if errorInfo does not exist. Omitting econtext\nfrom errcontext in that case looks like the proper fix to me.\n\nOr just do away with throw_tcl_error and call ereport directly. Compare\nthat to pltcl_trigger_handler where the same case is handled like this:\n\n /************************************************************\n * Otherwise, the return value should be a column name/value list\n * specifying the modified tuple to return.\n ************************************************************/\n if (Tcl_ListObjGetElements(interp, Tcl_GetObjResult(interp),\n \t\t\t\t\t\t &result_Objc, &result_Objv) != TCL_OK)\n \tereport(ERROR,\n \t\t\t(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),\n \t\t\t errmsg(\"could not split return value from trigger: %s\",\n \t\t\t\t\tutf_u2e(Tcl_GetStringResult(interp)))));\n\n-- \nErik\n\n\n", "msg_date": "Sun, 2 Jun 2024 22:44:25 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pltcl crashes due to a syntax error" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> Tcl_GetVar returns null if errorInfo does not exist. Omitting econtext\n> from errcontext in that case looks like the proper fix to me.\n\nYeah, that was the conclusion I came to last night while sitting in\nthe airport, but I didn't have time to prepare a cleaned-up patch.\n\nThe new bit of information that this bug report provides is that it's\npossible to get a TCL_ERROR result without Tcl having set errorInfo.\nThat seems a tad odd, and it must happen only in weird corner cases,\nelse we'd have heard of this decades ago. 
Not sure if it's worth\ntrying to characterize those cases further, however.\n\n> Or just do away with throw_tcl_error and call ereport directly.\n\nI'd say this adds to the importance of having throw_tcl_error,\nbecause now it's even more complex than before, and there are\nmultiple call sites.\n\n> Compare\n> that to pltcl_trigger_handler where the same case is handled like this:\n\nHm, I wonder why that's not using throw_tcl_error. I guess because\nit wants to give its own primary message, but still ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Jun 2024 18:15:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pltcl crashes due to a syntax error" }, { "msg_contents": "On 2024-06-03 00:15 +0200, Tom Lane wrote:\n> The new bit of information that this bug report provides is that it's\n> possible to get a TCL_ERROR result without Tcl having set errorInfo.\n> That seems a tad odd, and it must happen only in weird corner cases,\n> else we'd have heard of this decades ago. Not sure if it's worth\n> trying to characterize those cases further, however.\n\nISTM that errorInfo is set automatically only during script evaluation.\nThe Tcl_AddErrorInfo manpage says:\n\n\"The -errorinfo option value is gradually built up as an error unwinds\n through the nested operations. Each time an error code is returned to\n Tcl_Eval, or any of the routines that performs script evaluation, the\n procedure Tcl_AddErrorInfo is called to add additional text to the\n -errorinfo value describing the command that was being executed when\n the error occurred. By the time the error has been passed all the way\n back to the application, it will contain a complete trace of the\n activity in progress when the error occurred.\"\n\nTcl 8.4 basically uses the same wording.\n\nExcept for the reported case, we only call throw_tcl_error in three\nplaces, all after checking the return code from Tcl_EvalObjEx. And this\none Tcl_ListObjGetElements instance is not called during script\nevaluation.\n\n> > Or just do away with throw_tcl_error and call ereport directly.\n> \n> I'd say this adds to the importance of having throw_tcl_error,\n> because now it's even more complex than before, and there are\n> multiple call sites.\n\nI agree to have some uniform error handling. But from the current usage\nit looks as if throw_tcl_error is tied to Tcl_EvalObjEx.\n\n-- \nErik\n\n\n", "msg_date": "Mon, 3 Jun 2024 18:16:48 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pltcl crashes due to a syntax error" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-06-03 00:15 +0200, Tom Lane wrote:\n>> The new bit of information that this bug report provides is that it's\n>> possible to get a TCL_ERROR result without Tcl having set errorInfo.\n>> That seems a tad odd, and it must happen only in weird corner cases,\n>> else we'd have heard of this decades ago. Not sure if it's worth\n>> trying to characterize those cases further, however.\n\n> ISTM that errorInfo is set automatically only during script evaluation.\n\nYeah, I've just come to the same conclusion. Changing throw_tcl_error\nto ignore errorInfo if it's unset would be wrong, because that implies\nthat the function we called doesn't fill errorInfo. 
I found by\ntesting that it's actually possible that errorInfo contains leftover\ntext from a previous error (that might not even have been in the same\nfunction), resulting in completely confusing/misleading output.\n\nSo your thought that we should just not use throw_tcl_error here\nwas correct, and a minimal fix could look like the attached.\n\nI thought about going further and creating a different function that\ncould be used in such cases, but we seem to have only the two\nTcl_ListObjGetElements() calls that could use it, so for now I think\nit's not worth the trouble.\n\nAlso, compile_pltcl_function contains a Tcl_EvalEx() call that\npresumably could use throw_tcl_error, except it wants to add \"could\nnot create internal procedure\" which would require some refactoring.\nAs far as I can tell that error case is not reproducibly reachable,\nas it'd require OOM or some other problem inside Tcl, so (a) it's\nprobably not worth troubling over and (b) changing it is a bit scary\nfor lack of ability to test. I'm inclined to leave that alone too.\n\nThe other thing I noticed while looking at this is that the error text\nfor the other Tcl_ListObjGetElements() call seems a bit confusingly\nworded: \"could not split return value from trigger: %s\". You could\neasily read that as suggesting that the return value is somehow\nattached to the trigger and has to be separated from it. I'm\ntempted to suggest rephrasing it to be parallel to the new error\nI added: \"could not parse trigger return value: %s\". But I didn't\ndo that below.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 03 Jun 2024 12:57:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pltcl crashes due to a syntax error" }, { "msg_contents": "On 2024-06-03 18:57 +0200, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n> > On 2024-06-03 00:15 +0200, Tom Lane wrote:\n> >> The new bit of information that this bug report provides is that it's\n> >> possible to get a TCL_ERROR result without Tcl having set errorInfo.\n> >> That seems a tad odd, and it must happen only in weird corner cases,\n> >> else we'd have heard of this decades ago. Not sure if it's worth\n> >> trying to characterize those cases further, however.\n> \n> > ISTM that errorInfo is set automatically only during script evaluation.\n> \n> Yeah, I've just come to the same conclusion. Changing throw_tcl_error\n> to ignore errorInfo if it's unset would be wrong, because that implies\n> that the function we called doesn't fill errorInfo. I found by\n> testing that it's actually possible that errorInfo contains leftover\n> text from a previous error (that might not even have been in the same\n> function), resulting in completely confusing/misleading output.\n> \n> So your thought that we should just not use throw_tcl_error here\n> was correct, and a minimal fix could look like the attached.\n\nLGTM.\n\n> Also, compile_pltcl_function contains a Tcl_EvalEx() call that\n> presumably could use throw_tcl_error, except it wants to add \"could\n> not create internal procedure\" which would require some refactoring.\n> As far as I can tell that error case is not reproducibly reachable,\n> as it'd require OOM or some other problem inside Tcl, so (a) it's\n> probably not worth troubling over and (b) changing it is a bit scary\n> for lack of ability to test. 
I'm inclined to leave that alone too.\n\nAgree.\n\n> The other thing I noticed while looking at this is that the error text\n> for the other Tcl_ListObjGetElements() call seems a bit confusingly\n> worded: \"could not split return value from trigger: %s\". You could\n> easily read that as suggesting that the return value is somehow\n> attached to the trigger and has to be separated from it. I'm\n> tempted to suggest rephrasing it to be parallel to the new error\n> I added: \"could not parse trigger return value: %s\". But I didn't\n> do that below.\n\nYeah, I'd fix that trigger error text as well to bring both in line.\n\n-- \nErik\n\n\n", "msg_date": "Mon, 3 Jun 2024 23:48:47 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pltcl crashes due to a syntax error" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-06-03 18:57 +0200, Tom Lane wrote:\n>> So your thought that we should just not use throw_tcl_error here\n>> was correct, and a minimal fix could look like the attached.\n\n> LGTM.\n\nPushed, thanks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Jun 2024 18:03:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pltcl crashes due to a syntax error" } ]
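For readers following this thread, a rough sketch of the shape of the minimal fix discussed above: when Tcl_ListObjGetElements() fails, errorInfo has not been filled in (it is only maintained during script evaluation), so instead of going through throw_tcl_error the handler reports the interpreter result directly, mirroring the trigger-handler snippet quoted earlier. The variable names, error code, and message wording below are illustrative and may differ from what was actually committed.

     /* in pltcl_func_handler: parse the procedure's Tcl result as a list */
     if (Tcl_ListObjGetElements(interp, Tcl_GetObjResult(interp),
                                &result_Objc, &result_Objv) != TCL_OK)
         ereport(ERROR,
                 (errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
                  /* use the interpreter result, not the (unset) errorInfo */
                  errmsg("could not parse function return value: %s",
                         utf_u2e(Tcl_GetStringResult(interp)))));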
[ { "msg_contents": "Hi,\n\nSee attached for a small patch fixing some typos and grammatical\nerrors in a couple of comments.\n\nSide note: It's not clear to me what \"Vars of higher levels don't\nmatter here\" means in this context (or how that claim is justified),\nbut I haven't changed that part of the comment opting to simply\nresolve the clear mistakes in the wording here.\n\nRegards,\nJames Coleman", "msg_date": "Sat, 1 Jun 2024 17:08:23 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Fix grammar oddities in comments" }, { "msg_contents": "On Sun, 2 Jun 2024 at 10:08, James Coleman <[email protected]> wrote:\n> See attached for a small patch fixing some typos and grammatical\n> errors in a couple of comments.\n\nThanks. I pushed this after messing with the comments a bit more.\n\n> Side note: It's not clear to me what \"Vars of higher levels don't\n> matter here\" means in this context (or how that claim is justified),\n> but I haven't changed that part of the comment opting to simply\n> resolve the clear mistakes in the wording here.\n\nIt just means Vars with varlevelsup >= 2 don't matter. It only cares\nabout Vars with varlevelsup==1, i.e. Vars of the sub-query's direct\nparent.\n\nDavid\n\n\n", "msg_date": "Wed, 5 Jun 2024 21:34:43 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix grammar oddities in comments" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nConfirming linguistic correctness of changes, and lack of anything changed outside of comments that would otherwise affect readiness to commit.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Sat, 08 Jun 2024 22:04:44 +0000", "msg_from": "Aaron Altman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix grammar oddities in comments" }, { "msg_contents": "On Wed, Jun 5, 2024 at 5:34 AM David Rowley <[email protected]> wrote:\n>\n> On Sun, 2 Jun 2024 at 10:08, James Coleman <[email protected]> wrote:\n> > See attached for a small patch fixing some typos and grammatical\n> > errors in a couple of comments.\n>\n> Thanks. I pushed this after messing with the comments a bit more.\n\nThanks!\n\n> > Side note: It's not clear to me what \"Vars of higher levels don't\n> > matter here\" means in this context (or how that claim is justified),\n> > but I haven't changed that part of the comment opting to simply\n> > resolve the clear mistakes in the wording here.\n>\n> It just means Vars with varlevelsup >= 2 don't matter. It only cares\n> about Vars with varlevelsup==1, i.e. Vars of the sub-query's direct\n> parent.\n\nYes, I understood the content, but I didn't see any justification\nprovided, which is what I'd hope for in a comment like this (why not\nsimply what).\n\nAnyway, thanks again for reviewing and committing.\n\nJames Coleman\n\n\n", "msg_date": "Mon, 10 Jun 2024 08:28:28 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix grammar oddities in comments" } ]
[ { "msg_contents": "Writing the sql migration scripts that are run by CREATE EXTENSION and\nALTER EXTENSION UPDATE are security minefields for extension authors.\nOne big reason for this is that search_path is set to the schema of the\nextension while running these scripts, and thus if a user with lower\nprivileges can create functions or operators in that schema they can do\nall kinds of search_path confusion attacks if not every function and\noperator that is used in the script is schema qualified. While doing\nsuch schema qualification is possible, it relies on the author to never\nmake a mistake in any of the sql files. And sadly humans have a tendency\nto make mistakes.\n\nThis patch adds a new \"owned_schema\" option to the extension control\nfile that can be set to true to indicate that this extension wants to\nown the schema in which it is installed. What that means is that the\nschema should not exist before creating the extension, and will be\ncreated during extension creation. This thus gives the extension author\nan easy way to use a safe search_path, while still allowing all objects\nto be grouped together in a schema. The implementation also has the\npleasant side effect that the schema will be automatically dropped when\nthe extension is dropped.\n\nOne way in which certain extensions currently hack around the\nnon-existence of this feature is by using the approach that pg_cron\nuses: Setting the schema to pg_catalog and running \"CREATE SCHEMA\npg_cron\" from within the extension script. While this works, it's\nobviously a hack, and a big downside of it is that it doesn't allow\nusers to choose the schema name used by the extension.\n\nPS. I have never added fields to pg_catalag tables before, so there's\na clear TODO in the pg_upgrade code related to that. If anyone has\nsome pointers for me to look at to address that one that would be\nhelpful, if not I'll probably figure it out myself. All other code is\nin pretty finished state, although I'm considering if\nAlterExtensionNamespace should maybe be split a bit somehow, because\nowned_schema skips most of the code in that function.", "msg_date": "Sat, 1 Jun 2024 17:08:19 -0700", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Extension security improvement: Add support for extensions with an\n owned schema" }, { "msg_contents": "On Sun, Jun 2, 2024, 02:08 Jelte Fennema-Nio <[email protected]> wrote:\n> This patch adds a new \"owned_schema\" option to the extension control\n> file that can be set to true to indicate that this extension wants to\n> own the schema in which it is installed.\n\nHuge +1\n\nMany managed PostgreSQL services block superuser access, but provide a\nway for users to trigger a create/alter extension as superuser. There\nhave been various extensions whose SQL scripts can be tricked into\ncalling a function that was pre-created in the extension schema. This\nis usually done by finding an unqualified call to a pg_catalog\nfunction/operator, and overloading it with one whose arguments types\nare a closer match for the provided values, which then takes\nprecedence regardless of search_path order. The custom function can\nthen do something like \"alter user foo superuser\".\n\nThe sequence of steps assumes the user already has some kind of admin\nrole and is gaining superuser access to their own database server.\nHowever, the superuser implicitly has shell access, so it gives\nattackers an additional set of tools to poke around in the managed\nservice. 
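To make the attack pattern described above concrete (every name below is made up; this only illustrates the mechanism): suppose the extension's install script, which runs with the privileges of the user executing CREATE EXTENSION, evaluates an unqualified function call somewhere, for example

    -- fragment of a hypothetical extension script,
    -- run with search_path set to the extension schema (ext_schema)
    CREATE TABLE ext_schema.audit (relid oid, name_len int);
    INSERT INTO ext_schema.audit
        SELECT oid, length(relname) FROM pg_catalog.pg_class;

Because relname is of type name, pg_catalog.length(text) only matches through an implicit cast, so a user who merely has CREATE on ext_schema can plant an exact match ahead of time:

    -- planted by a low-privileged user before the superuser runs CREATE EXTENSION
    CREATE FUNCTION ext_schema.length(name) RETURNS int
    LANGUAGE plpgsql AS $$
    BEGIN
        ALTER ROLE attacker SUPERUSER;  -- executes with the installer's privileges
        RETURN 0;
    END $$;

The exact-type match wins regardless of schema order in search_path, so the planted function runs with superuser rights during the script. With owned_schema there is nothing to plant, because the schema cannot exist before the extension is created.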
For instance, they can change the way the machine responds to\ncontrol plane requests, which can sometimes trigger further\nescalations. In addition, many applications use the relatively\nprivileged default user, which means SQL injection issues can also\nescalate into superuser access and beyond.\n\nThere are some static analysis tools like\nhttps://github.com/timescale/pgspot that address this issue, though it\nseems like a totally unnecessary hole. Using schema = pg_catalog,\nrelocatable = false, and doing an explicit create schema (without \"if\nnot exists\") plugs the hole by effectively disabling extension\nschemas. For extensions I'm involved in, I consider this to be a hard\nrequirement.\n\nI think Jelte's solution is preferable going forward, because it\npreserves the flexibility that extension schemas were meant to\nprovide, and makes the potential hazards of reusing a schema more\nexplicit.\n\ncheers,\nMarco\n\n\n", "msg_date": "Wed, 5 Jun 2024 16:09:08 +0200", "msg_from": "Marco Slot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "On Sat, 2024-06-01 at 17:08 -0700, Jelte Fennema-Nio wrote:\n> This patch adds a new \"owned_schema\" option to the extension control\n> file that can be set to true to indicate that this extension wants to\n> own the schema in which it is installed. What that means is that the\n> schema should not exist before creating the extension, and will be\n> created during extension creation. This thus gives the extension\n> author\n> an easy way to use a safe search_path, while still allowing all\n> objects\n> to be grouped together in a schema. The implementation also has the\n> pleasant side effect that the schema will be automatically dropped\n> when\n> the extension is dropped.\n\nIs this orthogonal to relocatability?\n\nWhen you say \"an easy way to use a safe search_path\": the CREATE\nEXTENSION command already sets the search_path, so your patch just\nensures that it's empty (and therefore safe) first, right?\n\nShould we go further and try to prevent creating objects in an\nextension-owned schema with normal SQL?\n\nApproximately how many extensions should be adjusted to use\nowned_schema=true? What are the reasons an extension would not want to\nown the schema in which the objects are created? I assume some would\nstill create objects in pg_catalog, but ideally we'd come up with a\nbetter solution to that as well.\n\nThis protects the extension script, but I'm left wondering if we could\ndo something here to make it easier to protect extension functions\ncalled from outside the extension script, also. It would be nice if we\ncould implicitly tack on a \"SET search_path TO @extschema@, pg_catalog,\npg_temp\" to each function in the extension. I'm not proposing that, but\nperhaps a related idea might work. Probably outside the scope of your\nproposal.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 05 Jun 2024 10:53:31 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "On Wed, 5 Jun 2024 at 19:53, Jeff Davis <[email protected]> wrote:\n> Is this orthogonal to relocatability?\n\nIt's fairly orthogonal, but it has some impact on relocatability: You\ncan only relocate to a schema name that does not exist yet (while\ncurrently you can only relocate to a schema that already exists). 
This\nis because, for owned_schema=true, relocation is not actually changing\nthe schema of the extension objects, it only renames the existing\nschema to the new name.\n\n> When you say \"an easy way to use a safe search_path\": the CREATE\n> EXTENSION command already sets the search_path, so your patch just\n> ensures that it's empty (and therefore safe) first, right?\n\nCorrect: **safe** is the key word in that sentence. Without\nowned_schema, you get an **unsafe** search_path by default unless you\ngo out of your way to set \"schema=pg_catalog\" in the control file.\n\n> Should we go further and try to prevent creating objects in an\n> extension-owned schema with normal SQL?\n\nThat would be nice for sure, but security wise it doesn't matter\nafaict. Only the creator of the extension would be able to add stuff\nin the extension-owned schema anyway, so there's no privilege\nescalation concern there.\n\n> Approximately how many extensions should be adjusted to use\n> owned_schema=true?\n\nAdjusting existing extensions would be hard at the moment, because the\ncurrent patch does not introduce a migration path. But basically I\nthink for most new extension installs (even of existing extensions) it\nwould be fine if owned_schema=true would be the default. I didn't\npropose (yet) to make it the default though, to avoid discussing the\ntradeoff of security vs breaking installation for an unknown amount of\nexisting extensions.\n\nI think having a generic migration path would be hard, due to the many\nways in which extensions can now be installed. But I think we might be\nable to add one fairly easily for relocatable extensions: e.g. \"ALTER\nEXTESION SET SCHEMA new_schema OWNED_SCHEMA\", which would essentially\ndo CREATE SCHEMA new_schema + move all objects from old_schema to\nnew_schema. And even for non-relocatable one you could do something\nlike:\n\nCREATE SCHEMA temp_schema_{random_id};\n-- move all objects from ext_schema to temp_schema_{random_id};\nDROP SCHEMA ext_schema; -- if this fails, ext_schema was not empty\nALTER SCHEMA temp_schema_{random_id} RENAME TO ext_schema;\n\n> What are the reasons an extension would not want to\n> own the schema in which the objects are created? I assume some would\n> still create objects in pg_catalog, but ideally we'd come up with a\n> better solution to that as well.\n\nSome extensions depend on putting stuff into the public schema. But\nyeah it would be best if they didn't.\n\n> This protects the extension script, but I'm left wondering if we could\n> do something here to make it easier to protect extension functions\n> called from outside the extension script, also. It would be nice if we\n> could implicitly tack on a \"SET search_path TO @extschema@, pg_catalog,\n> pg_temp\" to each function in the extension. I'm not proposing that, but\n> perhaps a related idea might work. Probably outside the scope of your\n> proposal.\n\nYeah, this proposal definitely doesn't solve all security problems\nwith extensions. And indeed what you're proposing would solve another\nmajor issue, another option would be to default to the \"safe\"\nsearch_path that you proposed a while back. But yeah I agree that it's\noutside of the scope of this proposal. I feel like if we try to solve\nevery security problem at once, probably nothing gets solved instead.\nThat's why I tried to keep this proposal very targeted, i.e. 
have this\nbe step 1 of an N step plan to make extensions more secure by default.\n\n\n", "msg_date": "Wed, 5 Jun 2024 22:30:28 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "Attached is an updated version of this patch that fixes a few issues\nthat CI reported (autoconf, compiler warnings and broken docs).\n\nI also think I changed the pg_upgrade to do the correct thing, but I'm\nnot sure how to test this (even manually). Because part of it would\nonly be relevant once we support upgrading from PG18. So for now the\nupgrade_code I haven't actually run.", "msg_date": "Wed, 19 Jun 2024 17:19:14 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "On Wed, Jun 19, 2024 at 8:19 AM Jelte Fennema-Nio <[email protected]> wrote:\n\n\n> Because part of it would\n> only be relevant once we support upgrading from PG18. So for now the\n> upgrade_code I haven't actually run.\n>\n\nDoes it apply against v16? If so, branch off there, apply it, then upgrade\nfrom the v16 branch to master.\n\nDavid J.\n\nOn Wed, Jun 19, 2024 at 8:19 AM Jelte Fennema-Nio <[email protected]> wrote: Because part of it would\nonly be relevant once we support upgrading from PG18. So for now the\nupgrade_code I haven't actually run.Does it apply against v16?  If so, branch off there, apply it, then upgrade from the v16 branch to master.David J.", "msg_date": "Wed, 19 Jun 2024 08:22:14 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "On Sat, Jun 1, 2024 at 8:08 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Writing the sql migration scripts that are run by CREATE EXTENSION and\n> ALTER EXTENSION UPDATE are security minefields for extension authors.\n> One big reason for this is that search_path is set to the schema of the\n> extension while running these scripts, and thus if a user with lower\n> privileges can create functions or operators in that schema they can do\n> all kinds of search_path confusion attacks if not every function and\n> operator that is used in the script is schema qualified. While doing\n> such schema qualification is possible, it relies on the author to never\n> make a mistake in any of the sql files. And sadly humans have a tendency\n> to make mistakes.\n\nI agree that this is a problem. I also think that the patch might be a\nreasonable solution (but I haven't reviewed it).\n\nBut I wonder if there might also be another possible approach: could\nwe, somehow, prevent object references in extension scripts from\nresolving to anything other than the system catalogs and the contents\nof that extension? Perhaps with a control file setting to specify a\nlist of trusted extensions which we're also allowed to reference?\n\nI have a feeling that this might be pretty annoying to implement, and\nif that is true, then never mind. But if it isn't that annoying to\nimplement, it would make a lot of unsafe extensions safe by default,\nwithout the extension author needing to take any action. Which could\nbe pretty cool. 
It would also make it possible for extensions to\nsafely share a schema, if desired.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jun 2024 11:28:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "On Wed, 19 Jun 2024 at 17:28, Robert Haas <[email protected]> wrote:\n> But I wonder if there might also be another possible approach: could\n> we, somehow, prevent object references in extension scripts from\n> resolving to anything other than the system catalogs and the contents\n> of that extension?\n\nThis indeed does sound like the behaviour that pretty much every\nexisting extension wants to have. One small addition/clarification\nthat I would make to your definition: fully qualified references to\nother objects should still be allowed.\n\nI do think, even if we have this, there would be other good reasons to\nuse \"owned schemas\" for extension authors. At least the following two:\n1. To have a safe search_path that can be used in SET search_path on a\nfunction (see also [1]).\n2. To make it easy for extension authors to avoid conflicts with other\nextensions/UDFs.\n\n> Perhaps with a control file setting to specify a\n> list of trusted extensions which we're also allowed to reference?\n\nI think we could simply use the already existing \"requires\" field from\nthe control file. i.e. you're allowed to reference only your own\nextension\n\n> I have a feeling that this might be pretty annoying to implement, and\n> if that is true, then never mind.\n\nBased on a quick look it's not trivial, but also not super bad.\nBasically it seems like in src/backend/catalog/namespace.c, every time\nwe loop over activeSearchPath and CurrentExtensionObject is set, then\nwe should skip any item that's not stored in pg_catalog, unless\nthere's a DEPENDENCY_EXTENSION pg_depend entry for the item (and that\npg_depend entry references the extension or the requires list).\n\nThere's quite a few loops over activeSearchPath in namespace.c, but\nthey all seem pretty similar. 
So while a bunch of code would need to\nbe changed, the changes could probably be well encapsulated in a\nfunction.\n\n[1]: https://www.postgresql.org/message-id/flat/00d8f046156e355ec0eb49585408bafc8012e4a5.camel%40j-davis.com#3ad66667a8073d5ef50cfe44e305c38d\n\n\n", "msg_date": "Wed, 19 Jun 2024 19:50:09 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> On Wed, 19 Jun 2024 at 17:28, Robert Haas <[email protected]> wrote:\n>> I have a feeling that this might be pretty annoying to implement, and\n>> if that is true, then never mind.\n\n> Based on a quick look it's not trivial, but also not super bad.\n> Basically it seems like in src/backend/catalog/namespace.c, every time\n> we loop over activeSearchPath and CurrentExtensionObject is set, then\n> we should skip any item that's not stored in pg_catalog, unless\n> there's a DEPENDENCY_EXTENSION pg_depend entry for the item (and that\n> pg_depend entry references the extension or the requires list).\n\nWe could change the lookup rules that apply during execution of\nan extension script, but we already restrict search_path at that\ntime so I'm not sure how much further this'd move the goalposts.\n\nThe *real* problem IMO is that if you create a PL function or\n(old-style) SQL function within an extension, execution of that\nfunction is not similarly protected.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jun 2024 13:55:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "On Wed, 19 Jun 2024 at 19:55, Tom Lane <[email protected]> wrote:\n>\n> Jelte Fennema-Nio <[email protected]> writes:\n> > On Wed, 19 Jun 2024 at 17:28, Robert Haas <[email protected]> wrote:\n> >> I have a feeling that this might be pretty annoying to implement, and\n> >> if that is true, then never mind.\n>\n> > Based on a quick look it's not trivial, but also not super bad.\n> > Basically it seems like in src/backend/catalog/namespace.c, every time\n> > we loop over activeSearchPath and CurrentExtensionObject is set, then\n> > we should skip any item that's not stored in pg_catalog, unless\n> > there's a DEPENDENCY_EXTENSION pg_depend entry for the item (and that\n> > pg_depend entry references the extension or the requires list).\n>\n> We could change the lookup rules that apply during execution of\n> an extension script, but we already restrict search_path at that\n> time so I'm not sure how much further this'd move the goalposts.\n\nThe point I tried to make in my first email is that this restricted\nsearch_path you mention, is not very helpful at preventing privilege\nescalations. Since it's often possible for a non-superuser to create\nfunctions in one of the schemas in this search_path, e.g. 
by having\nthe non-superuser first create the schema & create some functions in\nit, and then asking the DBA/control plane to create the extension in\nthat schema.\n\nMy patch tries to address that problem by creating the schema of the\nextension during extension creation, and failing if it already exists.\nThus implicitly ensuring that a non-superuser cannot mess with the\nschema.\n\nThe proposal from Robert tries to instead address by changing the\nlookup rules during execution of an extension script to be more strict\nthan they would be outside of it (i.e. even if a function/operator\nmatches search_path it might still not be picked).\n\n> The *real* problem IMO is that if you create a PL function or\n> (old-style) SQL function within an extension, execution of that\n> function is not similarly protected.\n\nThat's definitely a big problem too, and that's the problem that [1]\ntries to fix. But first the lookup in extension scripts would need to\nbe made secure, because it doesn't seem very useful (security wise) to\nuse the same lookup mechanism in functions as we do in extension\nscripts, if the lookup in extension scripts is not secure in the first\nplace. I think the method of making the lookup secure in my patch\nwould transfer over well, because it adds a way for a safe search_path\nto exist, so all that's needed is for the PL function to use that\nsearch_path. Robbert's proposal would be more difficult I think. When\nexecuting a PL function from an extension we'd need to use the same\nchanged lookup rules that we'd use during the extension script of that\nextension. I think that should be possible, but it's definitely more\ninvolved.\n\n[1]: https://www.postgresql.org/message-id/flat/CAE9k0P%253DFcZ%253Dao3ZpEq29BveF%252B%253D27KBcRT2HFowJxoNCv02dHLA%2540mail.gmail.com\n\n\n", "msg_date": "Wed, 19 Jun 2024 21:06:01 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "On Wed, 19 Jun 2024 at 17:22, David G. Johnston\n<[email protected]> wrote:\n>\n> On Wed, Jun 19, 2024 at 8:19 AM Jelte Fennema-Nio <[email protected]> wrote:\n>\n>>\n>> Because part of it would\n>> only be relevant once we support upgrading from PG18. So for now the\n>> upgrade_code I haven't actually run.\n>\n>\n> Does it apply against v16? If so, branch off there, apply it, then upgrade from the v16 branch to master.\n\n\nI realized it's possible to do an \"upgrade\" with pg_upgrade from v17\nto v17. So I was able to test both the pre and post PG18 upgrade logic\nmanually by changing the version in this line:\n\nif (fout->remoteVersion >= 180000)\n\nAs expected the new pg_upgrade code was severely broken. Attached is a\nnew patch where the pg_upgrade code now actually works.", "msg_date": "Thu, 20 Jun 2024 13:18:03 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "On Wed, Jun 19, 2024 at 1:50 PM Jelte Fennema-Nio <[email protected]> wrote:\n> I do think, even if we have this, there would be other good reasons to\n> use \"owned schemas\" for extension authors. At least the following two:\n> 1. To have a safe search_path that can be used in SET search_path on a\n> function (see also [1]).\n> 2. To make it easy for extension authors to avoid conflicts with other\n> extensions/UDFs.\n\n(1) is a very good point. 
(2) I don't know about one way or the other.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jun 2024 11:51:08 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "On Jun 19, 2024, at 11:28, Robert Haas <[email protected]> wrote:\n\n> But I wonder if there might also be another possible approach: could\n> we, somehow, prevent object references in extension scripts from\n> resolving to anything other than the system catalogs and the contents\n> of that extension? Perhaps with a control file setting to specify a\n> list of trusted extensions which we're also allowed to reference?\n\nIt would also have to allow access to other extensions it depends upon.\n\nD\n\n\n\n", "msg_date": "Thu, 20 Jun 2024 13:00:41 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "On Jun 19, 2024, at 13:50, Jelte Fennema-Nio <[email protected]> wrote:\n\n> This indeed does sound like the behaviour that pretty much every\n> existing extension wants to have. One small addition/clarification\n> that I would make to your definition: fully qualified references to\n> other objects should still be allowed.\n\nWould be tricky for referring to objects from other extensions with no defined schema, or are relatable.\n\n> 1. To have a safe search_path that can be used in SET search_path on a\n> function (see also [1]).\n> 2. To make it easy for extension authors to avoid conflicts with other\n> extensions/UDFs.\n\nThese would indeed be nice improvements IMO.\n\nBest,\n\nDavid\n\n\n\n\n", "msg_date": "Thu, 20 Jun 2024 13:02:36 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" }, { "msg_contents": "Hi,\n\nI've spent a bit of time looking at this patch. It seems there's a clear\nconsensus that having \"owned schemas\" for extensions would be good for\nsecurity. To me it also seems as a convenient way to organize stuff. It\nwas possible to create extensions in a separate schema before, ofc, but\nthat's up to the DBA. With this the extension author to enforce that.\n\nOne thing that's not quite clear to me is what's the correct way for\nexisting extensions to switch to an \"owned schema\". Let's say you have\nan extension. How do you transition to this? Can you just add it to the\ncontrol file and then some magic happens?\n\nA couple minor comments:\n\n\n1) doc/src/sgml/extend.sgml\n\n An extension is <firstterm>owned_schema</firstterm> if it requires a\n new dedicated schema for its objects. Such a requirement can make\n security concerns related to <literal>search_path</literal> injection\n much easier to reason about. The default is <literal>false</literal>,\n i.e., the extension can be installed into an existing schema.\n\nDoesn't \"extension is owned_schema\" sound a bit weird? I'd probably say\n\"extension may own a schema\" or something like that.\n\nAlso, \"requires a new dedicated schema\" is a bit ambiguous. 
It's not\nclear if it means the schema is expected to exist, or if it creates the\nschema itself.\n\nAnd perhaps it should clarify what \"much easier to reason about\" means.\nThat's pretty vague, and as a random extension author I wouldn't know\nabout the risks to consider. Maybe there's a section about this stuff\nthat we could reference?\n\n\n2) doc/src/sgml/ref/create_extension.sgml\n\n relocated. The named schema must already exist if the extension's\n control file does not specify <literal>owned_schema</literal>.\n\nSeems a bit unclear, I'd say having \"owned_schema = false\" in the\ncontrol file still qualifies as \"specifies owned_schema\". So might be\nbetter to say it needs to be set to true?\n\nAlso, perhaps \"dedicated_schema\" would be better than \"owned_schema\"? I\nmean, the point is not that it's \"owned\" by the extension, but that\nthere's nothing else in it. But that's nitpicking.\n\n\n3) src/backend/commands/extension.c\n\nI'm not sure why AlterExtensionNamespace moves the privilege check. Why\nshould it not check the privilege for owned schema too?\n\n\n4) src/bin/pg_dump/pg_dump.c\n\ncheckExtensionMembership has typo \"owned_schem\".\n\nShouldn't binary_upgrade_extension_member still set ext=NULL in the for\nloop, the way the original code does that?\n\nThe long if conditions might need some indentation, I guess. pgindent\nleaves them like this, but 100 columns seems a bit too much. I'd do a\nline break after each condition, I guess.\n\n\n\n\nregards\n\n-- \nTomas Vondra\n\n\n\n", "msg_date": "Fri, 27 Sep 2024 14:00:17 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension security improvement: Add support for extensions with\n an owned schema" } ]
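To make the manual migration idea sketched earlier in this thread a little more concrete, here is a minimal SQL sequence for a hypothetical non-relocatable extension whose objects currently live in a schema named ext_schema. The schema names and the two member objects are illustrative assumptions only, not taken from the patch, and a real migration would enumerate the members from pg_depend/pg_extension rather than hard-code them:

BEGIN;
CREATE SCHEMA temp_schema_12345;
-- Move each extension member explicitly (hypothetical example objects).
ALTER FUNCTION ext_schema.some_func(integer) SET SCHEMA temp_schema_12345;
ALTER TABLE ext_schema.some_table SET SCHEMA temp_schema_12345;
-- If this DROP fails, ext_schema still contained objects that do not
-- belong to the extension, so it cannot safely become an owned schema.
DROP SCHEMA ext_schema;
ALTER SCHEMA temp_schema_12345 RENAME TO ext_schema;
COMMIT;

Details such as updating the extension's recorded schema (pg_extension.extnamespace) after the rename are deliberately left out here, which is one reason the thread treats a generic migration path as hard and the above only as a rough outline.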
[ { "msg_contents": "Inspired by David Rowley's work [1] on optimizing JSON escape processing\nwith SIMD, I noticed that the COPY code could potentially benefit from SIMD\ninstructions in a few places, eg:\n\n(1) CopyAttributeOutCSV() has 2 byte-by-byte loops\n(2) CopyAttributeOutText() has 1\n(3) CopyReadLineText() has 1\n(4) CopyReadAttributesCSV() has 1\n(5) CopyReadAttributesText() has 1\n\nAttached is a quick POC patch that uses SIMD instructions for case (1)\nabove. For sufficiently large attribute values, this is a significant\nperformance win. For small fields, performance looks to be about the same.\nResults on an M1 Macbook Pro.\n\n======\nneilconway=# select count(*), avg(length(a))::int, avg(length(b))::int,\navg(length(c))::int from short_strings;\n count | avg | avg | avg\n--------+-----+-----+-----\n 524288 | 8 | 8 | 8\n(1 row)\n\nneilconway=# select count(*), avg(length(a))::int, avg(length(b))::int,\navg(length(c))::int from long_strings;\n count | avg | avg | avg\n-------+-----+-----+-----\n 65536 | 657 | 657 | 657\n(1 row)\n\nmaster @ 8fea1bd541:\n\n$ for i in ~/*.sql; do hyperfine --warmup 5 \"./psql -f $i\"; done\nBenchmark 1: ./psql -f /Users/neilconway/copy-out-bench-long-quotes.sql\n Time (mean ± σ): 2.027 s ± 0.075 s [User: 0.001 s, System: 0.000\ns]\n Range (min … max): 1.928 s … 2.207 s 10 runs\n\nBenchmark 1: ./psql -f /Users/neilconway/copy-out-bench-long.sql\n Time (mean ± σ): 1.420 s ± 0.027 s [User: 0.001 s, System: 0.000\ns]\n Range (min … max): 1.379 s … 1.473 s 10 runs\n\nBenchmark 1: ./psql -f /Users/neilconway/copy-out-bench-short.sql\n Time (mean ± σ): 546.0 ms ± 9.6 ms [User: 1.4 ms, System: 0.3 ms]\n Range (min … max): 539.0 ms … 572.1 ms 10 runs\n\nmaster + SIMD patch:\n\n$ for i in ~/*.sql; do hyperfine --warmup 5 \"./psql -f $i\"; done\nBenchmark 1: ./psql -f /Users/neilconway/copy-out-bench-long-quotes.sql\n Time (mean ± σ): 797.8 ms ± 19.4 ms [User: 0.9 ms, System: 0.0 ms]\n Range (min … max): 770.0 ms … 828.5 ms 10 runs\n\nBenchmark 1: ./psql -f /Users/neilconway/copy-out-bench-long.sql\n Time (mean ± σ): 732.3 ms ± 20.8 ms [User: 1.2 ms, System: 0.0 ms]\n Range (min … max): 701.1 ms … 763.5 ms 10 runs\n\nBenchmark 1: ./psql -f /Users/neilconway/copy-out-bench-short.sql\n Time (mean ± σ): 545.7 ms ± 13.5 ms [User: 1.3 ms, System: 0.1 ms]\n Range (min … max): 533.6 ms … 580.2 ms 10 runs\n======\n\nImplementation-wise, it seems complex to use SIMD when\nencoding_embeds_ascii is true (which should be uncommon). In principle, we\ncould probably still use SIMD here, but it would require juggling between\nthe SIMD chunk size and sizes returned by pg_encoding_mblen(). 
For now, the\nPOC patch falls back to the old code path when encoding_embeds_ascii is\ntrue.\n\nAny feedback would be very welcome.\n\nCheers,\nNeil\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvpLXwMZvbCKcdGfU9XQjGCDm7tFpRdTXuB9PVgpNUYfEQ@mail.gmail.com", "msg_date": "Sun, 2 Jun 2024 15:17:21 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing COPY with SIMD" }, { "msg_contents": "On 6/2/24 15:17, Neil Conway wrote:\n> Inspired by David Rowley's work [1]\n\n\nWelcome back!\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Mon, 3 Jun 2024 09:22:14 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing COPY with SIMD" }, { "msg_contents": "On Mon, Jun 3, 2024 at 9:22 AM Joe Conway <[email protected]> wrote:\n\n> Welcome back!\n>\n\nThanks Joe! It's been a minute :)\n\nNeil\n\nOn Mon, Jun 3, 2024 at 9:22 AM Joe Conway <[email protected]> wrote:Welcome back!Thanks Joe! It's been a minute :)Neil", "msg_date": "Mon, 3 Jun 2024 10:16:22 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing COPY with SIMD" }, { "msg_contents": "On Sun, Jun 02, 2024 at 03:17:21PM -0400, Neil Conway wrote:\n> master @ 8fea1bd541:\n> \n> $ for i in ~/*.sql; do hyperfine --warmup 5 \"./psql -f $i\"; done\n> Benchmark 1: ./psql -f /Users/neilconway/copy-out-bench-long-quotes.sql\n> Time (mean ± σ): 2.027 s ± 0.075 s [User: 0.001 s, System: 0.000\n> s]\n> Range (min … max): 1.928 s … 2.207 s 10 runs\n> \n> Benchmark 1: ./psql -f /Users/neilconway/copy-out-bench-long.sql\n> Time (mean ± σ): 1.420 s ± 0.027 s [User: 0.001 s, System: 0.000\n> s]\n> Range (min … max): 1.379 s … 1.473 s 10 runs\n> \n> Benchmark 1: ./psql -f /Users/neilconway/copy-out-bench-short.sql\n> Time (mean ± σ): 546.0 ms ± 9.6 ms [User: 1.4 ms, System: 0.3 ms]\n> Range (min … max): 539.0 ms … 572.1 ms 10 runs\n> \n> master + SIMD patch:\n> \n> $ for i in ~/*.sql; do hyperfine --warmup 5 \"./psql -f $i\"; done\n> Benchmark 1: ./psql -f /Users/neilconway/copy-out-bench-long-quotes.sql\n> Time (mean ± σ): 797.8 ms ± 19.4 ms [User: 0.9 ms, System: 0.0 ms]\n> Range (min … max): 770.0 ms … 828.5 ms 10 runs\n> \n> Benchmark 1: ./psql -f /Users/neilconway/copy-out-bench-long.sql\n> Time (mean ± σ): 732.3 ms ± 20.8 ms [User: 1.2 ms, System: 0.0 ms]\n> Range (min … max): 701.1 ms … 763.5 ms 10 runs\n> \n> Benchmark 1: ./psql -f /Users/neilconway/copy-out-bench-short.sql\n> Time (mean ± σ): 545.7 ms ± 13.5 ms [User: 1.3 ms, System: 0.1 ms]\n> Range (min … max): 533.6 ms … 580.2 ms 10 runs\n\nThese are nice results.\n\n> -/*\n> - * Send text representation of one attribute, with conversion and escaping\n> - */\n> #define DUMPSOFAR() \\\n\nIIUC this comment was meant to describe the CopyAttributeOutText() function\njust below this macro. When the macro was added in commit 0a5fdb0 from\n2006, the comment became detached from the function. Maybe we should just\nmove it back down below the macro.\n\n> +/*\n> + * Send text representation of one attribute, with conversion and CSV-style\n> + * escaping. This variant uses SIMD instructions to optimize processing, but\n> + * we can only use this approach when encoding_embeds_ascii if false.\n> + */\n\nnitpick: Can we add a few words about why using SIMD instructions when\nencoding_embeds_ascii is true is difficult? 
I don't dispute that it is\ncomplex and/or not worth the effort, but it's not clear to me why that's\nthe case just from reading the patch.\n\n> +static void\n> +CopyAttributeOutCSVFast(CopyToState cstate, const char *ptr,\n> +\t\t\t\t\t\tbool use_quote)\n\nnitpick: Can we add \"vector\" or \"simd\" to the name instead of \"fast\"? IMHO\nit's better to be more descriptive.\n\nAt a glance, the code look pretty reasonable to me. I might have some\nother nitpicks, such as styling tricks to avoid too many levels of\nindentation, but that's not terribly important.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 3 Jun 2024 09:56:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing COPY with SIMD" }, { "msg_contents": "Thanks for the review and feedback!\n\nOn Mon, Jun 3, 2024 at 10:56 AM Nathan Bossart <[email protected]>\nwrote:\n\n> > -/*\n> > - * Send text representation of one attribute, with conversion and\n> escaping\n> > - */\n> > #define DUMPSOFAR() \\\n>\n> IIUC this comment was meant to describe the CopyAttributeOutText() function\n> just below this macro. When the macro was added in commit 0a5fdb0 from\n> 2006, the comment became detached from the function. Maybe we should just\n> move it back down below the macro.\n>\n\nAh, that makes sense -- done.\n\n\n> > +/*\n> > + * Send text representation of one attribute, with conversion and\n> CSV-style\n> > + * escaping. This variant uses SIMD instructions to optimize\n> processing, but\n> > + * we can only use this approach when encoding_embeds_ascii if false.\n> > + */\n>\n> nitpick: Can we add a few words about why using SIMD instructions when\n> encoding_embeds_ascii is true is difficult? I don't dispute that it is\n> complex and/or not worth the effort, but it's not clear to me why that's\n> the case just from reading the patch.\n>\n\nSounds good.\n\n\n> > +static void\n> > +CopyAttributeOutCSVFast(CopyToState cstate, const char *ptr,\n> > + bool use_quote)\n>\n> nitpick: Can we add \"vector\" or \"simd\" to the name instead of \"fast\"? 
IMHO\n> it's better to be more descriptive.\n>\n\nSure, done.\n\nAttached is a revised patch series, that incorporates the feedback above\nand makes two additional changes:\n\n* Add some regression tests to cover COPY behavior with octal and hex\nescape sequences\n* Optimize the COPY TO text (non-CSV) code path (CopyAttributeOutText()).\n\nIn CopyAttributeOutText(), I refactored some code into a helper function to\nreduce code duplication, on the theory that field delimiters and escape\nsequences are rare, so we don't mind taking a function call in those cases.\n\nWe could go further and use the same code to handle both the tail of the\nstring in the vectorized case and the entire string in the non-vectorized\ncase, but I didn't bother with that -- as written, it would require taking\nan unnecessary strlen() of the input string in the non-vectorized case.\n\nPerformance for COPY TO in text (non-CSV) mode:\n\n===\nmaster\n\nBenchmark 1: ./psql -f\n/Users/neilconway/copy-out-bench-text-long-strings.sql\n Time (mean ± σ): 1.240 s ± 0.013 s [User: 0.001 s, System: 0.000\ns]\n Range (min … max): 1.220 s … 1.256 s 10 runs\n\nBenchmark 1: ./psql -f /Users/neilconway/copy-out-bench-text-short.sql\n Time (mean ± σ): 522.3 ms ± 11.3 ms [User: 1.2 ms, System: 0.0 ms]\n Range (min … max): 512.0 ms … 544.3 ms 10 runs\n\nmaster + SIMD patches:\n\nBenchmark 1: ./psql -f\n/Users/neilconway/copy-out-bench-text-long-strings.sql\n Time (mean ± σ): 867.6 ms ± 12.7 ms [User: 1.2 ms, System: 0.0 ms]\n Range (min … max): 842.1 ms … 891.6 ms 10 runs\n\nBenchmark 1: ./psql -f /Users/neilconway/copy-out-bench-text-short.sql\n Time (mean ± σ): 536.7 ms ± 10.9 ms [User: 1.2 ms, System: 0.0 ms]\n Range (min … max): 530.1 ms … 566.8 ms 10 runs\n===\n\nLooks like there is a slight regression for short attribute values, but I\nthink the tradeoff is a net win.\n\nI'm going to take a look at applying similar ideas to COPY FROM next.\n\nNeil", "msg_date": "Wed, 5 Jun 2024 13:46:44 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing COPY with SIMD" }, { "msg_contents": "On Wed, Jun 05, 2024 at 01:46:44PM -0400, Neil Conway wrote:\n> We could go further and use the same code to handle both the tail of the\n> string in the vectorized case and the entire string in the non-vectorized\n> case, but I didn't bother with that -- as written, it would require taking\n> an unnecessary strlen() of the input string in the non-vectorized case.\n\nFor pg_lfind32(), we ended up using an overlapping approach for the\nvectorized case (see commit 7644a73). That appeared to help more than it\nharmed in the many (admittedly branch predictor friendly) tests I ran. I\nwonder if you could do something similar here.\n\n> Looks like there is a slight regression for short attribute values, but I\n> think the tradeoff is a net win.\n\nIt'd be interesting to see the threshold where your patch starts winning.\nIIUC the vector stuff won't take effect until there are 16 bytes to\nprocess. If we don't expect attributes to ordinarily be >= 16 bytes, it\nmight be worth trying to mitigate this ~3% regression. 
Maybe we can find\nsome other small gains elsewhere to offset it.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 5 Jun 2024 14:05:20 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing COPY with SIMD" }, { "msg_contents": "On Wed, Jun 5, 2024 at 3:05 PM Nathan Bossart <[email protected]>\nwrote:\n\n> For pg_lfind32(), we ended up using an overlapping approach for the\n> vectorized case (see commit 7644a73). That appeared to help more than it\n> harmed in the many (admittedly branch predictor friendly) tests I ran. I\n> wonder if you could do something similar here.\n>\n\nI didn't entirely follow what you are suggesting here -- seems like we\nwould need to do strlen() for the non-SIMD case if we tried to use a\nsimilar approach.\n\nIt'd be interesting to see the threshold where your patch starts winning.\n> IIUC the vector stuff won't take effect until there are 16 bytes to\n> process. If we don't expect attributes to ordinarily be >= 16 bytes, it\n> might be worth trying to mitigate this ~3% regression. Maybe we can find\n> some other small gains elsewhere to offset it.\n>\n\nFor the particular short-strings benchmark I have been using (3 columns\nwith 8-character ASCII strings in each), I suspect the regression is caused\nby the need to do a strlen(), rather than the vectorized loop itself (we\nskip the vectorized loop anyway because sizeof(Vector8) == 16 on this\nmachine). (This explains why we see a regression on short strings for text\nbut not CSV: CSV needed to do a strlen() for the non-quoted-string case\nregardless). Unfortunately this makes it tricky to make the optimization\nconditional on the length of the string. I suppose we could play some games\nwhere we start with a byte-by-byte loop and then switch over to the\nvectorized path (and take a strlen()) if we have seen more than, say,\nsizeof(Vector8) bytes so far. Seems a bit kludgy though.\n\nI will do some more benchmarking and report back. For the time being, I'm\nnot inclined to push to get the CopyAttributeOutTextVector() into the tree\nin its current state, as I agree that the short-attribute case is quite\nimportant.\n\nIn the meantime, attached is a revised patch series. This uses SIMD to\noptimize CopyReadLineText in COPY FROM. Performance results:\n\n====\nmaster @ 8fea1bd5411b:\n\nBenchmark 1: ./psql -f /Users/neilconway/copy-from-large-long-strings.sql\n Time (mean ± σ): 1.944 s ± 0.013 s [User: 0.001 s, System: 0.000\ns]\n Range (min … max): 1.927 s … 1.975 s 10 runs\n\nBenchmark 1: ./psql -f /Users/neilconway/copy-from-large-short-strings.sql\n Time (mean ± σ): 1.021 s ± 0.017 s [User: 0.002 s, System: 0.001\ns]\n Range (min … max): 1.005 s … 1.053 s 10 runs\n\nmaster + SIMD patches:\n\nBenchmark 1: ./psql -f /Users/neilconway/copy-from-large-long-strings.sql\n Time (mean ± σ): 1.513 s ± 0.022 s [User: 0.001 s, System: 0.000\ns]\n Range (min … max): 1.493 s … 1.552 s 10 runs\n\nBenchmark 1: ./psql -f /Users/neilconway/copy-from-large-short-strings.sql\n Time (mean ± σ): 1.032 s ± 0.032 s [User: 0.002 s, System: 0.001\ns]\n Range (min … max): 1.009 s … 1.113 s 10 runs\n====\n\nNeil", "msg_date": "Fri, 7 Jun 2024 14:07:36 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing COPY with SIMD" } ]
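The benchmark scripts referenced above never appear in the thread, so anyone trying to reproduce the comparison has to guess at their contents. A rough reconstruction of a compatible data setup, based only on the row counts and average lengths shown in the psql output (the generation queries, the quote-forcing variant, and the use of \copy are assumptions, not the actual scripts):

CREATE TABLE short_strings (a text, b text, c text);
INSERT INTO short_strings
  SELECT left(md5(i::text), 8), left(md5((i + 1)::text), 8), left(md5((i + 2)::text), 8)
  FROM generate_series(1, 524288) AS i;

CREATE TABLE long_strings (a text, b text, c text);
INSERT INTO long_strings
  SELECT repeat(md5(i::text), 21), repeat(md5(i::text), 21), repeat(md5(i::text), 21)
  FROM generate_series(1, 65536) AS i;

-- A "long-quotes" variant presumably adds characters that force CSV quoting:
-- UPDATE long_strings SET a = '"' || a, b = b || ',', c = c || E'\n';

-- The timed scripts would then be little more than client-side copies, e.g.:
\copy long_strings TO '/dev/null' WITH (FORMAT csv)
\copy short_strings TO '/dev/null' WITH (FORMAT csv)

With a setup along these lines the interesting cases are exactly the ones discussed above: long values exercise the vectorized inner loop, while short values mostly measure per-field overhead such as the extra strlen().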
[ { "msg_contents": "hi.\n\n---- setup\ndrop table if exist test__int cascade;\ncreate extension intarray;\n\nCREATE TABLE test__int( a int[] );\nCREATE INDEX text_idx on test__int using gist (a gist__intbig_ops(siglen = 1));\ndrop extension intarray cascade;\nNOTICE: drop cascades to index text_idx\n2024-06-03 11:53:32.629 CST [41165] ERROR: cache lookup failed for\nfunction 17758\n2024-06-03 11:53:32.629 CST [41165] STATEMENT: drop extension intarray cascade;\nERROR: cache lookup failed for function 17758\n\n------------------------------------------------\nbacktrace info:\nindex_getprocinfo\n#0 index_opclass_options (indrel=0x7faeca727b58, attnum=1,\nattoptions=94372901674408, validate=false)\n at ../../Desktop/pg_src/src4/postgres/src/backend/access/index/indexam.c:1034\n#1 0x000055d4e63a79cb in RelationGetIndexAttOptions\n(relation=0x7faeca727b58, copy=false)\n at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:5872\n#2 0x000055d4e639d72d in RelationInitIndexAccessInfo (relation=0x7faeca727b58)\n at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:1569\n#3 0x000055d4e639c5ac in RelationBuildDesc (targetRelId=24582, insertIt=true)\n at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:1207\n#4 0x000055d4e639e9ce in RelationIdGetRelation (relationId=24582)\n at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:2115\n#5 0x000055d4e5a412fd in relation_open (relationId=24582, lockmode=8)\n at ../../Desktop/pg_src/src4/postgres/src/backend/access/common/relation.c:58\n#6 0x000055d4e5ae6a06 in index_open (relationId=24582, lockmode=8)\n at ../../Desktop/pg_src/src4/postgres/src/backend/access/index/indexam.c:137\n#7 0x000055d4e5be61b8 in index_drop (indexId=24582, concurrent=false,\nconcurrent_lock_mode=false)\n at ../../Desktop/pg_src/src4/postgres/src/backend/catalog/index.c:2156\n------------------------\ni guess it's because we first dropped the function g_intbig_options\nthen later we need it.\n\n\n", "msg_date": "Mon, 3 Jun 2024 12:14:35 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "cannot drop intarray extension" }, { "msg_contents": "Hi Jian\n\nOn Mon, Jun 3, 2024 at 9:14 AM jian he <[email protected]> wrote:\n\n> hi.\n>\n> ---- setup\n> drop table if exist test__int cascade;\n> create extension intarray;\n>\n> CREATE TABLE test__int( a int[] );\n> CREATE INDEX text_idx on test__int using gist (a gist__intbig_ops(siglen =\n> 1));\n> drop extension intarray cascade;\n> NOTICE: drop cascades to index text_idx\n> 2024-06-03 11:53:32.629 CST [41165] ERROR: cache lookup failed for\n> function 17758\n>\n\nIts a bug.\n\n\n> 2024-06-03 11:53:32.629 CST [41165] STATEMENT: drop extension intarray\n> cascade;\n> ERROR: cache lookup failed for function 17758\n>\n> ------------------------------------------------\n> backtrace info:\n> index_getprocinfo\n> #0 index_opclass_options (indrel=0x7faeca727b58, attnum=1,\n> attoptions=94372901674408, validate=false)\n> at\n> ../../Desktop/pg_src/src4/postgres/src/backend/access/index/indexam.c:1034\n> #1 0x000055d4e63a79cb in RelationGetIndexAttOptions\n> (relation=0x7faeca727b58, copy=false)\n> at\n> ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:5872\n> #2 0x000055d4e639d72d in RelationInitIndexAccessInfo\n> (relation=0x7faeca727b58)\n> at\n> ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:1569\n> #3 0x000055d4e639c5ac in RelationBuildDesc (targetRelId=24582,\n> insertIt=true)\n> at\n> 
../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:1207\n> #4 0x000055d4e639e9ce in RelationIdGetRelation (relationId=24582)\n> at\n> ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:2115\n> #5 0x000055d4e5a412fd in relation_open (relationId=24582, lockmode=8)\n> at\n> ../../Desktop/pg_src/src4/postgres/src/backend/access/common/relation.c:58\n> #6 0x000055d4e5ae6a06 in index_open (relationId=24582, lockmode=8)\n> at\n> ../../Desktop/pg_src/src4/postgres/src/backend/access/index/indexam.c:137\n> #7 0x000055d4e5be61b8 in index_drop (indexId=24582, concurrent=false,\n> concurrent_lock_mode=false)\n> at ../../Desktop/pg_src/src4/postgres/src/backend/catalog/index.c:2156\n> ------------------------\n> i guess it's because we first dropped the function g_intbig_options\n> then later we need it.\n>\n>\n>\n\nHi JianOn Mon, Jun 3, 2024 at 9:14 AM jian he <[email protected]> wrote:hi.\n\n---- setup\ndrop table if exist test__int cascade;\ncreate extension intarray;\n\nCREATE TABLE test__int( a int[] );\nCREATE INDEX text_idx on test__int using gist (a gist__intbig_ops(siglen = 1));\ndrop extension intarray cascade;\nNOTICE:  drop cascades to index text_idx\n2024-06-03 11:53:32.629 CST [41165] ERROR:  cache lookup failed for\nfunction 17758Its a bug. \n2024-06-03 11:53:32.629 CST [41165] STATEMENT:  drop extension intarray cascade;\nERROR:  cache lookup failed for function 17758\n\n------------------------------------------------\nbacktrace info:\nindex_getprocinfo\n#0  index_opclass_options (indrel=0x7faeca727b58, attnum=1,\nattoptions=94372901674408, validate=false)\n    at ../../Desktop/pg_src/src4/postgres/src/backend/access/index/indexam.c:1034\n#1  0x000055d4e63a79cb in RelationGetIndexAttOptions\n(relation=0x7faeca727b58, copy=false)\n    at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:5872\n#2  0x000055d4e639d72d in RelationInitIndexAccessInfo (relation=0x7faeca727b58)\n    at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:1569\n#3  0x000055d4e639c5ac in RelationBuildDesc (targetRelId=24582, insertIt=true)\n    at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:1207\n#4  0x000055d4e639e9ce in RelationIdGetRelation (relationId=24582)\n    at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:2115\n#5  0x000055d4e5a412fd in relation_open (relationId=24582, lockmode=8)\n    at ../../Desktop/pg_src/src4/postgres/src/backend/access/common/relation.c:58\n#6  0x000055d4e5ae6a06 in index_open (relationId=24582, lockmode=8)\n    at ../../Desktop/pg_src/src4/postgres/src/backend/access/index/indexam.c:137\n#7  0x000055d4e5be61b8 in index_drop (indexId=24582, concurrent=false,\nconcurrent_lock_mode=false)\n    at ../../Desktop/pg_src/src4/postgres/src/backend/catalog/index.c:2156\n------------------------\ni guess it's because we first dropped the function g_intbig_options\nthen later we need it.", "msg_date": "Mon, 3 Jun 2024 09:17:13 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cannot drop intarray extension" }, { "msg_contents": "On Mon, Jun 3, 2024 at 12:14 PM jian he <[email protected]> wrote:\n>\n> hi.\n>\n> ---- setup\n> drop table if exist test__int cascade;\n> create extension intarray;\n>\n> CREATE TABLE test__int( a int[] );\n> CREATE INDEX text_idx on test__int using gist (a gist__intbig_ops(siglen = 1));\n> drop extension intarray cascade;\n> NOTICE: drop cascades to index text_idx\n> 2024-06-03 11:53:32.629 
CST [41165] ERROR: cache lookup failed for\n> function 17758\n> 2024-06-03 11:53:32.629 CST [41165] STATEMENT: drop extension intarray cascade;\n> ERROR: cache lookup failed for function 17758\n>\n\n> ------------------------------------------------\nextension (ltree, pg_trgm) also have the same problem.\n\ndrop table if exists t2 cascade;\nCREATE EXTENSION ltree;\nCREATE TABLE t2 (t ltree);\ncreate index tstidx on t2 using gist (t gist_ltree_ops(siglen=4));\ndrop extension ltree cascade;\n\ndrop table if exists t3 cascade;\nCREATE EXTENSION pg_trgm;\nCREATE TABLE t3(t text COLLATE \"C\");\ncreate index trgm_idx on t3 using gist (t gist_trgm_ops(siglen=1));\ndrop extension pg_trgm cascade;\n\n> ------------------------------------------------\nextension hstore work as expected, no error.\n\ndrop table if exists t1 cascade;\ncreate extension hstore;\nCREATE TABLE t1 (h hstore);\ncreate index hidx on t1 using gist(h gist_hstore_ops(siglen=1));\ndrop extension hstore cascade;\n\non the master branch. i didn't test on other branches.\n\n\n", "msg_date": "Mon, 3 Jun 2024 13:45:19 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cannot drop intarray extension" }, { "msg_contents": "On Mon, Jun 3, 2024 at 12:14 PM jian he <[email protected]> wrote:\n>\n> hi.\n>\n> ---- setup\n> drop table if exist test__int cascade;\n> create extension intarray;\n>\n> CREATE TABLE test__int( a int[] );\n> CREATE INDEX text_idx on test__int using gist (a gist__intbig_ops(siglen = 1));\n> drop extension intarray cascade;\n> NOTICE: drop cascades to index text_idx\n> 2024-06-03 11:53:32.629 CST [41165] ERROR: cache lookup failed for\n> function 17758\n> 2024-06-03 11:53:32.629 CST [41165] STATEMENT: drop extension intarray cascade;\n> ERROR: cache lookup failed for function 17758\n>\n> ------------------------------------------------\n> backtrace info:\n> index_getprocinfo\n> #0 index_opclass_options (indrel=0x7faeca727b58, attnum=1,\n> attoptions=94372901674408, validate=false)\n> at ../../Desktop/pg_src/src4/postgres/src/backend/access/index/indexam.c:1034\n> #1 0x000055d4e63a79cb in RelationGetIndexAttOptions\n> (relation=0x7faeca727b58, copy=false)\n> at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:5872\n> #2 0x000055d4e639d72d in RelationInitIndexAccessInfo (relation=0x7faeca727b58)\n> at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:1569\n> #3 0x000055d4e639c5ac in RelationBuildDesc (targetRelId=24582, insertIt=true)\n> at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:1207\n> #4 0x000055d4e639e9ce in RelationIdGetRelation (relationId=24582)\n> at ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:2115\n> #5 0x000055d4e5a412fd in relation_open (relationId=24582, lockmode=8)\n> at ../../Desktop/pg_src/src4/postgres/src/backend/access/common/relation.c:58\n> #6 0x000055d4e5ae6a06 in index_open (relationId=24582, lockmode=8)\n> at ../../Desktop/pg_src/src4/postgres/src/backend/access/index/indexam.c:137\n> #7 0x000055d4e5be61b8 in index_drop (indexId=24582, concurrent=false,\n> concurrent_lock_mode=false)\n> at ../../Desktop/pg_src/src4/postgres/src/backend/catalog/index.c:2156\n> ------------------------\n> i guess it's because we first dropped the function g_intbig_options\n\nin this context, the index \"text_idx\" has a normal dependency with pg_opclass.\nbut `drop extension intarray cascade;`,\nCASCADE means that we drop the pg_opclass and pg_opclass's inner dependency\nfirst, 
then drop the index.\n\nwhile drop index (sub functions\nRelationGetIndexAttOptions,index_opclass_options, index_getprocinfo)\nrequires that pg_opclass and its inner dependencies (namely\ng_intbig_options, g_int_options) are not dropped first.\n\n\nin deleteObjectsInList, under certain conditions trying to sort the to\nbe deleted object list\nby just using sort_object_addresses seems to work,\nbut it looks like a hack.\nmaybe the proper fix would be in findDependentObjects.", "msg_date": "Fri, 7 Jun 2024 11:32:14 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cannot drop intarray extension" }, { "msg_contents": "On Fri, Jun 07, 2024 at 11:32:14AM +0800, jian he wrote:\n> in deleteObjectsInList, under certain conditions trying to sort the to\n> be deleted object list\n> by just using sort_object_addresses seems to work,\n> but it looks like a hack.\n> maybe the proper fix would be in findDependentObjects.\n\n@@ -1459,6 +1459,7 @@ RemoveRelations(DropStmt *drop)\n[...]\n- performMultipleDeletions(objects, drop->behavior, flags);\n+ if (list_length(drop->objects) > 1)\n+ sortable = false;\n\nI have not studied the patch in details, but this looks\novercomplicated to me. All the callers of performMultipleDeletions\npass down sortable as true, while deleteObjectsInList() uses this\nargument to avoid the sorting on nested calls. It seems to me that\nthis could be simpler.\n--\nMichael", "msg_date": "Fri, 7 Jun 2024 14:14:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cannot drop intarray extension" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Fri, Jun 07, 2024 at 11:32:14AM +0800, jian he wrote:\n>> in deleteObjectsInList, under certain conditions trying to sort the to\n>> be deleted object list\n>> by just using sort_object_addresses seems to work,\n>> but it looks like a hack.\n>> maybe the proper fix would be in findDependentObjects.\n\n> I have not studied the patch in details, but this looks\n> overcomplicated to me.\n\nI dunno about overcomplicated, but it's fundamentally the wrong thing:\nit won't do much except move the problem from this example to other\nexample(s). The difficulty is precisely that we cannot simply delete\nobjects in reverse OID order and expect that to be OK. It appears to\nwork in simple cases because reverse OID order usually means deleting\nnewest objects first, and that usually means dropping depender objects\nbefore dependees --- but dependencies added as a result of later ALTER\ncommands may not be honored correctly. Not to mention that you can\nlose if an OID wraparound happened during the sequence of object\ncreations.\n\nIn the case at hand, the reason we're having trouble with\ng_intbig_options() is that the sequence of extension scripts\nrun by CREATE EXTENSION creates the gist__intbig_ops opfamily\nfirst, and then creates g_intbig_options() and attaches it to\nthe opfamily later (in intarray--1.2--1.3.sql). So g_intbig_options()\nhas a larger OID than the opclass that the index depends on.\nIn DROP EXTENSION, the first level of findDependentObjects() finds\nall the direct dependencies (members) of the extension, and then\nsorts them by OID descending, concluding that g_intbig_options()\ncan be dropped before the opclass. 
Subsequent recursive levels\nwill find the index and recognize that it must be dropped before\nthe opclass --- but this fails to account for the fact that we'd\nbetter drop the opclass before any of the functions it depends on.\nAt some point along the line, we will come across the dependency\nthat says so; but we don't do anything in response, because if\nfindDependentObjects() sees that the current object is already\nin targetObjects then it thinks it's done.\n\nI think when I wrote this code I was assuming that the dependency-\norder traversal performed by findDependentObjects() was sufficient\nto guarantee producing a safe deletion order, but it's now obvious\nthat that's not so. At minimum, when findDependentObjects() finds\nthat a dependency exists on an object that's already in targetObjects,\nit'd need to do something about moving that object to after the one\nit's working on. But I doubt we can fix it with just that, because\nthat won't be enough to handle indirect dependencies.\n\nIt looks to me that the only real fix will require performing a\ntopological sort, similar to what pg_dump does, to produce a safe\ndeletion order that honors all the direct and indirect dependencies\nfound by findDependentObjects().\n\nAn open question is whether we will need dependency-loop-breaking\nlogic, or whether the hackery done in findDependentObjects() is\nsufficient to ensure that we can assume there are no loops in the\ndependencies it chooses to output. It might be a better idea to\nget rid of that logic and instead have explicit loop-breaking like\nthe way pg_dump does it.\n\nIt's also tempting to wonder if we can share code for this with\npg_dump. The TopoSort function alone is not that big, but if we\nhave to duplicate the loop-breaking logic it'd get annoying.\n\nAnyway, this is a very long-standing problem and I don't think\nwe should try to solve it in a rush.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Jun 2024 16:04:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cannot drop intarray extension" } ]
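Until the deletion-ordering problem described above is fixed, the failing example from the start of this thread can be avoided by dropping the dependent object before the extension, so that index_drop() never has to look up the already-dropped support functions. A sketch using the object names from the original report (either alternative should work; they are not meant to be run together):

-- Alternative 1: drop just the index that uses the extension's opclass.
DROP INDEX text_idx;
DROP EXTENSION intarray;

-- Alternative 2: drop the whole table first, then the extension.
DROP TABLE test__int;
DROP EXTENSION intarray;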
[ { "msg_contents": "Hi,\n\nI was looking at this code comment and wondered what it meant. AFAICT\nover time code has been moved around causing comments to lose their\noriginal context, so now it is hard to understand what they are\nsaying.\n\n~~~\n\nAfter a 2017 patch [1] the code in walsender.c function\nlogical_read_xlog_page() looked like this:\n\n/* make sure we have enough WAL available */\nflushptr = WalSndWaitForWal(targetPagePtr + reqLen);\n\n/* fail if not (implies we are going to shut down) */\nif (flushptr < targetPagePtr + reqLen)\nreturn -1;\n\n~~~\n\nThe same code in HEAD now looks like this:\n\n/*\n* Make sure we have enough WAL available before retrieving the current\n* timeline. This is needed to determine am_cascading_walsender accurately\n* which is needed to determine the current timeline.\n*/\nflushptr = WalSndWaitForWal(targetPagePtr + reqLen);\n\n/*\n* Since logical decoding is also permitted on a standby server, we need\n* to check if the server is in recovery to decide how to get the current\n* timeline ID (so that it also cover the promotion or timeline change\n* cases).\n*/\nam_cascading_walsender = RecoveryInProgress();\n\nif (am_cascading_walsender)\nGetXLogReplayRecPtr(&currTLI);\nelse\ncurrTLI = GetWALInsertionTimeLine();\n\nXLogReadDetermineTimeline(state, targetPagePtr, reqLen, currTLI);\nsendTimeLineIsHistoric = (state->currTLI != currTLI);\nsendTimeLine = state->currTLI;\nsendTimeLineValidUpto = state->currTLIValidUntil;\nsendTimeLineNextTLI = state->nextTLI;\n\n/* fail if not (implies we are going to shut down) */\nif (flushptr < targetPagePtr + reqLen)\nreturn -1;\n\n~~~\n\nNotice how the \"fail if not\" comment has become distantly separated\nfrom the flushptr assignment it was once adjacent to, so that comment\nhardly makes sense anymore -- e.g. \"fail if not\" WHAT?\n\nPerhaps the comment should say something like it used to:\n/* Fail if there is not enough WAL available. This can happen during\nshutdown. */\n\n======\n[1] https://github.com/postgres/postgres/commit/fca85f8ef157d4d58dea1fdc8e1f1f957b74ee78\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 3 Jun 2024 15:50:55 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Mon, 3 Jun 2024 at 11:21, Peter Smith <[email protected]> wrote:\n>\n> Hi,\n>\n> I was looking at this code comment and wondered what it meant. AFAICT\n> over time code has been moved around causing comments to lose their\n> original context, so now it is hard to understand what they are\n> saying.\n>\n> ~~~\n>\n> After a 2017 patch [1] the code in walsender.c function\n> logical_read_xlog_page() looked like this:\n>\n> /* make sure we have enough WAL available */\n> flushptr = WalSndWaitForWal(targetPagePtr + reqLen);\n>\n> /* fail if not (implies we are going to shut down) */\n> if (flushptr < targetPagePtr + reqLen)\n> return -1;\n>\n> ~~~\n>\n> The same code in HEAD now looks like this:\n>\n> /*\n> * Make sure we have enough WAL available before retrieving the current\n> * timeline. 
This is needed to determine am_cascading_walsender accurately\n> * which is needed to determine the current timeline.\n> */\n> flushptr = WalSndWaitForWal(targetPagePtr + reqLen);\n>\n> /*\n> * Since logical decoding is also permitted on a standby server, we need\n> * to check if the server is in recovery to decide how to get the current\n> * timeline ID (so that it also cover the promotion or timeline change\n> * cases).\n> */\n> am_cascading_walsender = RecoveryInProgress();\n>\n> if (am_cascading_walsender)\n> GetXLogReplayRecPtr(&currTLI);\n> else\n> currTLI = GetWALInsertionTimeLine();\n>\n> XLogReadDetermineTimeline(state, targetPagePtr, reqLen, currTLI);\n> sendTimeLineIsHistoric = (state->currTLI != currTLI);\n> sendTimeLine = state->currTLI;\n> sendTimeLineValidUpto = state->currTLIValidUntil;\n> sendTimeLineNextTLI = state->nextTLI;\n>\n> /* fail if not (implies we are going to shut down) */\n> if (flushptr < targetPagePtr + reqLen)\n> return -1;\n>\n> ~~~\n>\n> Notice how the \"fail if not\" comment has become distantly separated\n> from the flushptr assignment it was once adjacent to, so that comment\n> hardly makes sense anymore -- e.g. \"fail if not\" WHAT?\n>\n> Perhaps the comment should say something like it used to:\n> /* Fail if there is not enough WAL available. This can happen during\n> shutdown. */\n\nAgree with this, +1 for this change.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 26 Jun 2024 14:30:26 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Wed, Jun 26, 2024 at 02:30:26PM +0530, vignesh C wrote:\n> On Mon, 3 Jun 2024 at 11:21, Peter Smith <[email protected]> wrote:\n>> Perhaps the comment should say something like it used to:\n>> /* Fail if there is not enough WAL available. This can happen during\n>> shutdown. */\n> \n> Agree with this, +1 for this change.\n\nThat would be an improvement. Would you like to send a patch with all\nthe areas you think could stand for improvements?\n--\nMichael", "msg_date": "Thu, 27 Jun 2024 14:43:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Thu, Jun 27, 2024 at 3:44 PM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jun 26, 2024 at 02:30:26PM +0530, vignesh C wrote:\n> > On Mon, 3 Jun 2024 at 11:21, Peter Smith <[email protected]> wrote:\n> >> Perhaps the comment should say something like it used to:\n> >> /* Fail if there is not enough WAL available. This can happen during\n> >> shutdown. */\n> >\n> > Agree with this, +1 for this change.\n>\n> That would be an improvement. 
Would you like to send a patch with all\n> the areas you think could stand for improvements?\n> --\n\nOK, I attached a patch equivalent of the suggestion in this thread.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 28 Jun 2024 09:44:38 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Fri, Jun 28, 2024 at 5:15 AM Peter Smith <[email protected]> wrote:\n>\n> On Thu, Jun 27, 2024 at 3:44 PM Michael Paquier <[email protected]> wrote:\n> >\n> > On Wed, Jun 26, 2024 at 02:30:26PM +0530, vignesh C wrote:\n> > > On Mon, 3 Jun 2024 at 11:21, Peter Smith <[email protected]> wrote:\n> > >> Perhaps the comment should say something like it used to:\n> > >> /* Fail if there is not enough WAL available. This can happen during\n> > >> shutdown. */\n> > >\n> > > Agree with this, +1 for this change.\n> >\n> > That would be an improvement. Would you like to send a patch with all\n> > the areas you think could stand for improvements?\n> > --\n>\n> OK, I attached a patch equivalent of the suggestion in this thread.\n>\n\nShouldn't the check for flushptr (if (flushptr < targetPagePtr +\nreqLen)) be moved immediately after the call to WalSndWaitForWal().\nThe comment seems to suggests the same: \"Make sure we have enough WAL\navailable before retrieving the current timeline. ..\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 28 Jun 2024 11:48:26 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Fri, Jun 28, 2024 at 4:18 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jun 28, 2024 at 5:15 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Thu, Jun 27, 2024 at 3:44 PM Michael Paquier <[email protected]> wrote:\n> > >\n> > > On Wed, Jun 26, 2024 at 02:30:26PM +0530, vignesh C wrote:\n> > > > On Mon, 3 Jun 2024 at 11:21, Peter Smith <[email protected]> wrote:\n> > > >> Perhaps the comment should say something like it used to:\n> > > >> /* Fail if there is not enough WAL available. This can happen during\n> > > >> shutdown. */\n> > > >\n> > > > Agree with this, +1 for this change.\n> > >\n> > > That would be an improvement. Would you like to send a patch with all\n> > > the areas you think could stand for improvements?\n> > > --\n> >\n> > OK, I attached a patch equivalent of the suggestion in this thread.\n> >\n>\n> Shouldn't the check for flushptr (if (flushptr < targetPagePtr +\n> reqLen)) be moved immediately after the call to WalSndWaitForWal().\n> The comment seems to suggests the same: \"Make sure we have enough WAL\n> available before retrieving the current timeline. ..\"\n>\n\nYes, as I wrote in the first post, those lines did once used to be\nadjacent in logical_read_xlog_page.\n\nI also wondered if they still belonged together, but I opted for the\nsafest option of fixing only the comment instead of refactoring old\ncode when no problem had been reported.\n\nAFAICT these lines became separated due to a 2023 patch [1], and you\nwere one of the reviewers of that patch, so I assumed the code\nseparation was deemed OK at that time. 
Unless it was some mistake that\nslipped past multiple reviewers?\n\n======\n[1] https://github.com/postgres/postgres/commit/0fdab27ad68a059a1663fa5ce48d76333f1bd74c#diff-da7052ce339ec037f5c721e08a9532b1fcfb4405966cf6a78d0c811550fd7b43\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 28 Jun 2024 17:24:43 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Fri, Jun 28, 2024 at 12:55 PM Peter Smith <[email protected]> wrote:\n>\n> On Fri, Jun 28, 2024 at 4:18 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Jun 28, 2024 at 5:15 AM Peter Smith <[email protected]> wrote:\n> > >\n> > > On Thu, Jun 27, 2024 at 3:44 PM Michael Paquier <[email protected]> wrote:\n> > > >\n> > > > On Wed, Jun 26, 2024 at 02:30:26PM +0530, vignesh C wrote:\n> > > > > On Mon, 3 Jun 2024 at 11:21, Peter Smith <[email protected]> wrote:\n> > > > >> Perhaps the comment should say something like it used to:\n> > > > >> /* Fail if there is not enough WAL available. This can happen during\n> > > > >> shutdown. */\n> > > > >\n> > > > > Agree with this, +1 for this change.\n> > > >\n> > > > That would be an improvement. Would you like to send a patch with all\n> > > > the areas you think could stand for improvements?\n> > > > --\n> > >\n> > > OK, I attached a patch equivalent of the suggestion in this thread.\n> > >\n> >\n> > Shouldn't the check for flushptr (if (flushptr < targetPagePtr +\n> > reqLen)) be moved immediately after the call to WalSndWaitForWal().\n> > The comment seems to suggests the same: \"Make sure we have enough WAL\n> > available before retrieving the current timeline. ..\"\n> >\n>\n> Yes, as I wrote in the first post, those lines did once used to be\n> adjacent in logical_read_xlog_page.\n>\n> I also wondered if they still belonged together, but I opted for the\n> safest option of fixing only the comment instead of refactoring old\n> code when no problem had been reported.\n>\n> AFAICT these lines became separated due to a 2023 patch [1], and you\n> were one of the reviewers of that patch, so I assumed the code\n> separation was deemed OK at that time. Unless it was some mistake that\n> slipped past multiple reviewers?\n>\n\nI don't know whether your assumption is correct. AFAICS, those two\nlines should be together. Let us ee if Bertrand remembers anything?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 28 Jun 2024 15:15:22 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "Hi,\n\nOn Fri, Jun 28, 2024 at 03:15:22PM +0530, Amit Kapila wrote:\n> On Fri, Jun 28, 2024 at 12:55 PM Peter Smith <[email protected]> wrote:\n> >\n> > On Fri, Jun 28, 2024 at 4:18 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Fri, Jun 28, 2024 at 5:15 AM Peter Smith <[email protected]> wrote:\n> > > >\n> > > > On Thu, Jun 27, 2024 at 3:44 PM Michael Paquier <[email protected]> wrote:\n> > > > >\n> > > > > On Wed, Jun 26, 2024 at 02:30:26PM +0530, vignesh C wrote:\n> > > > > > On Mon, 3 Jun 2024 at 11:21, Peter Smith <[email protected]> wrote:\n> > > > > >> Perhaps the comment should say something like it used to:\n> > > > > >> /* Fail if there is not enough WAL available. This can happen during\n> > > > > >> shutdown. 
*/\n> > > > > >\n> > > > > > Agree with this, +1 for this change.\n> > > > >\n> > > > > That would be an improvement. Would you like to send a patch with all\n> > > > > the areas you think could stand for improvements?\n> > > > > --\n> > > >\n> > > > OK, I attached a patch equivalent of the suggestion in this thread.\n> > > >\n> > >\n> > > Shouldn't the check for flushptr (if (flushptr < targetPagePtr +\n> > > reqLen)) be moved immediately after the call to WalSndWaitForWal().\n> > > The comment seems to suggests the same: \"Make sure we have enough WAL\n> > > available before retrieving the current timeline. ..\"\n> > >\n> >\n> > Yes, as I wrote in the first post, those lines did once used to be\n> > adjacent in logical_read_xlog_page.\n> >\n> > I also wondered if they still belonged together, but I opted for the\n> > safest option of fixing only the comment instead of refactoring old\n> > code when no problem had been reported.\n> >\n> > AFAICT these lines became separated due to a 2023 patch [1], and you\n> > were one of the reviewers of that patch, so I assumed the code\n> > separation was deemed OK at that time. Unless it was some mistake that\n> > slipped past multiple reviewers?\n> >\n> \n> I don't know whether your assumption is correct. AFAICS, those two\n> lines should be together. Let us ee if Bertrand remembers anything?\n> \n\nIIRC the WalSndWaitForWal() call has been moved to ensure that we can determine\nthe timeline accurately. I agree with Amit that it would make more sense to\nmove the (flushptr < targetPagePtr + reqLen) check just after the flushptr\nassignement. I don't recall that we discussed any reason of not doing so.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 28 Jun 2024 13:00:47 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Fri, Jun 28, 2024 at 4:18 PM Amit Kapila <[email protected]> wrote:\n>\n...\n> Shouldn't the check for flushptr (if (flushptr < targetPagePtr +\n> reqLen)) be moved immediately after the call to WalSndWaitForWal().\n> The comment seems to suggests the same: \"Make sure we have enough WAL\n> available before retrieving the current timeline. ..\"\n\nOK, I have changed the code as suggested. Please see the attached v2 patch.\n\nmake check-world was successful.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 1 Jul 2024 09:34:04 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Fri, Jun 28, 2024 at 6:30 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Jun 28, 2024 at 03:15:22PM +0530, Amit Kapila wrote:\n> > On Fri, Jun 28, 2024 at 12:55 PM Peter Smith <[email protected]> wrote:\n> > >\n> >\n> > I don't know whether your assumption is correct. AFAICS, those two\n> > lines should be together. Let us ee if Bertrand remembers anything?\n> >\n>\n> IIRC the WalSndWaitForWal() call has been moved to ensure that we can determine\n> the timeline accurately.\n>\n\nThis part is understandable but I don't understand the part of the\ncomment (This is needed to determine am_cascading_walsender accurately\n..) atop a call to WalSndWaitForWal(). The am_cascading_walsender is\ndetermined based on the results of RecoveryInProgress(). 
Can the wait\nfor WAL by using WalSndWaitForWal() change the result of\nRecoveryInProgress()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 5 Jul 2024 11:10:00 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Mon, Jul 1, 2024 at 5:04 AM Peter Smith <[email protected]> wrote:\n>\n> On Fri, Jun 28, 2024 at 4:18 PM Amit Kapila <[email protected]> wrote:\n> >\n> ...\n> > Shouldn't the check for flushptr (if (flushptr < targetPagePtr +\n> > reqLen)) be moved immediately after the call to WalSndWaitForWal().\n> > The comment seems to suggests the same: \"Make sure we have enough WAL\n> > available before retrieving the current timeline. ..\"\n>\n> OK, I have changed the code as suggested. Please see the attached v2 patch.\n>\n\nLGTM. I'll push this early next week unless someone thinks otherwise.\nI have added a commit message in the attached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 5 Jul 2024 12:20:49 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "Hi,\n\nOn Fri, Jul 05, 2024 at 11:10:00AM +0530, Amit Kapila wrote:\n> On Fri, Jun 28, 2024 at 6:30 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Fri, Jun 28, 2024 at 03:15:22PM +0530, Amit Kapila wrote:\n> > > On Fri, Jun 28, 2024 at 12:55 PM Peter Smith <[email protected]> wrote:\n> > > >\n> > >\n> > > I don't know whether your assumption is correct. AFAICS, those two\n> > > lines should be together. Let us ee if Bertrand remembers anything?\n> > >\n> >\n> > IIRC the WalSndWaitForWal() call has been moved to ensure that we can determine\n> > the timeline accurately.\n> >\n> \n> This part is understandable but I don't understand the part of the\n> comment (This is needed to determine am_cascading_walsender accurately\n> ..) atop a call to WalSndWaitForWal(). The am_cascading_walsender is\n> determined based on the results of RecoveryInProgress(). Can the wait\n> for WAL by using WalSndWaitForWal() change the result of\n> RecoveryInProgress()?\n\nNo, but WalSndWaitForWal() must be called _before_ assigning\n\"am_cascading_walsender = RecoveryInProgress();\". The reason is that during\na promotion am_cascading_walsender must be assigned _after_ the walsender is\nwaked up (after the promotion). So that when the walsender exits WalSndWaitForWal(),\nthen am_cascading_walsender is assigned \"accurately\" and so the timeline is. \n\nWhat I meant to say in this comment is that \"am_cascading_walsender = RecoveryInProgress();\"\nmust be called _after_ \"flushptr = WalSndWaitForWal(targetPagePtr + reqLen);\".\n\nFor example, swaping both lines would cause the 035_standby_logical_decoding.pl\nto fail during the promotion test as the walsender would read from the \"previous\"\ntimeline and then produce things like:\n\n\"ERROR: could not find record while sending logically-decoded data: invalid record length at 0/6427B20: expected at least 24, got 0\"\n\nTo avoid ambiguity should we replace?\n\n\"\n /*\n * Make sure we have enough WAL available before retrieving the current\n * timeline. This is needed to determine am_cascading_walsender accurately\n * which is needed to determine the current timeline.\n */\n\"\n\nWith:\n\n\"\n /*\n * Make sure we have enough WAL available before retrieving the current\n * timeline. 
am_cascading_walsender must be assigned after\n\t * WalSndWaitForWal() (so that it is also correct when the walsender wakes\n\t * up after a promotion).\n */\n\"\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 6 Jul 2024 14:06:19 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Sat, Jul 6, 2024 at 7:36 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Jul 05, 2024 at 11:10:00AM +0530, Amit Kapila wrote:\n> > On Fri, Jun 28, 2024 at 6:30 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > On Fri, Jun 28, 2024 at 03:15:22PM +0530, Amit Kapila wrote:\n> > > > On Fri, Jun 28, 2024 at 12:55 PM Peter Smith <[email protected]> wrote:\n> > > > >\n> > > >\n> > > > I don't know whether your assumption is correct. AFAICS, those two\n> > > > lines should be together. Let us ee if Bertrand remembers anything?\n> > > >\n> > >\n> > > IIRC the WalSndWaitForWal() call has been moved to ensure that we can determine\n> > > the timeline accurately.\n> > >\n> >\n> > This part is understandable but I don't understand the part of the\n> > comment (This is needed to determine am_cascading_walsender accurately\n> > ..) atop a call to WalSndWaitForWal(). The am_cascading_walsender is\n> > determined based on the results of RecoveryInProgress(). Can the wait\n> > for WAL by using WalSndWaitForWal() change the result of\n> > RecoveryInProgress()?\n>\n> No, but WalSndWaitForWal() must be called _before_ assigning\n> \"am_cascading_walsender = RecoveryInProgress();\". The reason is that during\n> a promotion am_cascading_walsender must be assigned _after_ the walsender is\n> waked up (after the promotion). So that when the walsender exits WalSndWaitForWal(),\n> then am_cascading_walsender is assigned \"accurately\" and so the timeline is.\n>\n> What I meant to say in this comment is that \"am_cascading_walsender = RecoveryInProgress();\"\n> must be called _after_ \"flushptr = WalSndWaitForWal(targetPagePtr + reqLen);\".\n>\n> For example, swaping both lines would cause the 035_standby_logical_decoding.pl\n> to fail during the promotion test as the walsender would read from the \"previous\"\n> timeline and then produce things like:\n>\n> \"ERROR: could not find record while sending logically-decoded data: invalid record length at 0/6427B20: expected at least 24, got 0\"\n>\n> To avoid ambiguity should we replace?\n>\n> \"\n> /*\n> * Make sure we have enough WAL available before retrieving the current\n> * timeline. This is needed to determine am_cascading_walsender accurately\n> * which is needed to determine the current timeline.\n> */\n> \"\n>\n> With:\n>\n> \"\n> /*\n> * Make sure we have enough WAL available before retrieving the current\n> * timeline. am_cascading_walsender must be assigned after\n> * WalSndWaitForWal() (so that it is also correct when the walsender wakes\n> * up after a promotion).\n> */\n> \"\n>\n\nThis sounds better but it is better to add just before we determine\nam_cascading_walsender as is done in the attached. What do you think?\n\nBTW, is it possible that the promotion gets completed after we wait\nfor the required WAL and before assigning am_cascading_walsender? 
I\nthink even if that happens we can correctly determine the required\ntimeline because all the required WAL is already available, is that\ncorrect or am I missing something here?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 8 Jul 2024 08:46:19 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 08, 2024 at 08:46:19AM +0530, Amit Kapila wrote:\n> This sounds better but it is better to add just before we determine\n> am_cascading_walsender as is done in the attached. What do you think?\n\nThanks! LGTM.\n\n> \n> BTW, is it possible that the promotion gets completed after we wait\n> for the required WAL and before assigning am_cascading_walsender?\n\nYeah, I don't think there is anything that would prevent a promotion to\nhappen and complete here. I did a few tests (pausing the walsender with gdb at\nvarious places and promoting the standby).\n\n> think even if that happens we can correctly determine the required\n> timeline because all the required WAL is already available, is that\n> correct \n\nYeah that's correct.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jul 2024 05:38:25 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Mon, Jul 8, 2024 at 11:08 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Mon, Jul 08, 2024 at 08:46:19AM +0530, Amit Kapila wrote:\n> > This sounds better but it is better to add just before we determine\n> > am_cascading_walsender as is done in the attached. What do you think?\n>\n> Thanks! LGTM.\n>\n\nI would like to push this to HEAD only as we don't see any bug that\nthis change can prevent. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 8 Jul 2024 11:20:45 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 08, 2024 at 11:20:45AM +0530, Amit Kapila wrote:\n> On Mon, Jul 8, 2024 at 11:08 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Mon, Jul 08, 2024 at 08:46:19AM +0530, Amit Kapila wrote:\n> > > This sounds better but it is better to add just before we determine\n> > > am_cascading_walsender as is done in the attached. What do you think?\n> >\n> > Thanks! LGTM.\n> >\n> \n> I would like to push this to HEAD only as we don't see any bug that\n> this change can prevent. What do you think?\n> \n\nYeah, fully agree. 
I don't see how the previous check location could produce\nany bug.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jul 2024 06:19:54 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c comment with no context is hard to understand" }, { "msg_contents": "On Mon, Jul 8, 2024 at 4:19 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Mon, Jul 08, 2024 at 11:20:45AM +0530, Amit Kapila wrote:\n> > On Mon, Jul 8, 2024 at 11:08 AM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > On Mon, Jul 08, 2024 at 08:46:19AM +0530, Amit Kapila wrote:\n> > > > This sounds better but it is better to add just before we determine\n> > > > am_cascading_walsender as is done in the attached. What do you think?\n> > >\n> > > Thanks! LGTM.\n> > >\n> >\n> > I would like to push this to HEAD only as we don't see any bug that\n> > this change can prevent. What do you think?\n> >\n>\n> Yeah, fully agree. I don't see how the previous check location could produce\n> any bug.\n>\n\nHi,\n\nSince the patch was pushed, one recent failure was observed in BF\nmember rorqual [1]. but this is probably unrelated to the push because\nwe found the same failure also occurred in April [2].\n\n======\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-07-09%2003%3A46%3A44\n[2] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=rorqual&dt=2024-04-18%2006%3A52%3A07&stg=recovery-check\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 9 Jul 2024 16:01:33 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: walsender.c comment with no context is hard to understand" } ]
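[For readers skimming the thread above: a condensed sketch of the ordering it converges on for logical_read_xlog_page() in walsender.c. This is illustrative only and not the literal committed code; the function and symbol names are taken from walsender.c, but everything else is elided or paraphrased.]

static int
logical_read_xlog_page(XLogReaderState *state, XLogRecPtr targetPagePtr,
                       int reqLen, XLogRecPtr targetRecPtr, char *cur_page)
{
    XLogRecPtr  flushptr;
    TimeLineID  currTLI;

    /* Make sure we have enough WAL available before going further. */
    flushptr = WalSndWaitForWal(targetPagePtr + reqLen);

    /*
     * Fail if there is not enough WAL available; this can happen during
     * shutdown.  Per the discussion, the check sits immediately after the
     * wait rather than further down.
     */
    if (flushptr < targetPagePtr + reqLen)
        return -1;

    /*
     * Determine am_cascading_walsender only after the wait, so that a
     * promotion completed while we slept is reflected here and the
     * correct timeline is chosen below.
     */
    am_cascading_walsender = RecoveryInProgress();
    if (am_cascading_walsender)
        GetXLogReplayRecPtr(&currTLI);
    else
        currTLI = GetWALInsertionTimeLine();

    /* ... timeline bookkeeping and the actual page read follow ... */
}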
[ { "msg_contents": "\nWhen testing my own patches or review other's patches, I want to know if\nthe new code has been tested, however our current 'make coverage-html'\nshows all the codes rather than the 'new code', so is there a good way\nto get the answer for the above question?\n\nI searched lcov at [1] and the options like '--diff-file' or\n'--select-script' looks very promising, but all of them needs some time\nto try it out and then automate it. so I'd like to ask first..\n\n[1] https://github.com/linux-test-project/lcov/blob/master/README \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Mon, 03 Jun 2024 22:20:05 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "differential test coverage when working on a patch" } ]
[ { "msg_contents": "Building with clang and -flto on macOS currently fails with errors \nsimilar to [1]. This is because the --export-dynamic flag is called \n-export_dynamic [2] instead and we have not been passing this variant to \nthe linker, so far.\n\nAttached patch fixes that for configure/make.\n\nCC: Tom, who hit the same in [3] and Andres who last touched \n--export-dynamic in 9db49fc5bfdc0126be03f4b8986013e59d93b91d.\n\nWill also create an issue upstream for meson, because the logic is \nbuilt-in there.\n\nWould be great if this could be back-patched, since this is the same in \nall live versions.\n\nBest,\n\nWolfgang\n\n[1]: https://postgr.es/m/1581936537572-0.post%40n3.nabble.com\n[2]: \nhttps://opensource.apple.com/source/ld64/ld64-609/doc/man/man1/ld.1.auto.html \n(grep for export_dynamic)\n[3]: https://postgr.es/m/21800.1499270547%40sss.pgh.pa.us", "msg_date": "Mon, 3 Jun 2024 16:22:01 +0200", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Build with LTO / -flto on macOS" }, { "msg_contents": "On 03.06.24 16:22, Wolfgang Walther wrote:\n> Building with clang and -flto on macOS currently fails with errors \n> similar to [1]. This is because the --export-dynamic flag is called \n> -export_dynamic [2] instead and we have not been passing this variant to \n> the linker, so far.\n\nIt's probably worth clarifying that this option is needed on macOS only \nif LTO is also enabled. For standard (non-LTO) builds, the \nexport-dynamic behavior is already the default on macOS (otherwise \nnothing in PostgreSQL would work).\n\nI don't think we explicitly offer LTO builds as part of the make build \nsystem, so anyone trying this would do it sort of self-service, by \npassing additional options to configure or make. In which case they \nmight as well pass the -export_dynamic option along in the same way?\n\nI don't mind addressing this in PG18, but I would hesitate with \nbackpatching. With macOS, it's always hard to figure out whether these \nkinds of options work the same way going versions back.\n\n\n\n", "msg_date": "Mon, 3 Jun 2024 16:32:01 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Peter Eisentraut:\n> It's probably worth clarifying that this option is needed on macOS only \n> if LTO is also enabled.  For standard (non-LTO) builds, the \n> export-dynamic behavior is already the default on macOS (otherwise \n> nothing in PostgreSQL would work).\n\nRight, man page say this:\n\n > Preserves all global symbols in main executables during LTO. Without \nthis option, Link Time Optimization is allowed to inline and remove \nglobal functions. This option is used when a main executable may load a \nplug-in which requires certain symbols from the main executable.\n\nPeter:\n> I don't think we explicitly offer LTO builds as part of the make build \n> system, so anyone trying this would do it sort of self-service, by \n> passing additional options to configure or make.  In which case they \n> might as well pass the -export_dynamic option along in the same way?\n\nThe challenge is that it defeats the purpose of LTO to pass this along \nto everything, e.g. via CFLAGS. The Makefiles set this in LDFLAGS_EX_BE \nonly, so it only affects the backend binary. This is not at all obvious \nand took me quite a while to figure out why LTO silently didn't strip \nsymbols from other binaries. 
It does work to explicitly set \nLDFLAGS_EX_BE, though.\n\nAlso, passing the LTO flag on Linux \"just works\" (clang, not GCC \nnecessarily).\n\n> I don't mind addressing this in PG18, but I would hesitate with \n> backpatching.  With macOS, it's always hard to figure out whether these \n> kinds of options work the same way going versions back.\n\nAll the versions for ld64 are in [1]. It seems this was introduced in \nld64-224.1 [2] the first time. It was not there in ld64-136 [3]. Finally \nthe man page has **exactly** the same wording in the latest version \nld64-609 [4].\n\nWe could go further and compare the source, but I think it's safe to \nassume that this flag hasn't changed much and should not affect non-LTO \nbuilds. And for even older versions it would just not be supported, so \nconfigure would not use it.\n\nBest,\n\nWolfgang\n\n[1]: https://opensource.apple.com/source/ld64/\n[2]: \nhttps://opensource.apple.com/source/ld64/ld64-224.1/doc/man/man1/ld.1.auto.html\n[3]: \nhttps://opensource.apple.com/source/ld64/ld64-136/doc/man/man1/ld.1.auto.html\n[4]: \nhttps://opensource.apple.com/source/ld64/ld64-609/doc/man/man1/ld.1.auto.html\n\n\n", "msg_date": "Mon, 3 Jun 2024 17:07:22 +0200", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Wolfgang Walther:\n> Peter:\n>> I don't think we explicitly offer LTO builds as part of the make build \n>> system, so anyone trying this would do it sort of self-service, by \n>> passing additional options to configure or make.  In which case they \n>> might as well pass the -export_dynamic option along in the same way?\n> \n> The challenge is that it defeats the purpose of LTO to pass this along \n> to everything, e.g. via CFLAGS. The Makefiles set this in LDFLAGS_EX_BE \n> only, so it only affects the backend binary. This is not at all obvious \n> and took me quite a while to figure out why LTO silently didn't strip \n> symbols from other binaries. It does work to explicitly set \n> LDFLAGS_EX_BE, though.\n\nOh, and more importantly: LDFLAGS_EX_BE is not available on all back \nbranches. It was only introduced in v16 in preparation for meson. So up \nto v15, I would have to patch src/makesfiles/Makefile.darwin to set \nexport_dynamic.\n\nSo back-patching a change like this would certainly help to get LTO \nacross versions seamlessly - which is what I am trying to achieve while \npackaging all versions in nixpkgs / NixOS.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Mon, 3 Jun 2024 19:11:14 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Hi,\n\nOn 2024-06-03 17:07:22 +0200, Wolfgang Walther wrote:\n> Peter Eisentraut:\n> > It's probably worth clarifying that this option is needed on macOS only\n> > if LTO is also enabled.� For standard (non-LTO) builds, the\n> > export-dynamic behavior is already the default on macOS (otherwise\n> > nothing in PostgreSQL would work).\n> \n> Right, man page say this:\n> \n> > Preserves all global symbols in main executables during LTO. Without this\n> option, Link Time Optimization is allowed to inline and remove global\n> functions. This option is used when a main executable may load a plug-in\n> which requires certain symbols from the main executable.\n\nGah. Apples tendency to just break stuff that has worked across *nix-y\nplatforms for decades is pretty annoying. 
They could just have made\n--export-dynamic an alias for --export_dynamic, but no, everyone needs a\nspecial macos thingy in their build scripts.\n\n\n> Peter:\n> > I don't think we explicitly offer LTO builds as part of the make build\n> > system, so anyone trying this would do it sort of self-service, by\n> > passing additional options to configure or make.� In which case they\n> > might as well pass the -export_dynamic option along in the same way?\n> \n> The challenge is that it defeats the purpose of LTO to pass this along to\n> everything, e.g. via CFLAGS. The Makefiles set this in LDFLAGS_EX_BE only,\n> so it only affects the backend binary. This is not at all obvious and took\n> me quite a while to figure out why LTO silently didn't strip symbols from\n> other binaries. It does work to explicitly set LDFLAGS_EX_BE, though.\n> \n> Also, passing the LTO flag on Linux \"just works\" (clang, not GCC\n> necessarily).\n\nIt should just work on gcc, or at least has in the recent past.\n\n\nISTM if we want to test for -export_dynamic like what you proposed, we should\ndo so only if --export-dynamic wasn't found. No need to incur the overhead on\n!macos.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Jun 2024 11:40:44 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "On 03.06.24 17:07, Wolfgang Walther wrote:\n>> I don't mind addressing this in PG18, but I would hesitate with \n>> backpatching.  With macOS, it's always hard to figure out whether \n>> these kinds of options work the same way going versions back.\n> \n> All the versions for ld64 are in [1]. It seems this was introduced in \n> ld64-224.1 [2] the first time. It was not there in ld64-136 [3]. Finally \n> the man page has **exactly** the same wording in the latest version \n> ld64-609 [4].\n> \n> We could go further and compare the source, but I think it's safe to \n> assume that this flag hasn't changed much and should not affect non-LTO \n> builds. And for even older versions it would just not be supported, so \n> configure would not use it.\n\nWith the native compiler tooling on macOS, it is not safe to assume \nanything, including that the man pages are accurate or that the \ndocumented options actually work correctly and don't break anything \nelse. Unless we have actual testing on all the supported macOS \nversions, I don't believe it.\n\nGiven that LTO apparently never worked on macOS, this is not a \nregression, so I wouldn't backpatch it. I'm not objecting, but I don't \nwant to touch it.\n\n\n\n", "msg_date": "Tue, 4 Jun 2024 09:26:06 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Andres Freund:\n> Gah. Apples tendency to just break stuff that has worked across *nix-y\n> platforms for decades is pretty annoying. They could just have made\n> --export-dynamic an alias for --export_dynamic, but no, everyone needs a\n> special macos thingy in their build scripts.\n\nInteresting enough my Linux ld does support -export_dynamic, too.. but \nit doesn't say anywhere in the man pages or so.\n\n\n>> Also, passing the LTO flag on Linux \"just works\" (clang, not GCC\n>> necessarily).\n> \n> It should just work on gcc, or at least has in the recent past.\n\nWell it \"works\" in a sense that the build succeeds and check-world as \nwell. 
But there are some symbols in all the client binaries that I know \nare unused (paths to .../include etc.), and which LLVM's LTO strips out \nhappily - that are still in there after GCC's LTO.\n\nGCC can remove them with -fdata-sections -ffunction-sections \n-fmerge-constants and -Wl,--gc-sections. But not with -flto. At least I \ndidn't manage to.\n\n\n> ISTM if we want to test for -export_dynamic like what you proposed, we should\n> do so only if --export-dynamic wasn't found. No need to incur the overhead on\n> !macos.\n\nMakes sense! v2 attached.\n\nI also attached a .backpatch to show what that would look like for v15 \nand down.\n\n\nPeter Eisentraut:\n > With the native compiler tooling on macOS, it is not safe to assume\n > anything, including that the man pages are accurate or that the\n > documented options actually work correctly and don't break anything\n > else. Unless we have actual testing on all the supported macOS\n > versions, I don't believe it.\n\nWhich macOS versions are \"supported\"?\n\nI just set up a VM with macOS Mojave (2018) and tested both the .patch \non HEAD as well as the .backpatch on REL_12_STABLE with -flto. Build \npassed, make check-world as well.\n\nclang --version for Mojave:\nApple LLVM version 10.0.1 (clang-1001.0.46.4)\nTarget: x86_64-apple-darwin18.5.0\n\nclang --version for Sonoma (where I tested before):\nApple clang version 15.0.0 (clang-1500.3.9.4)\nTarget: [email protected]\n\nSince PostgreSQL 12 is from 2019 and Mojave from 2018, I think that's \nfar enough back?\n\n\n > Given that LTO apparently never worked on macOS, this is not a\n > regression, so I wouldn't backpatch it. I'm not objecting, but I don't\n > want to touch it.\n\nFair enough! Hopefully my testing convinces more than the man pages ;)\n\nBest,\n\nWolfgang", "msg_date": "Tue, 4 Jun 2024 18:30:07 +0200", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> With the native compiler tooling on macOS, it is not safe to assume \n> anything, including that the man pages are accurate or that the \n> documented options actually work correctly and don't break anything \n> else. Unless we have actual testing on all the supported macOS \n> versions, I don't believe it.\n\nRelevant to this: I wonder what we think the supported macOS versions\nare, anyway. AFAICS, the buildfarm only covers current (Sonoma)\nand current-1 (Ventura) major versions, and only the latest minor\nversions in those OS branches.\n\nI share Peter's unwillingness to assume that Apple hasn't randomly\nfixed or broken stuff across toolchain versions. Their track record\nfully justifies that lack of trust.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Jun 2024 12:41:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "On 04.06.24 18:41, Tom Lane wrote:\n> Relevant to this: I wonder what we think the supported macOS versions\n> are, anyway. AFAICS, the buildfarm only covers current (Sonoma)\n> and current-1 (Ventura) major versions, and only the latest minor\n> versions in those OS branches.\n\nFor other OS lines I think we are settling on supporting what the OS \nvendor supports. 
So for macOS at the moment this would be current, \ncurrent-1, and current-2, per \n<https://en.wikipedia.org/wiki/MacOS_version_history#Releases>.\n\n\n\n", "msg_date": "Wed, 5 Jun 2024 08:36:28 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Peter Eisentraut:\n> On 04.06.24 18:41, Tom Lane wrote:\n>> Relevant to this: I wonder what we think the supported macOS versions\n>> are, anyway.  AFAICS, the buildfarm only covers current (Sonoma)\n>> and current-1 (Ventura) major versions, and only the latest minor\n>> versions in those OS branches.\n> \n> For other OS lines I think we are settling on supporting what the OS \n> vendor supports.  So for macOS at the moment this would be current, \n> current-1, and current-2, per \n> <https://en.wikipedia.org/wiki/MacOS_version_history#Releases>.\n\nSo I tested both HEAD and v12 on current and current-5, both successful. \nThat should cover current-1 and current-2, too. If you want me to test \nany other macOS versions inbetween, or any other PG versions, I can do that.\n\nI would really like to upstream those kind of patches and see them \nbackpatched - otherwise we need to carry around those patches for up to \n5 years in the distros. And in light of the discussion in [1] my goal is \nto reduce the number of patches carried to a minimum. Yes - those \npatches are simple enough - but the more patches you have, the less \nlikely you are going to spot a malicious patch inbetween.\n\nBest,\n\nWolfgang\n\n[1]: https://postgr.es/m/flat/ZgdCpFThi9ODcCsJ%40momjian.us\n\n\n", "msg_date": "Wed, 5 Jun 2024 09:08:28 +0200", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Hi,\n\n> So I tested both HEAD and v12 on current and current-5, both successful.\n> That should cover current-1 and current-2, too. If you want me to test\n> any other macOS versions inbetween, or any other PG versions, I can do that.\n>\n> I would really like to upstream those kind of patches and see them\n> backpatched - otherwise we need to carry around those patches for up to\n> 5 years in the distros. And in light of the discussion in [1] my goal is\n> to reduce the number of patches carried to a minimum. Yes - those\n> patches are simple enough - but the more patches you have, the less\n> likely you are going to spot a malicious patch inbetween.\n\nThe patch was marked as \"Needs review\" so I decided to take a look at it.\n\nI tested v2-0001 on macOS Sonoma 14.5 with Autotools.\n\nconfigure said:\n\n```\nchecking whether gcc supports -Wl,--export-dynamic, for LDFLAGS_EX_BE... no\nchecking whether gcc supports -Wl,-export_dynamic, for LDFLAGS_EX_BE... yes\n```\n\nI also checked that -Wl,-export_dynamic was used when linking postgres binary.\n\nOn Linux configure says:\n\n```\nchecking whether gcc supports -Wl,--export-dynamic, for LDFLAGS_EX_BE... yes\n```\n\n... and `-Wl,--export-dynamic` is used when linking postgres.\n\ncfbot is happy with the patch too.\n\nThere is not much to say about the code. It's Autotools and it's ugly,\nbut it gets the job done.\n\nIt seems to me that the patch is not going to become any better and it\ndoesn't need any more attention from the reviewers. 
Thus I changed the\nstatus of the CF entry to \"Ready for Committer\".\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 Jul 2024 13:40:58 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Aleksander Alekseev <[email protected]> writes:\n> It seems to me that the patch is not going to become any better and it\n> doesn't need any more attention from the reviewers. Thus I changed the\n> status of the CF entry to \"Ready for Committer\".\n\nSo ... there is quite a disconnect between what this patch actually\ndoes (i.e., probe to see if \"-Wl,-export_dynamic\" is accepted) and\nthe title of this thread. I wouldn't have much of a problem with\nthe patch in isolation. However, what Apple's man page for ld(1)\nsays is\n\n -export_dynamic\n Preserves all global symbols in main executables during LTO.\n Without this option, Link Time Optimization is allowed to inline\n and remove global functions. This option is used when a main\n executable may load a plug-in which requires certain symbols from\n the main executable.\n\nwhich agrees with Wolfgang's comment that it doesn't do much unless\nyou enable LTO. So that raises two questions:\n\n1. If you're going to manually inject -flto, seems like you could\nmanually inject -Wl,-export_dynamic too, so why do you need this\npatch?\n\n2. Do we really want to encourage people to build with -flto?\n\nI fear that #2 is actually a pretty serious concern. I think there\nare a lot of places where we've assumed semi-implicitly that\ncompilation file boundaries are optimization barriers, particularly\naround stuff like LWLocks and semaphores. I don't really want to\nspend time chasing obscure, irreproducible bugs that may appear when\nthat assumption gets broken. I especially don't want to do it just\nbecause some packager has randomly decided to inject random build\nswitches.\n\nIn short: if we want to support LTO, let's do it officially and not\nby the back door. But I think somebody needs to make the case that\nthere are compelling benefits that would justify the nontrivial\namount of risk and work that may ensue. My default position here\nis \"sorry, we don't support that\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jul 2024 11:06:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Re: Tom Lane\n> I fear that #2 is actually a pretty serious concern. I think there\n> are a lot of places where we've assumed semi-implicitly that\n> compilation file boundaries are optimization barriers, particularly\n> around stuff like LWLocks and semaphores. I don't really want to\n> spend time chasing obscure, irreproducible bugs that may appear when\n> that assumption gets broken. I especially don't want to do it just\n> because some packager has randomly decided to inject random build\n> switches.\n\nUbuntu enabled -ftlo=auto by default in 22.04, so it has been around\nfor some time already.\n\n$ dpkg-buildflags\nCFLAGS=-g -O2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -ffile-prefix-map=... 
-flto=auto -ffat-lto-objects -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -fdebug-prefix-map=...\n\nChristoph\n\n\n", "msg_date": "Fri, 19 Jul 2024 17:21:05 +0200", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Hi,\n\nOn 2024-07-19 11:06:47 -0400, Tom Lane wrote:\n> 2. Do we really want to encourage people to build with -flto?\n> \n> I fear that #2 is actually a pretty serious concern. I think there\n> are a lot of places where we've assumed semi-implicitly that\n> compilation file boundaries are optimization barriers, particularly\n> around stuff like LWLocks and semaphores. I don't really want to\n> spend time chasing obscure, irreproducible bugs that may appear when\n> that assumption gets broken. I especially don't want to do it just\n> because some packager has randomly decided to inject random build\n> switches.\n\nI don't really buy this argument. It'd be one thing if compilation boundaries\nactually provided hard guarantees - but they don't, the CPU can reorder things\nas well, not just the compiler. And the CPU doesn't know about compilation\nunits.\n\nIf anything, compiler reorderings are *less* obscure than CPU reordering,\nbecause the latter is heavily dependent on running on large enough machines\nwith specific microarchitectures.\n\n\nThe only case I know where we do rely on compilation units providing some\nlevel of boundaries is on compilers where we don't know how to emit a compiler\nbarrier. That's probably a fallback we ought to remove one of these days...\n\n\n> In short: if we want to support LTO, let's do it officially and not\n> by the back door. But I think somebody needs to make the case that\n> there are compelling benefits that would justify the nontrivial\n> amount of risk and work that may ensue. My default position here\n> is \"sorry, we don't support that\".\n\nFWIW, I've seen pretty substantial wins, particularly in more heavyweight\nqueries.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 19 Jul 2024 12:29:07 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-07-19 11:06:47 -0400, Tom Lane wrote:\n>> 2. Do we really want to encourage people to build with -flto?\n\n> The only case I know where we do rely on compilation units providing some\n> level of boundaries is on compilers where we don't know how to emit a compiler\n> barrier. That's probably a fallback we ought to remove one of these days...\n\nHm. We've moved our platform/toolchain goalposts far enough in the\nlast few releases that that might not be too big a lift. Do you\nknow offhand which supported platforms still have a problem there?\n\n(mumble AIX mumble)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jul 2024 15:36:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Hi,\n\nOn 2024-07-19 15:36:29 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2024-07-19 11:06:47 -0400, Tom Lane wrote:\n> >> 2. Do we really want to encourage people to build with -flto?\n> \n> > The only case I know where we do rely on compilation units providing some\n> > level of boundaries is on compilers where we don't know how to emit a compiler\n> > barrier. 
That's probably a fallback we ought to remove one of these days...\n> \n> Hm. We've moved our platform/toolchain goalposts far enough in the\n> last few releases that that might not be too big a lift. Do you\n> know offhand which supported platforms still have a problem there?\n>\n> (mumble AIX mumble)\n\nIn 16 it looks like the only case might indeed have been [drumroll] AIX with\nxlc (with gcc . And there it it looks like it'd have been trivial to implement\n[1].\n\nWe've been talking about requiring 32 bit atomics and a spinlock\nimplementation - this imo fits in well with that, without proper barriers it's\npretty much impossible to have correct spinlocks and, even more so, any lock\nfree construct, of which we have a bunch.\n\n\nIOW, let's rip out the fallback implementation for compiler and memory\nbarriers and fix the fallout, if there is any.\n\n\nGreetings,\n\nAndres Freund\n\n[1] I think it'd just be __fence(). Looks like it's been present for a while,\n found it in \"IBM XL C/C++ for AIX, V10.1 Compiler Reference Version 10.1\",\n which looks to be from 2008.\n\n\n", "msg_date": "Fri, 19 Jul 2024 12:56:03 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "Hi,\n\n> So ... there is quite a disconnect between what this patch actually\n> does (i.e., probe to see if \"-Wl,-export_dynamic\" is accepted) and\n> the title of this thread. [...]\n\nThe thread title is indeed somewhat misleading, I was initially\npuzzled by it too. The actual idea, if I understood it correctly, is\nmerely to do on MacOS the same we currently do on Linux.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 Jul 2024 23:09:34 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "On Sat, Jul 20, 2024 at 7:56 AM Andres Freund <[email protected]> wrote:\n> On 2024-07-19 15:36:29 -0400, Tom Lane wrote:\n> > Andres Freund <[email protected]> writes:\n> > > On 2024-07-19 11:06:47 -0400, Tom Lane wrote:\n> > >> 2. Do we really want to encourage people to build with -flto?\n> >\n> > > The only case I know where we do rely on compilation units providing some\n> > > level of boundaries is on compilers where we don't know how to emit a compiler\n> > > barrier. That's probably a fallback we ought to remove one of these days...\n> >\n> > Hm. We've moved our platform/toolchain goalposts far enough in the\n> > last few releases that that might not be too big a lift. Do you\n> > know offhand which supported platforms still have a problem there?\n> >\n> > (mumble AIX mumble)\n>\n> In 16 it looks like the only case might indeed have been [drumroll] AIX with\n> xlc (with gcc . And there it it looks like it'd have been trivial to implement\n> [1].\n>\n> We've been talking about requiring 32 bit atomics and a spinlock\n> implementation - this imo fits in well with that, without proper barriers it's\n> pretty much impossible to have correct spinlocks and, even more so, any lock\n> free construct, of which we have a bunch.\n>\n>\n> IOW, let's rip out the fallback implementation for compiler and memory\n> barriers and fix the fallout, if there is any.\n\nI'll incorporate that into the next version of:\n\nhttps://www.postgresql.org/message-id/flat/3351991.1697728588%40sss.pgh.pa.us\n\n... 
with a view to committing in the next few days.\n\n(Ignore the <stdatomic.h> patch, that's just an experiment for now,\nbut it's not part of what I plan to commit.)\n\n\n", "msg_date": "Sat, 20 Jul 2024 09:04:53 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "On 19.07.24 12:40, Aleksander Alekseev wrote:\n> It seems to me that the patch is not going to become any better and it\n> doesn't need any more attention from the reviewers. Thus I changed the\n> status of the CF entry to \"Ready for Committer\".\n\nI'm happy to commit this patch.\n\nI checked that for non-LTO builds, this option does not change the \noutput binary, so it seems harmless in that sense.\n\nAn equivalent change has recently been merged into meson upstream, so \nwe'll get the same behavior on meson before long.\n\nThe argument \"If you're going to manually inject -flto, seems like you \ncould manually inject -Wl,-export_dynamic too, so why do you need this\npatch?\" is true, but the behavior that the link fails unless you use \nboth options is pretty surprising, so this is a small quality of life \nimprovement. Also, it seems that LTO use is already in the wild, so it \nseems sensible to make that easier to exercise during development too. \nMaybe a configure --enable-lto option would be sensible, but that can be \na separate patch.\n\n\n\n", "msg_date": "Mon, 22 Jul 2024 16:04:11 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" }, { "msg_contents": "On 22.07.24 16:04, Peter Eisentraut wrote:\n> On 19.07.24 12:40, Aleksander Alekseev wrote:\n>> It seems to me that the patch is not going to become any better and it\n>> doesn't need any more attention from the reviewers. Thus I changed the\n>> status of the CF entry to \"Ready for Committer\".\n> \n> I'm happy to commit this patch.\n> \n> I checked that for non-LTO builds, this option does not change the \n> output binary, so it seems harmless in that sense.\n> \n> An equivalent change has recently been merged into meson upstream, so \n> we'll get the same behavior on meson before long.\n\nDone.\n\n\n\n", "msg_date": "Wed, 31 Jul 2024 06:37:09 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with LTO / -flto on macOS" } ]
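[For reference, a minimal sketch of driving an LTO build on macOS by hand, which is what the configure probe discussed above automates. Flags are illustrative; as noted in the thread, LDFLAGS_EX_BE only exists from v16 on, and setting it manually is only needed on trees without the -export_dynamic probe.]

./configure CC=clang CFLAGS='-O2 -flto=thin'
make LDFLAGS_EX_BE='-Wl,-export_dynamic'    # manual workaround on unpatched trees
make check-world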
[ { "msg_contents": "Greetings.\n\nI am observing the following results on PostgreSQL 15.7\nFirst, setup:\n\ncreate table t_test(x bigint);\ninsert into t_test values(0);\n\ncreate or replace function f_get_x()\nreturns bigint\nlanguage plpgsql\nstable\nas $function$\ndeclare\n l_result bigint;\nbegin\n select x into l_result from t_test;\n --raise notice 'f_get_x() >> x=%', l_result;\n --raise notice 'f_get_x() >> xact=%', txid_current_if_assigned();\n return l_result;\nend;\n$function$;\n\ncreate or replace procedure f_print_x(x bigint)\nlanguage plpgsql\nas $procedure$\nbegin\n raise notice 'f_print_x() >> x=%', x;\n --raise notice 'f_print_x() >> xact=%', txid_current_if_assigned();\nend;\n$procedure$;\n\n\nNow, the case:\n\\set AUTOCOMMIT off\ndo\n$$ begin\n --raise notice 'do >> xact=%', txid_current_if_assigned();\n update t_test set x = 1;\n --raise notice 'do >> xact=%', txid_current_if_assigned();\n raise notice 'do >> x=%', f_get_x();\n --raise notice 'do >> xact=%', txid_current_if_assigned();\n call f_print_x(f_get_x());\nend; $$;\nNOTICE: do >> x=1\nNOTICE: f_print_x() >> x=0\nDO\n\nI don't understand why CALL statement is not seeing an updated record.\nWith AUTOCOMMIT=on, all goes as expected.\n\nI tried to examine snapshots and xids (commented lines), but they're always\nthe same.\n\nCan you explain this behavior, please? Is it expected?\n\n-- \nVictor Yegorov\n\nGreetings.I am observing the following results on PostgreSQL 15.7First, setup:create table t_test(x bigint);insert into t_test values(0);create or replace function f_get_x()returns bigintlanguage plpgsqlstableas $function$declare    l_result bigint;begin    select x into l_result from t_test;    --raise notice 'f_get_x() >> x=%', l_result;    --raise notice 'f_get_x() >> xact=%', txid_current_if_assigned();    return l_result;end;$function$;create or replace procedure f_print_x(x bigint)language plpgsqlas $procedure$begin    raise notice 'f_print_x() >> x=%', x;    --raise notice 'f_print_x() >> xact=%', txid_current_if_assigned();end;$procedure$;Now, the case:\\set AUTOCOMMIT offdo$$ begin    --raise notice 'do >> xact=%', txid_current_if_assigned();    update t_test set x = 1;    --raise notice 'do >> xact=%', txid_current_if_assigned();    raise notice 'do >> x=%', f_get_x();    --raise notice 'do >> xact=%', txid_current_if_assigned();    call f_print_x(f_get_x());end; $$;NOTICE:  do >> x=1NOTICE:  f_print_x() >> x=0DOI don't understand why CALL statement is not seeing an updated record.With AUTOCOMMIT=on, all goes as expected.I tried to examine snapshots and xids (commented lines), but they're always the same.Can you explain this behavior, please? Is it expected?-- Victor Yegorov", "msg_date": "Mon, 3 Jun 2024 17:41:38 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": true, "msg_subject": "Unexpected results from CALL and AUTOCOMMIT=off" }, { "msg_contents": "You declared function f_get_x as stable which means:\n\nhttps://www.postgresql.org/docs/15/sql-createfunction.html\n\nSTABLE indicates that the function cannot modify the database, and that\nwithin a single table scan it will consistently return the same result for\nthe same argument values, but that its result could change across SQL\nstatements. This is the appropriate selection for functions whose results\ndepend on database lookups, parameter variables (such as the current time\nzone), etc. (It is inappropriate for AFTER triggers that wish to query rows\nmodified by the current command.) 
Also note that the current_timestamp\nfamily of functions qualify as stable, since their values do not change\nwithin a transaction.\n\nIf you remove stable from function declaration, it works as expected:\n\ndrop table t_test;\nDROP TABLE\ncreate table t_test(x bigint);\nCREATE TABLE\ninsert into t_test values(0);\nINSERT 0 1\ncreate or replace function f_get_x()\nreturns bigint\nlanguage plpgsql\n-- stable\nas $function$\ndeclare\n    l_result bigint;\nbegin\n    select x into l_result from t_test;\n    --raise notice 'f_get_x() >> x=%', l_result;\n    --raise notice 'f_get_x() >> xact=%', txid_current_if_assigned();\n    return l_result;\nend;\n$function$;\nCREATE FUNCTION\ncreate or replace procedure f_print_x(x bigint)\nlanguage plpgsql\nas $procedure$\nbegin\n    raise notice 'f_print_x() >> x=%', x;\n    --raise notice 'f_print_x() >> xact=%', txid_current_if_assigned();\nend;\n$procedure$;\nCREATE PROCEDURE\ndo\n$$ begin\n    --raise notice 'do >> xact=%', txid_current_if_assigned();\n    update t_test set x = 1;\n    --raise notice 'do >> xact=%', txid_current_if_assigned();\n    raise notice 'do >> x=%', f_get_x();\n    --raise notice 'do >> xact=%', txid_current_if_assigned();\n    call f_print_x(f_get_x());\nend; $$;\npsql:test.sql:38: NOTICE:  do >> x=1\npsql:test.sql:38: NOTICE:  f_print_x() >> x=1\nDO\n\nLe lun. 3 juin 2024 à 16:42, Victor Yegorov <[email protected]> a écrit :\n\n> Greetings.\n>\n> I am observing the following results on PostgreSQL 15.7\n> First, setup:\n>\n> create table t_test(x bigint);\n> insert into t_test values(0);\n>\n> create or replace function f_get_x()\n> returns bigint\n> language plpgsql\n> stable\n> as $function$\n> declare\n>     l_result bigint;\n> begin\n>     select x into l_result from t_test;\n>     --raise notice 'f_get_x() >> x=%', l_result;\n>     --raise notice 'f_get_x() >> xact=%', txid_current_if_assigned();\n>     return l_result;\n> end;\n> $function$;\n>\n> create or replace procedure f_print_x(x bigint)\n> language plpgsql\n> as $procedure$\n> begin\n>     raise notice 'f_print_x() >> x=%', x;\n>     --raise notice 'f_print_x() >> xact=%', txid_current_if_assigned();\n> end;\n> $procedure$;\n>\n>\n> Now, the case:\n> \\set AUTOCOMMIT off\n> do\n> $$ begin\n>     --raise notice 'do >> xact=%', txid_current_if_assigned();\n>     update t_test set x = 1;\n>     --raise notice 'do >> xact=%', txid_current_if_assigned();\n>     raise notice 'do >> x=%', f_get_x();\n>     --raise notice 'do >> xact=%', txid_current_if_assigned();\n>     call f_print_x(f_get_x());\n> end; $$;\n> NOTICE:  do >> x=1\n> NOTICE:  f_print_x() >> x=0\n> DO\n>\n> I don't understand why CALL statement is not seeing an updated record.\n> With AUTOCOMMIT=on, all goes as expected.\n>\n> I tried to examine snapshots and xids (commented lines), but they're\n> always the same.\n>\n> Can you explain this behavior, please? Is it expected?\n>\n> --\n> Victor Yegorov\n>", "msg_date": "Mon, 3 Jun 2024 19:40:24 +0200", "msg_from": "Pierre Forstmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected results from CALL and AUTOCOMMIT=off" }, { "msg_contents": "пн, 3 июн. 2024 г. в 20:40, Pierre Forstmann <[email protected]>:\n\n> You declared function f_get_x as stable which means:\n>\n> …\n>\n> If you remove stable from function declaration, it works as expected:\n>\n\nWell, I checked\nhttps://www.postgresql.org/docs/current/xfunc-volatility.html\nThere's a paragraph describing why STABLE (and IMMUTABLE) use different\nsnapshots:\n\n> For functions written in SQL or in any of the standard procedural\nlanguages, there is a second important property determined by the\nvolatility category, namely the visibility of any data changes that have\nbeen made by the SQL command that is calling the function.
A > VOLATILE\nfunction will see such changes, a STABLE or IMMUTABLE function will not.\nThis behavior is implemented using the snapshotting behavior of MVCC (see\nChapter 13): STABLE and IMMUTABLE functions use a snapshot established as\nof the start of the\n> calling query, whereas VOLATILE functions obtain a fresh snapshot at the\nstart of each query they execute.\n\nBut later, docs state, that\n\n> Because of this snapshotting behavior, a function containing only SELECT\ncommands can safely be marked STABLE, even if it selects from tables that\nmight be undergoing modifications by concurrent queries. PostgreSQL will\nexecute all commands of a STABLE function using the snapshot established\nfor the calling query, and so it will see a fixed view of the database\nthroughout that query.\n\nAnd therefore I assume STABLE should work in this case. Well, it seems not\nto.\n\nI assume there's smth to do with implicit BEGIN issued in non-AUTOCOMMIT\nmode and non-atomic DO block behaviour.\n\n\n-- \nVictor Yegorov", "msg_date": "Mon, 3 Jun 2024 21:15:07 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected results from CALL and AUTOCOMMIT=off" }, { "msg_contents": "Victor Yegorov <[email protected]> writes:\n> пн, 3 июн. 2024 г. в 20:40, Pierre Forstmann <[email protected]>:\n>> If you remove stable from function declaration, it works as expected:\n\n> ... therefore I assume STABLE should work in this case. Well, it seems not\n> to.\n\nI agree that this looks like a bug, since your example shows that the\nsame function works as-expected in an ordinary expression but not in\na CALL. The dependency on AUTOCOMMIT (that is, being within an outer\ntransaction block) seems even odder.
I've not dug into it yet, but\nI suppose we're passing the wrong snapshot to the CALL arguments.\nA volatile function wouldn't use that snapshot, explaining Pierre's\nresult.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Jun 2024 15:28:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected results from CALL and AUTOCOMMIT=off" }, { "msg_contents": "[ redirecting to pgsql-hackers ]\n\nI wrote:\n> I agree that this looks like a bug, since your example shows that the\n> same function works as-expected in an ordinary expression but not in\n> a CALL. The dependency on AUTOCOMMIT (that is, being within an outer\n> transaction block) seems even odder. I've not dug into it yet, but\n> I suppose we're passing the wrong snapshot to the CALL arguments.\n\nI poked into this and found that the source of the problem is that\nplpgsql's exec_stmt_call passes allow_nonatomic = true even when\nit's running in an atomic context. So we can fix it with basically\na one-line change:\n\n-\toptions.allow_nonatomic = true;\n+\toptions.allow_nonatomic = !estate->atomic;\n\nI'm worried about whether external callers might've made a comparable\nmistake, but I think all we can do is document it a little better.\nAFAICS there isn't any good way for spi.c to realize that this mistake\nhas been made, else we could have it patch up the mistake centrally.\nI've not attempted to make those doc updates in the attached draft\npatch though, nor have I added a test case yet.\n\nBefore realizing that this was the issue, I spent a fair amount of\ntime on the idea that _SPI_execute_plan() was doing things wrong,\nand that led me to notice that its comment about having four modes\nof snapshot operation has been falsified in multiple ways. So this\ndraft does include fixes for that comment.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 03 Jun 2024 21:32:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected results from CALL and AUTOCOMMIT=off" }, { "msg_contents": "I wrote:\n> I poked into this and found that the source of the problem is that\n> plpgsql's exec_stmt_call passes allow_nonatomic = true even when\n> it's running in an atomic context. So we can fix it with basically\n> a one-line change:\n\n> -\toptions.allow_nonatomic = true;\n> +\toptions.allow_nonatomic = !estate->atomic;\n\n> I'm worried about whether external callers might've made a comparable\n> mistake, but I think all we can do is document it a little better.\n> AFAICS there isn't any good way for spi.c to realize that this mistake\n> has been made, else we could have it patch up the mistake centrally.\n\nActually, after poking around some more I found that there *is* a way\nto deal with this within spi.c: we can make _SPI_execute_plan ignore\noptions->allow_nonatomic unless the SPI_OPT_NONATOMIC flag was given\nwhen connecting.\n\nI like this better than my first solution because (a) it seems to\nmake the allow_nonatomic flag behave in a more intuitive way;\n(b) spi.c gates some other behaviors on SPI_OPT_NONATOMIC, so that\ngating this one too seems more consistent, and (c) this way, we fix\nnot only plpgsql but anything that has copied its coding pattern.\n\nHence, new patch attached, now with docs and tests. 
Barring\nobjections I'll push this one.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 04 Jun 2024 14:28:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected results from CALL and AUTOCOMMIT=off" }, { "msg_contents": "On Tue, Jun 04, 2024 at 02:28:43PM -0400, Tom Lane wrote:\n> Actually, after poking around some more I found that there *is* a way\n> to deal with this within spi.c: we can make _SPI_execute_plan ignore\n> options->allow_nonatomic unless the SPI_OPT_NONATOMIC flag was given\n> when connecting.\n> \n> I like this better than my first solution because (a) it seems to\n> make the allow_nonatomic flag behave in a more intuitive way;\n> (b) spi.c gates some other behaviors on SPI_OPT_NONATOMIC, so that\n> gating this one too seems more consistent, and (c) this way, we fix\n> not only plpgsql but anything that has copied its coding pattern.\n\n+1\n\n> Hence, new patch attached, now with docs and tests. Barring\n> objections I'll push this one.\n\nShould we expand the documentation for SPI_connect_ext() to note that\nSPI_execute_extended()/SPI_execute_plan_extended() depend on the flag?\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 4 Jun 2024 15:13:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected results from CALL and AUTOCOMMIT=off" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Tue, Jun 04, 2024 at 02:28:43PM -0400, Tom Lane wrote:\n>> Hence, new patch attached, now with docs and tests. Barring\n>> objections I'll push this one.\n\n> Should we expand the documentation for SPI_connect_ext() to note that\n> SPI_execute_extended()/SPI_execute_plan_extended() depend on the flag?\n\nPerhaps. They already did, in that the atomic flag was taken into\naccount while deciding how to handle a nested CALL; basically what this\nfix does is to make sure that the snapshot handling is done the same\nway. I think that what I added to the docs is probably sufficient,\nbut I'll yield to majority opinion if people think not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Jun 2024 16:31:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected results from CALL and AUTOCOMMIT=off" } ]
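A condensed, self-contained sketch of the case discussed in the thread above, assembled from Victor Yegorov's example (the table, function, and procedure names are taken from his messages; the commented output is what he reported on PostgreSQL 15.7 with AUTOCOMMIT off, i.e. before the exec_stmt_call/SPI fix Tom Lane describes). It is meant only as an illustration of the STABLE-snapshot behavior, not as an authoritative regression test.

-- Run in psql with: \set AUTOCOMMIT off
create table t_test(x bigint);
insert into t_test values(0);

-- STABLE: executes with the snapshot established for the calling query
create or replace function f_get_x() returns bigint
language plpgsql stable
as $function$
declare
    l_result bigint;
begin
    select x into l_result from t_test;
    return l_result;
end;
$function$;

create or replace procedure f_print_x(x bigint)
language plpgsql
as $procedure$
begin
    raise notice 'f_print_x() >> x=%', x;
end;
$procedure$;

do $$
begin
    update t_test set x = 1;
    raise notice 'do >> x=%', f_get_x();  -- reported: do >> x=1
    call f_print_x(f_get_x());            -- reported: f_print_x() >> x=0
                                          -- (the CALL argument was evaluated
                                          -- with a stale snapshot; addressed
                                          -- by honoring the atomic SPI
                                          -- context, per the thread above)
end;
$$;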
[ { "msg_contents": "hi\nbased on gram.y and function transformJsonValueExpr.\n\ngram.y:\n| JSON_QUERY '('\njson_value_expr ',' a_expr json_passing_clause_opt\njson_returning_clause_opt\njson_wrapper_behavior\njson_quotes_clause_opt\njson_behavior_clause_opt\n')'\n\n| JSON_EXISTS '('\njson_value_expr ',' a_expr json_passing_clause_opt\njson_on_error_clause_opt\n')'\n\n| JSON_VALUE '('\njson_value_expr ',' a_expr json_passing_clause_opt\njson_returning_clause_opt\njson_behavior_clause_opt\n')'\n\njson_format_clause_opt contains:\n| FORMAT_LA JSON\n{\n$$ = (Node *) makeJsonFormat(JS_FORMAT_JSON, JS_ENC_DEFAULT, @1);\n}\n\n\nThat means, all the context_item can specify \"FORMAT JSON\" options,\nin the meantime, do we need to update these functions\nsynopsis/signature in the doc?\n\nsome examples:\ncreate table a(b jsonb);\ncreate table a1(b int4range);\nselect json_value(b format json, 'strict $[*]' DEFAULT 9 ON ERROR) from a;\nselect json_value(b format json, 'strict $[*]' DEFAULT 9 ON ERROR) from a1;\nselect json_value(text '\"1\"' format json, 'strict $[*]' DEFAULT 9 ON ERROR);\n\n------------------------------------------------\ntransformJsonValueExpr\n\n/* Try to coerce to the target type. */\ncoerced = coerce_to_target_type(pstate, expr, exprtype,\ntargettype, -1,\nCOERCION_EXPLICIT,\nCOERCE_EXPLICIT_CAST,\nlocation);\n\nbased on the function transformJsonValueExpr and subfunction\ncoerce_to_target_type,\nfor SQL/JSON query functions (JSON_EXISTS, JSON_QUERY, and JSON_VALUE)\nthe context_item requirement is any data type that not error out while\nexplicitly casting to jsonb in coerce_to_target_type.\n\nI played around with it, I think these types can be used in context_item.\n{char,text,bpchar,character varying } and these types of associated domains.\nbytea data type too, but need specify \"ENCODING UTF8\".\ne.g.\nselect json_value(bytea '\"1\"' format json ENCODING UTF8, 'strict $[*]'\nDEFAULT 9 ON ERROR);\n\n\nMaybe we can add some brief explanation in this para to explain more\nabout \"context_item\"\n{\nSQL/JSON functions JSON_EXISTS(), JSON_QUERY(), and JSON_VALUE()\ndescribed in Table 9.52 can be used to query JSON documents. 
Each of\nthese functions apply a path_expression (the query) to a context_item\n(the document); see Section 9.16.2 for more details on what\npath_expression can contain.\n}\n\n\n", "msg_date": "Mon, 3 Jun 2024 23:11:01 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "SQL/JSON query functions context_item doc entry and type requirement" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 4, 2024 at 12:11 AM jian he <[email protected]> wrote:\n>\n> hi\n> based on gram.y and function transformJsonValueExpr.\n>\n> gram.y:\n> | JSON_QUERY '('\n> json_value_expr ',' a_expr json_passing_clause_opt\n> json_returning_clause_opt\n> json_wrapper_behavior\n> json_quotes_clause_opt\n> json_behavior_clause_opt\n> ')'\n>\n> | JSON_EXISTS '('\n> json_value_expr ',' a_expr json_passing_clause_opt\n> json_on_error_clause_opt\n> ')'\n>\n> | JSON_VALUE '('\n> json_value_expr ',' a_expr json_passing_clause_opt\n> json_returning_clause_opt\n> json_behavior_clause_opt\n> ')'\n>\n> json_format_clause_opt contains:\n> | FORMAT_LA JSON\n> {\n> $$ = (Node *) makeJsonFormat(JS_FORMAT_JSON, JS_ENC_DEFAULT, @1);\n> }\n>\n>\n> That means, all the context_item can specify \"FORMAT JSON\" options,\n> in the meantime, do we need to update these functions\n> synopsis/signature in the doc?\n>\n> some examples:\n> create table a(b jsonb);\n> create table a1(b int4range);\n> select json_value(b format json, 'strict $[*]' DEFAULT 9 ON ERROR) from a;\n> select json_value(b format json, 'strict $[*]' DEFAULT 9 ON ERROR) from a1;\n> select json_value(text '\"1\"' format json, 'strict $[*]' DEFAULT 9 ON ERROR);\n>\n> ------------------------------------------------\n> transformJsonValueExpr\n>\n> /* Try to coerce to the target type. */\n> coerced = coerce_to_target_type(pstate, expr, exprtype,\n> targettype, -1,\n> COERCION_EXPLICIT,\n> COERCE_EXPLICIT_CAST,\n> location);\n>\n> based on the function transformJsonValueExpr and subfunction\n> coerce_to_target_type,\n> for SQL/JSON query functions (JSON_EXISTS, JSON_QUERY, and JSON_VALUE)\n> the context_item requirement is any data type that not error out while\n> explicitly casting to jsonb in coerce_to_target_type.\n>\n> I played around with it, I think these types can be used in context_item.\n> {char,text,bpchar,character varying } and these types of associated domains.\n> bytea data type too, but need specify \"ENCODING UTF8\".\n> e.g.\n> select json_value(bytea '\"1\"' format json ENCODING UTF8, 'strict $[*]'\n> DEFAULT 9 ON ERROR);\n>\n>\n> Maybe we can add some brief explanation in this para to explain more\n> about \"context_item\"\n> {\n> SQL/JSON functions JSON_EXISTS(), JSON_QUERY(), and JSON_VALUE()\n> described in Table 9.52 can be used to query JSON documents. Each of\n> these functions apply a path_expression (the query) to a context_item\n> (the document); see Section 9.16.2 for more details on what\n> path_expression can contain.\n> }\n\nIf I understand correctly, you're suggesting that we add a line to the\nabove paragraph to mention which types are appropriate for\ncontext_item. How about we add the following:\n\n<replaceable>context_item</replaceable> expression can be a value of\nany type that can be cast to <type>jsonb</type>. 
This includes types\nsuch as <type>char</type>, <type>text</type>, <type>bpchar</type>,\n<type>character varying</type>, and <type>bytea</type> (with\n<code>ENCODING UTF8</code>), as well as any domains over these types.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 17 Jun 2024 15:43:28 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Mon, Jun 17, 2024 at 2:43 PM Amit Langote <[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, Jun 4, 2024 at 12:11 AM jian he <[email protected]> wrote:\n> >\n> > hi\n> > based on gram.y and function transformJsonValueExpr.\n> >\n> > gram.y:\n> > | JSON_QUERY '('\n> > json_value_expr ',' a_expr json_passing_clause_opt\n> > json_returning_clause_opt\n> > json_wrapper_behavior\n> > json_quotes_clause_opt\n> > json_behavior_clause_opt\n> > ')'\n> >\n> > | JSON_EXISTS '('\n> > json_value_expr ',' a_expr json_passing_clause_opt\n> > json_on_error_clause_opt\n> > ')'\n> >\n> > | JSON_VALUE '('\n> > json_value_expr ',' a_expr json_passing_clause_opt\n> > json_returning_clause_opt\n> > json_behavior_clause_opt\n> > ')'\n> >\n> > json_format_clause_opt contains:\n> > | FORMAT_LA JSON\n> > {\n> > $$ = (Node *) makeJsonFormat(JS_FORMAT_JSON, JS_ENC_DEFAULT, @1);\n> > }\n> >\n> >\n> > That means, all the context_item can specify \"FORMAT JSON\" options,\n> > in the meantime, do we need to update these functions\n> > synopsis/signature in the doc?\n> >\n\n>\n> If I understand correctly, you're suggesting that we add a line to the\n> above paragraph to mention which types are appropriate for\n> context_item. How about we add the following:\n>\n> <replaceable>context_item</replaceable> expression can be a value of\n> any type that can be cast to <type>jsonb</type>. This includes types\n> such as <type>char</type>, <type>text</type>, <type>bpchar</type>,\n> <type>character varying</type>, and <type>bytea</type> (with\n> <code>ENCODING UTF8</code>), as well as any domains over these types.\n\nyour wording looks ok to me. I want to add two sentences. so it becomes:\n\n+ The <replaceable>context_item</replaceable> expression can be a value of\n+ any type that can be cast to <type>jsonb</type>. 
This includes types\n+ such as <type>char</type>, <type>text</type>, <type>bpchar</type>,\n+ <type>character varying</type>, and <type>bytea</type> (with\n+ <code>ENCODING UTF8</code>), as well as any domains over these types.\n+ The <replaceable>context_item</replaceable> expression can also\nbe followed with\n+ <literal>FORMAT JSON</literal>, <literal>ENCODING UTF8</literal>.\n+ These two options currently don't have actual meaning.\n+ <literal>ENCODING UTF8</literal> can only be specified when\n<replaceable>context_item</replaceable> type is <type>bytea</type>.\n\nimho, \"These two options currently don't have actual meaning.\" is accurate,\nbut still does not explain why we allow \"FORMAT JSON ENCODING UTF8\".\nI think we may need an explanation for \"FORMAT JSON ENCODING UTF8\".\nbecause json_array, json_object, json_serialize, json all didn't\nmention the meaning of \"[ FORMAT JSON [ ENCODING UTF8 ] ] \".\n\n\nI added \"[ FORMAT JSON [ ENCODING UTF8 ] ] \" to the function\nsignature/synopsis of json_exists, json_query, json_value.", "msg_date": "Mon, 17 Jun 2024 17:47:37 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "Hi,\n\nOn 06/17/24 02:43, Amit Langote wrote:\n> <replaceable>context_item</replaceable> expression can be a value of\n> any type that can be cast to <type>jsonb</type>. This includes types\n> such as <type>char</type>, <type>text</type>, <type>bpchar</type>,\n> <type>character varying</type>, and <type>bytea</type> (with\n> <code>ENCODING UTF8</code>), as well as any domains over these types.\n\nReading this message in conjunction with [0] makes me think that we are\nreally talking about a function that takes a first parameter of type jsonb,\nand behaves exactly that way (so any cast required is applied by the system\nahead of the call). Under those conditions, this seems like an unusual\nsentence to add in the docs, at least until we have also documented that\ntan's argument can be of any type that can be cast to double precision.\n\nOn the other hand, if the behavior of the functions were to be changed\n(perhaps using prosupport rewriting as suggested in [1]?) so that it was\nnot purely describable as a function accepting exactly jsonb with a\npossible system-applied cast in front, then in that case such an added\nexplanation in the docs might be very fitting.\n\nRegards,\n-Chap\n\n\n[0]\nhttps://www.postgresql.org/message-id/CA%2BHiwqGuqLfAEP-FwW3QHByfQOoUpyj6YZG6R6bScpQswvNYDA%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/66703054.6040109%40acm.org\n\n\n", "msg_date": "Mon, 17 Jun 2024 09:05:42 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Mon, Jun 17, 2024 at 9:05 PM Chapman Flack <[email protected]> wrote:\n>\n> Hi,\n>\n> On 06/17/24 02:43, Amit Langote wrote:\n> > <replaceable>context_item</replaceable> expression can be a value of\n> > any type that can be cast to <type>jsonb</type>. 
This includes types\n> > such as <type>char</type>, <type>text</type>, <type>bpchar</type>,\n> > <type>character varying</type>, and <type>bytea</type> (with\n> > <code>ENCODING UTF8</code>), as well as any domains over these types.\n>\n> Reading this message in conjunction with [0] makes me think that we are\n> really talking about a function that takes a first parameter of type jsonb,\n> and behaves exactly that way (so any cast required is applied by the system\n> ahead of the call). Under those conditions, this seems like an unusual\n> sentence to add in the docs, at least until we have also documented that\n> tan's argument can be of any type that can be cast to double precision.\n>\n\nI guess it would be fine to add an unusual sentence to the docs.\n\nimagine a function: array_avg(anyarray) returns anyelement.\narray_avg calculate an array's elements's avg. like\narray('{1,2,3}'::int[]) returns 2.\nbut array_avg won't make sense if the input argument is a date array.\nso mentioning in the doc: array_avg can accept anyarray, but anyarray\ncannot date array.\nseems ok.\n\n\n> On the other hand, if the behavior of the functions were to be changed\n> (perhaps using prosupport rewriting as suggested in [1]?) so that it was\n> not purely describable as a function accepting exactly jsonb with a\n> possible system-applied cast in front, then in that case such an added\n> explanation in the docs might be very fitting.\n>\n\nprosupport won't work, I think.\nbecause json_exists, json_value, json_query, json_table don't have\npg_proc entries.\nThese are more like expressions.\n\n\n", "msg_date": "Wed, 19 Jun 2024 23:29:39 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Wed, Jun 19, 2024 at 8:29 AM jian he <[email protected]> wrote:\n\n> On Mon, Jun 17, 2024 at 9:05 PM Chapman Flack <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 06/17/24 02:43, Amit Langote wrote:\n> > > <replaceable>context_item</replaceable> expression can be a value of\n> > > any type that can be cast to <type>jsonb</type>. This includes types\n> > > such as <type>char</type>, <type>text</type>, <type>bpchar</type>,\n> > > <type>character varying</type>, and <type>bytea</type> (with\n> > > <code>ENCODING UTF8</code>), as well as any domains over these types.\n> >\n> > Reading this message in conjunction with [0] makes me think that we are\n> > really talking about a function that takes a first parameter of type\n> jsonb,\n> > and behaves exactly that way (so any cast required is applied by the\n> system\n> > ahead of the call). Under those conditions, this seems like an unusual\n> > sentence to add in the docs, at least until we have also documented that\n> > tan's argument can be of any type that can be cast to double precision.\n> >\n>\n> I guess it would be fine to add an unusual sentence to the docs.\n>\n> imagine a function: array_avg(anyarray) returns anyelement.\n> array_avg calculate an array's elements's avg. 
like\n> array('{1,2,3}'::int[]) returns 2.\n> but array_avg won't make sense if the input argument is a date array.\n> so mentioning in the doc: array_avg can accept anyarray, but anyarray\n> cannot date array.\n> seems ok.\n>\n\nThere is existing wording for this:\n\n\"The expression can be of any JSON type, any character string type, or\nbytea in UTF8 encoding.\"\n\nIf you add this sentence to the paragraph the link that already exists,\nwhich simply points the reader to this sentence, becomes redundant and\nshould be removed.\n\nAs for table 9.16.3 - it is unwieldy already. Lets try and make the core\nsyntax shorter, not longer. We already have precedence in the subsequent\njson_table section - give each major clause item a name then below the\ntable define the syntax and meaning for those names. Unlike in that\nsection - which probably should be modified too - context_item should have\nits own description line.\n\nDavid J.\n\nOn Wed, Jun 19, 2024 at 8:29 AM jian he <[email protected]> wrote:On Mon, Jun 17, 2024 at 9:05 PM Chapman Flack <[email protected]> wrote:\n>\n> Hi,\n>\n> On 06/17/24 02:43, Amit Langote wrote:\n> > <replaceable>context_item</replaceable> expression can be a value of\n> > any type that can be cast to <type>jsonb</type>. This includes types\n> > such as <type>char</type>,  <type>text</type>, <type>bpchar</type>,\n> > <type>character varying</type>, and <type>bytea</type> (with\n> > <code>ENCODING UTF8</code>), as well as any domains over these types.\n>\n> Reading this message in conjunction with [0] makes me think that we are\n> really talking about a function that takes a first parameter of type jsonb,\n> and behaves exactly that way (so any cast required is applied by the system\n> ahead of the call). Under those conditions, this seems like an unusual\n> sentence to add in the docs, at least until we have also documented that\n> tan's argument can be of any type that can be cast to double precision.\n>\n\nI guess it would be fine to add an unusual sentence to the docs.\n\nimagine a function: array_avg(anyarray) returns anyelement.\narray_avg calculate an array's elements's avg. like\narray('{1,2,3}'::int[]) returns 2.\nbut array_avg won't make sense if the input argument is a date array.\nso mentioning in the doc: array_avg can accept anyarray, but anyarray\ncannot date array.\nseems ok. There is existing wording for this:\"The expression can be of any JSON type, any character string type, or bytea in UTF8 encoding.\"If you add this sentence to the paragraph the link that already exists, which simply points the reader to this sentence, becomes redundant and should be removed.As for table 9.16.3 - it is unwieldy already.  Lets try and make the core syntax shorter, not longer.  We already have precedence in the subsequent json_table section - give each major clause item a name then below the table define the syntax and meaning for those names.  Unlike in that section - which probably should be modified too - context_item should have its own description line.David J.", "msg_date": "Wed, 19 Jun 2024 09:03:15 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Thu, Jun 20, 2024 at 1:03 AM David G. 
Johnston\n<[email protected]> wrote:\n> On Wed, Jun 19, 2024 at 8:29 AM jian he <[email protected]> wrote:\n>>\n>> On Mon, Jun 17, 2024 at 9:05 PM Chapman Flack <[email protected]> wrote:\n>> >\n>> > Hi,\n>> >\n>> > On 06/17/24 02:43, Amit Langote wrote:\n>> > > <replaceable>context_item</replaceable> expression can be a value of\n>> > > any type that can be cast to <type>jsonb</type>. This includes types\n>> > > such as <type>char</type>, <type>text</type>, <type>bpchar</type>,\n>> > > <type>character varying</type>, and <type>bytea</type> (with\n>> > > <code>ENCODING UTF8</code>), as well as any domains over these types.\n>> >\n>> > Reading this message in conjunction with [0] makes me think that we are\n>> > really talking about a function that takes a first parameter of type jsonb,\n>> > and behaves exactly that way (so any cast required is applied by the system\n>> > ahead of the call). Under those conditions, this seems like an unusual\n>> > sentence to add in the docs, at least until we have also documented that\n>> > tan's argument can be of any type that can be cast to double precision.\n>> >\n>>\n>> I guess it would be fine to add an unusual sentence to the docs.\n>>\n>> imagine a function: array_avg(anyarray) returns anyelement.\n>> array_avg calculate an array's elements's avg. like\n>> array('{1,2,3}'::int[]) returns 2.\n>> but array_avg won't make sense if the input argument is a date array.\n>> so mentioning in the doc: array_avg can accept anyarray, but anyarray\n>> cannot date array.\n>> seems ok.\n>\n>\n> There is existing wording for this:\n>\n> \"The expression can be of any JSON type, any character string type, or bytea in UTF8 encoding.\"\n>\n> If you add this sentence to the paragraph the link that already exists, which simply points the reader to this sentence, becomes redundant and should be removed.\n\nI've just posted a patch in the other thread [1] to restrict\ncontext_item to be of jsonb type, which users would need to ensure by\nadding an explicit cast if needed. I think that makes this\nclarification unnecessary.\n\n> As for table 9.16.3 - it is unwieldy already. Lets try and make the core syntax shorter, not longer. We already have precedence in the subsequent json_table section - give each major clause item a name then below the table define the syntax and meaning for those names. Unlike in that section - which probably should be modified too - context_item should have its own description line.\n\nI had posted a patch a little while ago at [1] to render the syntax a\nbit differently with each function getting its own syntax synopsis.\nResending it here; have addressed Jian He's comments.\n\n--\nThanks, Amit Langote\n[1] https://www.postgresql.org/message-id/CA%2BHiwqF2Z6FATWQV6bG9NeKYf%3D%2B%2BfOgmdbYc9gWSNJ81jfqCuA%40mail.gmail.com", "msg_date": "Thu, 20 Jun 2024 18:46:37 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Thu, Jun 20, 2024 at 5:46 PM Amit Langote <[email protected]> wrote:\n>\n> On Thu, Jun 20, 2024 at 1:03 AM David G. 
Johnston\n> <[email protected]> wrote:\n> > On Wed, Jun 19, 2024 at 8:29 AM jian he <[email protected]> wrote:\n> >>\n> >> On Mon, Jun 17, 2024 at 9:05 PM Chapman Flack <[email protected]> wrote:\n> >> >\n> >> > Hi,\n> >> >\n> >> > On 06/17/24 02:43, Amit Langote wrote:\n> >> > > <replaceable>context_item</replaceable> expression can be a value of\n> >> > > any type that can be cast to <type>jsonb</type>. This includes types\n> >> > > such as <type>char</type>, <type>text</type>, <type>bpchar</type>,\n> >> > > <type>character varying</type>, and <type>bytea</type> (with\n> >> > > <code>ENCODING UTF8</code>), as well as any domains over these types.\n> >> >\n> >> > Reading this message in conjunction with [0] makes me think that we are\n> >> > really talking about a function that takes a first parameter of type jsonb,\n> >> > and behaves exactly that way (so any cast required is applied by the system\n> >> > ahead of the call). Under those conditions, this seems like an unusual\n> >> > sentence to add in the docs, at least until we have also documented that\n> >> > tan's argument can be of any type that can be cast to double precision.\n> >> >\n> >>\n> >> I guess it would be fine to add an unusual sentence to the docs.\n> >>\n> >> imagine a function: array_avg(anyarray) returns anyelement.\n> >> array_avg calculate an array's elements's avg. like\n> >> array('{1,2,3}'::int[]) returns 2.\n> >> but array_avg won't make sense if the input argument is a date array.\n> >> so mentioning in the doc: array_avg can accept anyarray, but anyarray\n> >> cannot date array.\n> >> seems ok.\n> >\n> >\n> > There is existing wording for this:\n> >\n> > \"The expression can be of any JSON type, any character string type, or bytea in UTF8 encoding.\"\n> >\n> > If you add this sentence to the paragraph the link that already exists, which simply points the reader to this sentence, becomes redundant and should be removed.\n>\n> I've just posted a patch in the other thread [1] to restrict\n> context_item to be of jsonb type, which users would need to ensure by\n> adding an explicit cast if needed. I think that makes this\n> clarification unnecessary.\n>\n> > As for table 9.16.3 - it is unwieldy already. Lets try and make the core syntax shorter, not longer. We already have precedence in the subsequent json_table section - give each major clause item a name then below the table define the syntax and meaning for those names. Unlike in that section - which probably should be modified too - context_item should have its own description line.\n>\n> I had posted a patch a little while ago at [1] to render the syntax a\n> bit differently with each function getting its own syntax synopsis.\n> Resending it here; have addressed Jian He's comments.\n>\n> --\n\n@@ -18746,6 +18752,7 @@ ERROR: jsonpath array subscript is out of bounds\n <literal>PASSING</literal> <replaceable>value</replaceable>s.\n </para>\n <para>\n+ Returns the result of applying the SQL/JSON\n If the path expression returns multiple SQL/JSON items, it might be\n necessary to wrap the result using the <literal>WITH WRAPPER</literal>\n clause to make it a valid JSON string. 
If the wrapper is\n\n\n+ Returns the result of applying the SQL/JSON\n is redundant?\n\n\nplaying around with it.\nfound some minor issues:\n\njson_exists allow: DEFAULT expression ON ERROR, which is not\nmentioned in the doc.\nfor example:\nselect JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default\ntrue ON ERROR);\nselect JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default 0 ON ERROR);\nselect JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default 11 ON ERROR);\n\n\n\nJSON_VALUE on error, on empty semantics should be the same as json_query.\nlike:\n[ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\nON EMPTY ]\n[ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\nON ERROR ])\n\nexamples:\nselect JSON_value(jsonb '[]' , '$' empty array on error);\nselect JSON_value(jsonb '[]' , '$' empty object on error);\n\n\n", "msg_date": "Fri, 21 Jun 2024 00:01:11 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Thu, Jun 20, 2024 at 9:01 AM jian he <[email protected]> wrote:\n\n> On Thu, Jun 20, 2024 at 5:46 PM Amit Langote <[email protected]>\n> wrote:\n> >\n> > On Thu, Jun 20, 2024 at 1:03 AM David G. Johnston\n> > <[email protected]> wrote:\n> > > On Wed, Jun 19, 2024 at 8:29 AM jian he <[email protected]>\n> wrote:\n> > >>\n> > >> On Mon, Jun 17, 2024 at 9:05 PM Chapman Flack <[email protected]>\n> wrote:\n> > >> >\n> > >> > Hi,\n> > >> >\n> > >> > On 06/17/24 02:43, Amit Langote wrote:\n> > >> > > <replaceable>context_item</replaceable> expression can be a value\n> of\n> > >> > > any type that can be cast to <type>jsonb</type>. This includes\n> types\n> > >> > > such as <type>char</type>, <type>text</type>,\n> <type>bpchar</type>,\n> > >> > > <type>character varying</type>, and <type>bytea</type> (with\n> > >> > > <code>ENCODING UTF8</code>), as well as any domains over these\n> types.\n> > >> >\n> > >> > Reading this message in conjunction with [0] makes me think that we\n> are\n> > >> > really talking about a function that takes a first parameter of\n> type jsonb,\n> > >> > and behaves exactly that way (so any cast required is applied by\n> the system\n> > >> > ahead of the call). Under those conditions, this seems like an\n> unusual\n> > >> > sentence to add in the docs, at least until we have also documented\n> that\n> > >> > tan's argument can be of any type that can be cast to double\n> precision.\n> > >> >\n> > >>\n> > >> I guess it would be fine to add an unusual sentence to the docs.\n> > >>\n> > >> imagine a function: array_avg(anyarray) returns anyelement.\n> > >> array_avg calculate an array's elements's avg. like\n> > >> array('{1,2,3}'::int[]) returns 2.\n> > >> but array_avg won't make sense if the input argument is a date array.\n> > >> so mentioning in the doc: array_avg can accept anyarray, but anyarray\n> > >> cannot date array.\n> > >> seems ok.\n> > >\n> > >\n> > > There is existing wording for this:\n> > >\n> > > \"The expression can be of any JSON type, any character string type, or\n> bytea in UTF8 encoding.\"\n> > >\n> > > If you add this sentence to the paragraph the link that already\n> exists, which simply points the reader to this sentence, becomes redundant\n> and should be removed.\n> >\n> > I've just posted a patch in the other thread [1] to restrict\n> > context_item to be of jsonb type, which users would need to ensure by\n> > adding an explicit cast if needed. 
I think that makes this\n> > clarification unnecessary.\n> >\n> > > As for table 9.16.3 - it is unwieldy already. Lets try and make the\n> core syntax shorter, not longer. We already have precedence in the\n> subsequent json_table section - give each major clause item a name then\n> below the table define the syntax and meaning for those names. Unlike in\n> that section - which probably should be modified too - context_item should\n> have its own description line.\n> >\n> > I had posted a patch a little while ago at [1] to render the syntax a\n> > bit differently with each function getting its own syntax synopsis.\n> > Resending it here; have addressed Jian He's comments.\n> >\n> > --\n>\n\nI was thinking more like:\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex c324906b22..b9d157663a 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -18692,8 +18692,10 @@ $.* ? (@ like_regex \"^\\\\d+$\")\n <entry role=\"func_table_entry\"><para role=\"func_signature\">\n <indexterm><primary>json_exists</primary></indexterm>\n <function>json_exists</function> (\n- <replaceable>context_item</replaceable>,\n<replaceable>path_expression</replaceable> <optional>\n<literal>PASSING</literal> { <replaceable>value</replaceable>\n<literal>AS</literal> <replaceable>varname</replaceable> } <optional>,\n...</optional></optional>\n- <optional> { <literal>TRUE</literal> | <literal>FALSE</literal>\n|<literal> UNKNOWN</literal> | <literal>ERROR</literal> } <literal>ON\nERROR</literal> </optional>)\n+ <replaceable>context_item</replaceable>,\n+ <replaceable>path_expression</replaceable>\n+ <optional>variable_definitions</optional>\n+ <optional>on_error_boolean</optional>)\n </para>\n <para>\n Returns true if the SQL/JSON\n<replaceable>path_expression</replaceable>\n@@ -18732,12 +18734,14 @@ ERROR: jsonpath array subscript is out of bounds\n <entry role=\"func_table_entry\"><para role=\"func_signature\">\n <indexterm><primary>json_query</primary></indexterm>\n <function>json_query</function> (\n- <replaceable>context_item</replaceable>,\n<replaceable>path_expression</replaceable> <optional>\n<literal>PASSING</literal> { <replaceable>value</replaceable>\n<literal>AS</literal> <replaceable>varname</replaceable> } <optional>,\n...</optional></optional>\n- <optional> <literal>RETURNING</literal>\n<replaceable>data_type</replaceable> <optional> <literal>FORMAT\nJSON</literal> <optional> <literal>ENCODING UTF8</literal> </optional>\n</optional> </optional>\n- <optional> { <literal>WITHOUT</literal> | <literal>WITH</literal>\n{ <literal>CONDITIONAL</literal> |\n<optional><literal>UNCONDITIONAL</literal></optional> } } <optional>\n<literal>ARRAY</literal> </optional> <literal>WRAPPER</literal> </optional>\n- <optional> { <literal>KEEP</literal> | <literal>OMIT</literal> }\n<literal>QUOTES</literal> <optional> <literal>ON SCALAR STRING</literal>\n</optional> </optional>\n- <optional> { <literal>ERROR</literal> | <literal>NULL</literal> |\n<literal>EMPTY</literal> { <optional> <literal>ARRAY</literal> </optional>\n| <literal>OBJECT</literal> } | <literal>DEFAULT</literal>\n<replaceable>expression</replaceable> } <literal>ON EMPTY</literal>\n</optional>\n- <optional> { <literal>ERROR</literal> | <literal>NULL</literal> |\n<literal>EMPTY</literal> { <optional> <literal>ARRAY</literal> </optional>\n| <literal>OBJECT</literal> } | <literal>DEFAULT</literal>\n<replaceable>expression</replaceable> } <literal>ON ERROR</literal>\n</optional>)\n+ <replaceable>context_item</replaceable>,\n+ 
<replaceable>path_expression</replaceable>\n+ <optional>variable_definitions</optional>\n+ <optional>return_clause</optional>\n+ <optional>wrapping_clause</optional>\n+ <optional>quoting_clause</optional>\n+ <optional>on_empty_set</optional>\n+ <optional>on_error_set</optional>)\n </para>\n <para>\n Returns the result of applying the SQL/JSON\n@@ -18809,11 +18813,12 @@ DETAIL: Missing \"]\" after array dimensions.\n <entry role=\"func_table_entry\"><para role=\"func_signature\">\n <indexterm><primary>json_value</primary></indexterm>\n <function>json_value</function> (\n- <replaceable>context_item</replaceable>,\n<replaceable>path_expression</replaceable>\n- <optional> <literal>PASSING</literal> {\n<replaceable>value</replaceable> <literal>AS</literal>\n<replaceable>varname</replaceable> } <optional>, ...</optional></optional>\n- <optional> <literal>RETURNING</literal>\n<replaceable>data_type</replaceable> </optional>\n- <optional> { <literal>ERROR</literal> | <literal>NULL</literal> |\n<literal>DEFAULT</literal> <replaceable>expression</replaceable> }\n<literal>ON EMPTY</literal> </optional>\n- <optional> { <literal>ERROR</literal> | <literal>NULL</literal> |\n<literal>DEFAULT</literal> <replaceable>expression</replaceable> }\n<literal>ON ERROR</literal> </optional>)\n+ <replaceable>context_item</replaceable>,\n+ <replaceable>path_expression</replaceable>\n+ <optional>variable_definitions</optional>\n+ <optional>return_type</optional>\n+ <optional>on_empty_value</optional>\n+ <optional>on_error_value</optional>)\n </para>\n <para>\n Returns the result of applying the SQL/JSON\n\nThen defining each of those below the table - keeping the on_error variants\ntogether.\n\n\n\n\n> playing around with it.\n> found some minor issues:\n>\n> json_exists allow: DEFAULT expression ON ERROR, which is not\n> mentioned in the doc.\n> for example:\n> select JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default\n> true ON ERROR);\n> select JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default 0 ON\n> ERROR);\n> select JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default 11 ON\n> ERROR);\n>\n\nYeah, surprised it works, the documented behavior seems logical. Being\nable to return a non-boolean here seems odd. Especially since it is cast\nto boolean on output.\n\n\n> JSON_VALUE on error, on empty semantics should be the same as json_query.\n> like:\n> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> ON EMPTY ]\n> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> ON ERROR ])\n>\n> examples:\n> select JSON_value(jsonb '[]' , '$' empty array on error);\n> select JSON_value(jsonb '[]' , '$' empty object on error);\n>\n\nAgain the documented behavior seems to make sense though and the ability to\nspecify empty in the value function seems like a bug. If you really want\nan empty array or object you do have access to default. The reason\njson_query provides for an empty array/object is that it is already\nexpecting to produce an array (object seems a little odd).\n\nI agree our docs and code do not match which needs to be fixed, ideally in\nthe direction of the standard which I'm guessing our documentation is based\noff of. But let's not go off of my guess.\n\nDavid J.\n\nOn Thu, Jun 20, 2024 at 9:01 AM jian he <[email protected]> wrote:On Thu, Jun 20, 2024 at 5:46 PM Amit Langote <[email protected]> wrote:\n>\n> On Thu, Jun 20, 2024 at 1:03 AM David G. 
Johnston\n> <[email protected]> wrote:\n> > On Wed, Jun 19, 2024 at 8:29 AM jian he <[email protected]> wrote:\n> >>\n> >> On Mon, Jun 17, 2024 at 9:05 PM Chapman Flack <[email protected]> wrote:\n> >> >\n> >> > Hi,\n> >> >\n> >> > On 06/17/24 02:43, Amit Langote wrote:\n> >> > > <replaceable>context_item</replaceable> expression can be a value of\n> >> > > any type that can be cast to <type>jsonb</type>. This includes types\n> >> > > such as <type>char</type>,  <type>text</type>, <type>bpchar</type>,\n> >> > > <type>character varying</type>, and <type>bytea</type> (with\n> >> > > <code>ENCODING UTF8</code>), as well as any domains over these types.\n> >> >\n> >> > Reading this message in conjunction with [0] makes me think that we are\n> >> > really talking about a function that takes a first parameter of type jsonb,\n> >> > and behaves exactly that way (so any cast required is applied by the system\n> >> > ahead of the call). Under those conditions, this seems like an unusual\n> >> > sentence to add in the docs, at least until we have also documented that\n> >> > tan's argument can be of any type that can be cast to double precision.\n> >> >\n> >>\n> >> I guess it would be fine to add an unusual sentence to the docs.\n> >>\n> >> imagine a function: array_avg(anyarray) returns anyelement.\n> >> array_avg calculate an array's elements's avg. like\n> >> array('{1,2,3}'::int[]) returns 2.\n> >> but array_avg won't make sense if the input argument is a date array.\n> >> so mentioning in the doc: array_avg can accept anyarray, but anyarray\n> >> cannot date array.\n> >> seems ok.\n> >\n> >\n> > There is existing wording for this:\n> >\n> > \"The expression can be of any JSON type, any character string type, or bytea in UTF8 encoding.\"\n> >\n> > If you add this sentence to the paragraph the link that already exists, which simply points the reader to this sentence, becomes redundant and should be removed.\n>\n> I've just posted a patch in the other thread [1] to restrict\n> context_item to be of jsonb type, which users would need to ensure by\n> adding an explicit cast if needed.  I think that makes this\n> clarification unnecessary.\n>\n> > As for table 9.16.3 - it is unwieldy already.  Lets try and make the core syntax shorter, not longer.  We already have precedence in the subsequent json_table section - give each major clause item a name then below the table define the syntax and meaning for those names.  Unlike in that section - which probably should be modified too - context_item should have its own description line.\n>\n> I had posted a patch a little while ago at [1] to render the syntax a\n> bit differently with each function getting its own syntax synopsis.\n> Resending it here; have addressed Jian He's comments.\n>\n> --I was thinking more like:diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgmlindex c324906b22..b9d157663a 100644--- a/doc/src/sgml/func.sgml+++ b/doc/src/sgml/func.sgml@@ -18692,8 +18692,10 @@ $.* ? 
(@ like_regex \"^\\\\d+$\")       <entry role=\"func_table_entry\"><para role=\"func_signature\">         <indexterm><primary>json_exists</primary></indexterm>         <function>json_exists</function> (-        <replaceable>context_item</replaceable>, <replaceable>path_expression</replaceable> <optional> <literal>PASSING</literal> { <replaceable>value</replaceable> <literal>AS</literal> <replaceable>varname</replaceable> } <optional>, ...</optional></optional>-        <optional> { <literal>TRUE</literal> | <literal>FALSE</literal> |<literal> UNKNOWN</literal> | <literal>ERROR</literal> } <literal>ON ERROR</literal> </optional>)+        <replaceable>context_item</replaceable>,+        <replaceable>path_expression</replaceable>+        <optional>variable_definitions</optional>+        <optional>on_error_boolean</optional>)        </para>        <para>         Returns true if the SQL/JSON <replaceable>path_expression</replaceable>@@ -18732,12 +18734,14 @@ ERROR:  jsonpath array subscript is out of bounds       <entry role=\"func_table_entry\"><para role=\"func_signature\">         <indexterm><primary>json_query</primary></indexterm>         <function>json_query</function> (-        <replaceable>context_item</replaceable>, <replaceable>path_expression</replaceable> <optional> <literal>PASSING</literal> { <replaceable>value</replaceable> <literal>AS</literal> <replaceable>varname</replaceable> } <optional>, ...</optional></optional>-        <optional> <literal>RETURNING</literal> <replaceable>data_type</replaceable> <optional> <literal>FORMAT JSON</literal> <optional> <literal>ENCODING UTF8</literal> </optional> </optional> </optional>-        <optional> { <literal>WITHOUT</literal> | <literal>WITH</literal> { <literal>CONDITIONAL</literal> | <optional><literal>UNCONDITIONAL</literal></optional> } } <optional> <literal>ARRAY</literal> </optional> <literal>WRAPPER</literal> </optional>-        <optional> { <literal>KEEP</literal> | <literal>OMIT</literal> } <literal>QUOTES</literal> <optional> <literal>ON SCALAR STRING</literal> </optional> </optional>-        <optional> { <literal>ERROR</literal> | <literal>NULL</literal> | <literal>EMPTY</literal> { <optional> <literal>ARRAY</literal> </optional> | <literal>OBJECT</literal> } | <literal>DEFAULT</literal> <replaceable>expression</replaceable> } <literal>ON EMPTY</literal> </optional>-        <optional> { <literal>ERROR</literal> | <literal>NULL</literal> | <literal>EMPTY</literal> { <optional> <literal>ARRAY</literal> </optional> | <literal>OBJECT</literal> } | <literal>DEFAULT</literal> <replaceable>expression</replaceable> } <literal>ON ERROR</literal> </optional>)+        <replaceable>context_item</replaceable>,+        <replaceable>path_expression</replaceable>+        <optional>variable_definitions</optional>+        <optional>return_clause</optional>+        <optional>wrapping_clause</optional>+        <optional>quoting_clause</optional>+        <optional>on_empty_set</optional>+        <optional>on_error_set</optional>)       </para>        <para>         Returns the result of applying the SQL/JSON@@ -18809,11 +18813,12 @@ DETAIL:  Missing \"]\" after array dimensions.       
<entry role=\"func_table_entry\"><para role=\"func_signature\">         <indexterm><primary>json_value</primary></indexterm>         <function>json_value</function> (-        <replaceable>context_item</replaceable>, <replaceable>path_expression</replaceable>-        <optional> <literal>PASSING</literal> { <replaceable>value</replaceable> <literal>AS</literal> <replaceable>varname</replaceable> } <optional>, ...</optional></optional>-        <optional> <literal>RETURNING</literal> <replaceable>data_type</replaceable> </optional>-        <optional> { <literal>ERROR</literal> | <literal>NULL</literal> | <literal>DEFAULT</literal> <replaceable>expression</replaceable> } <literal>ON EMPTY</literal> </optional>-        <optional> { <literal>ERROR</literal> | <literal>NULL</literal> | <literal>DEFAULT</literal> <replaceable>expression</replaceable> } <literal>ON ERROR</literal> </optional>)+        <replaceable>context_item</replaceable>,+        <replaceable>path_expression</replaceable>+        <optional>variable_definitions</optional>+        <optional>return_type</optional>+        <optional>on_empty_value</optional>+        <optional>on_error_value</optional>)        </para>        <para>         Returns the result of applying the SQL/JSONThen defining each of those below the table - keeping the on_error variants together.\nplaying around with it.\nfound some minor issues:\n\njson_exists allow:  DEFAULT expression ON ERROR, which is not\nmentioned in the doc.\nfor example:\nselect JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default\ntrue ON ERROR);\nselect JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default 0 ON ERROR);\nselect JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default 11 ON ERROR);Yeah, surprised it works, the documented behavior seems logical.  Being able to return a non-boolean here seems odd.  Especially since it is cast to boolean on output.\n\nJSON_VALUE on error, on empty semantics should be the same as json_query.\nlike:\n[ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\nON EMPTY ]\n[ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\nON ERROR ])\n\nexamples:\nselect JSON_value(jsonb '[]' , '$'  empty array on error);\nselect JSON_value(jsonb '[]' , '$'  empty object on error);Again the documented behavior seems to make sense though and the ability to specify empty in the value function seems like a bug.  If you really want an empty array or object you do have access to default.  The reason json_query provides for an empty array/object is that it is already expecting to produce an array (object seems a little odd).I agree our docs and code do not match which needs to be fixed, ideally in the direction of the standard which I'm guessing our documentation is based off of.  But let's not go off of my guess.David J.", "msg_date": "Thu, 20 Jun 2024 17:46:44 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Fri, Jun 21, 2024 at 9:47 AM David G. Johnston\n<[email protected]> wrote:\n> On Thu, Jun 20, 2024 at 9:01 AM jian he <[email protected]> wrote:\n>>\n>> On Thu, Jun 20, 2024 at 5:46 PM Amit Langote <[email protected]> wrote:\n>> >\n>> > On Thu, Jun 20, 2024 at 1:03 AM David G. 
Johnston\n>> > <[email protected]> wrote:\n>> > > On Wed, Jun 19, 2024 at 8:29 AM jian he <[email protected]> wrote:\n>> > >>\n>> > >> On Mon, Jun 17, 2024 at 9:05 PM Chapman Flack <[email protected]> wrote:\n>> > >> >\n>> > >> > Hi,\n>> > >> >\n>> > >> > On 06/17/24 02:43, Amit Langote wrote:\n>> > >> > > <replaceable>context_item</replaceable> expression can be a value of\n>> > >> > > any type that can be cast to <type>jsonb</type>. This includes types\n>> > >> > > such as <type>char</type>, <type>text</type>, <type>bpchar</type>,\n>> > >> > > <type>character varying</type>, and <type>bytea</type> (with\n>> > >> > > <code>ENCODING UTF8</code>), as well as any domains over these types.\n>> > >> >\n>> > >> > Reading this message in conjunction with [0] makes me think that we are\n>> > >> > really talking about a function that takes a first parameter of type jsonb,\n>> > >> > and behaves exactly that way (so any cast required is applied by the system\n>> > >> > ahead of the call). Under those conditions, this seems like an unusual\n>> > >> > sentence to add in the docs, at least until we have also documented that\n>> > >> > tan's argument can be of any type that can be cast to double precision.\n>> > >> >\n>> > >>\n>> > >> I guess it would be fine to add an unusual sentence to the docs.\n>> > >>\n>> > >> imagine a function: array_avg(anyarray) returns anyelement.\n>> > >> array_avg calculate an array's elements's avg. like\n>> > >> array('{1,2,3}'::int[]) returns 2.\n>> > >> but array_avg won't make sense if the input argument is a date array.\n>> > >> so mentioning in the doc: array_avg can accept anyarray, but anyarray\n>> > >> cannot date array.\n>> > >> seems ok.\n>> > >\n>> > >\n>> > > There is existing wording for this:\n>> > >\n>> > > \"The expression can be of any JSON type, any character string type, or bytea in UTF8 encoding.\"\n>> > >\n>> > > If you add this sentence to the paragraph the link that already exists, which simply points the reader to this sentence, becomes redundant and should be removed.\n>> >\n>> > I've just posted a patch in the other thread [1] to restrict\n>> > context_item to be of jsonb type, which users would need to ensure by\n>> > adding an explicit cast if needed. I think that makes this\n>> > clarification unnecessary.\n>> >\n>> > > As for table 9.16.3 - it is unwieldy already. Lets try and make the core syntax shorter, not longer. We already have precedence in the subsequent json_table section - give each major clause item a name then below the table define the syntax and meaning for those names. Unlike in that section - which probably should be modified too - context_item should have its own description line.\n>> >\n>> > I had posted a patch a little while ago at [1] to render the syntax a\n>> > bit differently with each function getting its own syntax synopsis.\n>> > Resending it here; have addressed Jian He's comments.\n>> >\n>> > --\n>\n>\n> I was thinking more like:\n>\n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index c324906b22..b9d157663a 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -18692,8 +18692,10 @@ $.* ? 
(@ like_regex \"^\\\\d+$\")\n> <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> <indexterm><primary>json_exists</primary></indexterm>\n> <function>json_exists</function> (\n> - <replaceable>context_item</replaceable>, <replaceable>path_expression</replaceable> <optional> <literal>PASSING</literal> { <replaceable>value</replaceable> <literal>AS</literal> <replaceable>varname</replaceable> } <optional>, ...</optional></optional>\n> - <optional> { <literal>TRUE</literal> | <literal>FALSE</literal> |<literal> UNKNOWN</literal> | <literal>ERROR</literal> } <literal>ON ERROR</literal> </optional>)\n> + <replaceable>context_item</replaceable>,\n> + <replaceable>path_expression</replaceable>\n> + <optional>variable_definitions</optional>\n> + <optional>on_error_boolean</optional>)\n> </para>\n> <para>\n> Returns true if the SQL/JSON <replaceable>path_expression</replaceable>\n> @@ -18732,12 +18734,14 @@ ERROR: jsonpath array subscript is out of bounds\n> <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> <indexterm><primary>json_query</primary></indexterm>\n> <function>json_query</function> (\n> - <replaceable>context_item</replaceable>, <replaceable>path_expression</replaceable> <optional> <literal>PASSING</literal> { <replaceable>value</replaceable> <literal>AS</literal> <replaceable>varname</replaceable> } <optional>, ...</optional></optional>\n> - <optional> <literal>RETURNING</literal> <replaceable>data_type</replaceable> <optional> <literal>FORMAT JSON</literal> <optional> <literal>ENCODING UTF8</literal> </optional> </optional> </optional>\n> - <optional> { <literal>WITHOUT</literal> | <literal>WITH</literal> { <literal>CONDITIONAL</literal> | <optional><literal>UNCONDITIONAL</literal></optional> } } <optional> <literal>ARRAY</literal> </optional> <literal>WRAPPER</literal> </optional>\n> - <optional> { <literal>KEEP</literal> | <literal>OMIT</literal> } <literal>QUOTES</literal> <optional> <literal>ON SCALAR STRING</literal> </optional> </optional>\n> - <optional> { <literal>ERROR</literal> | <literal>NULL</literal> | <literal>EMPTY</literal> { <optional> <literal>ARRAY</literal> </optional> | <literal>OBJECT</literal> } | <literal>DEFAULT</literal> <replaceable>expression</replaceable> } <literal>ON EMPTY</literal> </optional>\n> - <optional> { <literal>ERROR</literal> | <literal>NULL</literal> | <literal>EMPTY</literal> { <optional> <literal>ARRAY</literal> </optional> | <literal>OBJECT</literal> } | <literal>DEFAULT</literal> <replaceable>expression</replaceable> } <literal>ON ERROR</literal> </optional>)\n> + <replaceable>context_item</replaceable>,\n> + <replaceable>path_expression</replaceable>\n> + <optional>variable_definitions</optional>\n> + <optional>return_clause</optional>\n> + <optional>wrapping_clause</optional>\n> + <optional>quoting_clause</optional>\n> + <optional>on_empty_set</optional>\n> + <optional>on_error_set</optional>)\n> </para>\n> <para>\n> Returns the result of applying the SQL/JSON\n> @@ -18809,11 +18813,12 @@ DETAIL: Missing \"]\" after array dimensions.\n> <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> <indexterm><primary>json_value</primary></indexterm>\n> <function>json_value</function> (\n> - <replaceable>context_item</replaceable>, <replaceable>path_expression</replaceable>\n> - <optional> <literal>PASSING</literal> { <replaceable>value</replaceable> <literal>AS</literal> <replaceable>varname</replaceable> } <optional>, ...</optional></optional>\n> - <optional> <literal>RETURNING</literal> 
<replaceable>data_type</replaceable> </optional>\n> - <optional> { <literal>ERROR</literal> | <literal>NULL</literal> | <literal>DEFAULT</literal> <replaceable>expression</replaceable> } <literal>ON EMPTY</literal> </optional>\n> - <optional> { <literal>ERROR</literal> | <literal>NULL</literal> | <literal>DEFAULT</literal> <replaceable>expression</replaceable> } <literal>ON ERROR</literal> </optional>)\n> + <replaceable>context_item</replaceable>,\n> + <replaceable>path_expression</replaceable>\n> + <optional>variable_definitions</optional>\n> + <optional>return_type</optional>\n> + <optional>on_empty_value</optional>\n> + <optional>on_error_value</optional>)\n> </para>\n> <para>\n> Returns the result of applying the SQL/JSON\n>\n> Then defining each of those below the table - keeping the on_error variants together.\n\nThat sounds appealing. I'll try to come up with a patch unless you or\nanyone else wants to take a stab at it.\n\n>> playing around with it.\n>> found some minor issues:\n>>\n>> json_exists allow: DEFAULT expression ON ERROR, which is not\n>> mentioned in the doc.\n>> for example:\n>> select JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default\n>> true ON ERROR);\n>> select JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default 0 ON ERROR);\n>> select JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default 11 ON ERROR);\n>\n>\n> Yeah, surprised it works, the documented behavior seems logical. Being able to return a non-boolean here seems odd. Especially since it is cast to boolean on output.\n>\n>>\n>> JSON_VALUE on error, on empty semantics should be the same as json_query.\n>> like:\n>> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n>> ON EMPTY ]\n>> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n>> ON ERROR ])\n>>\n>> examples:\n>> select JSON_value(jsonb '[]' , '$' empty array on error);\n>> select JSON_value(jsonb '[]' , '$' empty object on error);\n>\n>\n> Again the documented behavior seems to make sense though and the ability to specify empty in the value function seems like a bug. If you really want an empty array or object you do have access to default. The reason json_query provides for an empty array/object is that it is already expecting to produce an array (object seems a little odd).\n>\n> I agree our docs and code do not match which needs to be fixed, ideally in the direction of the standard which I'm guessing our documentation is based off of. But let's not go off of my guess.\n\nOops, that is indeed not great and, yes, the problem is code not\nmatching the documentation, the latter of which is \"correct\".\n\nBasically, the grammar allows specifying any of the all possible ON\nERROR/EMPTY behavior values irrespective of the function, so parse\nanalysis should be catching and flagging an attempt to specify\nincompatible value for a given function, which it does not.\n\nI've attached a patch to fix that, which I'd like to push before\nanything else we've been discussing.\n\n-- \nThanks, Amit Langote", "msg_date": "Fri, 21 Jun 2024 21:18:15 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Fri, Jun 21, 2024 at 9:18 PM Amit Langote <[email protected]> wrote:\n> On Fri, Jun 21, 2024 at 9:47 AM David G. 
Johnston\n> <[email protected]> wrote:\n> > On Thu, Jun 20, 2024 at 9:01 AM jian he <[email protected]> wrote:\n> >> playing around with it.\n> >> found some minor issues:\n> >>\n> >> json_exists allow: DEFAULT expression ON ERROR, which is not\n> >> mentioned in the doc.\n> >> for example:\n> >> select JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default\n> >> true ON ERROR);\n> >> select JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default 0 ON ERROR);\n> >> select JSON_EXISTS(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' default 11 ON ERROR);\n> >\n> >\n> > Yeah, surprised it works, the documented behavior seems logical. Being able to return a non-boolean here seems odd. Especially since it is cast to boolean on output.\n> >\n> >>\n> >> JSON_VALUE on error, on empty semantics should be the same as json_query.\n> >> like:\n> >> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> >> ON EMPTY ]\n> >> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> >> ON ERROR ])\n> >>\n> >> examples:\n> >> select JSON_value(jsonb '[]' , '$' empty array on error);\n> >> select JSON_value(jsonb '[]' , '$' empty object on error);\n> >\n> >\n> > Again the documented behavior seems to make sense though and the ability to specify empty in the value function seems like a bug. If you really want an empty array or object you do have access to default. The reason json_query provides for an empty array/object is that it is already expecting to produce an array (object seems a little odd).\n> >\n> > I agree our docs and code do not match which needs to be fixed, ideally in the direction of the standard which I'm guessing our documentation is based off of. But let's not go off of my guess.\n>\n> Oops, that is indeed not great and, yes, the problem is code not\n> matching the documentation, the latter of which is \"correct\".\n>\n> Basically, the grammar allows specifying any of the all possible ON\n> ERROR/EMPTY behavior values irrespective of the function, so parse\n> analysis should be catching and flagging an attempt to specify\n> incompatible value for a given function, which it does not.\n>\n> I've attached a patch to fix that, which I'd like to push before\n> anything else we've been discussing.\n\nWhile there are still a few hours to go before Saturday noon UTC when\nbeta2 freeze goes into effect, I'm thinking to just push this after\nbeta2 is stamped. Adding an open item for now.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Sat, 22 Jun 2024 17:31:50 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Fri, Jun 21, 2024 at 8:18 PM Amit Langote <[email protected]> wrote:\n>\n> >> JSON_VALUE on error, on empty semantics should be the same as json_query.\n> >> like:\n> >> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> >> ON EMPTY ]\n> >> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> >> ON ERROR ])\n> >>\n> >> examples:\n> >> select JSON_value(jsonb '[]' , '$' empty array on error);\n> >> select JSON_value(jsonb '[]' , '$' empty object on error);\n> >\n> > Again the documented behavior seems to make sense though and the ability to specify empty in the value function seems like a bug. If you really want an empty array or object you do have access to default. 
The reason json_query provides for an empty array/object is that it is already expecting to produce an array (object seems a little odd).\n> >\n> > I agree our docs and code do not match which needs to be fixed, ideally in the direction of the standard which I'm guessing our documentation is based off of. But let's not go off of my guess.\n>\n> Oops, that is indeed not great and, yes, the problem is code not\n> matching the documentation, the latter of which is \"correct\".\n>\n> Basically, the grammar allows specifying any of the all possible ON\n> ERROR/EMPTY behavior values irrespective of the function, so parse\n> analysis should be catching and flagging an attempt to specify\n> incompatible value for a given function, which it does not.\n>\n> I've attached a patch to fix that, which I'd like to push before\n> anything else we've been discussing.\n>\n\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"invalid ON ERROR behavior\"),\n+ errdetail(\"Only ERROR, NULL, EMPTY [ ARRAY | OBJECT }, or DEFAULT\n<value> is allowed in ON ERROR for JSON_QUERY().\"),\n+ parser_errposition(pstate, func->on_error->location));\n\n`EMPTY [ ARRAY | OBJECT }` seems not correct,\nmaybe just EMPTY, EMPTY ARRAY, EMPTY OBJECT.\n(apply to other places)\n\n\n`DEFAULT <value>`\n`DEFAULT <expression>` or just `DEFAULT expression` would be more correct?\n(apply to other places)\n\nI think we should make json_query, json_value on empty, on error\nbehave the same way.\notherwise, it will have consistency issues for scalar jsonb.\nfor example, we should expect the following two queries to return the\nsame result?\nSELECT * FROM JSON_query(jsonb '1', '$.a' returning jsonb empty on empty);\nSELECT * FROM JSON_value(jsonb '1', '$.a' returning jsonb empty on empty);\n\nAlso the json_table function will call json_value or json_query,\nmake these two functions on error, on empty behavior the same can\nreduce unintended complexity.\n\nSo based on your\npatch(v1-0001-SQL-JSON-Disallow-incompatible-values-in-ON-ERROR.patch)\nand the above points, I have made some changes, attached.\nit will make json_value, json_query not allow {true | false | unknown\n} on error, {true | false | unknown } on empty.\njson_table error message deal the same way as\nb4fad46b6bc8a9bf46ff689bcb1bd4edf8f267af\n\n\nBTW,\ni found one JSON_TABLE document deficiency\n [ { ERROR | NULL | EMPTY { ARRAY | OBJECT } | DEFAULT\nexpression } ON EMPTY ]\n [ { ERROR | NULL | EMPTY { ARRAY | OBJECT } | DEFAULT\nexpression } ON ERROR ]\n\nit should be\n\n [ { ERROR | NULL | EMPTY { [ARRAY] | OBJECT } | DEFAULT\nexpression } ON EMPTY ]\n [ { ERROR | NULL | EMPTY { [ARRAY] | OBJECT } | DEFAULT\nexpression } ON ERROR ]", "msg_date": "Sat, 22 Jun 2024 17:39:34 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "Hi,\n\nOn Sat, Jun 22, 2024 at 6:39 PM jian he <[email protected]> wrote:\n> On Fri, Jun 21, 2024 at 8:18 PM Amit Langote <[email protected]> wrote:\n> >\n> > >> JSON_VALUE on error, on empty semantics should be the same as json_query.\n> > >> like:\n> > >> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> > >> ON EMPTY ]\n> > >> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> > >> ON ERROR ])\n> > >>\n> > >> examples:\n> > >> select JSON_value(jsonb '[]' , '$' empty array on error);\n> > >> select JSON_value(jsonb '[]' , '$' empty object on error);\n> > >\n> > > Again the documented 
behavior seems to make sense though and the ability to specify empty in the value function seems like a bug. If you really want an empty array or object you do have access to default. The reason json_query provides for an empty array/object is that it is already expecting to produce an array (object seems a little odd).\n> > >\n> > > I agree our docs and code do not match which needs to be fixed, ideally in the direction of the standard which I'm guessing our documentation is based off of. But let's not go off of my guess.\n> >\n> > Oops, that is indeed not great and, yes, the problem is code not\n> > matching the documentation, the latter of which is \"correct\".\n> >\n> > Basically, the grammar allows specifying any of the all possible ON\n> > ERROR/EMPTY behavior values irrespective of the function, so parse\n> > analysis should be catching and flagging an attempt to specify\n> > incompatible value for a given function, which it does not.\n> >\n> > I've attached a patch to fix that, which I'd like to push before\n> > anything else we've been discussing.\n> >\n>\n> + errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"invalid ON ERROR behavior\"),\n> + errdetail(\"Only ERROR, NULL, EMPTY [ ARRAY | OBJECT }, or DEFAULT\n> <value> is allowed in ON ERROR for JSON_QUERY().\"),\n> + parser_errposition(pstate, func->on_error->location));\n>\n> `EMPTY [ ARRAY | OBJECT }` seems not correct,\n> maybe just EMPTY, EMPTY ARRAY, EMPTY OBJECT.\n> (apply to other places)\n\nOr EMPTY [ ARRAY ], EMPTY OBJECT\n\n> `DEFAULT <value>`\n> `DEFAULT <expression>` or just `DEFAULT expression` would be more correct?\n> (apply to other places)\n\n\"DEFAULT expression\" sounds good.\n\n> I think we should make json_query, json_value on empty, on error\n> behave the same way.\n> otherwise, it will have consistency issues for scalar jsonb.\n> for example, we should expect the following two queries to return the\n> same result?\n> SELECT * FROM JSON_query(jsonb '1', '$.a' returning jsonb empty on empty);\n> SELECT * FROM JSON_value(jsonb '1', '$.a' returning jsonb empty on empty);\n>\n> Also the json_table function will call json_value or json_query,\n> make these two functions on error, on empty behavior the same can\n> reduce unintended complexity.\n>\n> So based on your\n> patch(v1-0001-SQL-JSON-Disallow-incompatible-values-in-ON-ERROR.patch)\n> and the above points, I have made some changes, attached.\n> it will make json_value, json_query not allow {true | false | unknown\n> } on error, {true | false | unknown } on empty.\n> json_table error message deal the same way as\n> b4fad46b6bc8a9bf46ff689bcb1bd4edf8f267af\n\nHere is an updated patch that I think takes care of these points.\n\n> BTW,\n> i found one JSON_TABLE document deficiency\n> [ { ERROR | NULL | EMPTY { ARRAY | OBJECT } | DEFAULT\n> expression } ON EMPTY ]\n> [ { ERROR | NULL | EMPTY { ARRAY | OBJECT } | DEFAULT\n> expression } ON ERROR ]\n>\n> it should be\n>\n> [ { ERROR | NULL | EMPTY { [ARRAY] | OBJECT } | DEFAULT\n> expression } ON EMPTY ]\n> [ { ERROR | NULL | EMPTY { [ARRAY] | OBJECT } | DEFAULT\n> expression } ON ERROR ]\n\nYou're right. Fixed.\n\nAlso, I noticed that the grammar allows ON EMPTY in JSON_TABLE EXISTS\ncolumns which is meaningless because JSON_EXISTS() doesn't have a\ncorresponding ON EMPTY clause. 
Fixed grammar to prevent that in the\nattached 0002.\n\n--\nThanks, Amit Langote", "msg_date": "Thu, 27 Jun 2024 21:01:18 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Thu, Jun 27, 2024 at 9:01 PM Amit Langote <[email protected]> wrote:\n> On Sat, Jun 22, 2024 at 6:39 PM jian he <[email protected]> wrote:\n> > On Fri, Jun 21, 2024 at 8:18 PM Amit Langote <[email protected]> wrote:\n> > >\n> > > >> JSON_VALUE on error, on empty semantics should be the same as json_query.\n> > > >> like:\n> > > >> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> > > >> ON EMPTY ]\n> > > >> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> > > >> ON ERROR ])\n> > > >>\n> > > >> examples:\n> > > >> select JSON_value(jsonb '[]' , '$' empty array on error);\n> > > >> select JSON_value(jsonb '[]' , '$' empty object on error);\n> > > >\n> > > > Again the documented behavior seems to make sense though and the ability to specify empty in the value function seems like a bug. If you really want an empty array or object you do have access to default. The reason json_query provides for an empty array/object is that it is already expecting to produce an array (object seems a little odd).\n> > > >\n> > > > I agree our docs and code do not match which needs to be fixed, ideally in the direction of the standard which I'm guessing our documentation is based off of. But let's not go off of my guess.\n> > >\n> > > Oops, that is indeed not great and, yes, the problem is code not\n> > > matching the documentation, the latter of which is \"correct\".\n> > >\n> > > Basically, the grammar allows specifying any of the all possible ON\n> > > ERROR/EMPTY behavior values irrespective of the function, so parse\n> > > analysis should be catching and flagging an attempt to specify\n> > > incompatible value for a given function, which it does not.\n> > >\n> > > I've attached a patch to fix that, which I'd like to push before\n> > > anything else we've been discussing.\n> > >\n> >\n> > + errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"invalid ON ERROR behavior\"),\n> > + errdetail(\"Only ERROR, NULL, EMPTY [ ARRAY | OBJECT }, or DEFAULT\n> > <value> is allowed in ON ERROR for JSON_QUERY().\"),\n> > + parser_errposition(pstate, func->on_error->location));\n> >\n> > `EMPTY [ ARRAY | OBJECT }` seems not correct,\n> > maybe just EMPTY, EMPTY ARRAY, EMPTY OBJECT.\n> > (apply to other places)\n>\n> Or EMPTY [ ARRAY ], EMPTY OBJECT\n>\n> > `DEFAULT <value>`\n> > `DEFAULT <expression>` or just `DEFAULT expression` would be more correct?\n> > (apply to other places)\n>\n> \"DEFAULT expression\" sounds good.\n>\n> > I think we should make json_query, json_value on empty, on error\n> > behave the same way.\n> > otherwise, it will have consistency issues for scalar jsonb.\n> > for example, we should expect the following two queries to return the\n> > same result?\n> > SELECT * FROM JSON_query(jsonb '1', '$.a' returning jsonb empty on empty);\n> > SELECT * FROM JSON_value(jsonb '1', '$.a' returning jsonb empty on empty);\n> >\n> > Also the json_table function will call json_value or json_query,\n> > make these two functions on error, on empty behavior the same can\n> > reduce unintended complexity.\n> >\n> > So based on your\n> > patch(v1-0001-SQL-JSON-Disallow-incompatible-values-in-ON-ERROR.patch)\n> > and the above points, I have made some changes, 
attached.\n> > it will make json_value, json_query not allow {true | false | unknown\n> > } on error, {true | false | unknown } on empty.\n> > json_table error message deal the same way as\n> > b4fad46b6bc8a9bf46ff689bcb1bd4edf8f267af\n>\n> Here is an updated patch that I think takes care of these points.\n>\n> > BTW,\n> > i found one JSON_TABLE document deficiency\n> > [ { ERROR | NULL | EMPTY { ARRAY | OBJECT } | DEFAULT\n> > expression } ON EMPTY ]\n> > [ { ERROR | NULL | EMPTY { ARRAY | OBJECT } | DEFAULT\n> > expression } ON ERROR ]\n> >\n> > it should be\n> >\n> > [ { ERROR | NULL | EMPTY { [ARRAY] | OBJECT } | DEFAULT\n> > expression } ON EMPTY ]\n> > [ { ERROR | NULL | EMPTY { [ARRAY] | OBJECT } | DEFAULT\n> > expression } ON ERROR ]\n>\n> You're right. Fixed.\n>\n> Also, I noticed that the grammar allows ON EMPTY in JSON_TABLE EXISTS\n> columns which is meaningless because JSON_EXISTS() doesn't have a\n> corresponding ON EMPTY clause. Fixed grammar to prevent that in the\n> attached 0002.\n\nI've pushed this for now to close out the open item.\n\nI know there's some documentation improvement work left to do [1],\nwhich I'll try to find some time for next week.\n\n-- \nThanks, Amit Langote\n\n[1] https://www.postgresql.org/message-id/CAKFQuwbYBvUZasGj_ZnfXhC2kk4AT%3DepwGkNd2%3DRMMVXkfTNMQ%40mail.gmail.com\n\n\n", "msg_date": "Fri, 28 Jun 2024 14:18:12 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" }, { "msg_contents": "On Thursday, June 20, 2024, David G. Johnston <[email protected]>\nwrote:\n\n>\n>> >\n>> > > As for table 9.16.3 - it is unwieldy already. Lets try and make the\n>> core syntax shorter, not longer. We already have precedence in the\n>> subsequent json_table section - give each major clause item a name then\n>> below the table define the syntax and meaning for those names. Unlike in\n>> that section - which probably should be modified too - context_item should\n>> have its own description line.\n>> >\n>> > I had posted a patch a little while ago at [1] to render the syntax a\n>> > bit differently with each function getting its own syntax synopsis.\n>> > Resending it here; have addressed Jian He's comments.\n>> >\n>> > --\n>>\n>\n> I was thinking more like:\n>\n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index c324906b22..b9d157663a 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -18692,8 +18692,10 @@ $.* ? 
(@ like_regex \"^\\\\d+$\")\n> <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> <indexterm><primary>json_exists</primary></indexterm>\n> <function>json_exists</function> (\n> - <replaceable>context_item</replaceable>,\n> <replaceable>path_expression</replaceable> <optional>\n> <literal>PASSING</literal> { <replaceable>value</replaceable>\n> <literal>AS</literal> <replaceable>varname</replaceable> } <optional>,\n> ...</optional></optional>\n> - <optional> { <literal>TRUE</literal> | <literal>FALSE</literal>\n> |<literal> UNKNOWN</literal> | <literal>ERROR</literal> } <literal>ON\n> ERROR</literal> </optional>)\n> + <replaceable>context_item</replaceable>,\n> + <replaceable>path_expression</replaceable>\n> + <optional>variable_definitions</optional>\n> + <optional>on_error_boolean</optional>)\n> </para> empty semantics should be the same as json_query.\n>\n>>\n>>\nThe full first draft patch for this is here:\n\nhttps://www.postgresql.org/message-id/CAKFQuwZNxNHuPk44zDF7z8qZec1Aof10aA9tWvBU5CMhEKEd8A@mail.gmail.com\n\nDavid J.\n\nOn Thursday, June 20, 2024, David G. Johnston <[email protected]> wrote:\n>\n> > As for table 9.16.3 - it is unwieldy already.  Lets try and make the core syntax shorter, not longer.  We already have precedence in the subsequent json_table section - give each major clause item a name then below the table define the syntax and meaning for those names.  Unlike in that section - which probably should be modified too - context_item should have its own description line.\n>\n> I had posted a patch a little while ago at [1] to render the syntax a\n> bit differently with each function getting its own syntax synopsis.\n> Resending it here; have addressed Jian He's comments.\n>\n> --I was thinking more like:diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgmlindex c324906b22..b9d157663a 100644--- a/doc/src/sgml/func.sgml+++ b/doc/src/sgml/func.sgml@@ -18692,8 +18692,10 @@ $.* ? (@ like_regex \"^\\\\d+$\")       <entry role=\"func_table_entry\"><para role=\"func_signature\">         <indexterm><primary>json_exists</primary></indexterm>         <function>json_exists</function> (-        <replaceable>context_item</replaceable>, <replaceable>path_expression</replaceable> <optional> <literal>PASSING</literal> { <replaceable>value</replaceable> <literal>AS</literal> <replaceable>varname</replaceable> } <optional>, ...</optional></optional>-        <optional> { <literal>TRUE</literal> | <literal>FALSE</literal> |<literal> UNKNOWN</literal> | <literal>ERROR</literal> } <literal>ON ERROR</literal> </optional>)+        <replaceable>context_item</replaceable>,+        <replaceable>path_expression</replaceable>+        <optional>variable_definitions</optional>+        <optional>on_error_boolean</optional>)        </para> empty semantics should be the same as json_query.The full first draft patch for this is here:https://www.postgresql.org/message-id/CAKFQuwZNxNHuPk44zDF7z8qZec1Aof10aA9tWvBU5CMhEKEd8A@mail.gmail.comDavid J.", "msg_date": "Thu, 27 Jun 2024 22:27:06 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL/JSON query functions context_item doc entry and type\n requirement" } ]
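The thread above hinges on which `ON ERROR` / `ON EMPTY` clauses each SQL/JSON query function is documented to accept, and on a parse-analysis fix that rejects incompatible combinations. As a hedged illustration only (not taken from any patch in the thread), the sketch below shows one documented `ON ERROR` form per function; the literal values are arbitrary, and on builds predating the fix discussed above JSON_EXISTS may still wrongly accept forms such as `DEFAULT ... ON ERROR`.

```sql
-- Illustrative sketch of the documented ON ERROR forms per function,
-- following the discussion in the thread above; values are arbitrary examples.
-- The strict path '$.a[5]' errors (subscript out of bounds), so ON ERROR applies.

-- json_exists: only TRUE | FALSE | UNKNOWN | ERROR is documented ON ERROR
SELECT JSON_EXISTS(jsonb '{"a": [1,2,3]}', 'strict $.a[5]' FALSE ON ERROR);

-- json_value: ERROR | NULL | DEFAULT expression (no EMPTY ARRAY / OBJECT)
SELECT JSON_VALUE(jsonb '{"a": [1,2,3]}', 'strict $.a[5]' DEFAULT '0' ON ERROR);

-- json_query: may additionally return EMPTY [ ARRAY ] or EMPTY OBJECT
SELECT JSON_QUERY(jsonb '{"a": [1,2,3]}', 'strict $.a[5]' EMPTY ARRAY ON ERROR);
```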
[ { "msg_contents": "Hello Everybody!\n\nFor at least last two years we have had Developers Conference\nUnconference notes in PostgreSQL Wiki\n\nhttps://wiki.postgresql.org/wiki/PgCon_2022_Developer_Unconference\nhttps://wiki.postgresql.org/wiki/PgCon_2023_Developer_Unconference\n\nAnd I know that people took notes at least at the unconference\nsessions I attended.\n\nS is there a plan to collect them in\nhttps://wiki.postgresql.org/wiki/PgCon_2024_Developer_Unconference\nlike previous years ?\n\n--\nHannu\n\n\n", "msg_date": "Mon, 3 Jun 2024 19:36:51 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Will there be\n https://wiki.postgresql.org/wiki/PgCon_2024_Developer_Unconference\n ?" }, { "msg_contents": "On Mon, Jun 03, 2024 at 07:36:51PM +0200, Hannu Krosing wrote:\n> For at least last two years we have had Developers Conference\n> Unconference notes in PostgreSQL Wiki\n> \n> https://wiki.postgresql.org/wiki/PgCon_2022_Developer_Unconference\n> https://wiki.postgresql.org/wiki/PgCon_2023_Developer_Unconference\n> \n> And I know that people took notes at least at the unconference\n> sessions I attended.\n> \n> S is there a plan to collect them in\n> https://wiki.postgresql.org/wiki/PgCon_2024_Developer_Unconference\n> like previous years ?\n\nhttps://wiki.postgresql.org/wiki/PGConf.dev_2024_Developer_Unconference\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 3 Jun 2024 13:03:31 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Will there be\n https://wiki.postgresql.org/wiki/PgCon_2024_Developer_Unconference ?" } ]
[ { "msg_contents": "Hackers,\n\nAt the PGConf Unconference session on improving extension support in core, we talked quite a bit about the recent anxiety among extension developers about a lack of an ABI compatibility guarantee in Postgres. Yurii Rashkovskii did a little header file spelunking and talked[1] about a few changes in minor version releases, including to apparent field order in structs. Jeremy Schneider posted the example on Twitter[2], and Peter G replied[3]:\n\n> You must be referring to my commit 714780dc. The new field is stored within alignment padding (though only on back branches). Has this been tied to a known problem?\n\nAt the Unconference, Tom Lane said that this approach is pretty well drilled into the heads of every committer, and new ones pick it up through experience. The goal, IIUC, is to never introduce binary incompatibilities into the C APIs in minor releases. This standard would be good to document, to let extension developers know exactly what the guarantees are.\n\nI’m happy to start a doc patch to add an ABI compatibility guarantee (presumably in xfunc.sgml), but want to be sure the details are right, namely:\n\n* There are no source or binary compatibility guarantees for major releases.\n\n* The ABI is guaranteed to change only in backward compatible ways in minor releases. If for some reason it doesn’t it’s a bug that will need to be fixed.\n\nThis ensures that an extension compiled against an earlier minor release will continue to work without recompilation on later minor releases of the same major version.\n\nBut if I understand correctly, the use of techniques like adding a new field in padding does not mean that extensions compiled on later minor releases will work on earlier minor releases of the same major version. Unless, that is, we can provide a complete list of things not to do (like make use of padding) to avoid it. Is that feasible?\n\nAre there other details it should include?\n\nThanks,\n\nDavid\n\n[1]: https://www.pgevents.ca/events/pgconfdev2024/schedule/session/14\n[2]: https://x.com/jer_s/status/1785717368804815026\n[3]: https://x.com/petervgeoghegan/status/1785720228237717627\n\n\n\n", "msg_date": "Mon, 3 Jun 2024 14:43:17 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal: Document ABI Compatibility" }, { "msg_contents": "Hi,\n\nOn 2024-06-03 14:43:17 -0400, David E. Wheeler wrote:\n> At the PGConf Unconference session on improving extension support in core,\n> we talked quite a bit about the recent anxiety among extension developers\n> about a lack of an ABI compatibility guarantee in Postgres.\n\nAre there notes for the session?\n\n\n> Yurii Rashkovskii did a little header file spelunking and talked[1] about a\n> few changes in minor version releases, including to apparent field order in\n> structs.\n\nIt'd be nice if the slides for the talk could be uploaded...\n\n\n> > You must be referring to my commit 714780dc. The new field is stored within alignment padding (though only on back branches). Has this been tied to a known problem?\n> \n> At the Unconference, Tom Lane said that this approach is pretty well drilled\n> into the heads of every committer, and new ones pick it up through\n> experience. The goal, IIUC, is to never introduce binary incompatibilities\n> into the C APIs in minor releases. This standard would be good to document,\n> to let extension developers know exactly what the guarantees are.\n\nI don't think we can really make this a hard guarantee. 
Yes, we try hard to\navoid ABI breaks, but there IIRC have been a few cases over the years where\nthat wasn't practical for some reason. If we have to decide between a bad bug\nand causing an ABI issue that's unlikely to affect anybody, we'll probably\nchoose the ABI issue.\n\n\n> * The ABI is guaranteed to change only in backward compatible ways in minor\n> releases. If for some reason it doesn’t it’s a bug that will need to be\n> fixed.\n\nThus I am not really on board with this statement as-is.\n\nExtensions in general can do lots of stuff, guaranteeing that bug fixes don't\ncause any problems is just not feasible.\n\nIt'd be interesting to see a few examples of actual minor-version-upgrade\nextension breakages, so we can judge what caused them.\n\n\n> But if I understand correctly, the use of techniques like adding a new field\n> in padding does not mean that extensions compiled on later minor releases\n> will work on earlier minor releases of the same major version.\n\nI don't think it's common for such new-fields-in-padding to cause problems\nwhen using an earlier minor PG version. For that the extension would need to\nactually rely on the presence of the new field, but typically that'd not be\nthe case when we introduce a new field in a minor version.\n\n\n> Unless, that is, we can provide a complete list of things not to do (like\n> make use of padding) to avoid it. Is that feasible?\n\nYou can't really rely on the contents of padding, in general. So I don't think\nthis is really something that needs to be called out.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Jun 2024 11:58:11 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 3, 2024, at 14:58, Andres Freund <[email protected]> wrote:\n\n> Hi,\n\nHello Andres.\n\n> Are there notes for the session?\n\nYes, but not posted yet. Here’s what Andreas 'ads' Scherbaum sent me for that bit of the conversation:\n\n* Core is focused on core ABI stability\n* David: No \"statement of stability\" in Core\n * David/Jeremy/Tom: coding guidelines, style guidelines\n * useful to have docs in core about what's stable and what's not, what you should compile against or not, and ABI guarantees\n* Abigale: there are hooks, but no overall concept for extensions\n* Tom: Peter Eisentraut is working on tests for extensions stability\n* Jeremy: nothing is preventing people from installing incompatible versions\n\n\n> It'd be nice if the slides for the talk could be uploaded...\n\nHe also talked about it at Mini-Summit Five[1], for which I posted slides[2] (slides 11-14) and video[3] (starting at 29:08).\n\n> I don't think we can really make this a hard guarantee. Yes, we try hard to\n> avoid ABI breaks, but there IIRC have been a few cases over the years where\n> that wasn't practical for some reason. If we have to decide between a bad bug\n> and causing an ABI issue that's unlikely to affect anybody, we'll probably\n> choose the ABI issue.\n\nWe can document that too, and perhaps a policy for letting people know. I thought I recalled something like this in the past, but Rob Treat did some spelunking through change logs and found only CVEs that needed repairs by manually running some SQL. 
So some sense of its rarity would also be useful.\n\n> Thus I am not really on board with this statement as-is.\n\nThat’s fine, we should get it to where there’s consensus on the ordering and agreement on what, exactly, it should be.\n\n> Extensions in general can do lots of stuff, guaranteeing that bug fixes don't\n> cause any problems is just not feasible.\n> \n> It'd be interesting to see a few examples of actual minor-version-upgrade\n> extension breakages, so we can judge what caused them.\n\nIn the community Slack[4], Matthias van de Meent writes[5]:\n\n> Citus’ pg_version_compat.h[7] where it re-implements a low-level function that was newly introduced in PG14.7. If you build against PG14.7+ headers, you may get random crashes when running on 14.6.\n\nI suppose it would work fine on 14.7 if compiled on 14.6 though. I suspect there aren’t many examples, though, mostly just a lot of anxiety, and some have decided that extensions must be recompiled for every minor release in order to avoid the issue. StackGres[7] is one example, but I suspect Omni (Yurii’s company) may follow.\n\n> I don't think it's common for such new-fields-in-padding to cause problems\n> when using an earlier minor PG version. For that the extension would need to\n> actually rely on the presence of the new field, but typically that'd not be\n> the case when we introduce a new field in a minor version.\n\nThat’s what I was thinking, so “compatibility assured only if you don’t use padding yourself” would be super helpful.\n\n>> Unless, that is, we can provide a complete list of things not to do (like\n>> make use of padding) to avoid it. Is that feasible?\n> \n> You can't really rely on the contents of padding, in general. So I don't think\n> this is really something that needs to be called out.\n\nSure, probably not a problem, but if that’s the sole qualifier for making binary changes, I think it’s worth saying, as opposed to “we don’t make any”. Something like “Only changes to padding, which you never used anyway, right?” :-)\n\nD\n\n[1]: https://justatheory.com/2024/05/mini-summit-five/\n[2]: https://justatheory.com/shared/extension-ecosystem-summit/omni-universally-buildable-extensions.pdf\n[3]: https://youtu.be/R5ijx8IJyaM\n[4]: https://pgtreats.info/slack-invite\n[5]: https://postgresteam.slack.com/archives/C056ZA93H1A/p1716502630690559?thread_ts=1716500801.036709&cid=C056ZA93H1A\n[6] https://github.com/citusdata/citus/blob/main/src/include/pg_version_compat.h#L236-L248\n[7]: https://stackgres.io/doc/latest/intro/extensions/\n\n\n\n", "msg_date": "Mon, 3 Jun 2024 15:21:04 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-06-03 14:43:17 -0400, David E. Wheeler wrote:\n>> * The ABI is guaranteed to change only in backward compatible ways in minor\n>> releases. If for some reason it doesn’t it’s a bug that will need to be\n>> fixed.\n\n> Thus I am not really on board with this statement as-is.\n\nMe either. There are degrees of ABI compatibility, and we'll choose\nthe least invasive way, but it's seldom the case that no conceivable\nextension will be broken. For example, if we can't squeeze a new\nfield into padding space, we'll typically put it at the end of the\nstruct in existing branches. That's okay unless some extension has\na dependency on sizeof(the struct), for instance because it's\nallocating such structs itself. 
Also, for either the padding-space\nor add-at-the-end approaches, we're probably in trouble if some\nextension is creating its own instances of such structs, because\nit will not know how to fill the new field properly. We try not to\nchange structs that we think extensions are likely to create ...\nbut that's a guess not a guarantee.\n\n> It'd be interesting to see a few examples of actual minor-version-upgrade\n> extension breakages, so we can judge what caused them.\n\nYes, that could be a fruitful discussion.\n\n> You can't really rely on the contents of padding, in general. So I don't think\n> this is really something that needs to be called out.\n\nFor node structs, padding will generally be zero because we memset\nthem to zeroes. So to the extent that zero is an okay value for\nsuch a new field, that can help --- but if zero were always okay\nthen we'd likely not need a new field in the first place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Jun 2024 15:38:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "> > I don't think it's common for such new-fields-in-padding to cause problems\n> > when using an earlier minor PG version. For that the extension would need to\n> > actually rely on the presence of the new field, but typically that'd not be\n> > the case when we introduce a new field in a minor version.\n>\n> That’s what I was thinking, so “compatibility assured only if you don’t use padding yourself” would be super helpful.\n\nI was under the impression that the (a?) concern was related to\ncompiling the newer version and using that against an older version,\nso if you always compiled the extension against the latest point\nrelease, it wouldn't necessarily be backwards-compatible with older\ninstallations. (While probably allocated padding is zero'd out in the\nold version, I'd not guarantee that that's the case.)\n\n> >> Unless, that is, we can provide a complete list of things not to do (like\n> >> make use of padding) to avoid it. Is that feasible?\n> >\n> > You can't really rely on the contents of padding, in general. So I don't think\n> > this is really something that needs to be called out.\n>\n> Sure, probably not a problem, but if that’s the sole qualifier for making binary changes, I think it’s worth saying, as opposed to “we don’t make any”. Something like “Only changes to padding, which you never used anyway, right?” :-)\n\nWonder if what might be more useful is some sort of distribution list\nfor extensions authors to be notified of specific compatibility\nchanges. Could be powered by parsing commit messages for conventions\nabout backwards-incompatible changes; padding usage could be one,\nmaybe there are others we could code for. (For padding, imagine a\ntool that is able to look at struct offsets between git revisions and\nnote any offset differences/changes here that extensions authors could\nbe aware of; even diffs of `pahole` output between revisions could be\nhelpful.)\n\nABI guarantees for extensions are hard because all of core *is*\npotentially the ABI (or at least if you are well-behaved, public\nfunctions and structs), so some sort of \"these interfaces won't break\nin minor releases but use other internals and you're on your own\"\nmight be useful distinction. 
(We discussed trying to classify APIs\nwith varying levels of support, but that also comes with its own set\nof issues/overhead/maintenance for code authors/committers.)\n\n\n", "msg_date": "Mon, 3 Jun 2024 15:22:53 -0500", "msg_from": "David Christensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "Hi,\n\nOn 2024-06-03 15:21:04 -0400, David E. Wheeler wrote:\n> > Extensions in general can do lots of stuff, guaranteeing that bug fixes don't\n> > cause any problems is just not feasible.\n> >\n> > It'd be interesting to see a few examples of actual minor-version-upgrade\n> > extension breakages, so we can judge what caused them.\n>\n> In the community Slack[4], Matthias van de Meent writes[5]:\n>\n> > Citus’ pg_version_compat.h[7] where it re-implements a low-level function that was newly introduced in PG14.7. If you build against PG14.7+ headers, you may get random crashes when running on 14.6.\n\nI don't see how this would trigger random crashes.\n\nUnfortunately [4] doesn't seem to take me to a relevant message (pruned chat\nhistory?), so I can't infer more from that context.\n\n\n> I suppose it would work fine on 14.7 if compiled on 14.6 though. I suspect\n> there aren’t many examples, though, mostly just a lot of anxiety, and some\n> have decided that extensions must be recompiled for every minor release in\n> order to avoid the issue. StackGres[7] is one example, but I suspect Omni\n> (Yurii’s company) may follow.\n\nRegardless of ABI issues, it's probably a good idea to continually run tests\nagainst in-development minor versions, just to prevent something breaking from\ncreeping in. IIRC there were a handful of cases where we accidentally broke\nsome extension, because they relied on some implementation details.\n\n\n> >> Unless, that is, we can provide a complete list of things not to do (like\n> >> make use of padding) to avoid it. Is that feasible?\n> >\n> > You can't really rely on the contents of padding, in general. So I don't think\n> > this is really something that needs to be called out.\n>\n> Sure, probably not a problem, but if that’s the sole qualifier for making\n> binary changes, I think it’s worth saying, as opposed to “we don’t make\n> any”. Something like “Only changes to padding, which you never used anyway,\n> right?” :-)\n\nIDK, to me something like this seems to promise more than we actually can.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Jun 2024 14:56:24 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 3, 2024, at 5:56 PM, Andres Freund <[email protected]> wrote:\n\n> I don't see how this would trigger random crashes.\n> \n> Unfortunately [4] doesn't seem to take me to a relevant message (pruned chat\n> history?), so I can't infer more from that context.\n\nYou can use [4] to join the Slack (if you haven’t already) and [5] for the relevant post.\n\n> Regardless of ABI issues, it's probably a good idea to continually run tests\n> against in-development minor versions, just to prevent something breaking from\n> creeping in. IIRC there were a handful of cases where we accidentally broke\n> some extension, because they relied on some implementation details.\n\nOh yeah, I run regular tests against the latest minor release of all supported Postgres version for my extensions, using pgxn-tools[6], which looks like this[7]. 
Which I consider absolutely essential. But it doesn’t mean that something compiled against .4 will work with .3 and vice versa. That’s what we could use the guidance/guarantees on.\n\n>> Sure, probably not a problem, but if that’s the sole qualifier for making\n>> binary changes, I think it’s worth saying, as opposed to “we don’t make\n>> any”. Something like “Only changes to padding, which you never used anyway,\n>> right?” :-)\n> \n> IDK, to me something like this seems to promise more than we actually can.\n\nWhat I’d like to do is figure out exactly what we *can* promise and perhaps some guidelines, and start with that.\n\nBest,\n\nDavid\n\n[4]: https://pgtreats.info/slack-invite\n[5]: https://postgresteam.slack.com/archives/C056ZA93H1A/p1716502630690559?thread_ts=1716500801.036709&cid=C056ZA93H1A\n[6]: https://github.com/pgxn/docker-pgxn-tools\n[7]: https://github.com/pgxn/docker-pgxn-tools/actions/runs/9351752462\n\n\n\n", "msg_date": "Mon, 3 Jun 2024 18:02:18 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Mon, 2024-06-03 at 15:38 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2024-06-03 14:43:17 -0400, David E. Wheeler wrote:\n> > > * The ABI is guaranteed to change only in backward compatible ways in minor\n> > > releases. If for some reason it doesn’t it’s a bug that will need to be\n> > > fixed.\n> \n> > Thus I am not really on board with this statement as-is.\n> \n> Me either.  There are degrees of ABI compatibility, and we'll choose\n> the least invasive way, but it's seldom the case that no conceivable\n> extension will be broken.\n\noracle_fdw has been broken by minor releases several times in the past.\nThis may well be because of weird things that I am doing; still, my\nexperience is that minor releases are not always binary compatible.\n\n> > It'd be interesting to see a few examples of actual minor-version-upgrade\n> > extension breakages, so we can judge what caused them.\n> \n> Yes, that could be a fruitful discussion.\n\nDigging through my commits brought up 6214e2b2280462cbc3aa1986e350e167651b3905,\nfor one.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 04 Jun 2024 02:11:07 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On 04.06.24 02:11, Laurenz Albe wrote:\n> On Mon, 2024-06-03 at 15:38 -0400, Tom Lane wrote:\n>> Andres Freund <[email protected]> writes:\n>>> On 2024-06-03 14:43:17 -0400, David E. Wheeler wrote:\n>>>> * The ABI is guaranteed to change only in backward compatible ways in minor\n>>>> releases. If for some reason it doesn’t it’s a bug that will need to be\n>>>> fixed.\n>>\n>>> Thus I am not really on board with this statement as-is.\n>>\n>> Me either.  
There are degrees of ABI compatibility, and we'll choose\n>> the least invasive way, but it's seldom the case that no conceivable\n>> extension will be broken.\n> \n> oracle_fdw has been broken by minor releases several times in the past.\n> This may well be because of weird things that I am doing; still, my\n> experience is that minor releases are not always binary compatible.\n> \n>>> It'd be interesting to see a few examples of actual minor-version-upgrade\n>>> extension breakages, so we can judge what caused them.\n>>\n>> Yes, that could be a fruitful discussion.\n> \n> Digging through my commits brought up 6214e2b2280462cbc3aa1986e350e167651b3905,\n> for one.\n\nI'm not sure I can see how that would have broken oracle_fdw, but in any \ncase it's an interesting example. This patch did not change any structs \nincompatibly, but it changed the semantics of a function without \nchanging the name:\n\n extern void InitResultRelInfo(ResultRelInfo *resultRelInfo,\n Relation resultRelationDesc,\n Index resultRelationIndex,\n- Relation partition_root,\n+ ResultRelInfo *partition_root_rri,\n int instrument_options);\n\nIf an extension calls this function, something would possibly crash if \nit's on the wrong side of the update fence.\n\nThis could possibly be avoided by renaming the symbol in backbranches. \nMaybe something like\n\n#define InitResultRelInfo InitResultRelInfo2\n\nThen you'd get a specific error message when loading the module, rather \nthan a crash.\n\nThis might be something to consider:\n\nno ABI break is better than an explicit ABI break is better than a \nsilent ABI break\n\n(Although this is actually an API break, isn't it?)\n\n\n\n", "msg_date": "Tue, 4 Jun 2024 09:18:28 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 4, 2024, at 03:18, Peter Eisentraut <[email protected]> wrote:\n\n> This could possibly be avoided by renaming the symbol in backbranches. Maybe something like\n> \n> #define InitResultRelInfo InitResultRelInfo2\n> \n> Then you'd get a specific error message when loading the module, rather than a crash.\n\nThat sounds more useful, yes. 
Is that a practice the project would consider adopting?\n\nThere’s also oracle_fdw@d137d15[1], which says:\n\n> An API break in PostgreSQL 10.4 and 9.6.9 makes it impossible\n> to use these versions: the \"extract_actual_join_clauses\" function\n> gained an additional parameter.\n\nThe 10.4 commit is 68fab04, and it does indeed add a new function:\n\n``` patch\n--- a/src/include/optimizer/restrictinfo.h\n+++ b/src/include/optimizer/restrictinfo.h\n@@ -36,6 +36,7 @@ extern List *get_actual_clauses(List *restrictinfo_list);\n extern List *extract_actual_clauses(List *restrictinfo_list,\n bool pseudoconstant);\n extern void extract_actual_join_clauses(List *restrictinfo_list,\n+ Relids joinrelids,\n List **joinquals,\n List **otherquals);\n extern bool join_clause_is_movable_to(RestrictInfo *rinfo, RelOptInfo *baserel);\n```\n\nI wonder if that sort of change could be avoided in backpatches, maybe by adding and using a `extract_actual_join_clauses_compat` function and using that internally instead?\n\nOr, to David C’s point, perhaps it would be better to say there are some categories of APIs that are not subject to any guarantees in minor releases?\n\nBest,\n\nDavid\n\n[1]: https://github.com/laurenz/oracle_fdw/commit/d137d15edca8c67df1e5cccca01f417f4833b028\n[2]: https://github.com/postgres/postgres/commit/68fab04f7c2a07c5308e3d2957198ccd7a80ebc5#diff-bb6fa74cb115e19684092f0938131cd5d99b26fa2d49480f7ea7f28e937a7fb4\n\n\n\n\n", "msg_date": "Mon, 10 Jun 2024 15:05:32 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "Hi,\n\nOn 2024-06-10 15:05:32 -0400, David E. Wheeler wrote:\n> > An API break in PostgreSQL 10.4 and 9.6.9 makes it impossible\n> > to use these versions: the \"extract_actual_join_clauses\" function\n> > gained an additional parameter.\n> \n> The 10.4 commit is 68fab04, and it does indeed add a new function:\n\nThat's 6 years ago, not sure we can really learn that much from that.\n\nAnd it's not like it's actually impossible, #ifdefs aren't great, but they are\nbetter than nothing.\n\n\n> ``` patch\n> --- a/src/include/optimizer/restrictinfo.h\n> +++ b/src/include/optimizer/restrictinfo.h\n> @@ -36,6 +36,7 @@ extern List *get_actual_clauses(List *restrictinfo_list);\n> extern List *extract_actual_clauses(List *restrictinfo_list,\n> bool pseudoconstant);\n> extern void extract_actual_join_clauses(List *restrictinfo_list,\n> + Relids joinrelids,\n> List **joinquals,\n> List **otherquals);\n> extern bool join_clause_is_movable_to(RestrictInfo *rinfo, RelOptInfo *baserel);\n> ```\n> \n> I wonder if that sort of change could be avoided in backpatches, maybe by adding and using a `extract_actual_join_clauses_compat` function and using that internally instead?\n> \n> Or, to David C’s point, perhaps it would be better to say there are some categories of APIs that are not subject to any guarantees in minor releases?\n\nI'm honestly very dubious that this is a good point to introduce a bunch of\nformalism. It's a already a lot of work to maintain them, if we make it even\nharder we'll end up more fixes not being backported, because it's not worth\nthe pain.\n\nTo be blunt, the number of examples raised here doesn't seem to indicate that\nthis is an area where we need to invest additional resources. 
We are already\nseverely constrained as a project by committer bandwidth, there are plenty\nother things that seem more important to focus on.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Jun 2024 12:39:33 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 10, 2024, at 15:39, Andres Freund <[email protected]> wrote:\n\n> That's 6 years ago, not sure we can really learn that much from that.\n> \n> And it's not like it's actually impossible, #ifdefs aren't great, but they are\n> better than nothing.\n\nRight, it’s just that extension authors could use some notification that such a change is coming so they can update their code, if necessary.\n\n>> Or, to David C’s point, perhaps it would be better to say there are some categories of APIs that are not subject to any guarantees in minor releases?\n> \n> I'm honestly very dubious that this is a good point to introduce a bunch of\n> formalism. It's a already a lot of work to maintain them, if we make it even\n> harder we'll end up more fixes not being backported, because it's not worth\n> the pain.\n\nWell it’s a matter of distributing the work. I don’t want to increase anyone’s workload unnecessarily, but as it is stuff like this can be surprising to extension maintainers with some expectation of minor release stability who had no warning of the change. That kind of thing can dissuade some people from deciding to write or maintain extensions, and lead others to recompile and distribute binaries for every single minor release.\n\n> To be blunt, the number of examples raised here doesn't seem to indicate that\n> this is an area where we need to invest additional resources. We are already\n> severely constrained as a project by committer bandwidth, there are plenty\n> other things that seem more important to focus on.\n\nSo my question is, what’s the least onerous thing for committers to commit to doing that we can write down to properly set expectations? That’s where I want to start: can we publish a policy that reflects what committers already adhere to? And is there some way to let people know that an incompatible change is being released? Even if it just starts out in the release notes?\n\nBased on this thread, I’ve drafted the sort of policy I have in mind. Please don’t assume I’m advocating for exactly the wording here! Let’s workshop this until it’s something the committers and core team can agree to. (At that point I’ll turn it into a doc patch) Have a look and let me know what you think.\n\n``` md\n\nABI Policy\n==========\n\nThe PostgreSQL core team maintains two application binary interface (ABI) guarantees: one for major releases and one for minor releases.\n\nMajor Releases\n--------------\n\nApplications that use the PostgreSQL APIs must be compiled for each major release supported by the application. The inclusion of `PG_MODULE_MAGIC` ensures that code compiled for one major version will rejected by other major versions.\n\nFurthermore, new releases may make API changes that require code changes. Use the `PG_VERSION_NUM` constant to adjust code in a backwards compatible way:\n\n``` c\n#if PG_VERSION_NUM >= 160000\n#include \"varatt.h\"\n#endif\n```\n\nPostgreSQL avoids unnecessary API changes in major releases, but usually ships a few necessary API changes, including deprecation, renaming, and argument variation. 
In such cases the incompatible changes will be listed in the Release Notes.\n\nMinor Releases\n--------------\n\nPostgreSQL makes every effort to avoid both API and ABI breaks in minor releases. In general, an application compiled against any minor release will work with any other minor release, past or future.\n\nWhen a change *is* required, PostgreSQL will choose the least invasive way possible, for example by squeezing a new field into padding space or appending it to the end of a struct. This sort of change should not impact dependent applications unless they use `sizeof(the struct)` or create their own instances of such structs --- patterns best avoided.\n\nIn rare cases, however, even such non-invasive changes may be impractical or impossible. In such an event, the change will be documented in the Release Notes, and details on the issue will also be posted to [TBD; mail list? Blog post? News item?].\n\nThe project strongly recommends that developers adopt continuous integration testing at least for the latest minor release all major versions of Postgres they support.\n```\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Tue, 11 Jun 2024 10:55:38 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On 11.06.24 16:55, David E. Wheeler wrote:\n> On Jun 10, 2024, at 15:39, Andres Freund <[email protected]> wrote:\n> \n>> That's 6 years ago, not sure we can really learn that much from that.\n>>\n>> And it's not like it's actually impossible, #ifdefs aren't great, but they are\n>> better than nothing.\n> \n> Right, it’s just that extension authors could use some notification that such a change is coming so they can update their code, if necessary.\n\nI think since around 6 years ago we have been much more vigilant about \navoiding ABI breaks. So if there aren't any more recent examples of \nbreakage, then maybe that was ultimately successful, and the upshot is, \ncontinue to be vigilant at about the same level?\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:43:45 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Wed, 12 Jun 2024 at 14:44, Peter Eisentraut <[email protected]> wrote:\n> I think since around 6 years ago we have been much more vigilant about\n> avoiding ABI breaks. So if there aren't any more recent examples of\n> breakage, then maybe that was ultimately successful, and the upshot is,\n> continue to be vigilant at about the same level?\n\nWhile not strictly an ABI break I guess, the backport of 32d5a4974c81\nbroke building Citus against 13.10 and 14.7[1].\n\n[1]: https://github.com/citusdata/citus/pull/6711\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:58:04 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 12, 2024, at 8:43 AM, Peter Eisentraut <[email protected]> wrote:\n\n>> Right, it’s just that extension authors could use some notification that such a change is coming so they can update their code, if necessary.\n> \n> I think since around 6 years ago we have been much more vigilant about avoiding ABI breaks. So if there aren't any more recent examples of breakage, then maybe that was ultimately successful, and the upshot is, continue to be vigilant at about the same level?\n\nThat sounds great to me. 
I’d like to get it documented, though, so that extension and other third party developers are aware of it, and not just making wild guesses and scaring each other over (perhaps) misconceptions.\n\nD\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:02:05 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 12, 2024, at 8:58 AM, Jelte Fennema-Nio <[email protected]> wrote:\n\n> While not strictly an ABI break I guess, the backport of 32d5a4974c81\n> broke building Citus against 13.10 and 14.7[1].\n> \n> [1]: https://github.com/citusdata/citus/pull/6711\n\nInteresting one. We might want to advise projects to use deferent names if they copy code from the core, use an extension-specific prefix perhaps. That way if it gets backported by the core, as in this example, it won’t break anything, and the extension can choose to switch.\n\nD\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:34:14 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Mon, Jun 3, 2024 at 3:39 PM Tom Lane <[email protected]> wrote:\n> Me either. There are degrees of ABI compatibility\n\nExactly this!\n\nWhat I think would be useful to document is our usual practices e.g.\nadding new struct members at the end of structs, trying to avoid\nchanging public function signatures. If we document promises to\nextension authors, I don't know how much difference that will make:\nwe'll probably end up needing to violate them at some point for one\nreason or another. But if we document what committers should do, then\nwe might do better than we're now, because committers will be more\nlikely to do it right, and extension authors can also read those\ninstructions to understand what our practices are.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:47:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 12, 2024, at 10:47, Robert Haas <[email protected]> wrote:\n\n> What I think would be useful to document is our usual practices e.g.\n> adding new struct members at the end of structs, trying to avoid\n> changing public function signatures. If we document promises to\n> extension authors, I don't know how much difference that will make:\n> we'll probably end up needing to violate them at some point for one\n> reason or another.\n\nI think that’s fine if there is some sort of notification process. The policy I drafted upthread starts with making sure the such a break is mentioned in the release notes.\n\n> But if we document what committers should do, then\n> we might do better than we're now, because committers will be more\n> likely to do it right, and extension authors can also read those\n> instructions to understand what our practices are.\n\nYes, this, thank you!\n\nD\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 11:04:22 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On 2024-06-11 10:55:38 -0400, David E. Wheeler wrote:\n> ABI Policy\n> ==========\n> \n> The PostgreSQL core team maintains two application binary interface (ABI) guarantees: one for major releases and one for minor releases.\n\nI.e. 
for major versions it's \"there is none\"?\n\n> Major Releases\n> --------------\n> \n> Applications that use the PostgreSQL APIs must be compiled for each major release supported by the application. The inclusion of `PG_MODULE_MAGIC` ensures that code compiled for one major version will rejected by other major versions.\n> \n> Furthermore, new releases may make API changes that require code changes. Use the `PG_VERSION_NUM` constant to adjust code in a backwards compatible way:\n> \n> ``` c\n> #if PG_VERSION_NUM >= 160000\n> #include \"varatt.h\"\n> #endif\n> ```\n> \n> PostgreSQL avoids unnecessary API changes in major releases, but usually\n> ships a few necessary API changes, including deprecation, renaming, and\n> argument variation.\n\n\n> In such cases the incompatible changes will be listed in the Release Notes.\n\nI don't think we actually exhaustively list all of them.\n\n\n> Minor Releases\n> --------------\n> \n> PostgreSQL makes every effort to avoid both API and ABI breaks in minor releases. In general, an application compiled against any minor release will work with any other minor release, past or future.\n\ns/every/a reasonable/ or just s/every/an/\n\n\n> When a change *is* required, PostgreSQL will choose the least invasive way\n> possible, for example by squeezing a new field into padding space or\n> appending it to the end of a struct. This sort of change should not impact\n> dependent applications unless they use `sizeof(the struct)` or create their\n> own instances of such structs --- patterns best avoided.\n\nThe padding case doesn't affect sizeof() fwiw.\n\nI think there's too often not an alternative to using sizeof(), potentially\nindirectly (via makeNode() or such. So this sounds a bit too general.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Jun 2024 08:20:32 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "Hi,\n\nOn 2024-06-12 14:58:04 +0200, Jelte Fennema-Nio wrote:\n> On Wed, 12 Jun 2024 at 14:44, Peter Eisentraut <[email protected]> wrote:\n> > I think since around 6 years ago we have been much more vigilant about\n> > avoiding ABI breaks. So if there aren't any more recent examples of\n> > breakage, then maybe that was ultimately successful, and the upshot is,\n> > continue to be vigilant at about the same level?\n> \n> While not strictly an ABI break I guess, the backport of 32d5a4974c81\n> broke building Citus against 13.10 and 14.7[1].\n\nI think that kind of thing is not something we (PG devs) really can do\nanything about. It's also\na) fairly easy thing to fix\nb) fails during compilation\nc) doesn't break ABI afaict\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Jun 2024 08:22:48 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Mon, Jun 3, 2024 at 3:38 PM Tom Lane <[email protected]> wrote:\n> > Thus I am not really on board with this statement as-is.\n>\n> Me either. There are degrees of ABI compatibility, and we'll choose\n> the least invasive way, but it's seldom the case that no conceivable\n> extension will be broken. For example, if we can't squeeze a new\n> field into padding space, we'll typically put it at the end of the\n> struct in existing branches. 
That's okay unless some extension has\n> a dependency on sizeof(the struct), for instance because it's\n> allocating such structs itself.\n\nRight. While there certainly are code bases (mostly C libraries\nwithout a huge surface area) where taking the hardest possible line on\nABI breakage makes sense, and where ABI breakage can be detected by a\nmechanical process, that isn't us. In fact I'd say that Postgres is\njust about the further possible thing from that.\n\nI'm a little surprised that we don't seem to have all that many\nproblems with ABI breakage, though. Although we theoretically have a\nhuge number of APIs that extension authors might choose to use, that\nisn't really true in practical terms. The universe of theoretically\npossible problems is vastly larger than the areas where we see\nproblems in practice. You have to be pragmatic about it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 12 Jun 2024 11:30:06 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Tue, Jun 11, 2024 at 10:55 AM David E. Wheeler <[email protected]> wrote:\n> Right, it’s just that extension authors could use some notification that such a change is coming so they can update their code, if necessary.\n\nIn general our strategy around ABI breaks is to avoid them whenever\npossible. We also make the most conservative assumptions about what a\ntrue ABI break is -- strictly speaking we can never be fully sure\nabout the impact of a theoretical/mechanical ABI break (we can only\nmake well educated guesses). My sense is that we're approaching having\nthe fewest possible real ABI breaks already -- we're already doing the\nbest we can. That doesn't seem like a useful area to focus on.\n\nAs an example, my bugfix commit 714780dc was apparently discussed by\nYurii Rashkovskii during his pgConf.dev talk. I was (and still am)\napproaching 100% certainty that that wasn't a true ABI break.\nDocumenting this somewhere seems rather unappealing. Strictly speaking\nI'm not 100% certain that this is a non-issue, but who benefits from\nhearing my hand-wavy characterisation of why I believe it's a\nnon-issue? You might as well just look at an ABI change report\nyourself.\n\nApproximately 0% of all extensions actually use the struct in\nquestion, and so obviously aren't affected. If anybody is using the\nstruct then it's merely very very likely that they aren't affected.\nBut why trust me here? After all, I can't even imagine why anybody\nwould want to use the struct in question. My hand-wavy speculation\nabout what it would look like if I was wrong about that is inherently\nsuspect, and probably just useless. Is it not?\n\nThat having been said, it would be useful if there was a community web\nresource for this -- something akin to coverage.postgresql.org, but\nwith differential ABI breakage reports. You can see an example report\nhere:\n\nhttps://postgr.es/m/CAH2-Wzm-W6hSn71sUkz0Rem=qDEU7TnFmc7_jG2DjrLFef_WKQ@mail.gmail.com\n\nTheoretically anybody can do this themselves. 
In practice they don't.\nSo something as simple as providing automated reports about ABI\nchanges might well move the needle here.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 12 Jun 2024 11:57:56 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On 12.06.24 16:47, Robert Haas wrote:\n> On Mon, Jun 3, 2024 at 3:39 PM Tom Lane <[email protected]> wrote:\n>> Me either. There are degrees of ABI compatibility\n> \n> Exactly this!\n> \n> What I think would be useful to document is our usual practices e.g.\n> adding new struct members at the end of structs, trying to avoid\n> changing public function signatures. If we document promises to\n> extension authors, I don't know how much difference that will make:\n> we'll probably end up needing to violate them at some point for one\n> reason or another. But if we document what committers should do, then\n> we might do better than we're now, because committers will be more\n> likely to do it right, and extension authors can also read those\n> instructions to understand what our practices are.\n\nFun fact: At the end of src/tools/RELEASE_CHANGES, there is some \nguidance on how to maintain ABI compatibility in *libpq*. That used to \nbe a problem. We have come far since then.\n\nBut yes, a bit of documentation like that (maybe not in that file \nthough) would make sense.\n\n\n", "msg_date": "Thu, 13 Jun 2024 09:49:05 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 12, 2024, at 11:20, Andres Freund <[email protected]> wrote:\n\n>> The PostgreSQL core team maintains two application binary interface (ABI) guarantees: one for major releases and one for minor releases.\n> \n> I.e. for major versions it's \"there is none\"?\n\nIs it? ISTM that there is the intention not to break things that don’t need to be broken, though that doesn’t rule out interface improvements.\n\n>> In such cases the incompatible changes will be listed in the Release Notes.\n> \n> I don't think we actually exhaustively list all of them.\n\nShould they be? I can maybe see the argument not to for major releases. But I’ve also has the experience of a new failure on a major release and having to go find the reason for it and the fix, often requiring the attention of someone on a mailing list who might rather tap the “compatibility changes” sign in the latest change log. :-)\n\n> s/every/a reasonable/ or just s/every/an/\n\n✅\n\n> The padding case doesn't affect sizeof() fwiw.\n\n✅\n\n> I think there's too often not an alternative to using sizeof(), potentially\n> indirectly (via makeNode() or such. So this sounds a bit too general.\n\nIs there some other way to avoid the issue? Or would constructor APIs need to be added to core?\n\n\nUpdated with your suggestions:\n\n\n``` md\n\nABI Policy\n==========\n\nThe PostgreSQL core team maintains two application binary interface (ABI) guarantees: one for major releases and one for minor releases.\n\nMajor Releases\n--------------\n\nApplications that use the PostgreSQL APIs must be compiled for each major release supported by the application. The inclusion of `PG_MODULE_MAGIC` ensures that code compiled for one major version will rejected by other major versions.\n\nFurthermore, new releases may make API changes that require code changes. 
Use the `PG_VERSION_NUM` constant to adjust code in a backwards compatible way:\n\n``` c\n#if PG_VERSION_NUM >= 160000\n#include \"varatt.h\"\n#endif\n```\n\nPostgreSQL avoids unnecessary API changes in major releases, but usually ships a few necessary API changes, including deprecation, renaming, and argument variation. In such cases the incompatible changes will be listed in the Release Notes.\n\nMinor Releases\n--------------\n\nPostgreSQL makes an effort to avoid both API and ABI breaks in minor releases. In general, an application compiled against any minor release will work with any other minor release, past or future.\n\nWhen a change *is* required, PostgreSQL will choose the least invasive way possible, for example by squeezing a new field into padding space or appending it to the end of a struct. This sort of change should not impact dependent applications unless, they use `sizeof(the struct)` on a struct with an appended field, or create their own instances of such structs --- patterns best avoided.\n\nIn rare cases, however, even such non-invasive changes may be impractical or impossible. In such an event, the change will be documented in the Release Notes, and details on the issue will also be posted to [TBD; mail list? Blog post? News item?].\n\nThe project strongly recommends that developers adopt continuous integration testing at least for the latest minor release all major versions of Postgres they support.\n```\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 18:37:29 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 12, 2024, at 11:30, Peter Geoghegan <[email protected]> wrote:\n\n> I'm a little surprised that we don't seem to have all that many\n> problems with ABI breakage, though. Although we theoretically have a\n> huge number of APIs that extension authors might choose to use, that\n> isn't really true in practical terms. The universe of theoretically\n> possible problems is vastly larger than the areas where we see\n> problems in practice. You have to be pragmatic about it.\n\nThings go wrong far less often than one might fear! Given this relative stability, I think it’s reasonable to document what heretofore assumed the policy is so that the fears can largely be put to rest by clear expectations.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 18:39:07 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 12, 2024, at 11:57, Peter Geoghegan <[email protected]> wrote:\n\n> That having been said, it would be useful if there was a community web\n> resource for this -- something akin to coverage.postgresql.org, but\n> with differential ABI breakage reports. You can see an example report\n> here:\n> \n> https://postgr.es/m/CAH2-Wzm-W6hSn71sUkz0Rem=qDEU7TnFmc7_jG2DjrLFef_WKQ@mail.gmail.com\n> \n> Theoretically anybody can do this themselves. In practice they don't.\n> So something as simple as providing automated reports about ABI\n> changes might well move the needle here.\n\nWhat would be required to make such a thing? Maybe it’d make a good Summer of Code project.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 18:40:59 -0400", "msg_from": "\"David E. 
Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On 03/06/2024 21:21, David E. Wheeler wrote:\r\n> On Jun 3, 2024, at 14:58, Andres Freund <[email protected]> wrote:\r\n>\r\n>> Hi,\r\n> Hello Andres.\r\n>\r\n>> Are there notes for the session?\r\n> Yes, but not posted yet. Here’s what Andreas 'ads' Scherbaum sent me for that bit of the conversation:\r\n>\r\n> * Core is focused on core ABI stability\r\n> * David: No \"statement of stability\" in Core\r\n> * David/Jeremy/Tom: coding guidelines, style guidelines\r\n> * useful to have docs in core about what's stable and what's not, what you should compile against or not, and ABI guarantees\r\n> * Abigale: there are hooks, but no overall concept for extensions\r\n> * Tom: Peter Eisentraut is working on tests for extensions stability\r\n> * Jeremy: nothing is preventing people from installing incompatible versions\r\n\r\nThe full \"discussion\" is here:\r\n\r\nhttps://wiki.postgresql.org/wiki/PGConf.dev_2024_Developer_Unconference#Improving_extensions_in_core\r\n\r\nAnd the ABI discussion here:\r\nhttps://wiki.postgresql.org/wiki/PGConf.dev_2024_Extension_Summit#ABI.2FAPI_discussion\r\n\r\n-- \r\n\t\t\t\tAndreas 'ads' Scherbaum\r\nGerman PostgreSQL User Group\r\nEuropean PostgreSQL User Group - Board of Directors\r\nVolunteer Regional Contact, Germany - PostgreSQL Project\r\n\r\n", "msg_date": "Wed, 19 Jun 2024 00:40:52 +0200", "msg_from": "Andreas 'ads' Scherbaum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On 18.06.24 00:37, David E. Wheeler wrote:\n> ABI Policy\n> ==========\n> \n> The PostgreSQL core team maintains two application binary interface (ABI) guarantees: one for major releases and one for minor releases.\n> \n> Major Releases\n> --------------\n> \n> Applications that use the PostgreSQL APIs\n\nThis is probably a bit confusing. This might as well mean client \napplication code against libpq. Better something like \"server plugin \ncode that uses the PostgreSQL server APIs\".\n\n> must be compiled for each major release supported by the application. The inclusion of `PG_MODULE_MAGIC` ensures that code compiled for one major version will rejected by other major versions.\n\nok so far\n\n> Furthermore, new releases may make API changes that require code changes. Use the `PG_VERSION_NUM` constant to adjust code in a backwards compatible way:\n> \n> ``` c\n> #if PG_VERSION_NUM >= 160000\n> #include \"varatt.h\"\n> #endif\n> ```\n\nBut now we're talking about API. That might be subject of another \ndocument or another section in this one, but it seems confusing to mix \nthis with the ABI discussion.\n\n> PostgreSQL avoids unnecessary API changes in major releases, but usually ships a few necessary API changes, including deprecation, renaming, and argument variation.\n\nObviously, as a practical matter, there won't be random pointless \nchanges. But I wouldn't go as far as writing anything down about how \nthese APIs are developed.\n\n> In such cases the incompatible changes will be listed in the Release Notes.\n\nI don't think anyone is signing up to do that.\n\n\n> Minor Releases\n> --------------\n> \n> PostgreSQL makes an effort to avoid both API and ABI breaks in minor releases. 
In general, an application compiled against any minor release will work with any other minor release, past or future.\n> \n> When a change *is* required, PostgreSQL will choose the least invasive way possible, for example by squeezing a new field into padding space or appending it to the end of a struct. This sort of change should not impact dependent applications unless, they use `sizeof(the struct)` on a struct with an appended field, or create their own instances of such structs --- patterns best avoided.\n> \n> In rare cases, however, even such non-invasive changes may be impractical or impossible. In such an event, the change will be documented in the Release Notes, and details on the issue will also be posted to [TBD; mail list? Blog post? News item?].\n\nI think one major problem besides actively avoiding or managing such \nminor-version ABI breaks is some automated detection. Otherwise, this \njust means \"we try, but who knows\".\n\n\n\n", "msg_date": "Wed, 19 Jun 2024 11:41:04 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On 18.06.24 00:40, David E. Wheeler wrote:\n> On Jun 12, 2024, at 11:57, Peter Geoghegan <[email protected]> wrote:\n> \n>> That having been said, it would be useful if there was a community web\n>> resource for this -- something akin to coverage.postgresql.org, but\n>> with differential ABI breakage reports. You can see an example report\n>> here:\n>>\n>> https://postgr.es/m/CAH2-Wzm-W6hSn71sUkz0Rem=qDEU7TnFmc7_jG2DjrLFef_WKQ@mail.gmail.com\n>>\n>> Theoretically anybody can do this themselves. In practice they don't.\n>> So something as simple as providing automated reports about ABI\n>> changes might well move the needle here.\n> \n> What would be required to make such a thing? Maybe it’d make a good Summer of Code project.\n\nThe above thread contains a lengthy discussion about what one could do.\n\n\n", "msg_date": "Wed, 19 Jun 2024 11:42:36 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Mon, Jun 17, 2024 at 6:38 PM David E. Wheeler <[email protected]> wrote:\n> Is it? ISTM that there is the intention not to break things that don’t need to be broken, though that doesn’t rule out interface improvements.\n\nI suppose that it's true that we try to avoid gratuitous breakage, but\nI feel like it would be weird to document that.\n\nSometimes I go to a store and I see a sign that says \"shoplifters will\nbe prosecuted.\" But I have yet to see a store with a sign that says\n\"people who appear to be doing absolutely nothing wrong will not be\nprosecuted.\" If I did see such a sign, I would frankly be a little\nconcerned.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jun 2024 14:51:25 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 19, 2024, at 05:42, Peter Eisentraut <[email protected]> wrote:\n\n>>> https://postgr.es/m/CAH2-Wzm-W6hSn71sUkz0Rem=qDEU7TnFmc7_jG2DjrLFef_WKQ@mail.gmail.com\n>>> \n>>> Theoretically anybody can do this themselves. In practice they don't.\n>>> So something as simple as providing automated reports about ABI\n>>> changes might well move the needle here.\n>> What would be required to make such a thing? 
Maybe it’d make a good Summer of Code project.\n> \n> The above thread contains a lengthy discussion about what one could do.\n\nI somehow missed that GSoC 2024 is already going with contributors. Making a mental note to add an item for 2025.\n\nD\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 15:47:34 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 24, 2024, at 14:51, Robert Haas <[email protected]> wrote:\n\n> I suppose that it's true that we try to avoid gratuitous breakage, but\n> I feel like it would be weird to document that.\n\nI see how that can seem weird to a committer deeply familiar with the development process and how things happen. But people outside the -hackers bubble have very little idea. It’s fair to say it needn’t be a long statement for major versions: a single sentence such as “we try to avoid gratuitous breakage” is a perfectly reasonable framing. But I’d say, in the interest of completeness, it would be useful to document the policy for major release *as well as* minor releases.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 15:50:24 -0400", "msg_from": "David E. Wheeler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 19, 2024, at 05:41, Peter Eisentraut <[email protected]> wrote:\n\n> This is probably a bit confusing. This might as well mean client application code against libpq. Better something like \"server plugin code that uses the PostgreSQL server APIs\".\n\nThat works.\n\n> But now we're talking about API. That might be subject of another document or another section in this one, but it seems confusing to mix this with the ABI discussion.\n\nHrm. They’re super closely-related in my mind, as an extension developer. I need to know both! I guess I’m taking of this policy as what I can expect may be changed (and how to adapt to it) and what won’t.\n\nThat said, I’m fine to remove the API stuff if there’s consensus objecting to it, to be defined in a separate policy (perhaps on the same doc page).\n\n>> PostgreSQL avoids unnecessary API changes in major releases, but usually ships a few necessary API changes, including deprecation, renaming, and argument variation.\n> \n> Obviously, as a practical matter, there won't be random pointless changes. But I wouldn't go as far as writing anything down about how these APIs are developed.\n\nFair enough, was trying to give some idea of the sorts of changes. Don’t have to include them.\n\n>> In such cases the incompatible changes will be listed in the Release Notes.\n> \n> I don't think anyone is signing up to do that.\n\nIt needn’t be comprehensive. Just mention that an ABI or API changed in the release note item. Unless they almost *all* make such changes.\n\n>> Minor Releases\n>> --------------\n\n> I think one major problem besides actively avoiding or managing such minor-version ABI breaks is some automated detection. Otherwise, this just means \"we try, but who knows”.\n\nI think you *do* try, and the fact that there are so few issues means you succeed at that. I’m not advocating for an ABI guarantee here, just a description of the policy committees already follow.\n\nHere’s an update based on all the feedback, framing things more from the perspective of “do I need to recompile this or change my code”. 
Many thanks!\n\n``` md\nABI Policy\n==========\n\nChanges to the the PostgreSQL server APIs may require recompilation of server plugin code that uses them. This policy describes the core team's approach to such changes, and what server API users can expect.\n\nMajor Releases\n--------------\n\nApplications that use server APIs must be compiled for each major release supported by the application. The inclusion of `PG_MODULE_MAGIC` ensures that code compiled for one major version will rejected by other major versions. Developers needing to support multiple versions of PostgreSQL with incompatible APIs should use the `PG_VERSION_NUM` constant to adjust code as appropriate. For example:\n\n``` c\n#if PG_VERSION_NUM >= 160000\n#include \"varatt.h\"\n#endif\n```\n\nThe core team avoids unnecessary breakage, but users of the server APIs should expect and be prepared to make adjustments and recompile for every major release.\n\nMinor Releases\n--------------\n\nPostgreSQL makes an effort to avoid server API and ABI breaks in minor releases. In general, an application compiled against any minor release will work with any other minor release, past or future. In the absence of automated detection of such changes, this is not a guarantee, but history such breaking changes have been extremely rare.\n\nWhen a change *is* required, PostgreSQL will choose the least invasive change possible, for example by squeezing a new field into padding space or appending it to the end of a struct. This sort of change should not impact dependent applications unless they use `sizeof(the struct)` on a struct with an appended field, or create their own instances of such structs --- patterns best avoided.\n\nIn rare cases, however, even such non-invasive changes may be impractical or impossible. In such an event, the change will be documented in the Release Notes, and details on the issue will also be posted to [TBD; mail list? Blog post? News item?].\n\nTo minimize issues and catch changes early, the project strongly recommends that developers adopt continuous integration testing at least for the latest minor release all major versions of Postgres they support.\n```\n\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 16:26:27 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On 24.06.24 22:26, David E. Wheeler wrote:\n>> But now we're talking about API. That might be subject of another document or another section in this one, but it seems confusing to mix this with the ABI discussion.\n> Hrm. They’re super closely-related in my mind, as an extension developer. I need to know both! I guess I’m taking of this policy as what I can expect may be changed (and how to adapt to it) and what won’t.\n> \n> That said, I’m fine to remove the API stuff if there’s consensus objecting to it, to be defined in a separate policy (perhaps on the same doc page).\n\nI took at a stab at this, using some of your text, but discussing API \nand ABI separately.\n\n\n# Server API and ABI guidance\n\nThis section contains guidance to authors of extensions and other\nserver plugins about ABI and API stability in the PostgreSQL server.\n\n## General\n\nThe PostgreSQL server contains several well-delimited APIs for server\nplugins, such as the function manager (fmgr), SPI, and various hooks\nspecifically designed for extensions. These interfaces are carefully\nmanaged for long-term stability and compatibility. 
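\n\nFor example, a C function exposed through the fmgr interface stays\nentirely within these stable entry points (a minimal, purely\nillustrative sketch, not a recommendation about naming or style):\n\n``` c\n#include \"postgres.h\"\n#include \"fmgr.h\"\n\nPG_MODULE_MAGIC;\n\nPG_FUNCTION_INFO_V1(add_one);\n\nDatum\nadd_one(PG_FUNCTION_ARGS)\n{\n    int32   arg = PG_GETARG_INT32(0);\n\n    /* only documented fmgr argument/result macros are used here */\n    PG_RETURN_INT32(arg + 1);\n}\n```\n\nOn the SQL level such a function would then be declared with `CREATE\nFUNCTION ... LANGUAGE C`.\n\n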
However, the\nentire set of global functions and variables in the server effectively\nconstitutes the publicly usable API, but most parts of that were not\ndesigned with extensibility and long-term stability in mind. That\nmeans, while taking advantage of these interfaces is valid, the\nfurther one strays from the well-trodden path, the likelier it will be\nthat one might encounter ABI or API compatibility issues at some\npoint. Extension authors are also encouraged to provide feedback\nabout their requirements, so that over time, as new use patterns\narise, certain interfaces can be consider more stabilized or new\nbetter-designed interfaces for new uses can be added.\n\n## API compatibility\n\n(API = application programming interface, meaning the interface used\nat compile time)\n\n### Major versions\n\nThere is _no_ promise of API compatibility between PostgreSQL major\nversions. That means, extension code might require source code\nchanges to work with multiple major versions. These can usually be\nmanaged with preprocessor conditions like `#if PG_VERSION_NUM >=\n160000`. Sophisticated extensions that use interfaces beyond the\nwell-delimited ones usually require a few such changes for each major\nserver version.\n\n### Minor versions\n\nPostgreSQL makes an effort to avoid server API breaks in minor\nreleases. In general, extension code that compiles and works with\nsome minor release should also compile and work with any other minor\nrelease, past or future.\n\nWhen a change *is* required, this will be carefully managed, taking\nthe requirements of extensions into account. Such changes will be\ncommunicated in the release notes.\n\n## ABI compatibility\n\n(ABI = application binary interface, meaning the interface used at run\ntime)\n\n### Major versions\n\nServers of different major versions have intentionally incompatible\nABIs. That means, extensions that use server APIs must be re-compiled\nfor each major release. The inclusion of `PG_MODULE_MAGIC` ensures\nthat code compiled for one major version will be rejected by other\nmajor versions.\n\n### Minor versions\n\nPostgreSQL makes an effort to avoid server ABI breaks in minor\nreleases. In general, an extension compiled against any minor release\nshould work with any other minor release, past or future.\n\nWhen a change *is* required, PostgreSQL will choose the least invasive\nchange possible, for example by squeezing a new field into padding\nspace or appending it to the end of a struct. These sorts of changes\nshould not impact extensions unless they use very unusual code\npatterns.\n\nIn rare cases, however, even such non-invasive changes may be\nimpractical or impossible. In such an event, the change will be\ncarefully managed, taking the requirements of extensions into account.\nSuch changes will also be documented in the release notes.\n\nNote, however, again that many parts of the server are not designed or\nmaintained as publicly-consumable APIs (and that, in most cases, the\nactual boundary is also not well-defined). 
If urgent needs arise,\nchanges in those parts will naturally be done with less consideration\nfor extension code than changes in well-defined and widely used\ninterfaces.\n\nAlso, in the absence of automated detection of such changes, this is\nnot a guarantee, but historically such breaking changes have been\nextremely rare.\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 13:33:13 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> On 24.06.24 22:26, David E. Wheeler wrote:\n>>> But now we're talking about API. That might be subject of another\n>> document or another section in this one, but it seems confusing to mix\n>> this with the ABI discussion.\n>> Hrm. They’re super closely-related in my mind, as an extension\n>> developer. I need to know both! I guess I’m taking of this policy as\n>> what I can expect may be changed (and how to adapt to it) and what\n>> won’t.\n>> That said, I’m fine to remove the API stuff if there’s consensus\n>> objecting to it, to be defined in a separate policy (perhaps on the\n>> same doc page).\n>\n> I took at a stab at this, using some of your text, but discussing API\n> and ABI separately.\n\nThis looks good to me, just one minor nitpick:\n\n> ### Minor versions\n>\n> PostgreSQL makes an effort to avoid server API breaks in minor\n> releases. In general, extension code that compiles and works with\n> some minor release should also compile and work with any other minor\n> release, past or future.\n\nI think this should explicitly say \"any other minor release within [or\n\"from\" or \"of\"?] the same major version\" (and ditto in the ABI section).\n\n- ilmari\n\n\n", "msg_date": "Tue, 25 Jun 2024 13:13:13 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 25, 2024, at 7:33 AM, Peter Eisentraut <[email protected]> wrote:\n\n> I took at a stab at this, using some of your text, but discussing API and ABI separately.\n\nOh man this is fantastic, thank you! I’d be more than happy to just turn this into a patch. But where should it go? Upthread I assumed xfunc.sgml, and still think that’s a likely candidate. Perhaps I’ll just start there --- unless someone thinks it should go somewhere other than the docs.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 13:55:17 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Tue, 2024-06-25 at 13:55 -0400, David E. Wheeler wrote:\n> On Jun 25, 2024, at 7:33 AM, Peter Eisentraut <[email protected]> wrote:\n> \n> > I took at a stab at this, using some of your text, but discussing API and ABI separately.\n> \n> Oh man this is fantastic, thank you! I’d be more than happy to just turn this into a patch.\n> But where should it go? Upthread I assumed xfunc.sgml, and still think that’s a likely\n> candidate. 
Perhaps I’ll just start there --- unless someone thinks it should go somewhere\n> other than the docs.\n\nPerhaps such information should go somewhere here:\nhttps://www.postgresql.org/support/versioning/\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 26 Jun 2024 10:48:03 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 26, 2024, at 04:48, Laurenz Albe <[email protected]> wrote:\n\n> Perhaps such information should go somewhere here:\n> https://www.postgresql.org/support/versioning/\n\nThis seems deeper and more detailed than what’s there now, but I can certainly imagine wanting to include this policy on the web site. That said, it didn’t occur to me to look under support when trying to find a place to put this; I was looking under Developers, on the principle that extension developers would look there.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Wed, 26 Jun 2024 13:12:29 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 25, 2024, at 13:55, David E. Wheeler <[email protected]> wrote:\n\n> Oh man this is fantastic, thank you! I’d be more than happy to just turn this into a patch. But where should it go? Upthread I assumed xfunc.sgml, and still think that’s a likely candidate. Perhaps I’ll just start there --- unless someone thinks it should go somewhere other than the docs.\n\nOkay here’s a patch that adds the proposed API and ABI guidance to the C Language docs. The content is the same as Peter proposed, with some light copy-editing.\n\nBest,\n\nDavid", "msg_date": "Wed, 26 Jun 2024 15:14:39 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 26, 2024, at 15:14, David E. Wheeler <[email protected]> wrote:\n\n> Okay here’s a patch that adds the proposed API and ABI guidance to the C Language docs. The content is the same as Peter proposed, with some light copy-editing.\n\nCF: https://commitfest.postgresql.org/48/5080/\nPR: https://github.com/theory/postgres/pull/6\n\nD\n\n\n\n", "msg_date": "Wed, 26 Jun 2024 15:20:52 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 26, 2024, at 15:20, David E. Wheeler <[email protected]> wrote:\n\n> CF: https://commitfest.postgresql.org/48/5080/\n> PR: https://github.com/theory/postgres/pull/6\n\nAaaand v2 without the unnecessary formatting of unrelated documentation 🤦🏻‍♂️.\n\nBest,\n\nDavid", "msg_date": "Wed, 26 Jun 2024 15:23:29 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On 6/26/24 12:23 PM, David E. Wheeler wrote:\n> On Jun 26, 2024, at 15:20, David E. 
Wheeler <[email protected]> wrote:\n> \n>> CF: https://commitfest.postgresql.org/48/5080/\n>> PR: https://github.com/theory/postgres/pull/6\n> \n> Aaaand v2 without the unnecessary formatting of unrelated documentation 🤦🏻‍♂️.\n\nMinor nit - misspelled \"considerd\"\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Thu, 27 Jun 2024 14:48:41 -0700", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 27, 2024, at 17:48, Jeremy Schneider <[email protected]> wrote:\n\n> Minor nit - misspelled “considerd\"\n\nThank you, Jeremy. V3 attached.\n\nBest,\n\nDavid", "msg_date": "Thu, 27 Jun 2024 18:07:13 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jun 27, 2024, at 18:07, David E. Wheeler <[email protected]> wrote:\n\n>> Minor nit - misspelled “considerd\"\n> \n> Thank you, Jeremy. V3 attached.\n\nRebase on 5784a49 attached. I presume this topic needs quite a bit of review and consensus from the committers more generally.\n\nBest,\n\nDavid", "msg_date": "Fri, 19 Jul 2024 10:10:29 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On 19.07.24 16:10, David E. Wheeler wrote:\n> On Jun 27, 2024, at 18:07, David E. Wheeler <[email protected]> wrote:\n> \n>>> Minor nit - misspelled “considerd\"\n>>\n>> Thank you, Jeremy. V3 attached.\n> \n> Rebase on 5784a49 attached. I presume this topic needs quite a bit of review and consensus from the committers more generally.\n\nWell, nobody has protested against what we wrote, so I have committed it.\n\n\n\n", "msg_date": "Wed, 31 Jul 2024 11:27:58 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Document ABI Compatibility" }, { "msg_contents": "On Jul 31, 2024, at 05:27, Peter Eisentraut <[email protected]> wrote:\n\n> Well, nobody has protested against what we wrote, so I have committed it.\n\nExcellent, thank you!\n\nD\n\n\n\n", "msg_date": "Wed, 31 Jul 2024 09:50:33 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Document ABI Compatibility" } ]
[ { "msg_contents": "Hi,\n\nCommit 667e65aac3 changed num_dead_tuples and max_dead_tuples columns\nto dead_tuple_bytes and max_dead_tuple_bytes columns, respectively.\nBut at PGConf.dev, I got feedback from multiple people that\nnum_dead_tuples information still can provide meaning insights for\nusers to understand the vacuum progress. One use case is to compare\nnum_dead_tuples to pg_stat_all_tables.n_dead_tup column.\n\nI've attached the patch to revive num_dead_tuples column back to the\npg_stat_progress_vacuum view. This requires to bump catalog version.\nWe're post-beta1 but it should be okay as it's only for PG17.\n\nFeedback is very welcome.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 3 Jun 2024 14:26:37 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Mon, Jun 3, 2024 at 5:27 PM Masahiko Sawada <[email protected]> wrote:\n> I've attached the patch to revive num_dead_tuples column back to the\n> pg_stat_progress_vacuum view. This requires to bump catalog version.\n> We're post-beta1 but it should be okay as it's only for PG17.\n>\n> Feedback is very welcome.\n\nCan we rename this to num_dead_item_ids (or something similar) in\npassing? That way we'll avoid confusing the number of dead tuples\nremoved from the table by VACUUM (which includes dead heap-only\ntuples, but excludes any preexisting LP_DEAD items left behind by\nopportunistic pruning) with the number of dead item identifiers.\n\nAs you know, TIDStore stores TIDs that refer to dead item identifiers\nin the heap, which is often very different to the number of dead\ntuples removed by VACUUM. The VACUUM log output has reported on dead\nitem identifiers separately since 14. This seems like a good\nopportunity to bring pg_stat_progress_vacuum in line.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 Jun 2024 17:49:45 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On 2024-Jun-03, Masahiko Sawada wrote:\n\n> Commit 667e65aac3 changed num_dead_tuples and max_dead_tuples columns\n> to dead_tuple_bytes and max_dead_tuple_bytes columns, respectively.\n> But at PGConf.dev, I got feedback from multiple people that\n> num_dead_tuples information still can provide meaning insights for\n> users to understand the vacuum progress. One use case is to compare\n> num_dead_tuples to pg_stat_all_tables.n_dead_tup column.\n\n+1.\n\n> @@ -2887,7 +2887,9 @@ dead_items_add(LVRelState *vacrel, BlockNumber blkno, OffsetNumber *offsets,\n> \tTidStoreSetBlockOffsets(dead_items, blkno, offsets, num_offsets);\n> \tvacrel->dead_items_info->num_items += num_offsets;\n> \n> -\t/* update the memory usage report */\n> +\t/* update the progress information */\n> +\tpgstat_progress_update_param(PROGRESS_VACUUM_NUM_DEAD_TUPLES,\n> +\t\t\t\t\t\t\t\t vacrel->dead_items_info->num_items);\n> \tpgstat_progress_update_param(PROGRESS_VACUUM_DEAD_TUPLE_BYTES,\n> \t\t\t\t\t\t\t\t TidStoreMemoryUsage(dead_items));\n> }\n\nYou could use pgstat_progress_update_multi_param here.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The important things in the world are problems with society that we don't\nunderstand at all. 
The machines will become more complicated but they won't\nbe more complicated than the societies that run them.\" (Freeman Dyson)\n\n\n", "msg_date": "Tue, 4 Jun 2024 08:35:04 -0700", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "\n\n> On 4 Jun 2024, at 00:26, Masahiko Sawada <[email protected]> wrote:\n\nThank you! Vacuum enhancement is a really good step forward, and this small change would help a lot of observability tools.\n\n\n> On 4 Jun 2024, at 00:49, Peter Geoghegan <[email protected]> wrote:\n> \n> Can we rename this to num_dead_item_ids (or something similar) in\n> passing? \n\nI do not insist, but many tools will have to adapt to this change [0,1]. However, most of tools will have to deal with removed max_dead_tuples anyway [2], so this is not that big problem.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/jalexandre0/pgmetrics/blob/61cb150f6d1e535513a6a07fc0c6d751fbed517e/collector/collect.go#L1289\n[1] https://github.com/DataDog/integrations-core/blob/250553f3b91d25de28add93afeb929e2abf67e53/postgres/datadog_checks/postgres/util.py#L517\n[2] https://github.com/search?q=num_dead_tuples+language%3AGo&type=code\n\n", "msg_date": "Wed, 5 Jun 2024 13:19:41 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Jun 5, 2024 at 7:19 PM Andrey M. Borodin <[email protected]> wrote:\n>\n>\n>\n> > On 4 Jun 2024, at 00:26, Masahiko Sawada <[email protected]> wrote:\n>\n> Thank you! Vacuum enhancement is a really good step forward, and this small change would help a lot of observability tools.\n>\n>\n> > On 4 Jun 2024, at 00:49, Peter Geoghegan <[email protected]> wrote:\n> >\n> > Can we rename this to num_dead_item_ids (or something similar) in\n> > passing?\n>\n> I do not insist, but many tools will have to adapt to this change [0,1]. However, most of tools will have to deal with removed max_dead_tuples anyway [2], so this is not that big problem.\n\nTrue, this incompatibility would not be a big problem.\n\nnum_dead_item_ids seems good to me. I've updated the patch that\nincorporated the comment from Álvaro[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/202406041535.pmyty3ci4pfd%40alvherre.pgsql\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 7 Jun 2024 10:22:50 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Fri, Jun 7, 2024 at 10:22 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 7:19 PM Andrey M. Borodin <[email protected]> wrote:\n> >\n> >\n> >\n> > > On 4 Jun 2024, at 00:26, Masahiko Sawada <[email protected]> wrote:\n> >\n> > Thank you! Vacuum enhancement is a really good step forward, and this small change would help a lot of observability tools.\n> >\n> >\n> > > On 4 Jun 2024, at 00:49, Peter Geoghegan <[email protected]> wrote:\n> > >\n> > > Can we rename this to num_dead_item_ids (or something similar) in\n> > > passing?\n> >\n> > I do not insist, but many tools will have to adapt to this change [0,1]. 
However, most of tools will have to deal with removed max_dead_tuples anyway [2], so this is not that big problem.\n>\n> True, this incompatibility would not be a big problem.\n>\n> num_dead_item_ids seems good to me. I've updated the patch that\n> incorporated the comment from Álvaro[1].\n\nI'm going to push the v2 patch in a few days if there is no further comment.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 22:38:06 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Jun 6, 2024 at 9:23 PM Masahiko Sawada <[email protected]> wrote:\n> num_dead_item_ids seems good to me. I've updated the patch that\n> incorporated the comment from Álvaro[1].\n\nGreat, thank you.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 12 Jun 2024 13:04:59 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Tue, Jun 11, 2024 at 10:38 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Jun 7, 2024 at 10:22 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Jun 5, 2024 at 7:19 PM Andrey M. Borodin <[email protected]> wrote:\n> > >\n> > >\n> > >\n> > > > On 4 Jun 2024, at 00:26, Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > Thank you! Vacuum enhancement is a really good step forward, and this small change would help a lot of observability tools.\n> > >\n> > >\n> > > > On 4 Jun 2024, at 00:49, Peter Geoghegan <[email protected]> wrote:\n> > > >\n> > > > Can we rename this to num_dead_item_ids (or something similar) in\n> > > > passing?\n> > >\n> > > I do not insist, but many tools will have to adapt to this change [0,1]. However, most of tools will have to deal with removed max_dead_tuples anyway [2], so this is not that big problem.\n> >\n> > True, this incompatibility would not be a big problem.\n> >\n> > num_dead_item_ids seems good to me. I've updated the patch that\n> > incorporated the comment from Álvaro[1].\n>\n> I'm going to push the v2 patch in a few days if there is no further comment.\n>\n\nI was about to push the patch but let me confirm just in case: is it\nokay to bump the catversion even after post-beta1? 
This patch\nreintroduces a previously-used column to pg_stat_progress_vacuum so it\nrequires bumping the catversion.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 14 Jun 2024 09:35:18 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "Masahiko Sawada <[email protected]> writes:\n> I was about to push the patch but let me confirm just in case: is it\n> okay to bump the catversion even after post-beta1?\n\nYes, that happens somewhat routinely.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Jun 2024 20:38:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Jun 13, 2024 at 08:38:05PM -0400, Tom Lane wrote:\n> Masahiko Sawada <[email protected]> writes:\n>> I was about to push the patch but let me confirm just in case: is it\n>> okay to bump the catversion even after post-beta1?\n> \n> Yes, that happens somewhat routinely.\n\nUp to RC, even after beta2. This happens routinely every year because\ntweaks are always required for what got committed. And that's OK to\ndo so now.\n--\nMichael", "msg_date": "Fri, 14 Jun 2024 09:41:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Fri, Jun 14, 2024 at 9:41 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jun 13, 2024 at 08:38:05PM -0400, Tom Lane wrote:\n> > Masahiko Sawada <[email protected]> writes:\n> >> I was about to push the patch but let me confirm just in case: is it\n> >> okay to bump the catversion even after post-beta1?\n> >\n> > Yes, that happens somewhat routinely.\n>\n> Up to RC, even after beta2. This happens routinely every year because\n> tweaks are always required for what got committed. And that's OK to\n> do so now.\n\nThank you both for confirmation. I'll push it shortly.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 14 Jun 2024 09:57:27 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Fri, Jun 14, 2024 at 9:57 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Jun 14, 2024 at 9:41 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Thu, Jun 13, 2024 at 08:38:05PM -0400, Tom Lane wrote:\n> > > Masahiko Sawada <[email protected]> writes:\n> > >> I was about to push the patch but let me confirm just in case: is it\n> > >> okay to bump the catversion even after post-beta1?\n> > >\n> > > Yes, that happens somewhat routinely.\n> >\n> > Up to RC, even after beta2. This happens routinely every year because\n> > tweaks are always required for what got committed. And that's OK to\n> > do so now.\n>\n> Thank you both for confirmation. I'll push it shortly.\n>\n\nPushed. 
Thank you for giving feedback and reviewing the patch!\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 14 Jun 2024 10:21:17 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Jun 13, 2024 at 9:22 PM Masahiko Sawada <[email protected]> wrote:\n> On Fri, Jun 14, 2024 at 9:57 AM Masahiko Sawada <[email protected]> wrote:\n> > On Fri, Jun 14, 2024 at 9:41 AM Michael Paquier <[email protected]> wrote:\n> > > On Thu, Jun 13, 2024 at 08:38:05PM -0400, Tom Lane wrote:\n> > > > Masahiko Sawada <[email protected]> writes:\n> > > >> I was about to push the patch but let me confirm just in case: is it\n> > > >> okay to bump the catversion even after post-beta1?\n> > > >\n> > > > Yes, that happens somewhat routinely.\n> > >\n> > > Up to RC, even after beta2. This happens routinely every year because\n> > > tweaks are always required for what got committed. And that's OK to\n> > > do so now.\n> >\n> > Thank you both for confirmation. I'll push it shortly.\n> >\n>\n> Pushed. Thank you for giving feedback and reviewing the patch!\n>\n\nOne minor side effect of this change is the original idea of comparing\npg_stat_progress.num_dead_tuples to pg_stat_all_tables.n_dead_tup\ncolumn becomes less obvious. I presume the release notes for\npg_stat_progress_vacuum will be updated to also include this column\nname change as well, so maybe that's enough for folks to figure things\nout? At least I couldn't find anywhere in the docs where we have\ndescribed the relationship between these columns before. Thoughts?\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Sat, 15 Jun 2024 07:47:32 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Sat, Jun 15, 2024 at 8:47 PM Robert Treat <[email protected]> wrote:\n>\n> On Thu, Jun 13, 2024 at 9:22 PM Masahiko Sawada <[email protected]> wrote:\n> > On Fri, Jun 14, 2024 at 9:57 AM Masahiko Sawada <[email protected]> wrote:\n> > > On Fri, Jun 14, 2024 at 9:41 AM Michael Paquier <[email protected]> wrote:\n> > > > On Thu, Jun 13, 2024 at 08:38:05PM -0400, Tom Lane wrote:\n> > > > > Masahiko Sawada <[email protected]> writes:\n> > > > >> I was about to push the patch but let me confirm just in case: is it\n> > > > >> okay to bump the catversion even after post-beta1?\n> > > > >\n> > > > > Yes, that happens somewhat routinely.\n> > > >\n> > > > Up to RC, even after beta2. This happens routinely every year because\n> > > > tweaks are always required for what got committed. And that's OK to\n> > > > do so now.\n> > >\n> > > Thank you both for confirmation. I'll push it shortly.\n> > >\n> >\n> > Pushed. Thank you for giving feedback and reviewing the patch!\n> >\n>\n> One minor side effect of this change is the original idea of comparing\n> pg_stat_progress.num_dead_tuples to pg_stat_all_tables.n_dead_tup\n> column becomes less obvious. I presume the release notes for\n> pg_stat_progress_vacuum will be updated to also include this column\n> name change as well, so maybe that's enough for folks to figure things\n> out?\n\nThe release note has been updated, and I think it would help users\nunderstand the change.\n\n> At least I couldn't find anywhere in the docs where we have\n> described the relationship between these columns before. 
Thoughts?\n\nIt would be a good idea to improve the documentation, but I think that\nwe cannot simply compare these two numbers since the numbers that\nthese fields count are slightly different. For instance,\npg_stat_all_tables.n_dead_tup includes the number of dead tuples that\nare going to be HOT-pruned.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 19 Jun 2024 09:48:33 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" }, { "msg_contents": "On Tue, Jun 18, 2024 at 8:49 PM Masahiko Sawada <[email protected]> wrote:\n> > At least I couldn't find anywhere in the docs where we have\n> > described the relationship between these columns before. Thoughts?\n>\n> It would be a good idea to improve the documentation, but I think that\n> we cannot simply compare these two numbers since the numbers that\n> these fields count are slightly different. For instance,\n> pg_stat_all_tables.n_dead_tup includes the number of dead tuples that\n> are going to be HOT-pruned.\n\nThis isn't a small difference. It's actually a huge one in many cases:\n\nhttps://www.postgresql.org/message-id/CAH2-WzkkGT2Gt4XauS5eQOQi4mVvL5X49hBTtWccC8DEqeNfKA@mail.gmail.com\n\nPractically speaking they're just two different things, with hardly\nany fixed relationship.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 Jun 2024 21:16:43 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revive num_dead_tuples column of pg_stat_progress_vacuum" } ]
[ { "msg_contents": "In the past, we have discussed various approaches to replicate\nsequences by decoding the sequence changes from WAL. However, we faced\nseveral challenges to achieve the same, some of which are due to the\nnon-transactional nature of sequences. The major ones were: (a)\ncorrectness of the decoding part, some of the problems were discussed\nat [1][2][3] (b) handling of sequences especially adding certain\nsequences automatically (e.g. sequences backing SERIAL/BIGSERIAL\ncolumns) for built-in logical replication is not considered in the\nproposed work [1] (c) there were some performance concerns in not so\nfrequent scenarios [4] (see performance issues), we can probably deal\nwith this by making sequences optional for builtin logical replication\n\nIt could be possible that we can deal with these and any other issues\nwith more work but as the use case for this feature is primarily major\nversion upgrades it is not clear that we want to make such a big\nchange to the code or are there better alternatives to achieve the\nsame.\n\nThis time at pgconf.dev (https://2024.pgconf.dev/), we discussed\nalternative approaches for this work which I would like to summarize.\nThe various methods we discussed are as follows:\n\n1. Provide a tool to copy all the sequences from publisher to\nsubscriber. The major drawback is that users need to perform this as\nan additional step during the upgrade which would be inconvenient and\nprobably not as useful as some built-in mechanism.\n2. Provide a command say Alter Subscription ... Replicate Sequences\n(or something like that) which users can perform before shutdown of\nthe publisher node during upgrade. This will allow copying all the\nsequences from the publisher node to the subscriber node directly.\nSimilar to previous approach, this could also be inconvenient for\nusers.\n3. Replicate published sequences via walsender at the time of shutdown\nor incrementally while decoding checkpoint record. The two ways to\nachieve this are: (a) WAL log a special NOOP record just before\nshutting down checkpointer. Then allow the WALsender to read the\nsequence data and send it to the subscriber while decoding the new\nNOOP record. (b) Similar to the previous idea but instead of WAL\nlogging a new record directly invokes a decoding callback after\nwalsender receives a request to shutdown which will allow pgoutput to\nread and send required sequences. This approach has a drawback that we\nare adding more work at the time of shutdown but note that we already\nwaits for all the WAL records to be decoded and sent before shutting\ndown the walsender during shutdown of the node.\n\nAny other ideas?\n\nI have added the members I remember that were part of the discussion\nin the email. 
Please feel free to correct me if I have misunderstood\nor missed any point we talked about.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/e4145f77-6f37-40e0-a770-aba359c50b93%40enterprisedb.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1Lxt%2B5a9fA-B7FRzfd1vns%3DEwZTF5z9_xO9Ms4wsqD88Q%40mail.gmail.com\n[3] - https://www.postgresql.org/message-id/CAA4eK1KR4%3DyALKP0pOdVkqUwoUqD_v7oU3HzY-w0R_EBvgHL2w%40mail.gmail.com\n[4] - https://www.postgresql.org/message-id/12822961-b7de-9d59-dd27-2e3dc3980c7e%40enterprisedb.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 4 Jun 2024 16:27:03 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Logical Replication of sequences" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <[email protected]> wrote:\n>\n> 3. Replicate published sequences via walsender at the time of shutdown\n> or incrementally while decoding checkpoint record. The two ways to\n> achieve this are: (a) WAL log a special NOOP record just before\n> shutting down checkpointer. Then allow the WALsender to read the\n> sequence data and send it to the subscriber while decoding the new\n> NOOP record. (b) Similar to the previous idea but instead of WAL\n> logging a new record directly invokes a decoding callback after\n> walsender receives a request to shutdown which will allow pgoutput to\n> read and send required sequences. This approach has a drawback that we\n> are adding more work at the time of shutdown but note that we already\n> waits for all the WAL records to be decoded and sent before shutting\n> down the walsender during shutdown of the node.\n\nThanks. IIUC, both of the above approaches decode the sequences during\nonly shutdown. I'm wondering, why not periodically decode and\nreplicate the published sequences so that the decoding at the shutdown\nwill not take that longer? I can imagine a case where there are tens\nof thousands of sequences in a production server, and surely decoding\nand sending them just during the shutdown can take a lot of time\nhampering the overall server uptime.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 4 Jun 2024 16:52:55 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Jun 4, 2024 at 4:53 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <[email protected]> wrote:\n> >\n> > 3. Replicate published sequences via walsender at the time of shutdown\n> > or incrementally while decoding checkpoint record. The two ways to\n> > achieve this are: (a) WAL log a special NOOP record just before\n> > shutting down checkpointer. Then allow the WALsender to read the\n> > sequence data and send it to the subscriber while decoding the new\n> > NOOP record. (b) Similar to the previous idea but instead of WAL\n> > logging a new record directly invokes a decoding callback after\n> > walsender receives a request to shutdown which will allow pgoutput to\n> > read and send required sequences. This approach has a drawback that we\n> > are adding more work at the time of shutdown but note that we already\n> > waits for all the WAL records to be decoded and sent before shutting\n> > down the walsender during shutdown of the node.\n>\n> Thanks. 
IIUC, both of the above approaches decode the sequences during\n> only shutdown. I'm wondering, why not periodically decode and\n> replicate the published sequences so that the decoding at the shutdown\n> will not take that longer?\n>\n\nEven if we decode it periodically (say each time we decode the\ncheckpoint record) then also we need to send the entire set of\nsequences at shutdown. This is because the sequences may have changed\nfrom the last time we sent them.\n\n>\n> I can imagine a case where there are tens\n> of thousands of sequences in a production server, and surely decoding\n> and sending them just during the shutdown can take a lot of time\n> hampering the overall server uptime.\n>\n\nIt is possible but we will send only the sequences that belong to\npublications for which walsender is supposed to send the required\ndata. Now, we can also imagine providing option 2 (Alter Subscription\n... Replicate Sequences) so that users can replicate sequences before\nshutdown and then disable the subscriptions so that there won't be a\ncorresponding walsender.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 4 Jun 2024 17:40:08 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <[email protected]> wrote:\n\n>\n> 3. Replicate published sequences via walsender at the time of shutdown\n> or incrementally while decoding checkpoint record. The two ways to\n> achieve this are: (a) WAL log a special NOOP record just before\n> shutting down checkpointer. Then allow the WALsender to read the\n> sequence data and send it to the subscriber while decoding the new\n> NOOP record. (b) Similar to the previous idea but instead of WAL\n> logging a new record directly invokes a decoding callback after\n> walsender receives a request to shutdown which will allow pgoutput to\n> read and send required sequences. This approach has a drawback that we\n> are adding more work at the time of shutdown but note that we already\n> waits for all the WAL records to be decoded and sent before shutting\n> down the walsender during shutdown of the node.\n>\n> Any other ideas?\n>\n>\nIn case of primary crash the sequence won't get replicated. That is true\neven with the previous approach in case walsender is shut down because of a\ncrash, but it is more serious with this approach. How about periodically\nsending this information?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <[email protected]> wrote:\n3. Replicate published sequences via walsender at the time of shutdown\nor incrementally while decoding checkpoint record. The two ways to\nachieve this are: (a) WAL log a special NOOP record just before\nshutting down checkpointer. Then allow the WALsender to read the\nsequence data and send it to the subscriber while decoding the new\nNOOP record. (b) Similar to the previous idea but instead of WAL\nlogging a new record directly invokes a decoding callback after\nwalsender receives a request to shutdown which will allow pgoutput to\nread and send required sequences. This approach has a drawback that we\nare adding more work at the time of shutdown but note that we already\nwaits for all the WAL records to be decoded and sent before shutting\ndown the walsender during shutdown of the node.\n\nAny other ideas?In case of primary crash the sequence won't get replicated. 
That is true even with the previous approach in case walsender is shut down because of a crash, but it is more serious with this approach. How about periodically sending this information? -- Best Wishes,Ashutosh Bapat", "msg_date": "Tue, 4 Jun 2024 19:39:46 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On 6/4/24 06:57, Amit Kapila wrote:\n> 1. Provide a tool to copy all the sequences from publisher to\n> subscriber. The major drawback is that users need to perform this as\n> an additional step during the upgrade which would be inconvenient and\n> probably not as useful as some built-in mechanism.\n\nAgree, this requires additional steps. Not a preferred approach in my\nopinion. When a large set of sequences are present, it will add\nadditional downtime for upgrade process.\n\n> 2. Provide a command say Alter Subscription ... Replicate Sequences\n> (or something like that) which users can perform before shutdown of\n> the publisher node during upgrade. This will allow copying all the\n> sequences from the publisher node to the subscriber node directly.\n> Similar to previous approach, this could also be inconvenient for\n> users.\n\nThis is similar to option 1 except that it is a SQL command now. Still\nnot a preferred approach in my opinion. When a large set of sequences are\npresent, it will add additional downtime for upgrade process.\n\n\n> 3. Replicate published sequences via walsender at the time of shutdown\n> or incrementally while decoding checkpoint record. The two ways to\n> achieve this are: (a) WAL log a special NOOP record just before\n> shutting down checkpointer. Then allow the WALsender to read the\n> sequence data and send it to the subscriber while decoding the new\n> NOOP record. (b) Similar to the previous idea but instead of WAL\n> logging a new record directly invokes a decoding callback after\n> walsender receives a request to shutdown which will allow pgoutput to\n> read and send required sequences. This approach has a drawback that we\n> are adding more work at the time of shutdown but note that we already\n> waits for all the WAL records to be decoded and sent before shutting\n> down the walsender during shutdown of the node.\n\nAt the time of shutdown a) most logical upgrades don't necessarily call\nfor shutdown b) it will still add to total downtime with large set of\nsequences. Incremental option is better as it will not require a shutdown.\n\nI do see a scenario where sequence of events can lead to loss of sequence\nand generate duplicate sequence values, if subscriber starts consuming\nsequences while publisher is also consuming them. In such cases, subscriber\nshall not be allowed sequence consumption.\n\n\n\n-- \nKind Regards,\nYogesh Sharma\nOpen Source Enthusiast and Advocate\nPostgreSQL Contributors Team @ RDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 4 Jun 2024 11:26:47 -0400", "msg_from": "Yogesh Sharma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Jun 4, 2024 at 7:40 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <[email protected]> wrote:\n>>\n>>\n>> 3. Replicate published sequences via walsender at the time of shutdown\n>> or incrementally while decoding checkpoint record. 
The two ways to\n>> achieve this are: (a) WAL log a special NOOP record just before\n>> shutting down checkpointer. Then allow the WALsender to read the\n>> sequence data and send it to the subscriber while decoding the new\n>> NOOP record. (b) Similar to the previous idea but instead of WAL\n>> logging a new record directly invokes a decoding callback after\n>> walsender receives a request to shutdown which will allow pgoutput to\n>> read and send required sequences. This approach has a drawback that we\n>> are adding more work at the time of shutdown but note that we already\n>> waits for all the WAL records to be decoded and sent before shutting\n>> down the walsender during shutdown of the node.\n>>\n>> Any other ideas?\n>>\n>\n> In case of primary crash the sequence won't get replicated. That is true even with the previous approach in case walsender is shut down because of a crash, but it is more serious with this approach.\n>\n\nRight, but if we just want to support a major version upgrade scenario\nthen this should be fine because upgrades require a clean shutdown.\n\n>\n How about periodically sending this information?\n>\n\nNow, if we want to support some sort of failover then probably this\nwill help. Do you have that use case in mind? If we want to send\nperiodically then we can do it when decoding checkpoint\n(XLOG_CHECKPOINT_ONLINE) or some other periodic WAL record like\nrunning_xacts (XLOG_RUNNING_XACTS).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 5 Jun 2024 08:45:27 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Jun 4, 2024 at 8:56 PM Yogesh Sharma\n<[email protected]> wrote:\n>\n> On 6/4/24 06:57, Amit Kapila wrote:\n>\n> > 2. Provide a command say Alter Subscription ... Replicate Sequences\n> > (or something like that) which users can perform before shutdown of\n> > the publisher node during upgrade. This will allow copying all the\n> > sequences from the publisher node to the subscriber node directly.\n> > Similar to previous approach, this could also be inconvenient for\n> > users.\n>\n> This is similar to option 1 except that it is a SQL command now.\n>\n\nRight, but I would still prefer a command as it provides clear steps\nfor the upgrade. Users need to perform (a) Replicate Sequences for a\nparticular subscription (b) Disable that subscription (c) Perform (a)\nand (b) for all the subscriptions corresponding to the publisher we\nwant to shut down for upgrade.\n\nI agree there are some manual steps involved here but it is advisable\nfor users to ensure that they have received the required data on the\nsubscriber before the upgrade of the publisher node, otherwise, they\nmay not be able to continue replication after the upgrade. For\nexample, see the \"Prepare for publisher upgrades\" step in pg_upgrade\ndocs [1].\n\n>\n> > 3. Replicate published sequences via walsender at the time of shutdown\n> > or incrementally while decoding checkpoint record. The two ways to\n> > achieve this are: (a) WAL log a special NOOP record just before\n> > shutting down checkpointer. Then allow the WALsender to read the\n> > sequence data and send it to the subscriber while decoding the new\n> > NOOP record. (b) Similar to the previous idea but instead of WAL\n> > logging a new record directly invokes a decoding callback after\n> > walsender receives a request to shutdown which will allow pgoutput to\n> > read and send required sequences. 
This approach has a drawback that we\n> > are adding more work at the time of shutdown but note that we already\n> > waits for all the WAL records to be decoded and sent before shutting\n> > down the walsender during shutdown of the node.\n>\n> At the time of shutdown a) most logical upgrades don't necessarily call\n> for shutdown\n>\n\nWon't the major version upgrade expect that the node is down? Refer to\nstep \"Stop both servers\" in [1].\n\n>\n b) it will still add to total downtime with large set of\n> sequences. Incremental option is better as it will not require a shutdown.\n>\n> I do see a scenario where sequence of events can lead to loss of sequence\n> and generate duplicate sequence values, if subscriber starts consuming\n> sequences while publisher is also consuming them. In such cases, subscriber\n> shall not be allowed sequence consumption.\n>\n\nIt would be fine to not allow subscribers to consume sequences that\nare being logically replicated but what about the cases where we\nhaven't sent the latest values of sequences before the shutdown of the\npublisher? In such a case, the publisher would have already consumed\nsome values that wouldn't have been sent to the subscriber and now\nwhen the publisher is down then even if we re-allow the sequence\nvalues to be consumed from the subscriber, it can lead to duplicate\nvalues.\n\n[1] - https://www.postgresql.org/docs/devel/pgupgrade.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 5 Jun 2024 09:13:26 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On 04.06.24 12:57, Amit Kapila wrote:\n> 2. Provide a command say Alter Subscription ... Replicate Sequences\n> (or something like that) which users can perform before shutdown of\n> the publisher node during upgrade. This will allow copying all the\n> sequences from the publisher node to the subscriber node directly.\n> Similar to previous approach, this could also be inconvenient for\n> users.\n\nI would start with this. In any case, you're going to need to write \ncode to collect all the sequence values, send them over some protocol, \napply them on the subscriber. The easiest way to start is to trigger \nthat manually. Then later you can add other ways to trigger it, either \nby timer or around shutdown, or whatever other ideas there might be.\n\n\n", "msg_date": "Wed, 5 Jun 2024 09:21:22 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 5, 2024 at 9:13 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jun 4, 2024 at 8:56 PM Yogesh Sharma\n> <[email protected]> wrote:\n> >\n> > On 6/4/24 06:57, Amit Kapila wrote:\n> >\n> > > 2. Provide a command say Alter Subscription ... Replicate Sequences\n> > > (or something like that) which users can perform before shutdown of\n> > > the publisher node during upgrade. This will allow copying all the\n> > > sequences from the publisher node to the subscriber node directly.\n> > > Similar to previous approach, this could also be inconvenient for\n> > > users.\n> >\n> > This is similar to option 1 except that it is a SQL command now.\n> >\n>\n> Right, but I would still prefer a command as it provides clear steps\n> for the upgrade. 
Users need to perform (a) Replicate Sequences for a\n> particular subscription (b) Disable that subscription (c) Perform (a)\n> and (b) for all the subscriptions corresponding to the publisher we\n> want to shut down for upgrade.\n>\n\nAnother advantage of this approach over just a plain tool to copy all\nsequences before upgrade is that here we can have the facility to copy\njust the required sequences. I mean the set sequences that the user\nhas specified as part of the publication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 5 Jun 2024 14:11:36 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 4, 2024 at 5:40 PM Amit Kapila <[email protected]> wrote:\n>\n> Even if we decode it periodically (say each time we decode the\n> checkpoint record) then also we need to send the entire set of\n> sequences at shutdown. This is because the sequences may have changed\n> from the last time we sent them.\n\nAgree. How about decoding and sending only the sequences that are\nchanged from the last time when they were sent? I know it requires a\nbit of tracking and more work, but all I'm looking for is to reduce\nthe amount of work that walsenders need to do during the shutdown.\n\nHaving said that, I like the idea of letting the user sync the\nsequences via ALTER SUBSCRIPTION command and not weave the logic into\nthe shutdown checkpoint path. As Peter Eisentraut said here\nhttps://www.postgresql.org/message-id/42e5cb35-4aeb-4f58-8091-90619c7c3ecc%40eisentraut.org,\nthis can be a good starting point to get going.\n\n> > I can imagine a case where there are tens\n> > of thousands of sequences in a production server, and surely decoding\n> > and sending them just during the shutdown can take a lot of time\n> > hampering the overall server uptime.\n>\n> It is possible but we will send only the sequences that belong to\n> publications for which walsender is supposed to send the required\n> data.\n\nRight, but what if all the publication tables can have tens of\nthousands of sequences.\n\n> Now, we can also imagine providing option 2 (Alter Subscription\n> ... Replicate Sequences) so that users can replicate sequences before\n> shutdown and then disable the subscriptions so that there won't be a\n> corresponding walsender.\n\nAs stated above, I like this idea to start with.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 5 Jun 2024 15:17:24 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 5, 2024 at 12:51 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 04.06.24 12:57, Amit Kapila wrote:\n> > 2. Provide a command say Alter Subscription ... Replicate Sequences\n> > (or something like that) which users can perform before shutdown of\n> > the publisher node during upgrade. This will allow copying all the\n> > sequences from the publisher node to the subscriber node directly.\n> > Similar to previous approach, this could also be inconvenient for\n> > users.\n>\n> I would start with this. In any case, you're going to need to write\n> code to collect all the sequence values, send them over some protocol,\n> apply them on the subscriber. The easiest way to start is to trigger\n> that manually. 
Then later you can add other ways to trigger it, either\n> by timer or around shutdown, or whatever other ideas there might be.\n>\n\nAgreed. To achieve this, we can allow sequences to be copied during\nthe initial CREATE SUBSCRIPTION command similar to what we do for\ntables. And then later by new/existing command, we re-copy the already\nexisting sequences on the subscriber.\n\nThe options for the new command could be:\nAlter Subscription ... Refresh Sequences\nAlter Subscription ... Replicate Sequences\n\nIn the second option, we need to introduce a new keyword Replicate.\nCan you think of any better option?\n\nIn addition to the above, the command Alter Subscription .. Refresh\nPublication will fetch any missing sequences similar to what it does\nfor tables.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 5 Jun 2024 18:00:38 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 5, 2024 at 8:45 AM Amit Kapila <[email protected]> wrote:\n\n> On Tue, Jun 4, 2024 at 7:40 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <[email protected]>\n> wrote:\n> >>\n> >>\n> >> 3. Replicate published sequences via walsender at the time of shutdown\n> >> or incrementally while decoding checkpoint record. The two ways to\n> >> achieve this are: (a) WAL log a special NOOP record just before\n> >> shutting down checkpointer. Then allow the WALsender to read the\n> >> sequence data and send it to the subscriber while decoding the new\n> >> NOOP record. (b) Similar to the previous idea but instead of WAL\n> >> logging a new record directly invokes a decoding callback after\n> >> walsender receives a request to shutdown which will allow pgoutput to\n> >> read and send required sequences. This approach has a drawback that we\n> >> are adding more work at the time of shutdown but note that we already\n> >> waits for all the WAL records to be decoded and sent before shutting\n> >> down the walsender during shutdown of the node.\n> >>\n> >> Any other ideas?\n> >>\n> >\n> > In case of primary crash the sequence won't get replicated. That is true\n> even with the previous approach in case walsender is shut down because of a\n> crash, but it is more serious with this approach.\n> >\n>\n> Right, but if we just want to support a major version upgrade scenario\n> then this should be fine because upgrades require a clean shutdown.\n>\n> >\n> How about periodically sending this information?\n> >\n>\n> Now, if we want to support some sort of failover then probably this\n> will help. Do you have that use case in mind?\n\n\nRegular failover was a goal for supporting logical replication of\nsequences. That might be more common than major upgrade scenario.\n\n\n> If we want to send\n> periodically then we can do it when decoding checkpoint\n> (XLOG_CHECKPOINT_ONLINE) or some other periodic WAL record like\n> running_xacts (XLOG_RUNNING_XACTS).\n>\n>\nYeah. I am thinking along those lines.\n\nIt must be noted, however, that none of those optional make sure that the\nreplicated sequence's states are consistent with the replicated object\nstate which use those sequences. E.g. table t1 uses sequence s1. By last\nsequence replication, as of time T1, let's say t1 had consumed values upto\nvl1 from s1. But later, by time T2, it consumed values upto vl2 which were\nnot replicated but the changes to t1 by T2 were replicated. 
If failover\nhappens at that point, INSERTs on t1 would fail because of duplicate keys\n(values between vl1 and vl2). Previous attempt to support logical sequence\nreplication solved this problem by replicating a future state of sequences\n(current value +/- log count). Similarly, if the sequence was ALTERed\nbetween T1 and T2, the state of sequence on replica would be inconsistent\nwith the state of t1. Failing over at this stage, might end t1 in an\ninconsistent state.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Wed, Jun 5, 2024 at 8:45 AM Amit Kapila <[email protected]> wrote:On Tue, Jun 4, 2024 at 7:40 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <[email protected]> wrote:\n>>\n>>\n>> 3. Replicate published sequences via walsender at the time of shutdown\n>> or incrementally while decoding checkpoint record. The two ways to\n>> achieve this are: (a) WAL log a special NOOP record just before\n>> shutting down checkpointer. Then allow the WALsender to read the\n>> sequence data and send it to the subscriber while decoding the new\n>> NOOP record. (b) Similar to the previous idea but instead of WAL\n>> logging a new record directly invokes a decoding callback after\n>> walsender receives a request to shutdown which will allow pgoutput to\n>> read and send required sequences. This approach has a drawback that we\n>> are adding more work at the time of shutdown but note that we already\n>> waits for all the WAL records to be decoded and sent before shutting\n>> down the walsender during shutdown of the node.\n>>\n>> Any other ideas?\n>>\n>\n> In case of primary crash the sequence won't get replicated. That is true even with the previous approach in case walsender is shut down because of a crash, but it is more serious with this approach.\n>\n\nRight, but if we just want to support a major version upgrade scenario\nthen this should be fine because upgrades require a clean shutdown.\n\n>\n How about periodically sending this information?\n>\n\nNow, if we want to support some sort of failover then probably this\nwill help. Do you have that use case in mind?Regular failover was a goal for supporting logical replication of sequences. That might be more common than major upgrade scenario. If we want to send\nperiodically then we can do it when decoding checkpoint\n(XLOG_CHECKPOINT_ONLINE) or some other periodic WAL record like\nrunning_xacts (XLOG_RUNNING_XACTS).\nYeah. I am thinking along those lines.It must be noted, however, that none of those optional make sure that the replicated sequence's states are consistent with the replicated object state which use those sequences. E.g. table t1 uses sequence s1. By last sequence replication, as of time T1, let's say t1 had consumed values upto vl1 from s1. But later, by time T2, it consumed values upto vl2 which were not replicated but the changes to t1 by T2 were replicated. If failover happens at that point, INSERTs on t1 would fail because of duplicate keys (values between vl1 and vl2). Previous attempt to support logical sequence replication solved this problem by replicating a future state of sequences (current value +/- log count). Similarly, if the sequence was ALTERed between T1 and T2, the state of sequence on replica would be inconsistent with the state of t1. 
Failing over at this stage, might end t1 in an inconsistent state.-- Best Wishes,Ashutosh Bapat", "msg_date": "Wed, 5 Jun 2024 18:01:15 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 5, 2024 at 6:01 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 8:45 AM Amit Kapila <[email protected]> wrote:\n>>\n>> How about periodically sending this information?\n>> >\n>>\n>> Now, if we want to support some sort of failover then probably this\n>> will help. Do you have that use case in mind?\n>\n>\n> Regular failover was a goal for supporting logical replication of sequences. That might be more common than major upgrade scenario.\n>\n\nWe can't support regular failovers to subscribers unless we can\nreplicate/copy slots because the existing nodes connected to the\ncurrent publisher/primary would expect that. It should be primarily\nuseful for major version upgrades at this stage.\n\n>>\n>> If we want to send\n>> periodically then we can do it when decoding checkpoint\n>> (XLOG_CHECKPOINT_ONLINE) or some other periodic WAL record like\n>> running_xacts (XLOG_RUNNING_XACTS).\n>>\n>\n> Yeah. I am thinking along those lines.\n>\n> It must be noted, however, that none of those optional make sure that the replicated sequence's states are consistent with the replicated object state which use those sequences.\n>\n\nRight, I feel as others are advocating, it seems better to support it\nmanually via command and then later we can extend it to do at shutdown\nor at some regular intervals. If we do that then we should be able to\nsupport major version upgrades and planned switchover.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Jun 2024 09:22:26 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 5, 2024 at 12:43 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jun 4, 2024 at 8:56 PM Yogesh Sharma\n> <[email protected]> wrote:\n> >\n> > On 6/4/24 06:57, Amit Kapila wrote:\n> >\n> > > 2. Provide a command say Alter Subscription ... Replicate Sequences\n> > > (or something like that) which users can perform before shutdown of\n> > > the publisher node during upgrade. This will allow copying all the\n> > > sequences from the publisher node to the subscriber node directly.\n> > > Similar to previous approach, this could also be inconvenient for\n> > > users.\n> >\n> > This is similar to option 1 except that it is a SQL command now.\n> >\n>\n> Right, but I would still prefer a command as it provides clear steps\n> for the upgrade. Users need to perform (a) Replicate Sequences for a\n> particular subscription (b) Disable that subscription (c) Perform (a)\n> and (b) for all the subscriptions corresponding to the publisher we\n> want to shut down for upgrade.\n>\n> I agree there are some manual steps involved here but it is advisable\n> for users to ensure that they have received the required data on the\n> subscriber before the upgrade of the publisher node, otherwise, they\n> may not be able to continue replication after the upgrade. For\n> example, see the \"Prepare for publisher upgrades\" step in pg_upgrade\n> docs [1].\n>\n> >\n> > > 3. Replicate published sequences via walsender at the time of shutdown\n> > > or incrementally while decoding checkpoint record. 
The two ways to\n> > > achieve this are: (a) WAL log a special NOOP record just before\n> > > shutting down checkpointer. Then allow the WALsender to read the\n> > > sequence data and send it to the subscriber while decoding the new\n> > > NOOP record. (b) Similar to the previous idea but instead of WAL\n> > > logging a new record directly invokes a decoding callback after\n> > > walsender receives a request to shutdown which will allow pgoutput to\n> > > read and send required sequences. This approach has a drawback that we\n> > > are adding more work at the time of shutdown but note that we already\n> > > waits for all the WAL records to be decoded and sent before shutting\n> > > down the walsender during shutdown of the node.\n> >\n> > At the time of shutdown a) most logical upgrades don't necessarily call\n> > for shutdown\n> >\n>\n> Won't the major version upgrade expect that the node is down? Refer to\n> step \"Stop both servers\" in [1].\n\nI think the idea is that the publisher is the old version and the\nsubscriber is the new version, and changes generated on the publisher\nare replicated to the subscriber via logical replication. And at some\npoint, we change the application (or a router) settings so that no\nmore transactions come to the publisher, do the last upgrade\npreparation work (e.g. copying the latest sequence values if\nrequried), and then change the application so that new transactions\ncome to the subscriber.\n\nI remember the blog post about Knock doing a similar process to\nupgrade the clusters with minimal downtime[1].\n\nRegards,\n\n[1] https://knock.app/blog/zero-downtime-postgres-upgrades\n\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 13:01:42 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 5, 2024 at 3:17 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Jun 4, 2024 at 5:40 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Even if we decode it periodically (say each time we decode the\n> > checkpoint record) then also we need to send the entire set of\n> > sequences at shutdown. This is because the sequences may have changed\n> > from the last time we sent them.\n>\n> Agree. How about decoding and sending only the sequences that are\n> changed from the last time when they were sent? I know it requires a\n> bit of tracking and more work, but all I'm looking for is to reduce\n> the amount of work that walsenders need to do during the shutdown.\n>\n\nI see your point but going towards tracking the changed sequences\nsounds like moving towards what we do for incremental backups unless\nwe can invent some other smart way.\n\n> Having said that, I like the idea of letting the user sync the\n> sequences via ALTER SUBSCRIPTION command and not weave the logic into\n> the shutdown checkpoint path. 
As Peter Eisentraut said here\n> https://www.postgresql.org/message-id/42e5cb35-4aeb-4f58-8091-90619c7c3ecc%40eisentraut.org,\n> this can be a good starting point to get going.\n>\n\nAgreed.\n\n> > > I can imagine a case where there are tens\n> > > of thousands of sequences in a production server, and surely decoding\n> > > and sending them just during the shutdown can take a lot of time\n> > > hampering the overall server uptime.\n> >\n> > It is possible but we will send only the sequences that belong to\n> > publications for which walsender is supposed to send the required\n> > data.\n>\n> Right, but what if all the publication tables can have tens of\n> thousands of sequences.\n>\n\nIn such cases we have no option but to send all the sequences.\n\n> > Now, we can also imagine providing option 2 (Alter Subscription\n> > ... Replicate Sequences) so that users can replicate sequences before\n> > shutdown and then disable the subscriptions so that there won't be a\n> > corresponding walsender.\n>\n> As stated above, I like this idea to start with.\n>\n\n+1.\n\n\n--\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Jun 2024 09:34:13 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 12:51 PM Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 04.06.24 12:57, Amit Kapila wrote:\n> > > 2. Provide a command say Alter Subscription ... Replicate Sequences\n> > > (or something like that) which users can perform before shutdown of\n> > > the publisher node during upgrade. This will allow copying all the\n> > > sequences from the publisher node to the subscriber node directly.\n> > > Similar to previous approach, this could also be inconvenient for\n> > > users.\n> >\n> > I would start with this. In any case, you're going to need to write\n> > code to collect all the sequence values, send them over some protocol,\n> > apply them on the subscriber. The easiest way to start is to trigger\n> > that manually. Then later you can add other ways to trigger it, either\n> > by timer or around shutdown, or whatever other ideas there might be.\n> >\n>\n> Agreed.\n\n+1\n\n> To achieve this, we can allow sequences to be copied during\n> the initial CREATE SUBSCRIPTION command similar to what we do for\n> tables. And then later by new/existing command, we re-copy the already\n> existing sequences on the subscriber.\n>\n> The options for the new command could be:\n> Alter Subscription ... Refresh Sequences\n> Alter Subscription ... Replicate Sequences\n>\n> In the second option, we need to introduce a new keyword Replicate.\n> Can you think of any better option?\n\nAnother idea is doing that using options. For example,\n\nFor initial sequences synchronization:\n\nCREATE SUBSCRIPTION ... WITH (copy_sequence = true);\n\nFor re-copy (or update) sequences:\n\nALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);\n\n>\n> In addition to the above, the command Alter Subscription .. 
Refresh\n> Publication will fetch any missing sequences similar to what it does\n> for tables.\n\nOn the subscriber side, do we need to track which sequences are\ncreated via CREATE/ALTER SUBSCRIPTION?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 14:39:45 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 6, 2024 at 9:32 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 12:43 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jun 4, 2024 at 8:56 PM Yogesh Sharma\n> > <[email protected]> wrote:\n> > >\n> > > On 6/4/24 06:57, Amit Kapila wrote:\n> > >\n> > > > 2. Provide a command say Alter Subscription ... Replicate Sequences\n> > > > (or something like that) which users can perform before shutdown of\n> > > > the publisher node during upgrade. This will allow copying all the\n> > > > sequences from the publisher node to the subscriber node directly.\n> > > > Similar to previous approach, this could also be inconvenient for\n> > > > users.\n> > >\n> > > This is similar to option 1 except that it is a SQL command now.\n> > >\n> >\n> > Right, but I would still prefer a command as it provides clear steps\n> > for the upgrade. Users need to perform (a) Replicate Sequences for a\n> > particular subscription (b) Disable that subscription (c) Perform (a)\n> > and (b) for all the subscriptions corresponding to the publisher we\n> > want to shut down for upgrade.\n> >\n> > I agree there are some manual steps involved here but it is advisable\n> > for users to ensure that they have received the required data on the\n> > subscriber before the upgrade of the publisher node, otherwise, they\n> > may not be able to continue replication after the upgrade. For\n> > example, see the \"Prepare for publisher upgrades\" step in pg_upgrade\n> > docs [1].\n> >\n> > >\n> > > > 3. Replicate published sequences via walsender at the time of shutdown\n> > > > or incrementally while decoding checkpoint record. The two ways to\n> > > > achieve this are: (a) WAL log a special NOOP record just before\n> > > > shutting down checkpointer. Then allow the WALsender to read the\n> > > > sequence data and send it to the subscriber while decoding the new\n> > > > NOOP record. (b) Similar to the previous idea but instead of WAL\n> > > > logging a new record directly invokes a decoding callback after\n> > > > walsender receives a request to shutdown which will allow pgoutput to\n> > > > read and send required sequences. This approach has a drawback that we\n> > > > are adding more work at the time of shutdown but note that we already\n> > > > waits for all the WAL records to be decoded and sent before shutting\n> > > > down the walsender during shutdown of the node.\n> > >\n> > > At the time of shutdown a) most logical upgrades don't necessarily call\n> > > for shutdown\n> > >\n> >\n> > Won't the major version upgrade expect that the node is down? Refer to\n> > step \"Stop both servers\" in [1].\n>\n> I think the idea is that the publisher is the old version and the\n> subscriber is the new version, and changes generated on the publisher\n> are replicated to the subscriber via logical replication. And at some\n> point, we change the application (or a router) settings so that no\n> more transactions come to the publisher, do the last upgrade\n> preparation work (e.g. 
copying the latest sequence values if\n> requried), and then change the application so that new transactions\n> come to the subscriber.\n>\n\nOkay, thanks for sharing the exact steps. If one has to follow that\npath then sending incrementally (at checkpoint WAL or other times)\nwon't work because we want to ensure that the sequences are up-to-date\nbefore the application starts using the new database. To do that in a\nbullet-proof way, one has to copy/replicate sequences during the\nrequests to the new database are paused (Reference from the blog you\nshared: For the first second after flipping the flag, our application\nartificially paused any new database requests for one second.).\nCurrently, they are using some guesswork to replicate sequences that\nrequire manual verification and more manual work for each sequence.\nThe new command (Alter Subscription ... Replicate Sequence) should\nease their procedure and can do things where they would require no or\nvery less verification.\n\n> I remember the blog post about Knock doing a similar process to\n> upgrade the clusters with minimal downtime[1].\n>\n\nThanks for sharing the blog post.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Jun 2024 12:17:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 6, 2024 at 11:10 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <[email protected]> wrote:\n> >\n>\n> > To achieve this, we can allow sequences to be copied during\n> > the initial CREATE SUBSCRIPTION command similar to what we do for\n> > tables. And then later by new/existing command, we re-copy the already\n> > existing sequences on the subscriber.\n> >\n> > The options for the new command could be:\n> > Alter Subscription ... Refresh Sequences\n> > Alter Subscription ... Replicate Sequences\n> >\n> > In the second option, we need to introduce a new keyword Replicate.\n> > Can you think of any better option?\n>\n> Another idea is doing that using options. For example,\n>\n> For initial sequences synchronization:\n>\n> CREATE SUBSCRIPTION ... WITH (copy_sequence = true);\n>\n\nHow will it interact with the existing copy_data option? So copy_data\nwill become equivalent to copy_table_data, right?\n\n> For re-copy (or update) sequences:\n>\n> ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);\n>\n\nSimilar to the previous point it can be slightly confusing w.r.t\ncopy_data. And would copy_sequence here mean that it would copy\nsequence values of both pre-existing and newly added sequences, if so,\nthat would make it behave differently than copy_data? The other\npossibility in this direction would be to introduce an option like\nreplicate_all_sequences/copy_all_sequences which indicates a copy of\nboth pre-existing and new sequences, if any.\n\nIf we want to go in the direction of having an option such as\ncopy_(all)_sequences then do you think specifying that copy_data is\njust for tables in the docs would be sufficient? I am afraid that it\nwould be confusing for users.\n\n> >\n> > In addition to the above, the command Alter Subscription .. 
Refresh\n> > Publication will fetch any missing sequences similar to what it does\n> > for tables.\n>\n> On the subscriber side, do we need to track which sequences are\n> created via CREATE/ALTER SUBSCRIPTION?\n>\n\nI think so unless we find some other way to know at refresh\npublication time which all new sequences need to be part of the\nsubscription. What should be the behavior w.r.t sequences when the\nuser performs ALTER SUBSCRIPTION ... REFRESH PUBLICATION? I was\nthinking similar to tables, it should fetch any missing sequence\ninformation from the publisher.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Jun 2024 15:10:06 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 6, 2024 at 9:22 AM Amit Kapila <[email protected]> wrote:\n\n> On Wed, Jun 5, 2024 at 6:01 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Wed, Jun 5, 2024 at 8:45 AM Amit Kapila <[email protected]>\n> wrote:\n> >>\n> >> How about periodically sending this information?\n> >> >\n> >>\n> >> Now, if we want to support some sort of failover then probably this\n> >> will help. Do you have that use case in mind?\n> >\n> >\n> > Regular failover was a goal for supporting logical replication of\n> sequences. That might be more common than major upgrade scenario.\n> >\n>\n> We can't support regular failovers to subscribers unless we can\n> replicate/copy slots because the existing nodes connected to the\n> current publisher/primary would expect that. It should be primarily\n> useful for major version upgrades at this stage.\n>\n\nWe don't want to design it in a way that requires major rework when we are\nable to copy slots and then support regular failovers. That's when the\nconsistency between a sequence and the table using it would be a must. So\nit's better that we take that into consideration now.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Jun 6, 2024 at 9:22 AM Amit Kapila <[email protected]> wrote:On Wed, Jun 5, 2024 at 6:01 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 8:45 AM Amit Kapila <[email protected]> wrote:\n>>\n>>  How about periodically sending this information?\n>> >\n>>\n>> Now, if we want to support some sort of failover then probably this\n>> will help. Do you have that use case in mind?\n>\n>\n> Regular failover was a goal for supporting logical replication of sequences. That might be more common than major upgrade scenario.\n>\n\nWe can't support regular failovers to subscribers unless we can\nreplicate/copy slots because the existing nodes connected to the\ncurrent publisher/primary would expect that. It should be primarily\nuseful for major version upgrades at this stage.We don't want to design it in a way that requires major rework when we are able to copy slots and then support regular failovers. That's when the consistency between a sequence and the table using it would be a must. 
So it's better that we take that into consideration now.-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 6 Jun 2024 15:43:53 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 6, 2024 at 3:44 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Jun 6, 2024 at 9:22 AM Amit Kapila <[email protected]> wrote:\n>>\n>> On Wed, Jun 5, 2024 at 6:01 PM Ashutosh Bapat\n>> <[email protected]> wrote:\n>> >\n>> > On Wed, Jun 5, 2024 at 8:45 AM Amit Kapila <[email protected]> wrote:\n>> >>\n>> >> How about periodically sending this information?\n>> >> >\n>> >>\n>> >> Now, if we want to support some sort of failover then probably this\n>> >> will help. Do you have that use case in mind?\n>> >\n>> >\n>> > Regular failover was a goal for supporting logical replication of sequences. That might be more common than major upgrade scenario.\n>> >\n>>\n>> We can't support regular failovers to subscribers unless we can\n>> replicate/copy slots because the existing nodes connected to the\n>> current publisher/primary would expect that. It should be primarily\n>> useful for major version upgrades at this stage.\n>\n>\n> We don't want to design it in a way that requires major rework when we are able to copy slots and then support regular failover.\n>\n\nI don't think we can just copy slots like we do for standbys. The\nslots would require WAL locations to continue, so not sure if we can\nmake it work for failover for subscribers.\n\n>\n That's when the consistency between a sequence and the table using it\nwould be a must. So it's better that we take that into consideration\nnow.\n>\n\nWith the ideas being discussed here, I could only see the use case of\na major version upgrade or planned switchover to work. If we come up\nwith any other agreeable way that is better than this then we can\nconsider the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Jun 2024 16:30:51 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 6, 2024 at 9:34 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 3:17 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Tue, Jun 4, 2024 at 5:40 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > Even if we decode it periodically (say each time we decode the\n> > > checkpoint record) then also we need to send the entire set of\n> > > sequences at shutdown. This is because the sequences may have changed\n> > > from the last time we sent them.\n> >\n> > Agree. How about decoding and sending only the sequences that are\n> > changed from the last time when they were sent? I know it requires a\n> > bit of tracking and more work, but all I'm looking for is to reduce\n> > the amount of work that walsenders need to do during the shutdown.\n> >\n>\n> I see your point but going towards tracking the changed sequences\n> sounds like moving towards what we do for incremental backups unless\n> we can invent some other smart way.\n\nYes, we would need an entirely new infrastructure to track the\nsequence change since the last sync. 
We can only determine this from\nWAL, and relying on it would somehow bring us back to the approach we\nwere trying to achieve with logical decoding of sequences patch.\n\n> > Having said that, I like the idea of letting the user sync the\n> > sequences via ALTER SUBSCRIPTION command and not weave the logic into\n> > the shutdown checkpoint path. As Peter Eisentraut said here\n> > https://www.postgresql.org/message-id/42e5cb35-4aeb-4f58-8091-90619c7c3ecc%40eisentraut.org,\n> > this can be a good starting point to get going.\n> >\n>\n> Agreed.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 18:02:16 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 6, 2024 at 6:40 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jun 6, 2024 at 11:10 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <[email protected]> wrote:\n> > >\n> >\n> > > To achieve this, we can allow sequences to be copied during\n> > > the initial CREATE SUBSCRIPTION command similar to what we do for\n> > > tables. And then later by new/existing command, we re-copy the already\n> > > existing sequences on the subscriber.\n> > >\n> > > The options for the new command could be:\n> > > Alter Subscription ... Refresh Sequences\n> > > Alter Subscription ... Replicate Sequences\n> > >\n> > > In the second option, we need to introduce a new keyword Replicate.\n> > > Can you think of any better option?\n> >\n> > Another idea is doing that using options. For example,\n> >\n> > For initial sequences synchronization:\n> >\n> > CREATE SUBSCRIPTION ... WITH (copy_sequence = true);\n> >\n>\n> How will it interact with the existing copy_data option? So copy_data\n> will become equivalent to copy_table_data, right?\n\nRight.\n\n>\n> > For re-copy (or update) sequences:\n> >\n> > ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);\n> >\n>\n> Similar to the previous point it can be slightly confusing w.r.t\n> copy_data. And would copy_sequence here mean that it would copy\n> sequence values of both pre-existing and newly added sequences, if so,\n> that would make it behave differently than copy_data? The other\n> possibility in this direction would be to introduce an option like\n> replicate_all_sequences/copy_all_sequences which indicates a copy of\n> both pre-existing and new sequences, if any.\n\nCopying sequence data works differently than replicating table data\n(initial data copy and logical replication). So I thought the\ncopy_sequence option (or whatever better name) always does both\nupdating pre-existing sequences and adding new sequences. REFRESH\nPUBLICATION updates the tables to be subscribed, so we also update or\nadd sequences associated to these tables.\n\n>\n> If we want to go in the direction of having an option such as\n> copy_(all)_sequences then do you think specifying that copy_data is\n> just for tables in the docs would be sufficient? I am afraid that it\n> would be confusing for users.\n\nI see your point. But I guess it would not be very problematic as it\ndoesn't break the current behavior and copy_(all)_sequences is\nprimarily for upgrade use cases.\n\n>\n> > >\n> > > In addition to the above, the command Alter Subscription .. 
Refresh\n> > > Publication will fetch any missing sequences similar to what it does\n> > > for tables.\n> >\n> > On the subscriber side, do we need to track which sequences are\n> > created via CREATE/ALTER SUBSCRIPTION?\n> >\n>\n> I think so unless we find some other way to know at refresh\n> publication time which all new sequences need to be part of the\n> subscription. What should be the behavior w.r.t sequences when the\n> user performs ALTER SUBSCRIPTION ... REFRESH PUBLICATION? I was\n> thinking similar to tables, it should fetch any missing sequence\n> information from the publisher.\n\nIt seems to make sense to me. But I have one question: do we want to\nsupport replicating sequences that are not associated with any tables?\nif yes, what if we refresh two different subscriptions that subscribe\nto different tables on the same database? On the other hand, if no\n(i.e. replicating only sequences owned by tables), can we know which\nsequences to replicate by checking the subscribed tables?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 11:25:17 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, Jun 7, 2024 at 7:55 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Jun 6, 2024 at 6:40 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Jun 6, 2024 at 11:10 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > >\n> > > > To achieve this, we can allow sequences to be copied during\n> > > > the initial CREATE SUBSCRIPTION command similar to what we do for\n> > > > tables. And then later by new/existing command, we re-copy the already\n> > > > existing sequences on the subscriber.\n> > > >\n> > > > The options for the new command could be:\n> > > > Alter Subscription ... Refresh Sequences\n> > > > Alter Subscription ... Replicate Sequences\n> > > >\n> > > > In the second option, we need to introduce a new keyword Replicate.\n> > > > Can you think of any better option?\n> > >\n> > > Another idea is doing that using options. For example,\n> > >\n> > > For initial sequences synchronization:\n> > >\n> > > CREATE SUBSCRIPTION ... WITH (copy_sequence = true);\n> > >\n> >\n> > How will it interact with the existing copy_data option? So copy_data\n> > will become equivalent to copy_table_data, right?\n>\n> Right.\n>\n> >\n> > > For re-copy (or update) sequences:\n> > >\n> > > ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);\n> > >\n> >\n> > Similar to the previous point it can be slightly confusing w.r.t\n> > copy_data. And would copy_sequence here mean that it would copy\n> > sequence values of both pre-existing and newly added sequences, if so,\n> > that would make it behave differently than copy_data? The other\n> > possibility in this direction would be to introduce an option like\n> > replicate_all_sequences/copy_all_sequences which indicates a copy of\n> > both pre-existing and new sequences, if any.\n>\n> Copying sequence data works differently than replicating table data\n> (initial data copy and logical replication). So I thought the\n> copy_sequence option (or whatever better name) always does both\n> updating pre-existing sequences and adding new sequences. 
REFRESH\n> PUBLICATION updates the tables to be subscribed, so we also update or\n> add sequences associated to these tables.\n>\n\nAre you imagining the behavior for sequences associated with tables\ndifferently than the ones defined by the CREATE SEQUENCE .. command? I\nwas thinking that users would associate sequences with publications\nsimilar to what we do for tables for both cases. For example, they\nneed to explicitly mention the sequences they want to replicate by\ncommands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE\nPUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR\nSEQUENCES IN SCHEMA sch1;\n\nIn this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1\nshould copy both the explicitly defined sequences and sequences\ndefined with the tables. Do you think a different variant for just\ncopying sequences implicitly associated with tables (say for identity\ncolumns)?\n\n>\n> >\n> > > >\n> > > > In addition to the above, the command Alter Subscription .. Refresh\n> > > > Publication will fetch any missing sequences similar to what it does\n> > > > for tables.\n> > >\n> > > On the subscriber side, do we need to track which sequences are\n> > > created via CREATE/ALTER SUBSCRIPTION?\n> > >\n> >\n> > I think so unless we find some other way to know at refresh\n> > publication time which all new sequences need to be part of the\n> > subscription. What should be the behavior w.r.t sequences when the\n> > user performs ALTER SUBSCRIPTION ... REFRESH PUBLICATION? I was\n> > thinking similar to tables, it should fetch any missing sequence\n> > information from the publisher.\n>\n> It seems to make sense to me. But I have one question: do we want to\n> support replicating sequences that are not associated with any tables?\n>\n\nYes, unless we see a problem with it.\n\n> if yes, what if we refresh two different subscriptions that subscribe\n> to different tables on the same database?\n\nWhat problem do you see with it?\n\n>\n On the other hand, if no\n> (i.e. replicating only sequences owned by tables), can we know which\n> sequences to replicate by checking the subscribed tables?\n>\n\nSorry, I didn't understand your question. Can you please try to\nexplain in more words or use some examples?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Jun 2024 15:59:53 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 9:13 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jun 4, 2024 at 8:56 PM Yogesh Sharma\n> > <[email protected]> wrote:\n> > >\n> > > On 6/4/24 06:57, Amit Kapila wrote:\n> > >\n> > > > 2. Provide a command say Alter Subscription ... Replicate Sequences\n> > > > (or something like that) which users can perform before shutdown of\n> > > > the publisher node during upgrade. This will allow copying all the\n> > > > sequences from the publisher node to the subscriber node directly.\n> > > > Similar to previous approach, this could also be inconvenient for\n> > > > users.\n> > >\n> > > This is similar to option 1 except that it is a SQL command now.\n> > >\n> >\n> > Right, but I would still prefer a command as it provides clear steps\n> > for the upgrade. 
Users need to perform (a) Replicate Sequences for a\n> > particular subscription (b) Disable that subscription (c) Perform (a)\n> > and (b) for all the subscriptions corresponding to the publisher we\n> > want to shut down for upgrade.\n> >\n>\n> Another advantage of this approach over just a plain tool to copy all\n> sequences before upgrade is that here we can have the facility to copy\n> just the required sequences. I mean the set sequences that the user\n> has specified as part of the publication.\n\nHere is a WIP patch to handle synchronizing the sequence during\ncreate/alter subscription. The following changes were made for it:\nSubscriber modifications:\nEnable sequence synchronization during subscription creation or\nalteration using the following syntax:\nCREATE SUBSCRIPTION ... WITH (sequences=true);\nWhen a subscription is created with the sequence option enabled, the\nsequence list from the specified publications in the subscription will\nbe retrieved from the publisher. Each sequence's data will then be\ncopied from the remote publisher sequence to the local subscriber\nsequence by using a wal receiver connection. Since all of the sequence\nupdating is done within a single transaction, if any errors occur\nduring the copying process, the entire transaction will be rolled\nback.\n\nTo refresh sequences, use the syntax:\nALTER SUBSCRIPTION REFRESH SEQUENCES;\nDuring sequence refresh, the sequence list is updated by removing\nstale sequences and adding any missing sequences. The updated sequence\nlist is then re-synchronized.\n\nA new catalog table, pg_subscription_seq, has been introduced for\nmapping subscriptions to sequences. Additionally, the sequence LSN\n(Log Sequence Number) is stored, facilitating determination of\nsequence changes occurring before or after the returned sequence\nstate.\n\nI have taken some code changes from Tomas's patch at [1].\nI'll adjust the syntax as needed based on the ongoing discussion at [2].\n\n[1] - https://www.postgresql.org/message-id/09613730-5ee9-4cc3-82d8-f089be90aa64%40enterprisedb.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1K2X%2BPaErtGVQPD0k_5XqxjV_Cwg37%2B-pWsmKFncwc7Wg%40mail.gmail.com\n\nRegards,\nVignesh", "msg_date": "Sat, 8 Jun 2024 18:43:32 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jun 7, 2024 at 7:55 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Thu, Jun 6, 2024 at 6:40 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Thu, Jun 6, 2024 at 11:10 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > >\n> > > > > To achieve this, we can allow sequences to be copied during\n> > > > > the initial CREATE SUBSCRIPTION command similar to what we do for\n> > > > > tables. And then later by new/existing command, we re-copy the already\n> > > > > existing sequences on the subscriber.\n> > > > >\n> > > > > The options for the new command could be:\n> > > > > Alter Subscription ... Refresh Sequences\n> > > > > Alter Subscription ... Replicate Sequences\n> > > > >\n> > > > > In the second option, we need to introduce a new keyword Replicate.\n> > > > > Can you think of any better option?\n> > > >\n> > > > Another idea is doing that using options. 
For example,\n> > > >\n> > > > For initial sequences synchronization:\n> > > >\n> > > > CREATE SUBSCRIPTION ... WITH (copy_sequence = true);\n> > > >\n> > >\n> > > How will it interact with the existing copy_data option? So copy_data\n> > > will become equivalent to copy_table_data, right?\n> >\n> > Right.\n> >\n> > >\n> > > > For re-copy (or update) sequences:\n> > > >\n> > > > ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);\n> > > >\n> > >\n> > > Similar to the previous point it can be slightly confusing w.r.t\n> > > copy_data. And would copy_sequence here mean that it would copy\n> > > sequence values of both pre-existing and newly added sequences, if so,\n> > > that would make it behave differently than copy_data? The other\n> > > possibility in this direction would be to introduce an option like\n> > > replicate_all_sequences/copy_all_sequences which indicates a copy of\n> > > both pre-existing and new sequences, if any.\n> >\n> > Copying sequence data works differently than replicating table data\n> > (initial data copy and logical replication). So I thought the\n> > copy_sequence option (or whatever better name) always does both\n> > updating pre-existing sequences and adding new sequences. REFRESH\n> > PUBLICATION updates the tables to be subscribed, so we also update or\n> > add sequences associated to these tables.\n> >\n>\n> Are you imagining the behavior for sequences associated with tables\n> differently than the ones defined by the CREATE SEQUENCE .. command? I\n> was thinking that users would associate sequences with publications\n> similar to what we do for tables for both cases. For example, they\n> need to explicitly mention the sequences they want to replicate by\n> commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE\n> PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR\n> SEQUENCES IN SCHEMA sch1;\n>\n> In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1\n> should copy both the explicitly defined sequences and sequences\n> defined with the tables. Do you think a different variant for just\n> copying sequences implicitly associated with tables (say for identity\n> columns)?\n\nOh, I was thinking that your proposal was to copy literally all\nsequences by REPLICA/REFRESH SEQUENCE command. But it seems to make\nsense to explicitly specify the sequences they want to replicate. It\nalso means that they can create a publication that has only sequences.\nIn this case, even if they create a subscription for that publication,\nwe don't launch any apply workers for that subscription. Right?\n\nAlso, given that the main use case (at least as the first step) is\nversion upgrade, do we really need to support SEQUENCES IN SCHEMA and\neven FOR SEQUENCE? The WIP patch Vignesh recently submitted is more\nthan 6k lines. I think we can cut the scope for the first\nimplementation so as to make the review easy.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Jun 2024 15:14:23 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n\n> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n> [...]\n> A new catalog table, pg_subscription_seq, has been introduced for\n> mapping subscriptions to sequences. 
Additionally, the sequence LSN\n> (Log Sequence Number) is stored, facilitating determination of\n> sequence changes occurring before or after the returned sequence\n> state.\n>\n\nCan't it be done using pg_depend? It seems a bit excessive unless I'm\nmissing\nsomething. How do you track sequence mapping with the publication?\n\nRegards,\nAmul\n\n", "msg_date": "Mon, 10 Jun 2024 12:24:04 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Jun 7, 2024 at 7:55 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Thu, Jun 6, 2024 at 6:40 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Thu, Jun 6, 2024 at 11:10 AM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <[email protected]> wrote:\n> > > > > >\n> > > > >\n> > > > > > To achieve this, we can allow sequences to be copied during\n> > > > > > the initial CREATE SUBSCRIPTION command similar to what we do for\n> > > > > > tables. And then later by new/existing command, we re-copy the already\n> > > > > > existing sequences on the subscriber.\n> > > > > >\n> > > > > > The options for the new command could be:\n> > > > > > Alter Subscription ... Refresh Sequences\n> > > > > > Alter Subscription ... Replicate Sequences\n> > > > > >\n> > > > > > In the second option, we need to introduce a new keyword Replicate.\n> > > > > > Can you think of any better option?\n> > > > >\n> > > > > Another idea is doing that using options. For example,\n> > > > >\n> > > > > For initial sequences synchronization:\n> > > > >\n> > > > > CREATE SUBSCRIPTION ... WITH (copy_sequence = true);\n> > > > >\n> > > >\n> > > > How will it interact with the existing copy_data option? So copy_data\n> > > > will become equivalent to copy_table_data, right?\n> > >\n> > > Right.\n> > >\n> > > >\n> > > > > For re-copy (or update) sequences:\n> > > > >\n> > > > > ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);\n> > > > >\n> > > >\n> > > > Similar to the previous point it can be slightly confusing w.r.t\n> > > > copy_data. And would copy_sequence here mean that it would copy\n> > > > sequence values of both pre-existing and newly added sequences, if so,\n> > > > that would make it behave differently than copy_data? The other\n> > > > possibility in this direction would be to introduce an option like\n> > > > replicate_all_sequences/copy_all_sequences which indicates a copy of\n> > > > both pre-existing and new sequences, if any.\n> > >\n> > > Copying sequence data works differently than replicating table data\n> > > (initial data copy and logical replication). 
So I thought the\n> > > copy_sequence option (or whatever better name) always does both\n> > > updating pre-existing sequences and adding new sequences. REFRESH\n> > > PUBLICATION updates the tables to be subscribed, so we also update or\n> > > add sequences associated to these tables.\n> > >\n> >\n> > Are you imagining the behavior for sequences associated with tables\n> > differently than the ones defined by the CREATE SEQUENCE .. command? I\n> > was thinking that users would associate sequences with publications\n> > similar to what we do for tables for both cases. For example, they\n> > need to explicitly mention the sequences they want to replicate by\n> > commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE\n> > PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR\n> > SEQUENCES IN SCHEMA sch1;\n> >\n> > In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1\n> > should copy both the explicitly defined sequences and sequences\n> > defined with the tables. Do you think a different variant for just\n> > copying sequences implicitly associated with tables (say for identity\n> > columns)?\n>\n> Oh, I was thinking that your proposal was to copy literally all\n> sequences by REPLICA/REFRESH SEQUENCE command. But it seems to make\n> sense to explicitly specify the sequences they want to replicate. It\n> also means that they can create a publication that has only sequences.\n> In this case, even if they create a subscription for that publication,\n> we don't launch any apply workers for that subscription. Right?\n>\n> Also, given that the main use case (at least as the first step) is\n> version upgrade, do we really need to support SEQUENCES IN SCHEMA and\n> even FOR SEQUENCE?\n\nAlso, I guess that specifying individual sequences might not be easy\nto use for users in some cases. For sequences owned by a column of a\ntable, users might want to specify them altogether, rather than\nseparately. For example, CREATE PUBLICATION ... FOR TABLE tab1 WITH\nSEQUENCES means to add the table tab1 and its sequences to the\npublication. For other sequences (i.e., not owned by any tables),\nusers might want to specify them individually.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Jun 2024 16:12:42 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, Jun 10, 2024 at 12:43 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > >\n> > > Are you imagining the behavior for sequences associated with tables\n> > > differently than the ones defined by the CREATE SEQUENCE .. command? I\n> > > was thinking that users would associate sequences with publications\n> > > similar to what we do for tables for both cases. For example, they\n> > > need to explicitly mention the sequences they want to replicate by\n> > > commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE\n> > > PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR\n> > > SEQUENCES IN SCHEMA sch1;\n> > >\n> > > In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1\n> > > should copy both the explicitly defined sequences and sequences\n> > > defined with the tables. 
Do you think a different variant for just\n> > > copying sequences implicitly associated with tables (say for identity\n> > > columns)?\n> >\n> > Oh, I was thinking that your proposal was to copy literally all\n> > sequences by REPLICA/REFRESH SEQUENCE command.\n> >\n\nI am trying to keep the behavior as close to tables as possible.\n\n> > But it seems to make\n> > sense to explicitly specify the sequences they want to replicate. It\n> > also means that they can create a publication that has only sequences.\n> > In this case, even if they create a subscription for that publication,\n> > we don't launch any apply workers for that subscription. Right?\n> >\n\nRight, good point. I had not thought about this.\n\n> > Also, given that the main use case (at least as the first step) is\n> > version upgrade, do we really need to support SEQUENCES IN SCHEMA and\n> > even FOR SEQUENCE?\n>\n\nAt the very least, we can split the patch to move these variants to a\nseparate patch. Once the main patch is finalized, we can try to\nevaluate the remaining separately.\n\n> Also, I guess that specifying individual sequences might not be easy\n> to use for users in some cases. For sequences owned by a column of a\n> table, users might want to specify them altogether, rather than\n> separately. For example, CREATE PUBLICATION ... FOR TABLE tab1 WITH\n> SEQUENCES means to add the table tab1 and its sequences to the\n> publication. For other sequences (i.e., not owned by any tables),\n> users might want to specify them individually.\n>\n\nYeah, or we can have a syntax like CREATE PUBLICATION ... FOR TABLE\ntab1 INCLUDE SEQUENCES. Normally, we use the WITH clause for options\n(For example, CREATE SUBSCRIPTION ... WITH (streaming=...)).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 10 Jun 2024 14:48:25 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n>\n>\n>\n> On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n>>\n>> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n>> [...]\n>> A new catalog table, pg_subscription_seq, has been introduced for\n>> mapping subscriptions to sequences. Additionally, the sequence LSN\n>> (Log Sequence Number) is stored, facilitating determination of\n>> sequence changes occurring before or after the returned sequence\n>> state.\n>\n>\n> Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n> something.\n\nWe'll require the lsn because the sequence LSN informs the user that\nit has been synchronized up to the LSN in pg_subscription_seq. Since\nwe are not supporting incremental sync, the user will be able to\nidentify if he should run refresh sequences or not by checking the lsn\nof the pg_subscription_seq and the lsn of the sequence(using\npg_sequence_state added) in the publisher. 
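\nRoughly, the kind of check this enables looks like the below (the column\nname and the function's output column are only illustrative here; the\nexact shape is whatever the patch finally exposes):\n\n-- subscriber: LSN up to which sequence s1 was last synchronized\nSELECT srsublsn FROM pg_subscription_seq WHERE srrelid = 's1'::regclass;\n-- publisher: current page LSN of the same sequence\nSELECT page_lsn FROM pg_sequence_state('s1');\n\nIf the publisher's LSN is newer than the stored one, the sequence needs a\nrefresh; otherwise it is already up to date.\n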
Also, this parallels our\nimplementation for pg_subscription_seq and will aid in expanding for\na) incremental synchronization and b) utilizing workers for\nsynchronization using sequence states if necessary.\n\nHow do you track sequence mapping with the publication?\n\nIn the publisher we use pg_publication_rel and\npg_publication_namespace for mapping the sequences with the\npublication.\n\nRegards,\nVignesh\nVignesh\n\n\n", "msg_date": "Mon, 10 Jun 2024 16:59:55 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, 10 Jun 2024 at 14:48, Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jun 10, 2024 at 12:43 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > >\n> > > > Are you imagining the behavior for sequences associated with tables\n> > > > differently than the ones defined by the CREATE SEQUENCE .. command? I\n> > > > was thinking that users would associate sequences with publications\n> > > > similar to what we do for tables for both cases. For example, they\n> > > > need to explicitly mention the sequences they want to replicate by\n> > > > commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE\n> > > > PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR\n> > > > SEQUENCES IN SCHEMA sch1;\n> > > >\n> > > > In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1\n> > > > should copy both the explicitly defined sequences and sequences\n> > > > defined with the tables. Do you think a different variant for just\n> > > > copying sequences implicitly associated with tables (say for identity\n> > > > columns)?\n> > >\n> > > Oh, I was thinking that your proposal was to copy literally all\n> > > sequences by REPLICA/REFRESH SEQUENCE command.\n> > >\n>\n> I am trying to keep the behavior as close to tables as possible.\n>\n> > > But it seems to make\n> > > sense to explicitly specify the sequences they want to replicate. It\n> > > also means that they can create a publication that has only sequences.\n> > > In this case, even if they create a subscription for that publication,\n> > > we don't launch any apply workers for that subscription. Right?\n> > >\n>\n> Right, good point. I had not thought about this.\n>\n> > > Also, given that the main use case (at least as the first step) is\n> > > version upgrade, do we really need to support SEQUENCES IN SCHEMA and\n> > > even FOR SEQUENCE?\n> >\n>\n> At the very least, we can split the patch to move these variants to a\n> separate patch. Once the main patch is finalized, we can try to\n> evaluate the remaining separately.\n\nI engaged in an offline discussion with Amit about strategizing the\ndivision of patches to facilitate the review process. We agreed on the\nfollowing split: The first patch will encompass the setting and\ngetting of sequence values (core sequence changes). The second patch\nwill cover all changes on the publisher side related to \"FOR ALL\nSEQUENCES.\" The third patch will address subscriber side changes aimed\nat synchronizing \"FOR ALL SEQUENCES\" publications. The fourth patch\nwill focus on supporting \"FOR SEQUENCE\" publication. 
Lastly, the fifth\npatch will introduce support for \"FOR ALL SEQUENCES IN SCHEMA\"\npublication.\n\nI will work on this and share an updated patch for the same soon.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 11 Jun 2024 08:55:14 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:\n\n> On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n> >\n> >\n> >\n> > On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n> >>\n> >> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]>\n> wrote:\n> >> [...]\n> >> A new catalog table, pg_subscription_seq, has been introduced for\n> >> mapping subscriptions to sequences. Additionally, the sequence LSN\n> >> (Log Sequence Number) is stored, facilitating determination of\n> >> sequence changes occurring before or after the returned sequence\n> >> state.\n> >\n> >\n> > Can't it be done using pg_depend? It seems a bit excessive unless I'm\n> missing\n> > something.\n>\n> We'll require the lsn because the sequence LSN informs the user that\n> it has been synchronized up to the LSN in pg_subscription_seq. Since\n> we are not supporting incremental sync, the user will be able to\n> identify if he should run refresh sequences or not by checking the lsn\n> of the pg_subscription_seq and the lsn of the sequence(using\n> pg_sequence_state added) in the publisher. Also, this parallels our\n> implementation for pg_subscription_seq and will aid in expanding for\n> a) incremental synchronization and b) utilizing workers for\n> synchronization using sequence states if necessary.\n>\n> How do you track sequence mapping with the publication?\n>\n> In the publisher we use pg_publication_rel and\n> pg_publication_namespace for mapping the sequences with the\n> publication.\n>\n\nThanks for the explanation. I'm wondering what the complexity would be, if\nwe\nwanted to do something similar on the subscriber side, i.e., tracking via\npg_subscription_rel.\n\nRegards,\nAmul\n\nOn Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n>\n>\n>\n> On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n>>\n>> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n>> [...]\n>> A new catalog table, pg_subscription_seq, has been introduced for\n>> mapping subscriptions to sequences. Additionally, the sequence LSN\n>> (Log Sequence Number) is stored, facilitating determination of\n>> sequence changes occurring before or after the returned sequence\n>> state.\n>\n>\n> Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n> something.\n\nWe'll require the lsn because the sequence LSN informs the user that\nit has been synchronized up to the LSN in pg_subscription_seq. Since\nwe are not supporting incremental sync, the user will be able to\nidentify if he should run refresh sequences or not by checking the lsn\nof the pg_subscription_seq and the lsn of the sequence(using\npg_sequence_state added) in the publisher.  
Also, this parallels our\nimplementation for pg_subscription_seq and will aid in expanding for\na) incremental synchronization and b) utilizing workers for\nsynchronization using sequence states if necessary.\n\nHow do you track sequence mapping with the publication?\n\nIn the publisher we use pg_publication_rel and\npg_publication_namespace for mapping the sequences with the\npublication.Thanks for the explanation. I'm wondering what the complexity would be, if wewanted to do something similar on the subscriber side, i.e., tracking viapg_subscription_rel.Regards,Amul", "msg_date": "Tue, 11 Jun 2024 09:40:59 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 11 Jun 2024 at 09:41, Amul Sul <[email protected]> wrote:\n>\n> On Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:\n>>\n>> On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n>> >\n>> >\n>> >\n>> > On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n>> >>\n>> >> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n>> >> [...]\n>> >> A new catalog table, pg_subscription_seq, has been introduced for\n>> >> mapping subscriptions to sequences. Additionally, the sequence LSN\n>> >> (Log Sequence Number) is stored, facilitating determination of\n>> >> sequence changes occurring before or after the returned sequence\n>> >> state.\n>> >\n>> >\n>> > Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n>> > something.\n>>\n>> We'll require the lsn because the sequence LSN informs the user that\n>> it has been synchronized up to the LSN in pg_subscription_seq. Since\n>> we are not supporting incremental sync, the user will be able to\n>> identify if he should run refresh sequences or not by checking the lsn\n>> of the pg_subscription_seq and the lsn of the sequence(using\n>> pg_sequence_state added) in the publisher. Also, this parallels our\n>> implementation for pg_subscription_seq and will aid in expanding for\n>> a) incremental synchronization and b) utilizing workers for\n>> synchronization using sequence states if necessary.\n>>\n>> How do you track sequence mapping with the publication?\n>>\n>> In the publisher we use pg_publication_rel and\n>> pg_publication_namespace for mapping the sequences with the\n>> publication.\n>\n>\n> Thanks for the explanation. I'm wondering what the complexity would be, if we\n> wanted to do something similar on the subscriber side, i.e., tracking via\n> pg_subscription_rel.\n\nBecause we won't utilize sync workers to synchronize the sequence, and\nthe sequence won't necessitate sync states like init, sync,\nfinishedcopy, syncdone, ready, etc., initially, I considered keeping\nthe sequences separate. 
However, I'm ok with using pg_subscription_rel\nas it could potentially help in enhancing incremental synchronization\nand parallelizing later on.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 11 Jun 2024 11:03:30 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Jun 11, 2024 at 12:25 PM vignesh C <[email protected]> wrote:\n>\n> On Mon, 10 Jun 2024 at 14:48, Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Jun 10, 2024 at 12:43 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > >\n> > > > > Are you imagining the behavior for sequences associated with tables\n> > > > > differently than the ones defined by the CREATE SEQUENCE .. command? I\n> > > > > was thinking that users would associate sequences with publications\n> > > > > similar to what we do for tables for both cases. For example, they\n> > > > > need to explicitly mention the sequences they want to replicate by\n> > > > > commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE\n> > > > > PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR\n> > > > > SEQUENCES IN SCHEMA sch1;\n> > > > >\n> > > > > In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1\n> > > > > should copy both the explicitly defined sequences and sequences\n> > > > > defined with the tables. Do you think a different variant for just\n> > > > > copying sequences implicitly associated with tables (say for identity\n> > > > > columns)?\n> > > >\n> > > > Oh, I was thinking that your proposal was to copy literally all\n> > > > sequences by REPLICA/REFRESH SEQUENCE command.\n> > > >\n> >\n> > I am trying to keep the behavior as close to tables as possible.\n> >\n> > > > But it seems to make\n> > > > sense to explicitly specify the sequences they want to replicate. It\n> > > > also means that they can create a publication that has only sequences.\n> > > > In this case, even if they create a subscription for that publication,\n> > > > we don't launch any apply workers for that subscription. Right?\n> > > >\n> >\n> > Right, good point. I had not thought about this.\n> >\n> > > > Also, given that the main use case (at least as the first step) is\n> > > > version upgrade, do we really need to support SEQUENCES IN SCHEMA and\n> > > > even FOR SEQUENCE?\n> > >\n> >\n> > At the very least, we can split the patch to move these variants to a\n> > separate patch. Once the main patch is finalized, we can try to\n> > evaluate the remaining separately.\n>\n> I engaged in an offline discussion with Amit about strategizing the\n> division of patches to facilitate the review process. We agreed on the\n> following split: The first patch will encompass the setting and\n> getting of sequence values (core sequence changes). The second patch\n> will cover all changes on the publisher side related to \"FOR ALL\n> SEQUENCES.\" The third patch will address subscriber side changes aimed\n> at synchronizing \"FOR ALL SEQUENCES\" publications. The fourth patch\n> will focus on supporting \"FOR SEQUENCE\" publication. Lastly, the fifth\n> patch will introduce support for \"FOR ALL SEQUENCES IN SCHEMA\"\n> publication.\n>\n> I will work on this and share an updated patch for the same soon.\n\n+1. 
Sounds like a good plan.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 16:08:08 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 11 Jun 2024 at 12:38, Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Jun 11, 2024 at 12:25 PM vignesh C <[email protected]> wrote:\n> >\n> > On Mon, 10 Jun 2024 at 14:48, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Jun 10, 2024 at 12:43 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <[email protected]> wrote:\n> > > > > >\n> > > > > >\n> > > > > > Are you imagining the behavior for sequences associated with tables\n> > > > > > differently than the ones defined by the CREATE SEQUENCE .. command? I\n> > > > > > was thinking that users would associate sequences with publications\n> > > > > > similar to what we do for tables for both cases. For example, they\n> > > > > > need to explicitly mention the sequences they want to replicate by\n> > > > > > commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE\n> > > > > > PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR\n> > > > > > SEQUENCES IN SCHEMA sch1;\n> > > > > >\n> > > > > > In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1\n> > > > > > should copy both the explicitly defined sequences and sequences\n> > > > > > defined with the tables. Do you think a different variant for just\n> > > > > > copying sequences implicitly associated with tables (say for identity\n> > > > > > columns)?\n> > > > >\n> > > > > Oh, I was thinking that your proposal was to copy literally all\n> > > > > sequences by REPLICA/REFRESH SEQUENCE command.\n> > > > >\n> > >\n> > > I am trying to keep the behavior as close to tables as possible.\n> > >\n> > > > > But it seems to make\n> > > > > sense to explicitly specify the sequences they want to replicate. It\n> > > > > also means that they can create a publication that has only sequences.\n> > > > > In this case, even if they create a subscription for that publication,\n> > > > > we don't launch any apply workers for that subscription. Right?\n> > > > >\n> > >\n> > > Right, good point. I had not thought about this.\n> > >\n> > > > > Also, given that the main use case (at least as the first step) is\n> > > > > version upgrade, do we really need to support SEQUENCES IN SCHEMA and\n> > > > > even FOR SEQUENCE?\n> > > >\n> > >\n> > > At the very least, we can split the patch to move these variants to a\n> > > separate patch. Once the main patch is finalized, we can try to\n> > > evaluate the remaining separately.\n> >\n> > I engaged in an offline discussion with Amit about strategizing the\n> > division of patches to facilitate the review process. We agreed on the\n> > following split: The first patch will encompass the setting and\n> > getting of sequence values (core sequence changes). The second patch\n> > will cover all changes on the publisher side related to \"FOR ALL\n> > SEQUENCES.\" The third patch will address subscriber side changes aimed\n> > at synchronizing \"FOR ALL SEQUENCES\" publications. The fourth patch\n> > will focus on supporting \"FOR SEQUENCE\" publication. 
Lastly, the fifth\n> > patch will introduce support for \"FOR ALL SEQUENCES IN SCHEMA\"\n> > publication.\n> >\n> > I will work on this and share an updated patch for the same soon.\n>\n> +1. Sounds like a good plan.\n\nAmit and I engaged in an offline discussion regarding the design and\ncontemplated that it could be like below:\n1) CREATE PUBLICATION syntax enhancement:\nCREATE PUBLICATION ... FOR ALL SEQUENCES;\nThe addition of a new column titled \"all sequences\" in the\npg_publication system table will signify whether the publication is\ndesignated as all sequences publication or not.\n\n2) CREATE SUBSCRIPTION -- no syntax change.\nUpon creation of a subscription, the following additional steps will\nbe managed by the subscriber:\ni) The subscriber will retrieve the list of sequences associated with\nthe subscription's publications.\nii) For each sequence: a) Retrieve the sequence value from the\npublisher by invoking the pg_sequence_state function. b) Set the\nsequence with the value obtained from the publisher. iv) Once the\nsubscription creation is completed, all sequence values will become\nvisible at the subscriber's end.\n\nAn alternative design approach could involve retrieving the sequence\nlist from the publisher during subscription creation and inserting the\nsequences with an \"init\" state into the pg_subscription_rel system\ntable. These tasks could be executed by a single sequence sync worker,\nwhich would:\ni) Retrieve the list of sequences in the \"init\" state from the\npg_subscription_rel system table.\nii) Initiate a transaction.\niii) For each sequence: a) Obtain the sequence value from the\npublisher by utilizing the pg_sequence_state function. b) Update the\nsequence with the value obtained from the publisher.\niv) Commit the transaction.\n\nThe benefit with the second approach is that if there are large number\nof sequences, the sequence sync can be enhanced to happen in parallel\nand also if there are any locks held on the sequences in the\npublisher, the sequence worker can wait to acquire the lock instead of\nblocking the whole create subscription command which will delay the\ninitial copy of the tables too.\n\n3) Refreshing the sequence can be achieved through the existing\ncommand: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change\nhere).\nThe subscriber identifies stale sequences, meaning sequences present\nin pg_subscription_rel but absent from the publication, and removes\nthem from the pg_subscription_rel system table. The subscriber also\nchecks for newly added sequences in the publisher and synchronizes\ntheir values from the publisher using the steps outlined in the\nsubscription creation process. It's worth noting that previously\nsynchronized sequences won't be synchronized again; the sequence sync\nwill occur solely for the newly added sequences.\n\n4) Introducing a new command for refreshing all sequences: ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.\nThe subscriber will remove stale sequences and add newly added\nsequences from the publisher. Following this, it will re-synchronize\nthe sequence values for all sequences in the updated list from the\npublisher, following the steps outlined in the subscription creation\nprocess.\n\n5) Incorporate the pg_sequence_state function to fetch the sequence\nvalue from the publisher, along with the page LSN. Incorporate\nSetSequence function, which will procure a new relfilenode for the\nsequence and set the new relfilenode with the specified value. 
This\nwill facilitate rollback in case of any failures.\n\nThoughts?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 11 Jun 2024 16:06:06 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Jun 11, 2024 at 7:36 PM vignesh C <[email protected]> wrote:\n>\n> On Tue, 11 Jun 2024 at 12:38, Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Jun 11, 2024 at 12:25 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Mon, 10 Jun 2024 at 14:48, Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Mon, Jun 10, 2024 at 12:43 PM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <[email protected]> wrote:\n> > > > > >\n> > > > > > On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <[email protected]> wrote:\n> > > > > > >\n> > > > > > >\n> > > > > > > Are you imagining the behavior for sequences associated with tables\n> > > > > > > differently than the ones defined by the CREATE SEQUENCE .. command? I\n> > > > > > > was thinking that users would associate sequences with publications\n> > > > > > > similar to what we do for tables for both cases. For example, they\n> > > > > > > need to explicitly mention the sequences they want to replicate by\n> > > > > > > commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE\n> > > > > > > PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR\n> > > > > > > SEQUENCES IN SCHEMA sch1;\n> > > > > > >\n> > > > > > > In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1\n> > > > > > > should copy both the explicitly defined sequences and sequences\n> > > > > > > defined with the tables. Do you think a different variant for just\n> > > > > > > copying sequences implicitly associated with tables (say for identity\n> > > > > > > columns)?\n> > > > > >\n> > > > > > Oh, I was thinking that your proposal was to copy literally all\n> > > > > > sequences by REPLICA/REFRESH SEQUENCE command.\n> > > > > >\n> > > >\n> > > > I am trying to keep the behavior as close to tables as possible.\n> > > >\n> > > > > > But it seems to make\n> > > > > > sense to explicitly specify the sequences they want to replicate. It\n> > > > > > also means that they can create a publication that has only sequences.\n> > > > > > In this case, even if they create a subscription for that publication,\n> > > > > > we don't launch any apply workers for that subscription. Right?\n> > > > > >\n> > > >\n> > > > Right, good point. I had not thought about this.\n> > > >\n> > > > > > Also, given that the main use case (at least as the first step) is\n> > > > > > version upgrade, do we really need to support SEQUENCES IN SCHEMA and\n> > > > > > even FOR SEQUENCE?\n> > > > >\n> > > >\n> > > > At the very least, we can split the patch to move these variants to a\n> > > > separate patch. Once the main patch is finalized, we can try to\n> > > > evaluate the remaining separately.\n> > >\n> > > I engaged in an offline discussion with Amit about strategizing the\n> > > division of patches to facilitate the review process. We agreed on the\n> > > following split: The first patch will encompass the setting and\n> > > getting of sequence values (core sequence changes). The second patch\n> > > will cover all changes on the publisher side related to \"FOR ALL\n> > > SEQUENCES.\" The third patch will address subscriber side changes aimed\n> > > at synchronizing \"FOR ALL SEQUENCES\" publications. 
The fourth patch\n> > > will focus on supporting \"FOR SEQUENCE\" publication. Lastly, the fifth\n> > > patch will introduce support for \"FOR ALL SEQUENCES IN SCHEMA\"\n> > > publication.\n> > >\n> > > I will work on this and share an updated patch for the same soon.\n> >\n> > +1. Sounds like a good plan.\n>\n> Amit and I engaged in an offline discussion regarding the design and\n> contemplated that it could be like below:\n> 1) CREATE PUBLICATION syntax enhancement:\n> CREATE PUBLICATION ... FOR ALL SEQUENCES;\n> The addition of a new column titled \"all sequences\" in the\n> pg_publication system table will signify whether the publication is\n> designated as all sequences publication or not.\n>\n\nThe first approach sounds like we don't create entries for sequences\nin pg_subscription_rel. In this case, how do we know all sequences\nthat we need to refresh when executing the REFRESH PUBLICATION\nSEQUENCES command you mentioned below?\n\n> 2) CREATE SUBSCRIPTION -- no syntax change.\n> Upon creation of a subscription, the following additional steps will\n> be managed by the subscriber:\n> i) The subscriber will retrieve the list of sequences associated with\n> the subscription's publications.\n> ii) For each sequence: a) Retrieve the sequence value from the\n> publisher by invoking the pg_sequence_state function. b) Set the\n> sequence with the value obtained from the publisher. iv) Once the\n> subscription creation is completed, all sequence values will become\n> visible at the subscriber's end.\n\nSequence values are always copied from the publisher? or does it\nhappen only when copy_data = true?\n\n>\n> An alternative design approach could involve retrieving the sequence\n> list from the publisher during subscription creation and inserting the\n> sequences with an \"init\" state into the pg_subscription_rel system\n> table. These tasks could be executed by a single sequence sync worker,\n> which would:\n> i) Retrieve the list of sequences in the \"init\" state from the\n> pg_subscription_rel system table.\n> ii) Initiate a transaction.\n> iii) For each sequence: a) Obtain the sequence value from the\n> publisher by utilizing the pg_sequence_state function. b) Update the\n> sequence with the value obtained from the publisher.\n> iv) Commit the transaction.\n>\n> The benefit with the second approach is that if there are large number\n> of sequences, the sequence sync can be enhanced to happen in parallel\n> and also if there are any locks held on the sequences in the\n> publisher, the sequence worker can wait to acquire the lock instead of\n> blocking the whole create subscription command which will delay the\n> initial copy of the tables too.\n\nI prefer to have separate workers to sync sequences. Probably we can\nstart with a single worker and extend it to have multiple workers. BTW\nthe sequence-sync worker will be taken from\nmax_sync_workers_per_subscription pool?\n\nOr yet another idea I came up with is that a tablesync worker will\nsynchronize both the table and sequences owned by the table. That is,\nafter the tablesync worker caught up with the apply worker, the\ntablesync worker synchronizes sequences associated with the target\ntable as well. One benefit would be that at the time of initial table\nsync being completed, the table and its sequence data are consistent.\nAs soon as new changes come to the table, it would become inconsistent\nso it might not be helpful much, though. 
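\nFor what it's worth, the sequences owned by a given table are already\ndiscoverable from the existing catalogs, so whichever worker does this\ncould look them up with something like the query below (tab1 is just a\nplaceholder name):\n\nSELECT d.objid::regclass AS seq\nFROM pg_depend d\n     JOIN pg_class c ON c.oid = d.objid AND c.relkind = 'S'\nWHERE d.classid = 'pg_class'::regclass\n  AND d.refclassid = 'pg_class'::regclass\n  AND d.refobjid = 'tab1'::regclass\n  AND d.deptype IN ('a', 'i');  -- serial/OWNED BY ('a') and identity ('i')\n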
Also, sequences that are not\nowned by any table will still need to be synchronized by someone.\n\n>\n> 3) Refreshing the sequence can be achieved through the existing\n> command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change\n> here).\n> The subscriber identifies stale sequences, meaning sequences present\n> in pg_subscription_rel but absent from the publication, and removes\n> them from the pg_subscription_rel system table. The subscriber also\n> checks for newly added sequences in the publisher and synchronizes\n> their values from the publisher using the steps outlined in the\n> subscription creation process. It's worth noting that previously\n> synchronized sequences won't be synchronized again; the sequence sync\n> will occur solely for the newly added sequences.\n>\n> 4) Introducing a new command for refreshing all sequences: ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.\n> The subscriber will remove stale sequences and add newly added\n> sequences from the publisher. Following this, it will re-synchronize\n> the sequence values for all sequences in the updated list from the\n> publisher, following the steps outlined in the subscription creation\n> process.\n\nThe difference between 3) and 4) is whether or not to re-synchronize\nthe previously synchronized sequences. Do we really want to introduce\na new command for 4)? I felt that we can invent an option say\ncopy_all_sequence for the REFRESH PUBLICATION command to cover the 4)\ncase.\n\n>\n> 5) Incorporate the pg_sequence_state function to fetch the sequence\n> value from the publisher, along with the page LSN. Incorporate\n> SetSequence function, which will procure a new relfilenode for the\n> sequence and set the new relfilenode with the specified value. This\n> will facilitate rollback in case of any failures.\n\nDoes it mean that we create a new relfilenode for every update of the value?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:13:31 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Jun 11, 2024 at 4:06 PM vignesh C <[email protected]> wrote:\n>\n> Amit and I engaged in an offline discussion regarding the design and\n> contemplated that it could be like below:\n\nIf I understand correctly, does this require the sequences to already\nexist on the subscribing node before creating the subscription, or\nwill it also copy any non-existing sequences?\n\n> 1) CREATE PUBLICATION syntax enhancement:\n> CREATE PUBLICATION ... FOR ALL SEQUENCES;\n> The addition of a new column titled \"all sequences\" in the\n> pg_publication system table will signify whether the publication is\n> designated as all sequences publication or not.\n>\n> 2) CREATE SUBSCRIPTION -- no syntax change.\n> Upon creation of a subscription, the following additional steps will\n> be managed by the subscriber:\n> i) The subscriber will retrieve the list of sequences associated with\n> the subscription's publications.\n> ii) For each sequence: a) Retrieve the sequence value from the\n> publisher by invoking the pg_sequence_state function. b) Set the\n> sequence with the value obtained from the publisher. 
iv) Once the\n> subscription creation is completed, all sequence values will become\n> visible at the subscriber's end.\n>\n> An alternative design approach could involve retrieving the sequence\n> list from the publisher during subscription creation and inserting the\n> sequences with an \"init\" state into the pg_subscription_rel system\n> table. These tasks could be executed by a single sequence sync worker,\n> which would:\n> i) Retrieve the list of sequences in the \"init\" state from the\n> pg_subscription_rel system table.\n> ii) Initiate a transaction.\n> iii) For each sequence: a) Obtain the sequence value from the\n> publisher by utilizing the pg_sequence_state function. b) Update the\n> sequence with the value obtained from the publisher.\n> iv) Commit the transaction.\n>\n> The benefit with the second approach is that if there are large number\n> of sequences, the sequence sync can be enhanced to happen in parallel\n> and also if there are any locks held on the sequences in the\n> publisher, the sequence worker can wait to acquire the lock instead of\n> blocking the whole create subscription command which will delay the\n> initial copy of the tables too.\n\nYeah w.r.t. this point second approach seems better.\n\n> 3) Refreshing the sequence can be achieved through the existing\n> command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change\n> here).\n> The subscriber identifies stale sequences, meaning sequences present\n> in pg_subscription_rel but absent from the publication, and removes\n> them from the pg_subscription_rel system table. The subscriber also\n> checks for newly added sequences in the publisher and synchronizes\n> their values from the publisher using the steps outlined in the\n> subscription creation process. It's worth noting that previously\n> synchronized sequences won't be synchronized again; the sequence sync\n> will occur solely for the newly added sequences.\n\n> 4) Introducing a new command for refreshing all sequences: ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.\n> The subscriber will remove stale sequences and add newly added\n> sequences from the publisher. Following this, it will re-synchronize\n> the sequence values for all sequences in the updated list from the\n> publisher, following the steps outlined in the subscription creation\n> process.\n\nOkay, this answers my first question: we will remove the sequences\nthat are removed from the publisher and add the new sequences. I don't\nsee any problem with this, but doesn't it seem like we are effectively\ndoing DDL replication only for sequences without having a\ncomprehensive plan for overall DDL replication?\n\n> 5) Incorporate the pg_sequence_state function to fetch the sequence\n> value from the publisher, along with the page LSN. Incorporate\n> SetSequence function, which will procure a new relfilenode for the\n> sequence and set the new relfilenode with the specified value. This\n> will facilitate rollback in case of any failures.\n\nI do not understand this point, you mean whenever we are fetching the\nsequence value from the publisher we need to create a new relfilenode\non the subscriber? Why not just update the catalog tuple is\nsufficient? 
Or this is for handling the ALTER SEQUENCE case?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:51:29 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 12, 2024 at 10:44 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Jun 11, 2024 at 7:36 PM vignesh C <[email protected]> wrote:\n> >\n> > 1) CREATE PUBLICATION syntax enhancement:\n> > CREATE PUBLICATION ... FOR ALL SEQUENCES;\n> > The addition of a new column titled \"all sequences\" in the\n> > pg_publication system table will signify whether the publication is\n> > designated as all sequences publication or not.\n> >\n>\n> The first approach sounds like we don't create entries for sequences\n> in pg_subscription_rel. In this case, how do we know all sequences\n> that we need to refresh when executing the REFRESH PUBLICATION\n> SEQUENCES command you mentioned below?\n>\n\nAs per my understanding, we should be creating entries for sequences\nin pg_subscription_rel similar to tables. The difference would be that\nwe won't need all the sync_states (i = initialize, d = data is being\ncopied, f = finished table copy, s = synchronized, r = ready) as we\ndon't need any synchronization with apply workers.\n\n> > 2) CREATE SUBSCRIPTION -- no syntax change.\n> > Upon creation of a subscription, the following additional steps will\n> > be managed by the subscriber:\n> > i) The subscriber will retrieve the list of sequences associated with\n> > the subscription's publications.\n> > ii) For each sequence: a) Retrieve the sequence value from the\n> > publisher by invoking the pg_sequence_state function. b) Set the\n> > sequence with the value obtained from the publisher. iv) Once the\n> > subscription creation is completed, all sequence values will become\n> > visible at the subscriber's end.\n>\n> Sequence values are always copied from the publisher? or does it\n> happen only when copy_data = true?\n>\n\nIt is better to do it when \"copy_data = true\" to keep it compatible\nwith the table's behavior.\n\n> >\n> > An alternative design approach could involve retrieving the sequence\n> > list from the publisher during subscription creation and inserting the\n> > sequences with an \"init\" state into the pg_subscription_rel system\n> > table. These tasks could be executed by a single sequence sync worker,\n> > which would:\n> > i) Retrieve the list of sequences in the \"init\" state from the\n> > pg_subscription_rel system table.\n> > ii) Initiate a transaction.\n> > iii) For each sequence: a) Obtain the sequence value from the\n> > publisher by utilizing the pg_sequence_state function. b) Update the\n> > sequence with the value obtained from the publisher.\n> > iv) Commit the transaction.\n> >\n> > The benefit with the second approach is that if there are large number\n> > of sequences, the sequence sync can be enhanced to happen in parallel\n> > and also if there are any locks held on the sequences in the\n> > publisher, the sequence worker can wait to acquire the lock instead of\n> > blocking the whole create subscription command which will delay the\n> > initial copy of the tables too.\n>\n> I prefer to have separate workers to sync sequences.\n>\n\n+1.\n\n> Probably we can\n> start with a single worker and extend it to have multiple workers.\n\nYeah, starting with a single worker sounds good for now. 
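\nIn the simplest form, what that one worker does per cycle is roughly the\nequivalent of the following (illustrative SQL only; the values are\nplaceholders, and the real worker would go through the proposed\npg_sequence_state()/sequence-update code rather than the pg_sequences view\nand setval()):\n\n-- one round trip to the publisher for the current values\nSELECT schemaname, sequencename, last_value FROM pg_sequences;\n\n-- apply them on the subscriber in a single transaction\nBEGIN;\nSELECT setval('public.s1', 42, true);\nSELECT setval('public.s2', 1000, true);\nCOMMIT;\n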
Do you think\nwe should sync all the sequences in a single transaction or have some\nthreshold value above which a different transaction would be required\nor maybe a different sequence sync worker altogether? Now, having\nmultiple sequence-sync workers requires some synchronization so that\nonly a single worker is allocated for one sequence.\n\nThe simplest thing is to use a single sequence sync worker that syncs\nall sequences in one transaction but with a large number of sequences,\nit could be inefficient. OTOH, I am not sure if it would be a problem\nin reality.\n\n>\n> BTW\n> the sequence-sync worker will be taken from\n> max_sync_workers_per_subscription pool?\n>\n\nI think so.\n\n> Or yet another idea I came up with is that a tablesync worker will\n> synchronize both the table and sequences owned by the table. That is,\n> after the tablesync worker caught up with the apply worker, the\n> tablesync worker synchronizes sequences associated with the target\n> table as well. One benefit would be that at the time of initial table\n> sync being completed, the table and its sequence data are consistent.\n> As soon as new changes come to the table, it would become inconsistent\n> so it might not be helpful much, though. Also, sequences that are not\n> owned by any table will still need to be synchronized by someone.\n>\n\nThe other thing to consider in this idea is that we somehow need to\ndistinguish the sequences owned by the table.\n\n> >\n> > 3) Refreshing the sequence can be achieved through the existing\n> > command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change\n> > here).\n> > The subscriber identifies stale sequences, meaning sequences present\n> > in pg_subscription_rel but absent from the publication, and removes\n> > them from the pg_subscription_rel system table. The subscriber also\n> > checks for newly added sequences in the publisher and synchronizes\n> > their values from the publisher using the steps outlined in the\n> > subscription creation process. It's worth noting that previously\n> > synchronized sequences won't be synchronized again; the sequence sync\n> > will occur solely for the newly added sequences.\n> >\n> > 4) Introducing a new command for refreshing all sequences: ALTER\n> > SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.\n> > The subscriber will remove stale sequences and add newly added\n> > sequences from the publisher. Following this, it will re-synchronize\n> > the sequence values for all sequences in the updated list from the\n> > publisher, following the steps outlined in the subscription creation\n> > process.\n>\n> The difference between 3) and 4) is whether or not to re-synchronize\n> the previously synchronized sequences. Do we really want to introduce\n> a new command for 4)? I felt that we can invent an option say\n> copy_all_sequence for the REFRESH PUBLICATION command to cover the 4)\n> case.\n>\n\nYeah, that is also an option but it could confuse along with copy_data\noption. Say the user has selected copy_data = false but\ncopy_all_sequences = true then the first option indicates to *not*\ncopy the data of table and sequences and the second option indicates\nto copy the sequences data which sounds contradictory. 
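To make that concrete, the confusing combination would be something\nlike the below (the option name here is only the one proposed above,\nnothing is settled yet):\n\nALTER SUBSCRIPTION sub1 REFRESH PUBLICATION\n    WITH (copy_data = false, copy_all_sequences = true);\n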
The other idea\nis to have an option copy_existing_sequences (which indicates to copy\nexisting sequence values) but that also has somewhat the same drawback\nas copy_all_sequences but to a lesser degree.\n\n> >\n> > 5) Incorporate the pg_sequence_state function to fetch the sequence\n> > value from the publisher, along with the page LSN. Incorporate\n> > SetSequence function, which will procure a new relfilenode for the\n> > sequence and set the new relfilenode with the specified value. This\n> > will facilitate rollback in case of any failures.\n>\n> Does it mean that we create a new relfilenode for every update of the value?\n>\n\nWe need it for initial sync so that if there is an error both the\nsequence state in pg_subscription_rel and sequence values can be\nrolled back together. However, it is unclear whether we need to create\na new relfilenode while copying existing sequences (say during ALTER\nSUBSCRIPTION .. REFRESH PUBLICATION SEQUENCES, or whatever command we\ndecide)? Probably the answer lies in how we want to implement this\ncommand. If we want to copy all sequence values during the command\nitself then it is probably okay but if we want to handover this task\nto the sequence-sync worker then we need some state management and a\nnew relfilenode so that on error both state and sequence values are\nrolled back.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 Jun 2024 15:29:00 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 12 Jun 2024 at 10:51, Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jun 11, 2024 at 4:06 PM vignesh C <[email protected]> wrote:\n> >\n> > Amit and I engaged in an offline discussion regarding the design and\n> > contemplated that it could be like below:\n>\n> If I understand correctly, does this require the sequences to already\n> exist on the subscribing node before creating the subscription, or\n> will it also copy any non-existing sequences?\n\nSequences must exist in the subscriber; we'll synchronize only their\nvalues. Any sequences that are not present in the subscriber will\ntrigger an error.\n\n> > 1) CREATE PUBLICATION syntax enhancement:\n> > CREATE PUBLICATION ... FOR ALL SEQUENCES;\n> > The addition of a new column titled \"all sequences\" in the\n> > pg_publication system table will signify whether the publication is\n> > designated as all sequences publication or not.\n> >\n> > 2) CREATE SUBSCRIPTION -- no syntax change.\n> > Upon creation of a subscription, the following additional steps will\n> > be managed by the subscriber:\n> > i) The subscriber will retrieve the list of sequences associated with\n> > the subscription's publications.\n> > ii) For each sequence: a) Retrieve the sequence value from the\n> > publisher by invoking the pg_sequence_state function. b) Set the\n> > sequence with the value obtained from the publisher. iv) Once the\n> > subscription creation is completed, all sequence values will become\n> > visible at the subscriber's end.\n> >\n> > An alternative design approach could involve retrieving the sequence\n> > list from the publisher during subscription creation and inserting the\n> > sequences with an \"init\" state into the pg_subscription_rel system\n> > table. 
These tasks could be executed by a single sequence sync worker,\n> > which would:\n> > i) Retrieve the list of sequences in the \"init\" state from the\n> > pg_subscription_rel system table.\n> > ii) Initiate a transaction.\n> > iii) For each sequence: a) Obtain the sequence value from the\n> > publisher by utilizing the pg_sequence_state function. b) Update the\n> > sequence with the value obtained from the publisher.\n> > iv) Commit the transaction.\n> >\n> > The benefit with the second approach is that if there are large number\n> > of sequences, the sequence sync can be enhanced to happen in parallel\n> > and also if there are any locks held on the sequences in the\n> > publisher, the sequence worker can wait to acquire the lock instead of\n> > blocking the whole create subscription command which will delay the\n> > initial copy of the tables too.\n>\n> Yeah w.r.t. this point second approach seems better.\n\nok\n\n> > 3) Refreshing the sequence can be achieved through the existing\n> > command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change\n> > here).\n> > The subscriber identifies stale sequences, meaning sequences present\n> > in pg_subscription_rel but absent from the publication, and removes\n> > them from the pg_subscription_rel system table. The subscriber also\n> > checks for newly added sequences in the publisher and synchronizes\n> > their values from the publisher using the steps outlined in the\n> > subscription creation process. It's worth noting that previously\n> > synchronized sequences won't be synchronized again; the sequence sync\n> > will occur solely for the newly added sequences.\n>\n> > 4) Introducing a new command for refreshing all sequences: ALTER\n> > SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.\n> > The subscriber will remove stale sequences and add newly added\n> > sequences from the publisher. Following this, it will re-synchronize\n> > the sequence values for all sequences in the updated list from the\n> > publisher, following the steps outlined in the subscription creation\n> > process.\n>\n> Okay, this answers my first question: we will remove the sequences\n> that are removed from the publisher and add the new sequences. I don't\n> see any problem with this, but doesn't it seem like we are effectively\n> doing DDL replication only for sequences without having a\n> comprehensive plan for overall DDL replication?\n\nWhat I intended to convey is that we'll eliminate the sequences from\npg_subscription_rel. We won't facilitate the DDL replication of\nsequences; instead, we anticipate users to create the sequences\nthemselves.\n\n> > 5) Incorporate the pg_sequence_state function to fetch the sequence\n> > value from the publisher, along with the page LSN. Incorporate\n> > SetSequence function, which will procure a new relfilenode for the\n> > sequence and set the new relfilenode with the specified value. This\n> > will facilitate rollback in case of any failures.\n>\n> I do not understand this point, you mean whenever we are fetching the\n> sequence value from the publisher we need to create a new relfilenode\n> on the subscriber? Why not just update the catalog tuple is\n> sufficient? Or this is for handling the ALTER SEQUENCE case?\n\nSequences operate distinctively from tables. Alterations to sequences\nreflect instantly in another session, even before committing the\ntransaction. To ensure the synchronization of sequence value and state\nupdates in pg_subscription_rel, we assign it a new relfilenode. 
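(A quick way to see this behaviour with some existing sequence, say\nseq1 -- plain sequence updates are non-transactional and cannot be\nrolled back in place:\n\nBEGIN;\nSELECT setval('seq1', 500);\nROLLBACK;\nSELECT last_value FROM seq1;   -- still 500, the setval survived\n\nhence writing the fetched value into a fresh relfilenode that becomes\nvisible only on commit.)\n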
This\nstrategy ensures that any potential errors allow for the rollback of\nboth the sequence state in pg_subscription_rel and the sequence values\nsimultaneously.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 12 Jun 2024 16:08:04 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 12, 2024 at 4:08 PM vignesh C <[email protected]> wrote:\n>\n> On Wed, 12 Jun 2024 at 10:51, Dilip Kumar <[email protected]> wrote:\n> >\n> > On Tue, Jun 11, 2024 at 4:06 PM vignesh C <[email protected]> wrote:\n> > >\n> > > Amit and I engaged in an offline discussion regarding the design and\n> > > contemplated that it could be like below:\n> >\n> > If I understand correctly, does this require the sequences to already\n> > exist on the subscribing node before creating the subscription, or\n> > will it also copy any non-existing sequences?\n>\n> Sequences must exist in the subscriber; we'll synchronize only their\n> values. Any sequences that are not present in the subscriber will\n> trigger an error.\n\nOkay, that makes sense.\n\n>\n> > > 3) Refreshing the sequence can be achieved through the existing\n> > > command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change\n> > > here).\n> > > The subscriber identifies stale sequences, meaning sequences present\n> > > in pg_subscription_rel but absent from the publication, and removes\n> > > them from the pg_subscription_rel system table. The subscriber also\n> > > checks for newly added sequences in the publisher and synchronizes\n> > > their values from the publisher using the steps outlined in the\n> > > subscription creation process. It's worth noting that previously\n> > > synchronized sequences won't be synchronized again; the sequence sync\n> > > will occur solely for the newly added sequences.\n> >\n> > > 4) Introducing a new command for refreshing all sequences: ALTER\n> > > SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.\n> > > The subscriber will remove stale sequences and add newly added\n> > > sequences from the publisher. Following this, it will re-synchronize\n> > > the sequence values for all sequences in the updated list from the\n> > > publisher, following the steps outlined in the subscription creation\n> > > process.\n> >\n> > Okay, this answers my first question: we will remove the sequences\n> > that are removed from the publisher and add the new sequences. I don't\n> > see any problem with this, but doesn't it seem like we are effectively\n> > doing DDL replication only for sequences without having a\n> > comprehensive plan for overall DDL replication?\n>\n> What I intended to convey is that we'll eliminate the sequences from\n> pg_subscription_rel. We won't facilitate the DDL replication of\n> sequences; instead, we anticipate users to create the sequences\n> themselves.\n\nhmm okay.\n\n> > > 5) Incorporate the pg_sequence_state function to fetch the sequence\n> > > value from the publisher, along with the page LSN. Incorporate\n> > > SetSequence function, which will procure a new relfilenode for the\n> > > sequence and set the new relfilenode with the specified value. This\n> > > will facilitate rollback in case of any failures.\n> >\n> > I do not understand this point, you mean whenever we are fetching the\n> > sequence value from the publisher we need to create a new relfilenode\n> > on the subscriber? Why not just update the catalog tuple is\n> > sufficient? 
Or this is for handling the ALTER SEQUENCE case?\n>\n> Sequences operate distinctively from tables. Alterations to sequences\n> reflect instantly in another session, even before committing the\n> transaction. To ensure the synchronization of sequence value and state\n> updates in pg_subscription_rel, we assign it a new relfilenode. This\n> strategy ensures that any potential errors allow for the rollback of\n> both the sequence state in pg_subscription_rel and the sequence values\n> simultaneously.\n\nSo, you're saying that when we synchronize the sequence values on the\nsubscriber side, we will create a new relfilenode to allow reverting\nto the old state of the sequence in case of an error or transaction\nrollback? But why would we want to do that? Generally, even if you\ncall nextval() on a sequence and then roll back the transaction, the\nsequence value doesn't revert to the old value. So, what specific\nproblem on the subscriber side are we trying to avoid by operating on\na new relfilenode?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 17:08:49 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 12 Jun 2024 at 17:09, Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Jun 12, 2024 at 4:08 PM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 12 Jun 2024 at 10:51, Dilip Kumar <[email protected]> wrote:\n> > >\n> > > On Tue, Jun 11, 2024 at 4:06 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > Amit and I engaged in an offline discussion regarding the design and\n> > > > contemplated that it could be like below:\n> > >\n> > > If I understand correctly, does this require the sequences to already\n> > > exist on the subscribing node before creating the subscription, or\n> > > will it also copy any non-existing sequences?\n> >\n> > Sequences must exist in the subscriber; we'll synchronize only their\n> > values. Any sequences that are not present in the subscriber will\n> > trigger an error.\n>\n> Okay, that makes sense.\n>\n> >\n> > > > 3) Refreshing the sequence can be achieved through the existing\n> > > > command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change\n> > > > here).\n> > > > The subscriber identifies stale sequences, meaning sequences present\n> > > > in pg_subscription_rel but absent from the publication, and removes\n> > > > them from the pg_subscription_rel system table. The subscriber also\n> > > > checks for newly added sequences in the publisher and synchronizes\n> > > > their values from the publisher using the steps outlined in the\n> > > > subscription creation process. It's worth noting that previously\n> > > > synchronized sequences won't be synchronized again; the sequence sync\n> > > > will occur solely for the newly added sequences.\n> > >\n> > > > 4) Introducing a new command for refreshing all sequences: ALTER\n> > > > SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.\n> > > > The subscriber will remove stale sequences and add newly added\n> > > > sequences from the publisher. Following this, it will re-synchronize\n> > > > the sequence values for all sequences in the updated list from the\n> > > > publisher, following the steps outlined in the subscription creation\n> > > > process.\n> > >\n> > > Okay, this answers my first question: we will remove the sequences\n> > > that are removed from the publisher and add the new sequences. 
I don't\n> > > see any problem with this, but doesn't it seem like we are effectively\n> > > doing DDL replication only for sequences without having a\n> > > comprehensive plan for overall DDL replication?\n> >\n> > What I intended to convey is that we'll eliminate the sequences from\n> > pg_subscription_rel. We won't facilitate the DDL replication of\n> > sequences; instead, we anticipate users to create the sequences\n> > themselves.\n>\n> hmm okay.\n>\n> > > > 5) Incorporate the pg_sequence_state function to fetch the sequence\n> > > > value from the publisher, along with the page LSN. Incorporate\n> > > > SetSequence function, which will procure a new relfilenode for the\n> > > > sequence and set the new relfilenode with the specified value. This\n> > > > will facilitate rollback in case of any failures.\n> > >\n> > > I do not understand this point, you mean whenever we are fetching the\n> > > sequence value from the publisher we need to create a new relfilenode\n> > > on the subscriber? Why not just update the catalog tuple is\n> > > sufficient? Or this is for handling the ALTER SEQUENCE case?\n> >\n> > Sequences operate distinctively from tables. Alterations to sequences\n> > reflect instantly in another session, even before committing the\n> > transaction. To ensure the synchronization of sequence value and state\n> > updates in pg_subscription_rel, we assign it a new relfilenode. This\n> > strategy ensures that any potential errors allow for the rollback of\n> > both the sequence state in pg_subscription_rel and the sequence values\n> > simultaneously.\n>\n> So, you're saying that when we synchronize the sequence values on the\n> subscriber side, we will create a new relfilenode to allow reverting\n> to the old state of the sequence in case of an error or transaction\n> rollback? But why would we want to do that? Generally, even if you\n> call nextval() on a sequence and then roll back the transaction, the\n> sequence value doesn't revert to the old value. So, what specific\n> problem on the subscriber side are we trying to avoid by operating on\n> a new relfilenode?\n\nLet's consider a situation where we have two sequences: seq1 with a\nvalue of 100 and seq2 with a value of 200. Now, let's say seq1 is\nsynced and updated to 100, then we attempt to synchronize seq2,\nthere's a failure due to the sequence not existing or encountering\nsome other issue. In this scenario, we don't want to halt operations\nwhere seq1 is synchronized, but the sequence state for sequence isn't\nchanged to \"ready\" in pg_subscription_rel.\nUpdating the sequence data directly reflects the sequence change\nimmediately. However, if we assign a new relfile node for the sequence\nand update the sequence value for the new relfile node, until the\ntransaction is committed, other concurrent users will still be\nutilizing the old relfile node for the sequence, and only the old data\nwill be visible. Once all sequences are synchronized, and the sequence\nstate is updated in pg_subscription_rel, the transaction will either\nbe committed or aborted. 
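(Just to sketch the flow being discussed here in pseudo-SQL, nothing\nmore than that:\n\nBEGIN;\n-- for each sequence still in the 'init' state:\n--   read the value from the publisher via pg_sequence_state()\n--   SetSequence(): new relfilenode + write that value into it\n--   mark srsubstate = 'r' for the sequence in pg_subscription_rel\nCOMMIT;  -- a ROLLBACK discards the new relfilenodes as well\n)\n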
If committed, users will be able to observe\nthe new sequence values because the sequences will be updated with the\nnew relfile node containing the updated sequence value.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 13 Jun 2024 10:10:24 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 13, 2024 at 10:10 AM vignesh C <[email protected]> wrote:\n>\n> > So, you're saying that when we synchronize the sequence values on the\n> > subscriber side, we will create a new relfilenode to allow reverting\n> > to the old state of the sequence in case of an error or transaction\n> > rollback? But why would we want to do that? Generally, even if you\n> > call nextval() on a sequence and then roll back the transaction, the\n> > sequence value doesn't revert to the old value. So, what specific\n> > problem on the subscriber side are we trying to avoid by operating on\n> > a new relfilenode?\n>\n> Let's consider a situation where we have two sequences: seq1 with a\n> value of 100 and seq2 with a value of 200. Now, let's say seq1 is\n> synced and updated to 100, then we attempt to synchronize seq2,\n> there's a failure due to the sequence not existing or encountering\n> some other issue. In this scenario, we don't want to halt operations\n> where seq1 is synchronized, but the sequence state for sequence isn't\n> changed to \"ready\" in pg_subscription_rel.\n\nThanks for the explanation, but I am still not getting it completely,\ndo you mean to say unless all the sequences are not synced any of the\nsequences would not be marked \"ready\" in pg_subscription_rel? Is that\nnecessary? I mean why we can not sync the sequences one by one and\nmark them ready? Why it is necessary to either have all the sequences\nsynced or none of them?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 10:27:21 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 13 Jun 2024 at 10:27, Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Jun 13, 2024 at 10:10 AM vignesh C <[email protected]> wrote:\n> >\n> > > So, you're saying that when we synchronize the sequence values on the\n> > > subscriber side, we will create a new relfilenode to allow reverting\n> > > to the old state of the sequence in case of an error or transaction\n> > > rollback? But why would we want to do that? Generally, even if you\n> > > call nextval() on a sequence and then roll back the transaction, the\n> > > sequence value doesn't revert to the old value. So, what specific\n> > > problem on the subscriber side are we trying to avoid by operating on\n> > > a new relfilenode?\n> >\n> > Let's consider a situation where we have two sequences: seq1 with a\n> > value of 100 and seq2 with a value of 200. Now, let's say seq1 is\n> > synced and updated to 100, then we attempt to synchronize seq2,\n> > there's a failure due to the sequence not existing or encountering\n> > some other issue. In this scenario, we don't want to halt operations\n> > where seq1 is synchronized, but the sequence state for sequence isn't\n> > changed to \"ready\" in pg_subscription_rel.\n>\n> Thanks for the explanation, but I am still not getting it completely,\n> do you mean to say unless all the sequences are not synced any of the\n> sequences would not be marked \"ready\" in pg_subscription_rel? 
Is that\n> necessary? I mean why we can not sync the sequences one by one and\n> mark them ready? Why it is necessary to either have all the sequences\n> synced or none of them?\n\nSince updating the sequence is one operation and setting\npg_subscription_rel is another, I was trying to avoid a situation\nwhere the sequence is updated but its state is not reflected in\npg_subscription_rel. It seems you are suggesting that it's acceptable\nfor the sequence to be updated even if its state isn't updated in\npg_subscription_rel, and in such cases, the sequence value does not\nneed to be reverted.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 13 Jun 2024 11:53:34 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 13, 2024 at 11:53 AM vignesh C <[email protected]> wrote:\n>\n> On Thu, 13 Jun 2024 at 10:27, Dilip Kumar <[email protected]> wrote:\n\n> > Thanks for the explanation, but I am still not getting it completely,\n> > do you mean to say unless all the sequences are not synced any of the\n> > sequences would not be marked \"ready\" in pg_subscription_rel? Is that\n> > necessary? I mean why we can not sync the sequences one by one and\n> > mark them ready? Why it is necessary to either have all the sequences\n> > synced or none of them?\n>\n> Since updating the sequence is one operation and setting\n> pg_subscription_rel is another, I was trying to avoid a situation\n> where the sequence is updated but its state is not reflected in\n> pg_subscription_rel. It seems you are suggesting that it's acceptable\n> for the sequence to be updated even if its state isn't updated in\n> pg_subscription_rel, and in such cases, the sequence value does not\n> need to be reverted.\n\nRight, the complexity we're adding to achieve a behavior that may not\nbe truly desirable is a concern. For instance, if we mark the status\nas ready but do not sync the sequences, it could lead to issues.\nHowever, if we have synced some sequences but encounter a failure\nwithout marking the status as ready, I don't consider it inconsistent\nin any way. But anyway, now I understand your thinking behind that so\nit's a good idea to leave this design behavior for a later decision.\nGathering more opinions and insights during later stages will provide\na clearer perspective on how to proceed with this aspect. Thanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 12:12:20 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 12, 2024 at 6:59 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jun 12, 2024 at 10:44 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Jun 11, 2024 at 7:36 PM vignesh C <[email protected]> wrote:\n> > >\n> > > 1) CREATE PUBLICATION syntax enhancement:\n> > > CREATE PUBLICATION ... FOR ALL SEQUENCES;\n> > > The addition of a new column titled \"all sequences\" in the\n> > > pg_publication system table will signify whether the publication is\n> > > designated as all sequences publication or not.\n> > >\n> >\n> > The first approach sounds like we don't create entries for sequences\n> > in pg_subscription_rel. 
In this case, how do we know all sequences\n> > that we need to refresh when executing the REFRESH PUBLICATION\n> > SEQUENCES command you mentioned below?\n> >\n>\n> As per my understanding, we should be creating entries for sequences\n> in pg_subscription_rel similar to tables. The difference would be that\n> we won't need all the sync_states (i = initialize, d = data is being\n> copied, f = finished table copy, s = synchronized, r = ready) as we\n> don't need any synchronization with apply workers.\n\nAgreed.\n\n>\n> > > 2) CREATE SUBSCRIPTION -- no syntax change.\n> > > Upon creation of a subscription, the following additional steps will\n> > > be managed by the subscriber:\n> > > i) The subscriber will retrieve the list of sequences associated with\n> > > the subscription's publications.\n> > > ii) For each sequence: a) Retrieve the sequence value from the\n> > > publisher by invoking the pg_sequence_state function. b) Set the\n> > > sequence with the value obtained from the publisher. iv) Once the\n> > > subscription creation is completed, all sequence values will become\n> > > visible at the subscriber's end.\n> >\n> > Sequence values are always copied from the publisher? or does it\n> > happen only when copy_data = true?\n> >\n>\n> It is better to do it when \"copy_data = true\" to keep it compatible\n> with the table's behavior.\n\n+1\n\n>\n> > Probably we can\n> > start with a single worker and extend it to have multiple workers.\n>\n> Yeah, starting with a single worker sounds good for now. Do you think\n> we should sync all the sequences in a single transaction or have some\n> threshold value above which a different transaction would be required\n> or maybe a different sequence sync worker altogether? Now, having\n> multiple sequence-sync workers requires some synchronization so that\n> only a single worker is allocated for one sequence.\n>\n> The simplest thing is to use a single sequence sync worker that syncs\n> all sequences in one transaction but with a large number of sequences,\n> it could be inefficient. OTOH, I am not sure if it would be a problem\n> in reality.\n\nI think that we can start with using a single worker and one\ntransaction, and measure the performance with a large number of\nsequences.\n\n> > Or yet another idea I came up with is that a tablesync worker will\n> > synchronize both the table and sequences owned by the table. That is,\n> > after the tablesync worker caught up with the apply worker, the\n> > tablesync worker synchronizes sequences associated with the target\n> > table as well. One benefit would be that at the time of initial table\n> > sync being completed, the table and its sequence data are consistent.\n\nCorrection; it's not guaranteed that the sequence data and table data\nare consistent even in this case since the tablesync worker could get\non-disk sequence data that might have already been updated.\n\n> > As soon as new changes come to the table, it would become inconsistent\n> > so it might not be helpful much, though. Also, sequences that are not\n> > owned by any table will still need to be synchronized by someone.\n> >\n>\n> The other thing to consider in this idea is that we somehow need to\n> distinguish the sequences owned by the table.\n\nI think we can check pg_depend. The owned sequences reference to the table.\n\n>\n> > >\n> > > 3) Refreshing the sequence can be achieved through the existing\n> > > command: ALTER SUBSCRIPTION ... 
REFRESH PUBLICATION(no syntax change\n> > > here).\n> > > The subscriber identifies stale sequences, meaning sequences present\n> > > in pg_subscription_rel but absent from the publication, and removes\n> > > them from the pg_subscription_rel system table. The subscriber also\n> > > checks for newly added sequences in the publisher and synchronizes\n> > > their values from the publisher using the steps outlined in the\n> > > subscription creation process. It's worth noting that previously\n> > > synchronized sequences won't be synchronized again; the sequence sync\n> > > will occur solely for the newly added sequences.\n> > >\n> > > 4) Introducing a new command for refreshing all sequences: ALTER\n> > > SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.\n> > > The subscriber will remove stale sequences and add newly added\n> > > sequences from the publisher. Following this, it will re-synchronize\n> > > the sequence values for all sequences in the updated list from the\n> > > publisher, following the steps outlined in the subscription creation\n> > > process.\n> >\n> > The difference between 3) and 4) is whether or not to re-synchronize\n> > the previously synchronized sequences. Do we really want to introduce\n> > a new command for 4)? I felt that we can invent an option say\n> > copy_all_sequence for the REFRESH PUBLICATION command to cover the 4)\n> > case.\n> >\n>\n> Yeah, that is also an option but it could confuse along with copy_data\n> option. Say the user has selected copy_data = false but\n> copy_all_sequences = true then the first option indicates to *not*\n> copy the data of table and sequences and the second option indicates\n> to copy the sequences data which sounds contradictory. The other idea\n> is to have an option copy_existing_sequences (which indicates to copy\n> existing sequence values) but that also has somewhat the same drawback\n> as copy_all_sequences but to a lesser degree.\n\nGood point. And I understood that the REFRESH PUBLICATION SEQUENCES\ncommand would be helpful when users want to synchronize sequences\nbetween two nodes before upgrading.\n\n>\n> > >\n> > > 5) Incorporate the pg_sequence_state function to fetch the sequence\n> > > value from the publisher, along with the page LSN. Incorporate\n> > > SetSequence function, which will procure a new relfilenode for the\n> > > sequence and set the new relfilenode with the specified value. This\n> > > will facilitate rollback in case of any failures.\n> >\n> > Does it mean that we create a new relfilenode for every update of the value?\n> >\n>\n> We need it for initial sync so that if there is an error both the\n> sequence state in pg_subscription_rel and sequence values can be\n> rolled back together.\n\nAgreed.\n\n> However, it is unclear whether we need to create\n> a new relfilenode while copying existing sequences (say during ALTER\n> SUBSCRIPTION .. REFRESH PUBLICATION SEQUENCES, or whatever command we\n> decide)? Probably the answer lies in how we want to implement this\n> command. If we want to copy all sequence values during the command\n> itself then it is probably okay but if we want to handover this task\n> to the sequence-sync worker then we need some state management and a\n> new relfilenode so that on error both state and sequence values are\n> rolled back.\n\nWhat state transition of pg_subscription_rel entries for sequences do\nwe need while copying sequences values? 
For example, we insert an\nentry with 'init' state at CREATE SUBSCRIPTION and then the\nsequence-sync worker updates to 'ready' and copies the sequence data.\nAnd at REFRESH PUBLICATION SEQUENCES, we update the state back to\n'init' again so that the sequence-sync worker can process it? Given\nREFRESH PUBLICATION SEQUENCES won't be executed very frequently, it\nmight be acceptable to transactionally update sequence values.\n\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:38:41 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 13, 2024 at 1:09 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jun 12, 2024 at 6:59 PM Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > Yeah, starting with a single worker sounds good for now. Do you think\n> > we should sync all the sequences in a single transaction or have some\n> > threshold value above which a different transaction would be required\n> > or maybe a different sequence sync worker altogether? Now, having\n> > multiple sequence-sync workers requires some synchronization so that\n> > only a single worker is allocated for one sequence.\n> >\n> > The simplest thing is to use a single sequence sync worker that syncs\n> > all sequences in one transaction but with a large number of sequences,\n> > it could be inefficient. OTOH, I am not sure if it would be a problem\n> > in reality.\n>\n> I think that we can start with using a single worker and one\n> transaction, and measure the performance with a large number of\n> sequences.\n>\n\nFair enough. However, this raises the question Dilip and Vignesh are\ndiscussing whether we need a new relfilenode for sequence update even\nduring initial sync? As per my understanding, the idea is that similar\nto tables, the CREATE SUBSCRIPTION command (with copy_data = true)\nwill create the new sequence entries in pg_subscription_rel with the\nstate as 'i'. Then the sequence-sync worker would start a transaction\nand one-by-one copy the latest sequence values for each sequence (that\nhas state as 'i' in pg_subscription_rel) and mark its state as ready\n'r' and commit the transaction. Now if there is an error during this\noperation it will restart the entire operation. The idea of creating a\nnew relfilenode is to handle the error so that if there is a rollback,\nthe sequence state will be rolled back to 'i' and the sequence value\nwill also be rolled back. The other option could be that we update the\nsequence value without a new relfilenode and if the transaction rolled\nback then only the sequence's state will be rolled back to 'i'. This\nwould work with a minor inconsistency that sequence values will be\nup-to-date even when the sequence state is 'i' in pg_subscription_rel.\nI am not sure if that matters because anyway, they can quickly be\nout-of-sync with the publisher again.\n\nNow, say we don't want to maintain the state of sequences for initial\nsync at all then after the error how will we detect if there are any\npending sequences to be synced? One possibility is that we maintain a\nsubscription level flag 'subsequencesync' in 'pg_subscription' to\nindicate whether sequences need sync. This flag would indicate whether\nto sync all the sequences in pg_susbcription_rel. This would mean that\nif there is an error while syncing the sequences we will resync all\nthe sequences again. 
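(So after a failure one would simply check something like\n\nSELECT subsequencesync FROM pg_subscription WHERE subname = 'sub1';\n\nand, if it shows the sequences are not yet synced, let the worker redo\nthe sync for every sequence in pg_subscription_rel; the column name is\nonly a placeholder for whatever we finally agree on.)\n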
This could be acceptable considering the chances\nof error during sequence sync are low. The benefit is that both the\nREFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same\nidea and sync all sequences without needing a new relfilenode. Users\ncan always refer 'subsequencesync' flag in 'pg_subscription' to see if\nall the sequences are synced after executing the command.\n\n> > > Or yet another idea I came up with is that a tablesync worker will\n> > > synchronize both the table and sequences owned by the table. That is,\n> > > after the tablesync worker caught up with the apply worker, the\n> > > tablesync worker synchronizes sequences associated with the target\n> > > table as well. One benefit would be that at the time of initial table\n> > > sync being completed, the table and its sequence data are consistent.\n>\n> Correction; it's not guaranteed that the sequence data and table data\n> are consistent even in this case since the tablesync worker could get\n> on-disk sequence data that might have already been updated.\n>\n\nThe benefit of this approach is not clear to me. Our aim is to sync\nall sequences before the upgrade, so not sure if this helps because\nanyway both table values and corresponding sequences can again be\nout-of-sync very quickly.\n\n> >\n> > > >\n> > > > 3) Refreshing the sequence can be achieved through the existing\n> > > > command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change\n> > > > here).\n> > > > The subscriber identifies stale sequences, meaning sequences present\n> > > > in pg_subscription_rel but absent from the publication, and removes\n> > > > them from the pg_subscription_rel system table. The subscriber also\n> > > > checks for newly added sequences in the publisher and synchronizes\n> > > > their values from the publisher using the steps outlined in the\n> > > > subscription creation process. It's worth noting that previously\n> > > > synchronized sequences won't be synchronized again; the sequence sync\n> > > > will occur solely for the newly added sequences.\n> > > >\n> > > > 4) Introducing a new command for refreshing all sequences: ALTER\n> > > > SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.\n> > > > The subscriber will remove stale sequences and add newly added\n> > > > sequences from the publisher. Following this, it will re-synchronize\n> > > > the sequence values for all sequences in the updated list from the\n> > > > publisher, following the steps outlined in the subscription creation\n> > > > process.\n> > >\n> > > The difference between 3) and 4) is whether or not to re-synchronize\n> > > the previously synchronized sequences. Do we really want to introduce\n> > > a new command for 4)? I felt that we can invent an option say\n> > > copy_all_sequence for the REFRESH PUBLICATION command to cover the 4)\n> > > case.\n> > >\n> >\n> > Yeah, that is also an option but it could confuse along with copy_data\n> > option. Say the user has selected copy_data = false but\n> > copy_all_sequences = true then the first option indicates to *not*\n> > copy the data of table and sequences and the second option indicates\n> > to copy the sequences data which sounds contradictory. The other idea\n> > is to have an option copy_existing_sequences (which indicates to copy\n> > existing sequence values) but that also has somewhat the same drawback\n> > as copy_all_sequences but to a lesser degree.\n>\n> Good point. 
And I understood that the REFRESH PUBLICATION SEQUENCES\n> command would be helpful when users want to synchronize sequences\n> between two nodes before upgrading.\n>\n\nRight.\n\n> >\n> > > >\n> > > > 5) Incorporate the pg_sequence_state function to fetch the sequence\n> > > > value from the publisher, along with the page LSN. Incorporate\n> > > > SetSequence function, which will procure a new relfilenode for the\n> > > > sequence and set the new relfilenode with the specified value. This\n> > > > will facilitate rollback in case of any failures.\n> > >\n> > > Does it mean that we create a new relfilenode for every update of the value?\n> > >\n> >\n> > We need it for initial sync so that if there is an error both the\n> > sequence state in pg_subscription_rel and sequence values can be\n> > rolled back together.\n>\n> Agreed.\n>\n> > However, it is unclear whether we need to create\n> > a new relfilenode while copying existing sequences (say during ALTER\n> > SUBSCRIPTION .. REFRESH PUBLICATION SEQUENCES, or whatever command we\n> > decide)? Probably the answer lies in how we want to implement this\n> > command. If we want to copy all sequence values during the command\n> > itself then it is probably okay but if we want to handover this task\n> > to the sequence-sync worker then we need some state management and a\n> > new relfilenode so that on error both state and sequence values are\n> > rolled back.\n>\n> What state transition of pg_subscription_rel entries for sequences do\n> we need while copying sequences values? For example, we insert an\n> entry with 'init' state at CREATE SUBSCRIPTION and then the\n> sequence-sync worker updates to 'ready' and copies the sequence data.\n> And at REFRESH PUBLICATION SEQUENCES, we update the state back to\n> 'init' again so that the sequence-sync worker can process it? Given\n> REFRESH PUBLICATION SEQUENCES won't be executed very frequently, it\n> might be acceptable to transactionally update sequence values.\n>\n\nDo you mean that sync the sequences during the REFRESH PUBLICATION\nSEQUENCES command itself? If so, there is an argument that we can do\nthe same during CREATE SUBSCRIPTION. It would be beneficial to keep\nthe method to sync the sequences same for both the CREATE and REFRESH\ncommands. I have speculated on one idea above and would be happy to\nsee your thoughts.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 Jun 2024 15:36:05 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 13, 2024 at 7:06 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jun 13, 2024 at 1:09 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Jun 12, 2024 at 6:59 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > >\n> > > Yeah, starting with a single worker sounds good for now. Do you think\n> > > we should sync all the sequences in a single transaction or have some\n> > > threshold value above which a different transaction would be required\n> > > or maybe a different sequence sync worker altogether? Now, having\n> > > multiple sequence-sync workers requires some synchronization so that\n> > > only a single worker is allocated for one sequence.\n> > >\n> > > The simplest thing is to use a single sequence sync worker that syncs\n> > > all sequences in one transaction but with a large number of sequences,\n> > > it could be inefficient. 
OTOH, I am not sure if it would be a problem\n> > > in reality.\n> >\n> > I think that we can start with using a single worker and one\n> > transaction, and measure the performance with a large number of\n> > sequences.\n> >\n>\n> Fair enough. However, this raises the question Dilip and Vignesh are\n> discussing whether we need a new relfilenode for sequence update even\n> during initial sync? As per my understanding, the idea is that similar\n> to tables, the CREATE SUBSCRIPTION command (with copy_data = true)\n> will create the new sequence entries in pg_subscription_rel with the\n> state as 'i'. Then the sequence-sync worker would start a transaction\n> and one-by-one copy the latest sequence values for each sequence (that\n> has state as 'i' in pg_subscription_rel) and mark its state as ready\n> 'r' and commit the transaction. Now if there is an error during this\n> operation it will restart the entire operation. The idea of creating a\n> new relfilenode is to handle the error so that if there is a rollback,\n> the sequence state will be rolled back to 'i' and the sequence value\n> will also be rolled back. The other option could be that we update the\n> sequence value without a new relfilenode and if the transaction rolled\n> back then only the sequence's state will be rolled back to 'i'. This\n> would work with a minor inconsistency that sequence values will be\n> up-to-date even when the sequence state is 'i' in pg_subscription_rel.\n> I am not sure if that matters because anyway, they can quickly be\n> out-of-sync with the publisher again.\n\nI think it would be fine in many cases even if the sequence value is\nup-to-date even when the sequence state is 'i' in pg_subscription_rel.\nBut the case we would like to avoid is where suppose the sequence-sync\nworker does both synchronizing sequence values and updating the\nsequence states for all sequences in one transaction, and if there is\nan error we end up retrying the synchronization for all sequences.\n\n>\n> Now, say we don't want to maintain the state of sequences for initial\n> sync at all then after the error how will we detect if there are any\n> pending sequences to be synced? One possibility is that we maintain a\n> subscription level flag 'subsequencesync' in 'pg_subscription' to\n> indicate whether sequences need sync. This flag would indicate whether\n> to sync all the sequences in pg_susbcription_rel. This would mean that\n> if there is an error while syncing the sequences we will resync all\n> the sequences again. This could be acceptable considering the chances\n> of error during sequence sync are low. The benefit is that both the\n> REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same\n> idea and sync all sequences without needing a new relfilenode. Users\n> can always refer 'subsequencesync' flag in 'pg_subscription' to see if\n> all the sequences are synced after executing the command.\n\nI think that REFRESH PUBLICATION {SEQUENCES} can be executed even\nwhile the sequence-sync worker is synchronizing sequences. In this\ncase, the worker might not see new sequences added by the concurrent\nREFRESH PUBLICATION {SEQUENCES} command since it's already running.\nThe worker could end up marking the subsequencesync as completed while\nnot synchronizing these new sequences.\n\n>\n> > > > Or yet another idea I came up with is that a tablesync worker will\n> > > > synchronize both the table and sequences owned by the table. 
That is,\n> > > > after the tablesync worker caught up with the apply worker, the\n> > > > tablesync worker synchronizes sequences associated with the target\n> > > > table as well. One benefit would be that at the time of initial table\n> > > > sync being completed, the table and its sequence data are consistent.\n> >\n> > Correction; it's not guaranteed that the sequence data and table data\n> > are consistent even in this case since the tablesync worker could get\n> > on-disk sequence data that might have already been updated.\n> >\n>\n> The benefit of this approach is not clear to me. Our aim is to sync\n> all sequences before the upgrade, so not sure if this helps because\n> anyway both table values and corresponding sequences can again be\n> out-of-sync very quickly.\n\nRight.\n\nGiven that our aim is to sync all sequences before the upgrade, do we\nneed to synchronize sequences even at CREATE SUBSCRIPTION time? In\ncases where there are a large number of sequences, synchronizing\nsequences in addition to tables could be overhead and make less sense,\nbecause sequences can again be out-of-sync quickly and typically\nCREATE SUBSCRIPTION is not created just before the upgrade.\n\n>\n> > >\n> > > > >\n> > > > > 5) Incorporate the pg_sequence_state function to fetch the sequence\n> > > > > value from the publisher, along with the page LSN. Incorporate\n> > > > > SetSequence function, which will procure a new relfilenode for the\n> > > > > sequence and set the new relfilenode with the specified value. This\n> > > > > will facilitate rollback in case of any failures.\n> > > >\n> > > > Does it mean that we create a new relfilenode for every update of the value?\n> > > >\n> > >\n> > > We need it for initial sync so that if there is an error both the\n> > > sequence state in pg_subscription_rel and sequence values can be\n> > > rolled back together.\n> >\n> > Agreed.\n> >\n> > > However, it is unclear whether we need to create\n> > > a new relfilenode while copying existing sequences (say during ALTER\n> > > SUBSCRIPTION .. REFRESH PUBLICATION SEQUENCES, or whatever command we\n> > > decide)? Probably the answer lies in how we want to implement this\n> > > command. If we want to copy all sequence values during the command\n> > > itself then it is probably okay but if we want to handover this task\n> > > to the sequence-sync worker then we need some state management and a\n> > > new relfilenode so that on error both state and sequence values are\n> > > rolled back.\n> >\n> > What state transition of pg_subscription_rel entries for sequences do\n> > we need while copying sequences values? For example, we insert an\n> > entry with 'init' state at CREATE SUBSCRIPTION and then the\n> > sequence-sync worker updates to 'ready' and copies the sequence data.\n> > And at REFRESH PUBLICATION SEQUENCES, we update the state back to\n> > 'init' again so that the sequence-sync worker can process it? Given\n> > REFRESH PUBLICATION SEQUENCES won't be executed very frequently, it\n> > might be acceptable to transactionally update sequence values.\n> >\n>\n> Do you mean that sync the sequences during the REFRESH PUBLICATION\n> SEQUENCES command itself? If so, there is an argument that we can do\n> the same during CREATE SUBSCRIPTION. It would be beneficial to keep\n> the method to sync the sequences same for both the CREATE and REFRESH\n> commands. 
I have speculated on one idea above and would be happy to\n> see your thoughts.\n\nI meant that the REFRESH PUBLICATION SEQUENCES command updates all\nsequence states in pg_subscription_rel to 'init' state, and the\nsequence-sync worker can do the synchronization work. We use the same\nmethod for both the CREATE SUBSCRIPTION and REFRESH PUBLICATION\n{SEQUENCES} commands.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 21:44:05 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 13, 2024 at 03:36:05PM +0530, Amit Kapila wrote:\n> Fair enough. However, this raises the question Dilip and Vignesh are\n> discussing whether we need a new relfilenode for sequence update even\n> during initial sync? As per my understanding, the idea is that similar\n> to tables, the CREATE SUBSCRIPTION command (with copy_data = true)\n> will create the new sequence entries in pg_subscription_rel with the\n> state as 'i'. Then the sequence-sync worker would start a transaction\n> and one-by-one copy the latest sequence values for each sequence (that\n> has state as 'i' in pg_subscription_rel) and mark its state as ready\n> 'r' and commit the transaction. Now if there is an error during this\n> operation it will restart the entire operation.\n\nHmm. You mean to use only one transaction for all the sequences?\nI've heard about deployments with a lot of them. Could it be a\nproblem to process them in batches, as well? If you maintain a state\nfor each one of them in pg_subscription_rel, it does not strike me as\nan issue, while being more flexible than an all-or-nothing.\n\n> The idea of creating a\n> new relfilenode is to handle the error so that if there is a rollback,\n> the sequence state will be rolled back to 'i' and the sequence value\n> will also be rolled back. The other option could be that we update the\n> sequence value without a new relfilenode and if the transaction rolled\n> back then only the sequence's state will be rolled back to 'i'. This\n> would work with a minor inconsistency that sequence values will be\n> up-to-date even when the sequence state is 'i' in pg_subscription_rel.\n> I am not sure if that matters because anyway, they can quickly be\n> out-of-sync with the publisher again.\n\nSeeing a mention to relfilenodes specifically for sequences freaks me\nout a bit, because there's some work I have been doing in this area\nand sequences may not have a need for a physical relfilenode at all.\nBut I guess that you refer to the fact that like tables, relfilenodes\nwould only be created as required because anything you'd do in the\napply worker path would just call some of the routines of sequence.h,\nright?\n\n> Now, say we don't want to maintain the state of sequences for initial\n> sync at all then after the error how will we detect if there are any\n> pending sequences to be synced? One possibility is that we maintain a\n> subscription level flag 'subsequencesync' in 'pg_subscription' to\n> indicate whether sequences need sync. This flag would indicate whether\n> to sync all the sequences in pg_susbcription_rel. This would mean that\n> if there is an error while syncing the sequences we will resync all\n> the sequences again. 
This could be acceptable considering the chances\n> of error during sequence sync are low.\n\nThere could be multiple subscriptions to a single database that point\nto the same set of sequences. Is there any conflict issue to worry\nabout here?\n\n> The benefit is that both the\n> REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same\n> idea and sync all sequences without needing a new relfilenode. Users\n> can always refer 'subsequencesync' flag in 'pg_subscription' to see if\n> all the sequences are synced after executing the command.\n\nThat would be cheaper, indeed. Isn't a boolean too limiting?\nIsn't that something you'd want to track with a LSN as \"the point in\nWAL where all the sequences have been synced\"? \n\nThe approach of doing all the sync work from the subscriber, while\nhaving a command that can be kicked from the subscriber side is a good\nuser experience.\n--\nMichael", "msg_date": "Fri, 14 Jun 2024 08:45:35 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jun 13, 2024 at 6:14 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Jun 13, 2024 at 7:06 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Jun 13, 2024 at 1:09 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Wed, Jun 12, 2024 at 6:59 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > >\n> > > > Yeah, starting with a single worker sounds good for now. Do you think\n> > > > we should sync all the sequences in a single transaction or have some\n> > > > threshold value above which a different transaction would be required\n> > > > or maybe a different sequence sync worker altogether? Now, having\n> > > > multiple sequence-sync workers requires some synchronization so that\n> > > > only a single worker is allocated for one sequence.\n> > > >\n> > > > The simplest thing is to use a single sequence sync worker that syncs\n> > > > all sequences in one transaction but with a large number of sequences,\n> > > > it could be inefficient. OTOH, I am not sure if it would be a problem\n> > > > in reality.\n> > >\n> > > I think that we can start with using a single worker and one\n> > > transaction, and measure the performance with a large number of\n> > > sequences.\n> > >\n> >\n> > Fair enough. However, this raises the question Dilip and Vignesh are\n> > discussing whether we need a new relfilenode for sequence update even\n> > during initial sync? As per my understanding, the idea is that similar\n> > to tables, the CREATE SUBSCRIPTION command (with copy_data = true)\n> > will create the new sequence entries in pg_subscription_rel with the\n> > state as 'i'. Then the sequence-sync worker would start a transaction\n> > and one-by-one copy the latest sequence values for each sequence (that\n> > has state as 'i' in pg_subscription_rel) and mark its state as ready\n> > 'r' and commit the transaction. Now if there is an error during this\n> > operation it will restart the entire operation. The idea of creating a\n> > new relfilenode is to handle the error so that if there is a rollback,\n> > the sequence state will be rolled back to 'i' and the sequence value\n> > will also be rolled back. The other option could be that we update the\n> > sequence value without a new relfilenode and if the transaction rolled\n> > back then only the sequence's state will be rolled back to 'i'. 
This\n> > would work with a minor inconsistency that sequence values will be\n> > up-to-date even when the sequence state is 'i' in pg_subscription_rel.\n> > I am not sure if that matters because anyway, they can quickly be\n> > out-of-sync with the publisher again.\n>\n> I think it would be fine in many cases even if the sequence value is\n> up-to-date even when the sequence state is 'i' in pg_subscription_rel.\n> But the case we would like to avoid is where suppose the sequence-sync\n> worker does both synchronizing sequence values and updating the\n> sequence states for all sequences in one transaction, and if there is\n> an error we end up retrying the synchronization for all sequences.\n>\n\nThe one idea to avoid this is to update sequences in chunks (say 100\nor some threshold number of sequences in one transaction). Then we\nwould only redo the sync for the last and pending set of sequences.\n\n> >\n> > Now, say we don't want to maintain the state of sequences for initial\n> > sync at all then after the error how will we detect if there are any\n> > pending sequences to be synced? One possibility is that we maintain a\n> > subscription level flag 'subsequencesync' in 'pg_subscription' to\n> > indicate whether sequences need sync. This flag would indicate whether\n> > to sync all the sequences in pg_susbcription_rel. This would mean that\n> > if there is an error while syncing the sequences we will resync all\n> > the sequences again. This could be acceptable considering the chances\n> > of error during sequence sync are low. The benefit is that both the\n> > REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same\n> > idea and sync all sequences without needing a new relfilenode. Users\n> > can always refer 'subsequencesync' flag in 'pg_subscription' to see if\n> > all the sequences are synced after executing the command.\n>\n> I think that REFRESH PUBLICATION {SEQUENCES} can be executed even\n> while the sequence-sync worker is synchronizing sequences. In this\n> case, the worker might not see new sequences added by the concurrent\n> REFRESH PUBLICATION {SEQUENCES} command since it's already running.\n> The worker could end up marking the subsequencesync as completed while\n> not synchronizing these new sequences.\n>\n\nThis is possible but we could avoid REFRESH PUBLICATION {SEQUENCES} by\nnot allowing to change the subsequencestate during the time\nsequence-worker is syncing the sequences. This could be restrictive\nbut there doesn't seem to be cases where user would like to\nimmediately refresh sequences after creating the subscription.\n\n> >\n> > > > > Or yet another idea I came up with is that a tablesync worker will\n> > > > > synchronize both the table and sequences owned by the table. That is,\n> > > > > after the tablesync worker caught up with the apply worker, the\n> > > > > tablesync worker synchronizes sequences associated with the target\n> > > > > table as well. One benefit would be that at the time of initial table\n> > > > > sync being completed, the table and its sequence data are consistent.\n> > >\n> > > Correction; it's not guaranteed that the sequence data and table data\n> > > are consistent even in this case since the tablesync worker could get\n> > > on-disk sequence data that might have already been updated.\n> > >\n> >\n> > The benefit of this approach is not clear to me. 
Our aim is to sync\n> > all sequences before the upgrade, so not sure if this helps because\n> > anyway both table values and corresponding sequences can again be\n> > out-of-sync very quickly.\n>\n> Right.\n>\n> Given that our aim is to sync all sequences before the upgrade, do we\n> need to synchronize sequences even at CREATE SUBSCRIPTION time? In\n> cases where there are a large number of sequences, synchronizing\n> sequences in addition to tables could be overhead and make less sense,\n> because sequences can again be out-of-sync quickly and typically\n> CREATE SUBSCRIPTION is not created just before the upgrade.\n>\n\nI think for the upgrade one should be creating a subscription just\nbefore the upgrade. Isn't something similar is done even in the\nupgrade steps you shared once [1]? Typically users should get all the\ndata from the publisher before the upgrade of the publisher via\ncreating a subscription. Also, it would be better to keep the\nimplementation of sequences close to tables wherever possible. Having\nsaid that, I understand your point as well and if you strongly feel\nthat we don't need to sync sequences at the time of CREATE\nSUBSCRIPTION and others also don't see any problem with it then we can\nconsider that as well.\n\n> > >\n> >\n> > Do you mean that sync the sequences during the REFRESH PUBLICATION\n> > SEQUENCES command itself? If so, there is an argument that we can do\n> > the same during CREATE SUBSCRIPTION. It would be beneficial to keep\n> > the method to sync the sequences same for both the CREATE and REFRESH\n> > commands. I have speculated on one idea above and would be happy to\n> > see your thoughts.\n>\n> I meant that the REFRESH PUBLICATION SEQUENCES command updates all\n> sequence states in pg_subscription_rel to 'init' state, and the\n> sequence-sync worker can do the synchronization work. We use the same\n> method for both the CREATE SUBSCRIPTION and REFRESH PUBLICATION\n> {SEQUENCES} commands.\n>\n\nMarking the state as 'init' when we would have already synced the\nsequences sounds a bit odd but otherwise, this could also work if we\naccept that even if the sequences are synced and value could remain in\n'init' state (on rollbacks).\n\n\n[1] - https://knock.app/blog/zero-downtime-postgres-upgrades\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 14 Jun 2024 12:33:48 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, Jun 14, 2024 at 5:16 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jun 13, 2024 at 03:36:05PM +0530, Amit Kapila wrote:\n> > Fair enough. However, this raises the question Dilip and Vignesh are\n> > discussing whether we need a new relfilenode for sequence update even\n> > during initial sync? As per my understanding, the idea is that similar\n> > to tables, the CREATE SUBSCRIPTION command (with copy_data = true)\n> > will create the new sequence entries in pg_subscription_rel with the\n> > state as 'i'. Then the sequence-sync worker would start a transaction\n> > and one-by-one copy the latest sequence values for each sequence (that\n> > has state as 'i' in pg_subscription_rel) and mark its state as ready\n> > 'r' and commit the transaction. Now if there is an error during this\n> > operation it will restart the entire operation.\n>\n> Hmm. You mean to use only one transaction for all the sequences?\n> I've heard about deployments with a lot of them. 
Could it be a\n> problem to process them in batches, as well?\n\nI don't think so. We can even sync one sequence per transaction but\nthen it would be resource and time consuming without much gain. As\nmentioned in a previous email, we might want to sync 100 or some other\nthreshold number of sequences per transaction. The other possibility\nis to make a subscription-level option for this batch size but I don't\nsee much advantage in doing so as it won't be convenient for users to\nset it. I feel we should pick some threshold number that is neither\ntoo low nor too high and if we later see any problem with it, we can\nmake it a configurable knob.\n\n>\n> > The idea of creating a\n> > new relfilenode is to handle the error so that if there is a rollback,\n> > the sequence state will be rolled back to 'i' and the sequence value\n> > will also be rolled back. The other option could be that we update the\n> > sequence value without a new relfilenode and if the transaction rolled\n> > back then only the sequence's state will be rolled back to 'i'. This\n> > would work with a minor inconsistency that sequence values will be\n> > up-to-date even when the sequence state is 'i' in pg_subscription_rel.\n> > I am not sure if that matters because anyway, they can quickly be\n> > out-of-sync with the publisher again.\n>\n> Seeing a mention to relfilenodes specifically for sequences freaks me\n> out a bit, because there's some work I have been doing in this area\n> and sequences may not have a need for a physical relfilenode at all.\n> But I guess that you refer to the fact that like tables, relfilenodes\n> would only be created as required because anything you'd do in the\n> apply worker path would just call some of the routines of sequence.h,\n> right?\n>\n\nYes, I think so. The only thing the patch expects is a way to rollback\nthe sequence changes if the transaction rolls back during the initial\nsync. But I am not sure if we need such a behavior. The discussion for\nthe same is in progress. Let's wait for the outcome.\n\n> > Now, say we don't want to maintain the state of sequences for initial\n> > sync at all then after the error how will we detect if there are any\n> > pending sequences to be synced? One possibility is that we maintain a\n> > subscription level flag 'subsequencesync' in 'pg_subscription' to\n> > indicate whether sequences need sync. This flag would indicate whether\n> > to sync all the sequences in pg_susbcription_rel. This would mean that\n> > if there is an error while syncing the sequences we will resync all\n> > the sequences again. This could be acceptable considering the chances\n> > of error during sequence sync are low.\n>\n> There could be multiple subscriptions to a single database that point\n> to the same set of sequences. Is there any conflict issue to worry\n> about here?\n>\n\nI don't think so. In the worst case, the same value would be copied\ntwice. The same scenario in case of tables could lead to duplicate\ndata or unique key violation ERRORs which is much worse. So, I expect\nusers to be careful about the same.\n\n> > The benefit is that both the\n> > REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same\n> > idea and sync all sequences without needing a new relfilenode. Users\n> > can always refer 'subsequencesync' flag in 'pg_subscription' to see if\n> > all the sequences are synced after executing the command.\n>\n> That would be cheaper, indeed. 
Isn't a boolean too limiting?\n>\n\nIn this idea, we only need a flag to say whether the sequence sync is\nrequired or not.\n\n> Isn't that something you'd want to track with a LSN as \"the point in\n> WAL where all the sequences have been synced\"?\n>\n\nIt won't be any better for the required purpose because after CREATE\nSUBSCRIPTION, if REFERESH wants to toggle the flag to indicate the\nsequences need sync again then using LSN would mean we need to set it\nto Invalid value.\n\n> The approach of doing all the sync work from the subscriber, while\n> having a command that can be kicked from the subscriber side is a good\n> user experience.\n>\n\nThank you for endorsing the idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 14 Jun 2024 15:30:17 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, Jun 14, 2024 at 4:04 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jun 13, 2024 at 6:14 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Thu, Jun 13, 2024 at 7:06 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Thu, Jun 13, 2024 at 1:09 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Wed, Jun 12, 2024 at 6:59 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > >\n> > > > > Yeah, starting with a single worker sounds good for now. Do you think\n> > > > > we should sync all the sequences in a single transaction or have some\n> > > > > threshold value above which a different transaction would be required\n> > > > > or maybe a different sequence sync worker altogether? Now, having\n> > > > > multiple sequence-sync workers requires some synchronization so that\n> > > > > only a single worker is allocated for one sequence.\n> > > > >\n> > > > > The simplest thing is to use a single sequence sync worker that syncs\n> > > > > all sequences in one transaction but with a large number of sequences,\n> > > > > it could be inefficient. OTOH, I am not sure if it would be a problem\n> > > > > in reality.\n> > > >\n> > > > I think that we can start with using a single worker and one\n> > > > transaction, and measure the performance with a large number of\n> > > > sequences.\n> > > >\n> > >\n> > > Fair enough. However, this raises the question Dilip and Vignesh are\n> > > discussing whether we need a new relfilenode for sequence update even\n> > > during initial sync? As per my understanding, the idea is that similar\n> > > to tables, the CREATE SUBSCRIPTION command (with copy_data = true)\n> > > will create the new sequence entries in pg_subscription_rel with the\n> > > state as 'i'. Then the sequence-sync worker would start a transaction\n> > > and one-by-one copy the latest sequence values for each sequence (that\n> > > has state as 'i' in pg_subscription_rel) and mark its state as ready\n> > > 'r' and commit the transaction. Now if there is an error during this\n> > > operation it will restart the entire operation. The idea of creating a\n> > > new relfilenode is to handle the error so that if there is a rollback,\n> > > the sequence state will be rolled back to 'i' and the sequence value\n> > > will also be rolled back. The other option could be that we update the\n> > > sequence value without a new relfilenode and if the transaction rolled\n> > > back then only the sequence's state will be rolled back to 'i'. 
This\n> > > would work with a minor inconsistency that sequence values will be\n> > > up-to-date even when the sequence state is 'i' in pg_subscription_rel.\n> > > I am not sure if that matters because anyway, they can quickly be\n> > > out-of-sync with the publisher again.\n> >\n> > I think it would be fine in many cases even if the sequence value is\n> > up-to-date even when the sequence state is 'i' in pg_subscription_rel.\n> > But the case we would like to avoid is where suppose the sequence-sync\n> > worker does both synchronizing sequence values and updating the\n> > sequence states for all sequences in one transaction, and if there is\n> > an error we end up retrying the synchronization for all sequences.\n> >\n>\n> The one idea to avoid this is to update sequences in chunks (say 100\n> or some threshold number of sequences in one transaction). Then we\n> would only redo the sync for the last and pending set of sequences.\n\nThat could be one idea.\n\n>\n> > >\n> > > Now, say we don't want to maintain the state of sequences for initial\n> > > sync at all then after the error how will we detect if there are any\n> > > pending sequences to be synced? One possibility is that we maintain a\n> > > subscription level flag 'subsequencesync' in 'pg_subscription' to\n> > > indicate whether sequences need sync. This flag would indicate whether\n> > > to sync all the sequences in pg_susbcription_rel. This would mean that\n> > > if there is an error while syncing the sequences we will resync all\n> > > the sequences again. This could be acceptable considering the chances\n> > > of error during sequence sync are low. The benefit is that both the\n> > > REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same\n> > > idea and sync all sequences without needing a new relfilenode. Users\n> > > can always refer 'subsequencesync' flag in 'pg_subscription' to see if\n> > > all the sequences are synced after executing the command.\n> >\n> > I think that REFRESH PUBLICATION {SEQUENCES} can be executed even\n> > while the sequence-sync worker is synchronizing sequences. In this\n> > case, the worker might not see new sequences added by the concurrent\n> > REFRESH PUBLICATION {SEQUENCES} command since it's already running.\n> > The worker could end up marking the subsequencesync as completed while\n> > not synchronizing these new sequences.\n> >\n>\n> This is possible but we could avoid REFRESH PUBLICATION {SEQUENCES} by\n> not allowing to change the subsequencestate during the time\n> sequence-worker is syncing the sequences. This could be restrictive\n> but there doesn't seem to be cases where user would like to\n> immediately refresh sequences after creating the subscription.\n\nI'm concerned that users would not be able to add sequences during the\ntime the sequence-worker is syncing the sequences. For example,\nsuppose we have 10000 sequences and execute REFRESH PUBLICATION\n{SEQUENCES} to synchronize 10000 sequences. Now if we add one sequence\nto the publication and want to synchronize it to the subscriber, we\nhave to wait for the current REFRESH PUBLICATION {SEQUENCES} to\ncomplete, and then execute it again, synchronizing 10001 sequences,\ninstead of synchronizing only the new one.\n\n>\n> > >\n> > > > > > Or yet another idea I came up with is that a tablesync worker will\n> > > > > > synchronize both the table and sequences owned by the table. 
That is,\n> > > > > > after the tablesync worker caught up with the apply worker, the\n> > > > > > tablesync worker synchronizes sequences associated with the target\n> > > > > > table as well. One benefit would be that at the time of initial table\n> > > > > > sync being completed, the table and its sequence data are consistent.\n> > > >\n> > > > Correction; it's not guaranteed that the sequence data and table data\n> > > > are consistent even in this case since the tablesync worker could get\n> > > > on-disk sequence data that might have already been updated.\n> > > >\n> > >\n> > > The benefit of this approach is not clear to me. Our aim is to sync\n> > > all sequences before the upgrade, so not sure if this helps because\n> > > anyway both table values and corresponding sequences can again be\n> > > out-of-sync very quickly.\n> >\n> > Right.\n> >\n> > Given that our aim is to sync all sequences before the upgrade, do we\n> > need to synchronize sequences even at CREATE SUBSCRIPTION time? In\n> > cases where there are a large number of sequences, synchronizing\n> > sequences in addition to tables could be overhead and make less sense,\n> > because sequences can again be out-of-sync quickly and typically\n> > CREATE SUBSCRIPTION is not created just before the upgrade.\n> >\n>\n> I think for the upgrade one should be creating a subscription just\n> before the upgrade. Isn't something similar is done even in the\n> upgrade steps you shared once [1]?\n\nI might be missing something but in the blog post they created\nsubscriptions in various ways, waited for the initial table data sync\nto complete, and then set the sequence values with a buffer based on\nthe old cluster. What I imagined with this sequence synchronization\nfeature is that after the initial table sync completes, we stop to\nexecute further transactions on the publisher, synchronize sequences\nusing REFRESH PUBLICATION {SEQUENCES}, and resume the application to\nexecute transactions on the subscriber. So a subscription would be\ncreated just before the upgrade, but sequence synchronization would\nnot necessarily happen at the same time of the initial table data\nsynchronization.\n\n> Typically users should get all the\n> data from the publisher before the upgrade of the publisher via\n> creating a subscription. Also, it would be better to keep the\n> implementation of sequences close to tables wherever possible. Having\n> said that, I understand your point as well and if you strongly feel\n> that we don't need to sync sequences at the time of CREATE\n> SUBSCRIPTION and others also don't see any problem with it then we can\n> consider that as well.\n\nI see your point that it's better to keep the implementation of\nsequences close to the table one. So I agree that we can start with\nthis approach, and we will see how it works in practice and consider\nother options later.\n\n>\n> > > >\n> > >\n> > > Do you mean that sync the sequences during the REFRESH PUBLICATION\n> > > SEQUENCES command itself? If so, there is an argument that we can do\n> > > the same during CREATE SUBSCRIPTION. It would be beneficial to keep\n> > > the method to sync the sequences same for both the CREATE and REFRESH\n> > > commands. I have speculated on one idea above and would be happy to\n> > > see your thoughts.\n> >\n> > I meant that the REFRESH PUBLICATION SEQUENCES command updates all\n> > sequence states in pg_subscription_rel to 'init' state, and the\n> > sequence-sync worker can do the synchronization work. 
We use the same\n> > method for both the CREATE SUBSCRIPTION and REFRESH PUBLICATION\n> > {SEQUENCES} commands.\n> >\n>\n> Marking the state as 'init' when we would have already synced the\n> sequences sounds a bit odd but otherwise, this could also work if we\n> accept that even if the sequences are synced and value could remain in\n> 'init' state (on rollbacks).\n\nI mean that it's just for identifying sequences that need to be\nsynced. With the idea of using sequence states in pg_subscription_rel,\nthe REFRESH PUBLICATION SEQUENCES command needs to change states to\nsomething so that the sequence-sync worker can identify which sequence\nneeds to be synced. If 'init' sounds odd, we can invent a new state\nfor sequences, say 'needs-to-be-syned'.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 18 Jun 2024 10:59:29 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Jun 18, 2024 at 7:30 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Jun 14, 2024 at 4:04 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Jun 13, 2024 at 6:14 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> >\n> > > >\n> > > > Now, say we don't want to maintain the state of sequences for initial\n> > > > sync at all then after the error how will we detect if there are any\n> > > > pending sequences to be synced? One possibility is that we maintain a\n> > > > subscription level flag 'subsequencesync' in 'pg_subscription' to\n> > > > indicate whether sequences need sync. This flag would indicate whether\n> > > > to sync all the sequences in pg_susbcription_rel. This would mean that\n> > > > if there is an error while syncing the sequences we will resync all\n> > > > the sequences again. This could be acceptable considering the chances\n> > > > of error during sequence sync are low. The benefit is that both the\n> > > > REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same\n> > > > idea and sync all sequences without needing a new relfilenode. Users\n> > > > can always refer 'subsequencesync' flag in 'pg_subscription' to see if\n> > > > all the sequences are synced after executing the command.\n> > >\n> > > I think that REFRESH PUBLICATION {SEQUENCES} can be executed even\n> > > while the sequence-sync worker is synchronizing sequences. In this\n> > > case, the worker might not see new sequences added by the concurrent\n> > > REFRESH PUBLICATION {SEQUENCES} command since it's already running.\n> > > The worker could end up marking the subsequencesync as completed while\n> > > not synchronizing these new sequences.\n> > >\n> >\n> > This is possible but we could avoid REFRESH PUBLICATION {SEQUENCES} by\n> > not allowing to change the subsequencestate during the time\n> > sequence-worker is syncing the sequences. This could be restrictive\n> > but there doesn't seem to be cases where user would like to\n> > immediately refresh sequences after creating the subscription.\n>\n> I'm concerned that users would not be able to add sequences during the\n> time the sequence-worker is syncing the sequences. For example,\n> suppose we have 10000 sequences and execute REFRESH PUBLICATION\n> {SEQUENCES} to synchronize 10000 sequences. 
Now if we add one sequence\n> to the publication and want to synchronize it to the subscriber, we\n> have to wait for the current REFRESH PUBLICATION {SEQUENCES} to\n> complete, and then execute it again, synchronizing 10001 sequences,\n> instead of synchronizing only the new one.\n>\n\nI see your point and it could hurt such scenarios even though they\nwon't be frequent. So, let's focus on our other approach of\nmaintaining the flag at a per-sequence level in pg_subscription_rel.\n\n> >\n> > > >\n> > > > > > > Or yet another idea I came up with is that a tablesync worker will\n> > > > > > > synchronize both the table and sequences owned by the table. That is,\n> > > > > > > after the tablesync worker caught up with the apply worker, the\n> > > > > > > tablesync worker synchronizes sequences associated with the target\n> > > > > > > table as well. One benefit would be that at the time of initial table\n> > > > > > > sync being completed, the table and its sequence data are consistent.\n> > > > >\n> > > > > Correction; it's not guaranteed that the sequence data and table data\n> > > > > are consistent even in this case since the tablesync worker could get\n> > > > > on-disk sequence data that might have already been updated.\n> > > > >\n> > > >\n> > > > The benefit of this approach is not clear to me. Our aim is to sync\n> > > > all sequences before the upgrade, so not sure if this helps because\n> > > > anyway both table values and corresponding sequences can again be\n> > > > out-of-sync very quickly.\n> > >\n> > > Right.\n> > >\n> > > Given that our aim is to sync all sequences before the upgrade, do we\n> > > need to synchronize sequences even at CREATE SUBSCRIPTION time? In\n> > > cases where there are a large number of sequences, synchronizing\n> > > sequences in addition to tables could be overhead and make less sense,\n> > > because sequences can again be out-of-sync quickly and typically\n> > > CREATE SUBSCRIPTION is not created just before the upgrade.\n> > >\n> >\n> > I think for the upgrade one should be creating a subscription just\n> > before the upgrade. Isn't something similar is done even in the\n> > upgrade steps you shared once [1]?\n>\n> I might be missing something but in the blog post they created\n> subscriptions in various ways, waited for the initial table data sync\n> to complete, and then set the sequence values with a buffer based on\n> the old cluster. What I imagined with this sequence synchronization\n> feature is that after the initial table sync completes, we stop to\n> execute further transactions on the publisher, synchronize sequences\n> using REFRESH PUBLICATION {SEQUENCES}, and resume the application to\n> execute transactions on the subscriber. So a subscription would be\n> created just before the upgrade, but sequence synchronization would\n> not necessarily happen at the same time of the initial table data\n> synchronization.\n>\n\nIt depends on the exact steps of the upgrade. For example, if one\nstops the publisher before adding sequences to a subscription either\nvia create subscription or alter subscription add/set command then\nthere won't be a need for a separate refresh but OTOH, if one follows\nthe steps you mentioned then the refresh would be required. 
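To spell that second flow out concretely (this is only a rough sketch
using the REFRESH PUBLICATION SEQUENCES syntax proposed in this thread;
the connection string, subscription and publication names are
placeholders):

-- on the subscriber, ahead of the switchover:
CREATE SUBSCRIPTION sub CONNECTION 'host=pub dbname=postgres' PUBLICATION pub;
-- wait for the initial table sync to finish, then stop writes on the publisher
ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES;  -- re-sync all sequence values
-- resume the application against the subscriber
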
As you are\nokay, with syncing the sequences while creating a subscription in the\nbelow part of the email, there is not much point in arguing about this\nfurther.\n\n> > Typically users should get all the\n> > data from the publisher before the upgrade of the publisher via\n> > creating a subscription. Also, it would be better to keep the\n> > implementation of sequences close to tables wherever possible. Having\n> > said that, I understand your point as well and if you strongly feel\n> > that we don't need to sync sequences at the time of CREATE\n> > SUBSCRIPTION and others also don't see any problem with it then we can\n> > consider that as well.\n>\n> I see your point that it's better to keep the implementation of\n> sequences close to the table one. So I agree that we can start with\n> this approach, and we will see how it works in practice and consider\n> other options later.\n>\n\nmakes sense.\n\n> >\n> > Marking the state as 'init' when we would have already synced the\n> > sequences sounds a bit odd but otherwise, this could also work if we\n> > accept that even if the sequences are synced and value could remain in\n> > 'init' state (on rollbacks).\n>\n> I mean that it's just for identifying sequences that need to be\n> synced. With the idea of using sequence states in pg_subscription_rel,\n> the REFRESH PUBLICATION SEQUENCES command needs to change states to\n> something so that the sequence-sync worker can identify which sequence\n> needs to be synced. If 'init' sounds odd, we can invent a new state\n> for sequences, say 'needs-to-be-syned'.\n>\n\nAgreed and I am not sure which is better because there is a value in\nkeeping the state name the same for both sequences and tables. We\nprobably need more comments in code and doc updates to make the\nbehavior clear. We can start with the sequence state as 'init' for\n'needs-to-be-sycned' and 'ready' for 'synced' and can change if others\nfeel so during the review.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 Jun 2024 16:10:25 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 18 Jun 2024 at 16:10, Amit Kapila <[email protected]> wrote:\n>\n>\n> Agreed and I am not sure which is better because there is a value in\n> keeping the state name the same for both sequences and tables. We\n> probably need more comments in code and doc updates to make the\n> behavior clear. 
We can start with the sequence state as 'init' for\n> 'needs-to-be-sycned' and 'ready' for 'synced' and can change if others\n> feel so during the review.\n\nHere is a patch which does the sequence synchronization in the\nfollowing lines from the above discussion:\nThis commit introduces sequence synchronization during 1) creation of\nsubscription for initial sync of sequences 2) refresh publication to\nsynchronize the sequences for the newly created sequences 3) refresh\npublication sequences for synchronizing all the sequences.\n1) During subscription creation with CREATE SUBSCRIPTION (no syntax change):\n - The subscriber retrieves sequences associated with publications.\n - Sequences are added in the 'init' state to the pg_subscription_rel table.\n - Sequence synchronization worker will be started if there are any\nsequences to be synchronized\n - A new sequence synchronization worker handles synchronization in\nbatches of 100 sequences:\n a) Retrieves sequence values using pg_sequence_state from the publisher.\n b) Sets sequence values accordingly.\n c) Updates sequence state to 'READY' in pg_susbcripion_rel\n d) Commits batches of 100 synchronized sequences.\n2) Refreshing sequences with ALTER SUBSCRIPTION ... REFRESH\nPUBLICATION (no syntax change):\n - Stale sequences are removed from pg_subscription_rel.\n - Newly added sequences in the publisher are added in 'init' state\nto pg_subscription_rel.\n - Sequence synchronization will be done by sequence sync worker as\nlisted in subscription creation process.\n - Sequence synchronization occurs for newly added sequences only.\n3) Introduce new command ALTER SUBSCRIPTION ... REFRESH PUBLICATION\nSEQUENCES for refreshing all sequences:\n - Removes stale sequences and adds newly added sequences from the\npublisher to pg_subscription_rel.\n - Resets all sequences in pg_subscription_rel to 'init' state.\n - Initiates sequence synchronization for all sequences by sequence\nsync worker as listed in subscription creation process.\n\nRegards,\nVignesh", "msg_date": "Wed, 19 Jun 2024 20:33:37 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 19 Jun 2024 at 20:33, vignesh C <[email protected]> wrote:\n>\n> On Tue, 18 Jun 2024 at 16:10, Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > Agreed and I am not sure which is better because there is a value in\n> > keeping the state name the same for both sequences and tables. We\n> > probably need more comments in code and doc updates to make the\n> > behavior clear. 
We can start with the sequence state as 'init' for\n> > 'needs-to-be-sycned' and 'ready' for 'synced' and can change if others\n> > feel so during the review.\n>\n> Here is a patch which does the sequence synchronization in the\n> following lines from the above discussion:\n> This commit introduces sequence synchronization during 1) creation of\n> subscription for initial sync of sequences 2) refresh publication to\n> synchronize the sequences for the newly created sequences 3) refresh\n> publication sequences for synchronizing all the sequences.\n> 1) During subscription creation with CREATE SUBSCRIPTION (no syntax change):\n> - The subscriber retrieves sequences associated with publications.\n> - Sequences are added in the 'init' state to the pg_subscription_rel table.\n> - Sequence synchronization worker will be started if there are any\n> sequences to be synchronized\n> - A new sequence synchronization worker handles synchronization in\n> batches of 100 sequences:\n> a) Retrieves sequence values using pg_sequence_state from the publisher.\n> b) Sets sequence values accordingly.\n> c) Updates sequence state to 'READY' in pg_susbcripion_rel\n> d) Commits batches of 100 synchronized sequences.\n> 2) Refreshing sequences with ALTER SUBSCRIPTION ... REFRESH\n> PUBLICATION (no syntax change):\n> - Stale sequences are removed from pg_subscription_rel.\n> - Newly added sequences in the publisher are added in 'init' state\n> to pg_subscription_rel.\n> - Sequence synchronization will be done by sequence sync worker as\n> listed in subscription creation process.\n> - Sequence synchronization occurs for newly added sequences only.\n> 3) Introduce new command ALTER SUBSCRIPTION ... REFRESH PUBLICATION\n> SEQUENCES for refreshing all sequences:\n> - Removes stale sequences and adds newly added sequences from the\n> publisher to pg_subscription_rel.\n> - Resets all sequences in pg_subscription_rel to 'init' state.\n> - Initiates sequence synchronization for all sequences by sequence\n> sync worker as listed in subscription creation process.\n\nHere is an updated patch with a few fixes to remove an unused\nfunction, changed a few references of table to sequence and added one\nCHECK_FOR_INTERRUPTS in the sequence sync worker loop.\n\nRegards,\nVignesh", "msg_date": "Thu, 20 Jun 2024 18:23:34 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jun 19, 2024 at 8:33 PM vignesh C <[email protected]> wrote:\n>\n> On Tue, 18 Jun 2024 at 16:10, Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > Agreed and I am not sure which is better because there is a value in\n> > keeping the state name the same for both sequences and tables. We\n> > probably need more comments in code and doc updates to make the\n> > behavior clear. We can start with the sequence state as 'init' for\n> > 'needs-to-be-sycned' and 'ready' for 'synced' and can change if others\n> > feel so during the review.\n>\n> Here is a patch which does the sequence synchronization in the\n> following lines from the above discussion:\n>\n\nThanks for summarizing the points discussed. 
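One nice side-effect of tracking sequences in pg_subscription_rel is
that users can check the progress of the sequence sync with a plain
catalog query, for example (assuming sequences are recorded there with
the 'i'/'r' states described above; the columns used are the existing
pg_subscription_rel ones):

SELECT c.relname, sr.srsubstate
  FROM pg_subscription_rel sr
  JOIN pg_class c ON c.oid = sr.srrelid
 WHERE c.relkind = 'S';   -- 'i' = still to be synced, 'r' = ready
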
I would like to confirm\nwhether the patch replicates new sequences that are created\nimplicitly/explicitly for a publication defined as ALL SEQUENCES.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Jun 2024 18:44:50 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 20 Jun 2024 at 18:45, Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jun 19, 2024 at 8:33 PM vignesh C <[email protected]> wrote:\n> >\n> > On Tue, 18 Jun 2024 at 16:10, Amit Kapila <[email protected]> wrote:\n> > >\n> > >\n> > > Agreed and I am not sure which is better because there is a value in\n> > > keeping the state name the same for both sequences and tables. We\n> > > probably need more comments in code and doc updates to make the\n> > > behavior clear. We can start with the sequence state as 'init' for\n> > > 'needs-to-be-sycned' and 'ready' for 'synced' and can change if others\n> > > feel so during the review.\n> >\n> > Here is a patch which does the sequence synchronization in the\n> > following lines from the above discussion:\n> >\n>\n> Thanks for summarizing the points discussed. I would like to confirm\n> whether the patch replicates new sequences that are created\n> implicitly/explicitly for a publication defined as ALL SEQUENCES.\n\nCurrently, FOR ALL SEQUENCES publication both explicitly created\nsequences and implicitly created sequences will be synchronized during\nthe creation of subscriptions (using CREATE SUBSCRIPTION) and\nrefreshing publication sequences(using ALTER SUBSCRIPTION ... REFRESH\nPUBLICATION SEQUENCES).\nTherefore, the explicitly created sequence seq1:\nCREATE SEQUENCE seq1;\nand the implicitly created sequence seq_test2_c2_seq for seq_test2 table:\nCREATE TABLE seq_test2 (c1 int, c2 SERIAL);\nwill both be synchronized.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 24 Jun 2024 12:57:09 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 20 Jun 2024 at 18:24, vignesh C <[email protected]> wrote:\n>\n> On Wed, 19 Jun 2024 at 20:33, vignesh C <[email protected]> wrote:\n> >\n> > On Tue, 18 Jun 2024 at 16:10, Amit Kapila <[email protected]> wrote:\n> > >\n> > >\n> > > Agreed and I am not sure which is better because there is a value in\n> > > keeping the state name the same for both sequences and tables. We\n> > > probably need more comments in code and doc updates to make the\n> > > behavior clear. 
We can start with the sequence state as 'init' for\n> > > 'needs-to-be-sycned' and 'ready' for 'synced' and can change if others\n> > > feel so during the review.\n> >\n> > Here is a patch which does the sequence synchronization in the\n> > following lines from the above discussion:\n> > This commit introduces sequence synchronization during 1) creation of\n> > subscription for initial sync of sequences 2) refresh publication to\n> > synchronize the sequences for the newly created sequences 3) refresh\n> > publication sequences for synchronizing all the sequences.\n> > 1) During subscription creation with CREATE SUBSCRIPTION (no syntax change):\n> > - The subscriber retrieves sequences associated with publications.\n> > - Sequences are added in the 'init' state to the pg_subscription_rel table.\n> > - Sequence synchronization worker will be started if there are any\n> > sequences to be synchronized\n> > - A new sequence synchronization worker handles synchronization in\n> > batches of 100 sequences:\n> > a) Retrieves sequence values using pg_sequence_state from the publisher.\n> > b) Sets sequence values accordingly.\n> > c) Updates sequence state to 'READY' in pg_susbcripion_rel\n> > d) Commits batches of 100 synchronized sequences.\n> > 2) Refreshing sequences with ALTER SUBSCRIPTION ... REFRESH\n> > PUBLICATION (no syntax change):\n> > - Stale sequences are removed from pg_subscription_rel.\n> > - Newly added sequences in the publisher are added in 'init' state\n> > to pg_subscription_rel.\n> > - Sequence synchronization will be done by sequence sync worker as\n> > listed in subscription creation process.\n> > - Sequence synchronization occurs for newly added sequences only.\n> > 3) Introduce new command ALTER SUBSCRIPTION ... REFRESH PUBLICATION\n> > SEQUENCES for refreshing all sequences:\n> > - Removes stale sequences and adds newly added sequences from the\n> > publisher to pg_subscription_rel.\n> > - Resets all sequences in pg_subscription_rel to 'init' state.\n> > - Initiates sequence synchronization for all sequences by sequence\n> > sync worker as listed in subscription creation process.\n>\n> Here is an updated patch with a few fixes to remove an unused\n> function, changed a few references of table to sequence and added one\n> CHECK_FOR_INTERRUPTS in the sequence sync worker loop.\n\nHi Vignesh,\n\nI have reviewed the patches and I have following comments:\n\n===== tablesync.c ======\n1. 
process_syncing_sequences_for_apply can crash with:\n2024-06-21 15:25:17.208 IST [3681269] LOG: logical replication apply\nworker for subscription \"test1\" has started\n2024-06-21 15:28:10.127 IST [3682329] LOG: logical replication\nsequences synchronization worker for subscription \"test1\" has started\n2024-06-21 15:28:10.146 IST [3682329] LOG: logical replication\nsynchronization for subscription \"test1\", sequence \"s1\" has finished\n2024-06-21 15:28:10.149 IST [3682329] LOG: logical replication\nsynchronization for subscription \"test1\", sequence \"s2\" has finished\n2024-06-21 15:28:10.149 IST [3682329] LOG: logical replication\nsequences synchronization worker for subscription \"test1\" has finished\n2024-06-21 15:29:53.535 IST [3682767] LOG: logical replication\nsequences synchronization worker for subscription \"test1\" has started\nTRAP: failed Assert(\"nestLevel > 0 && (nestLevel <= GUCNestLevel ||\n(nestLevel == GUCNestLevel + 1 && !isCommit))\"), File: \"guc.c\", Line:\n2273, PID: 3682767\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (ExceptionalCondition+0xbb)[0x5b2a61861c99]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (AtEOXact_GUC+0x7b)[0x5b2a618bddfa]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (RestoreUserContext+0xc7)[0x5b2a618a6937]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (+0x1ff7dfa)[0x5b2a61115dfa]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (+0x1ff7eb4)[0x5b2a61115eb4]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (SequencesyncWorkerMain+0x33)[0x5b2a61115fe7]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (BackgroundWorkerMain+0x4ad)[0x5b2a61029cae]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (postmaster_child_launch+0x236)[0x5b2a6102fb36]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (+0x1f1d12a)[0x5b2a6103b12a]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (+0x1f1df0f)[0x5b2a6103bf0f]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (+0x1f1bf71)[0x5b2a61039f71]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (+0x1f16f73)[0x5b2a61034f73]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (PostmasterMain+0x18fb)[0x5b2a61034445]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (+0x1ab1ab8)[0x5b2a60bcfab8]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7b76bc629d90]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7b76bc629e40]\npostgres: logical replication sequencesync worker for subscription\n16389 sync 0 (_start+0x25)[0x5b2a601491a5]\n\nAnalysis:\nSuppose there are two sequences (s1, s2) on publisher.\nSO, during initial sync.\nin loop,\n+ foreach(lc, table_states_not_ready)\n\ntable_states_not_ready -> it contains both s1 and s2.\nSo, for s1 a sequence sync will be started. It will sync all sequences\nand the sequence sync worker will exit.\nNow, for s2 again a sequence sync will start. It will give the above error.\n\nIs this loop required? Instead we can just use a bool like\n'is_any_sequence_not_ready'. Thoughts?\n\n===== sequencesync.c =====\n2. function name should be 'LogicalRepSyncSequences' instead of\n'LogicalRepSyncSeqeunces'\n\n3. 
In function 'LogicalRepSyncSeqeunces'\n sequencerel = table_open(seqinfo->relid, RowExclusiveLock);\\\n There is a extra '\\' symbol\n\n4. In function LogicalRepSyncSeqeunces:\n+ ereport(LOG,\n+ errmsg(\"logical replication synchronization for subscription \\\"%s\\\",\nsequence \\\"%s\\\" has finished\",\n+ get_subscription_name(subid, false), RelationGetRelationName(sequencerel)));\n+ table_close(sequencerel, NoLock);\n+\n+ currseq++;\n+\n+ if (currseq % MAX_SEQUENCES_SYNC_PER_BATCH == 0 || currseq ==\nlist_length(sequences))\n+ CommitTransactionCommand();\n\n\nThe above message gets logged even if the changes are not committed.\nSuppose the sequence worker exits before commit due to some reason.\nThought the log will show that sequence is synced, the sequence will\nbe in 'init' state. I think this is not desirable.\nMaybe we should log the synced sequences at commit time? Thoughts?\n\n===== General ====\n5. We can use other macros like 'foreach_ptr' instead of 'foreach'\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Tue, 25 Jun 2024 17:52:54 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 25 Jun 2024 at 17:53, Shlok Kyal <[email protected]> wrote:\n>\n> On Thu, 20 Jun 2024 at 18:24, vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 19 Jun 2024 at 20:33, vignesh C <[email protected]> wrote:\n> > >\n> > > On Tue, 18 Jun 2024 at 16:10, Amit Kapila <[email protected]> wrote:\n> > > >\n> > > >\n> > > > Agreed and I am not sure which is better because there is a value in\n> > > > keeping the state name the same for both sequences and tables. We\n> > > > probably need more comments in code and doc updates to make the\n> > > > behavior clear. We can start with the sequence state as 'init' for\n> > > > 'needs-to-be-sycned' and 'ready' for 'synced' and can change if others\n> > > > feel so during the review.\n> > >\n> > > Here is a patch which does the sequence synchronization in the\n> > > following lines from the above discussion:\n> > > This commit introduces sequence synchronization during 1) creation of\n> > > subscription for initial sync of sequences 2) refresh publication to\n> > > synchronize the sequences for the newly created sequences 3) refresh\n> > > publication sequences for synchronizing all the sequences.\n> > > 1) During subscription creation with CREATE SUBSCRIPTION (no syntax change):\n> > > - The subscriber retrieves sequences associated with publications.\n> > > - Sequences are added in the 'init' state to the pg_subscription_rel table.\n> > > - Sequence synchronization worker will be started if there are any\n> > > sequences to be synchronized\n> > > - A new sequence synchronization worker handles synchronization in\n> > > batches of 100 sequences:\n> > > a) Retrieves sequence values using pg_sequence_state from the publisher.\n> > > b) Sets sequence values accordingly.\n> > > c) Updates sequence state to 'READY' in pg_susbcripion_rel\n> > > d) Commits batches of 100 synchronized sequences.\n> > > 2) Refreshing sequences with ALTER SUBSCRIPTION ... 
REFRESH\n> > > PUBLICATION (no syntax change):\n> > > - Stale sequences are removed from pg_subscription_rel.\n> > > - Newly added sequences in the publisher are added in 'init' state\n> > > to pg_subscription_rel.\n> > > - Sequence synchronization will be done by sequence sync worker as\n> > > listed in subscription creation process.\n> > > - Sequence synchronization occurs for newly added sequences only.\n> > > 3) Introduce new command ALTER SUBSCRIPTION ... REFRESH PUBLICATION\n> > > SEQUENCES for refreshing all sequences:\n> > > - Removes stale sequences and adds newly added sequences from the\n> > > publisher to pg_subscription_rel.\n> > > - Resets all sequences in pg_subscription_rel to 'init' state.\n> > > - Initiates sequence synchronization for all sequences by sequence\n> > > sync worker as listed in subscription creation process.\n> >\n> > Here is an updated patch with a few fixes to remove an unused\n> > function, changed a few references of table to sequence and added one\n> > CHECK_FOR_INTERRUPTS in the sequence sync worker loop.\n>\n> Hi Vignesh,\n>\n> I have reviewed the patches and I have following comments:\n>\n> ===== tablesync.c ======\n> 1. process_syncing_sequences_for_apply can crash with:\n> 2024-06-21 15:25:17.208 IST [3681269] LOG: logical replication apply\n> worker for subscription \"test1\" has started\n> 2024-06-21 15:28:10.127 IST [3682329] LOG: logical replication\n> sequences synchronization worker for subscription \"test1\" has started\n> 2024-06-21 15:28:10.146 IST [3682329] LOG: logical replication\n> synchronization for subscription \"test1\", sequence \"s1\" has finished\n> 2024-06-21 15:28:10.149 IST [3682329] LOG: logical replication\n> synchronization for subscription \"test1\", sequence \"s2\" has finished\n> 2024-06-21 15:28:10.149 IST [3682329] LOG: logical replication\n> sequences synchronization worker for subscription \"test1\" has finished\n> 2024-06-21 15:29:53.535 IST [3682767] LOG: logical replication\n> sequences synchronization worker for subscription \"test1\" has started\n> TRAP: failed Assert(\"nestLevel > 0 && (nestLevel <= GUCNestLevel ||\n> (nestLevel == GUCNestLevel + 1 && !isCommit))\"), File: \"guc.c\", Line:\n> 2273, PID: 3682767\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (ExceptionalCondition+0xbb)[0x5b2a61861c99]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (AtEOXact_GUC+0x7b)[0x5b2a618bddfa]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (RestoreUserContext+0xc7)[0x5b2a618a6937]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (+0x1ff7dfa)[0x5b2a61115dfa]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (+0x1ff7eb4)[0x5b2a61115eb4]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (SequencesyncWorkerMain+0x33)[0x5b2a61115fe7]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (BackgroundWorkerMain+0x4ad)[0x5b2a61029cae]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (postmaster_child_launch+0x236)[0x5b2a6102fb36]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (+0x1f1d12a)[0x5b2a6103b12a]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (+0x1f1df0f)[0x5b2a6103bf0f]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 
(+0x1f1bf71)[0x5b2a61039f71]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (+0x1f16f73)[0x5b2a61034f73]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (PostmasterMain+0x18fb)[0x5b2a61034445]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (+0x1ab1ab8)[0x5b2a60bcfab8]\n> /lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7b76bc629d90]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7b76bc629e40]\n> postgres: logical replication sequencesync worker for subscription\n> 16389 sync 0 (_start+0x25)[0x5b2a601491a5]\n>\n> Analysis:\n> Suppose there are two sequences (s1, s2) on publisher.\n> SO, during initial sync.\n> in loop,\n> + foreach(lc, table_states_not_ready)\n>\n> table_states_not_ready -> it contains both s1 and s2.\n> So, for s1 a sequence sync will be started. It will sync all sequences\n> and the sequence sync worker will exit.\n> Now, for s2 again a sequence sync will start. It will give the above error.\n>\n> Is this loop required? Instead we can just use a bool like\n> 'is_any_sequence_not_ready'. Thoughts?\n>\n> ===== sequencesync.c =====\n> 2. function name should be 'LogicalRepSyncSequences' instead of\n> 'LogicalRepSyncSeqeunces'\n>\n> 3. In function 'LogicalRepSyncSeqeunces'\n> sequencerel = table_open(seqinfo->relid, RowExclusiveLock);\\\n> There is a extra '\\' symbol\n>\n> 4. In function LogicalRepSyncSeqeunces:\n> + ereport(LOG,\n> + errmsg(\"logical replication synchronization for subscription \\\"%s\\\",\n> sequence \\\"%s\\\" has finished\",\n> + get_subscription_name(subid, false), RelationGetRelationName(sequencerel)));\n> + table_close(sequencerel, NoLock);\n> +\n> + currseq++;\n> +\n> + if (currseq % MAX_SEQUENCES_SYNC_PER_BATCH == 0 || currseq ==\n> list_length(sequences))\n> + CommitTransactionCommand();\n>\n>\n> The above message gets logged even if the changes are not committed.\n> Suppose the sequence worker exits before commit due to some reason.\n> Thought the log will show that sequence is synced, the sequence will\n> be in 'init' state. I think this is not desirable.\n> Maybe we should log the synced sequences at commit time? Thoughts?\n>\n> ===== General ====\n> 5. We can use other macros like 'foreach_ptr' instead of 'foreach'\n\nThanks for the comments, the attached patch has the fixes for the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 25 Jun 2024 21:09:17 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Here are my initial review comments for the first patch v20240625-0001.\n\n======\nGeneral\n\n1. Missing docs?\n\nSection 9.17. \"Sequence Manipulation Functions\" [1] describes some\nfunctions. Shouldn't your new function be documented here also?\n\n~~~\n\n2. Missing tests?\n\nShouldn't there be some test code that at least executes your new\npg_sequence_state function to verify that sane values are returned?\n\n======\nCommit Message\n\n3.\nThis patch introduces new functionalities to PostgreSQL:\n- pg_sequence_state allows retrieval of sequence values using LSN.\n- SetSequence enables updating sequences with user-specified values.\n\n~\n\n3a.\nI didn't understand why this says \"using LSN\" because IIUC 'lsn' is an\noutput parameter of that function. Don't you mean \"... retrieval of\nsequence values including LSN\"?\n\n~\n\n3b.\nDoes \"user-specified\" make sense? Is this going to be exposed to a\nuser? 
How about just \"specified\"?\n\n======\nsrc/backend/commands/sequence.c\n\n4. SetSequence:\n\n+void\n+SetSequence(Oid seq_relid, int64 value)\n\nWould 'new_last_value' be a better parameter name here?\n\n~~~\n\n5.\nThis new function logic looks pretty similar to the do_setval()\nfunction. Can you explain (maybe in the function comment) some info\nabout how and why it differs from that other function?\n\n~~~\n\n6.\nI saw that RelationNeedsWAL() is called 2 times. It may make no sense,\nbut is it possible to assign that to a variable 1st time so you don't\nneed to call it 2nd time within the critical section?\n\n~~~\n\nNITPICK - remove junk (') char in comment\n\nNITPICK - missing periods (.) in multi-sentence comment\n\n~~~\n\n7.\n-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)\n+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,\n+ XLogRecPtr *lsn)\n\n7a.\nThe existing parameters were described in the function comment. So,\nthe new 'lsn' parameter should be described here also.\n\n~\n\n7b.\nMaybe the new parameter name should be 'lsn_res' or 'lsn_out' or\nsimilar to emphasise that this is a returned value.\n\n~~\n\nNITPICK - tweaked comment. YMMV.\n\n~~~\n\n8. pg_sequence_state:\n\nShould you give descriptions of the output parameters in the function\nheader comment? Otherwise, where are they described so called knows\nwhat they mean?\n\n~~~\n\nNITPICK - /relid/seq_relid/\n\nNITPICK - declare the variables in the same order as the output parameters\n\nNITPICK - An alternative to the memset for nulls is just to use static\ninitialisation\n\"bool nulls[4] = {false, false, false, false};\"\n\n======\n+extern void SetSequence(Oid seq_relid, int64 value);\n\n9.\nWould 'SetSequenceLastValue' be a better name for what this function is doing?\n\n======\n\n99.\nSee also my attached diff which is a top-up patch implementing those\nnitpicks mentioned above. Please apply any of these that you agree\nwith.\n\n======\n[1] https://www.postgresql.org/docs/devel/functions-sequence.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 26 Jun 2024 19:10:39 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Here are some review comments for the patch v20240625-0002\n\n======\nCommit Message\n\n1.\nThis commit enhances logical replication by enabling the inclusion of all\nsequences in publications. This improvement facilitates seamless\nsynchronization of sequence data during operations such as\nCREATE SUBSCRIPTION, REFRESH PUBLICATION, and REFRESH PUBLICATION SEQUENCES.\n\n~\n\nIsn't this description getting ahead of the functionality a bit? For\nexample, it talks about operations like REFRESH PUBLICATION SEQUENCES\nbut AFAIK that syntax does not exist just yet.\n\n~~~\n\n2.\nThe commit message should mention that you are only introducing new\nsyntax for \"FOR ALL SEQUENCES\" here, but syntax for \"FOR SEQUENCE\" is\nbeing deferred to some later patch. 
Without such a note it is not\nclear why the gram.y syntax and docs seemed only half done.\n\n======\ndoc/src/sgml/ref/create_publication.sgml\n\n3.\n <varlistentry id=\"sql-createpublication-params-for-all-tables\">\n <term><literal>FOR ALL TABLES</literal></term>\n+ <term><literal>FOR ALL SEQUENCES</literal></term>\n <listitem>\n <para>\n- Marks the publication as one that replicates changes for all tables in\n- the database, including tables created in the future.\n+ Marks the publication as one that replicates changes for all tables or\n+ sequences in the database, including tables created in the future.\n\nIt might be better here to keep descriptions for \"ALL TABLES\" and \"ALL\nSEQUENCES\" separated, otherwise the wording does not quite seem\nappropriate for sequences (e.g. where it says \"including tables\ncreated in the future\").\n\n~~~\n\nNITPICK - missing spaces\nNITPICK - removed Oxford commas since previously there were none\n\n~~~\n\n4.\n+ If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,\n+ <literal>FOR ALL SEQUENCES</literal>,or <literal>FOR TABLES IN\nSCHEMA</literal>\n+ are not specified, then the publication starts out with an empty set of\n+ tables. That is useful if tables or schemas are to be added later.\n\nIt seems like \"FOR ALL SEQUENCES\" is out of place since it is jammed\nbetween other clauses referring to TABLES. Would it be better to\nmention SEQUENCES last in the list?\n\n~~~\n\n5.\n+ rights on the table. The <command>FOR ALL TABLES</command>,\n+ <command>FOR ALL SEQUENCES</command>, and\n <command>FOR TABLES IN SCHEMA</command> clauses require the invoking\n\nditto of #4 above.\n\n======\nsrc/backend/catalog/pg_publication.c\n\nGetAllSequencesPublicationRelations:\n\nNITPICK - typo /relation/relations/\n\n======\nsrc/backend/commands/publicationcmds.c\n\n6.\n+ foreach(lc, stmt->for_all_objects)\n+ {\n+ char *val = strVal(lfirst(lc));\n+\n+ if (strcmp(val, \"tables\") == 0)\n+ for_all_tables = true;\n+ else if (strcmp(val, \"sequences\") == 0)\n+ for_all_sequences = true;\n+ }\n\nConsider the foreach_ptr macro to slightly simplify this code.\nActually, this whole logic seems cumbersome -- can’t the parser assign\nflags automatically. Please see my more detailed comment #10 below\nabout this in gram.y\n\n~~~\n\n7.\n /* FOR ALL TABLES requires superuser */\n- if (stmt->for_all_tables && !superuser())\n+ if (for_all_tables && !superuser())\n ereport(ERROR,\n (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n errmsg(\"must be superuser to create FOR ALL TABLES publication\")));\n\n+ /* FOR ALL SEQUENCES requires superuser */\n+ if (for_all_sequences && !superuser())\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"must be superuser to create FOR ALL SEQUENCES publication\")));\n+\n\nThe current code is easy to read, but I wonder if it should try harder\nto share common code, or at least a common translatable message like\n\"must be superuser to create %s publication\".\n\n~~~\n\n8.\n- else\n+\n+ /*\n+ * If the publication might have either tables or sequences (directly or\n+ * through a schema), process that.\n+ */\n+ if (!for_all_tables || !for_all_sequences)\n\nI did not understand why this code cannot just say \"else\" like before,\nbecause the direct or through-schema syntax cannot be specified at the\nsame time as \"FOR ALL ...\", so why is the more complicated condition\nnecessary? Also, the similar code in AlterPublicationOptions() was not\nchanged to be like this.\n\n======\nsrc/backend/parser/gram.y\n\n9. 
comment\n\n *\n * CREATE PUBLICATION FOR ALL TABLES [WITH options]\n *\n+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]\n+ *\n * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]\n\nThe comment is not quite correct because actually you are allowing\nsimultaneous FOR ALL TABLES, SEQUENCES. It should be more like:\n\nCREATE PUBLICATION FOR ALL pub_obj_type [,...] [WITH options]\n\npub_obj_type is one of:\nTABLES\nSEQUENCES\n\n~~~\n\n10.\n+pub_obj_type: TABLES\n+ { $$ = (Node *) makeString(\"tables\"); }\n+ | SEQUENCES\n+ { $$ = (Node *) makeString(\"sequences\"); }\n+ ;\n+\n+pub_obj_type_list: pub_obj_type\n+ { $$ = list_make1($1); }\n+ | pub_obj_type_list ',' pub_obj_type\n+ { $$ = lappend($1, $3); }\n+ ;\n\nIIUC the only thing you need is a flag to say if FOR ALL TABLE is in\neffect and another flag to say if FOR ALL SEQUENCES is in effect. So,\nIt seemed clunky to build up a temporary list of \"tables\" or\n\"sequences\" strings here, which is subsequently scanned by\nCreatePublication to be turned back into booleans.\n\nCan't we just change the CreatePublicationStmt field to have:\n\nA) a 'for_all_types' bitmask instead of a list:\n0x0000 means FOR ALL is not specified\n0x0001 means ALL TABLES\n0x0010 means ALL SEQUENCES\n\nOr, B) have 2 boolean fields ('for_all_tables' and 'for_all_sequences')\n\n...where the gram.y code can be written to assign the flag/s values directly?\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n11.\n if (pubinfo->puballtables)\n appendPQExpBufferStr(query, \" FOR ALL TABLES\");\n\n+ if (pubinfo->puballsequences)\n+ appendPQExpBufferStr(query, \" FOR ALL SEQUENCES\");\n+\n\nHmm. Is that correct? It looks like a possible bug, because if both\nflags are true it will give invalid syntax like \"FOR ALL TABLES FOR\nALL SEQUENCES\" instead of \"FOR ALL TABLES, SEQUENCES\"\n\n======\nsrc/bin/pg_dump/t/002_pg_dump.pl\n\n12.\nThis could also try the test scenario of both FOR ALL being\nsimultaneously set (\"FOR ALL TABLES, SEQUENCES\") to check for bugs\nlike the suspected one in dump.c review comment #11 above.\n\n======\nsrc/bin/psql/describe.c\n\n13.\n+ if (pset.sversion >= 170000)\n+ printfPQExpBuffer(&buf,\n+ \"SELECT pubname AS \\\"%s\\\",\\n\"\n+ \" pg_catalog.pg_get_userbyid(pubowner) AS \\\"%s\\\",\\n\"\n+ \" puballtables AS \\\"%s\\\",\\n\"\n+ \" puballsequences AS \\\"%s\\\",\\n\"\n+ \" pubinsert AS \\\"%s\\\",\\n\"\n+ \" pubupdate AS \\\"%s\\\",\\n\"\n+ \" pubdelete AS \\\"%s\\\"\",\n+ gettext_noop(\"Name\"),\n+ gettext_noop(\"Owner\"),\n+ gettext_noop(\"All tables\"),\n+ gettext_noop(\"All sequences\"),\n+ gettext_noop(\"Inserts\"),\n+ gettext_noop(\"Updates\"),\n+ gettext_noop(\"Deletes\"));\n+ else\n+ printfPQExpBuffer(&buf,\n+ \"SELECT pubname AS \\\"%s\\\",\\n\"\n+ \" pg_catalog.pg_get_userbyid(pubowner) AS \\\"%s\\\",\\n\"\n+ \" puballtables AS \\\"%s\\\",\\n\"\n+ \" pubinsert AS \\\"%s\\\",\\n\"\n+ \" pubupdate AS \\\"%s\\\",\\n\"\n+ \" pubdelete AS \\\"%s\\\"\",\n+ gettext_noop(\"Name\"),\n+ gettext_noop(\"Owner\"),\n+ gettext_noop(\"All tables\"),\n+ gettext_noop(\"Inserts\"),\n+ gettext_noop(\"Updates\"),\n+ gettext_noop(\"Deletes\"));\n+\n\nIMO this should be coded differently so that only the\n\"puballsequences\" column is guarded by the (pset.sversion >= 170000),\nand everything else is the same as before. This suggested way would\nalso be consistent with the existing code version checks (e.g. 
for\n\"pubtruncate\" or for \"pubviaroot\").\n\n~~~\n\nNITPICK - Add blank lines\nNITPICK - space in \"ncols ++\"\n\n======\nsrc/bin/psql/tab-complete.c\n\n14.\nHmm. When I tried this, it didn't seem to be working properly.\n\nFor example \"CREATE PUBLICATION pub1 FOR ALL\" only completes with\n\"TABLES\" but not \"SEQUENCES\".\nFor example \"CREATE PUBLICATION pub1 FOR ALL SEQ\" doesn't complete\n\"SEQUENCES\" properly\n\n======\nsrc/include/catalog/pg_publication.h\n\nNITPICK - move the extern to be adjacent to others like it.\n\n======\nsrc/include/nodes/parsenodes.h\n\n15.\n- bool for_all_tables; /* Special publication for all tables in db */\n+ List *for_all_objects; /* Special publication for all objects in\n+ * db */\n } CreatePublicationStmt;\n\nI felt this List logic is a bit strange. See my comment #10 in gram.y\nfor more details.\n\n~~~\n\n16.\n- bool for_all_tables; /* Special publication for all tables in db */\n+ List *for_all_objects; /* Special publication for all objects in\n+ * db */\n\nDitto comment #15 in AlterPublicationStmt\n\n======\nsrc/test/regress/sql/publication.sql\n\n17.\n+CREATE SEQUENCE testpub_seq0;\n+CREATE SEQUENCE pub_test.testpub_seq1;\n+\n+SET client_min_messages = 'ERROR';\n+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;\n+RESET client_min_messages;\n+\n+SELECT pubname, puballtables, puballsequences FROM pg_publication\nWHERE pubname = 'testpub_forallsequences';\n+\\d+ pub_test.testpub_seq1\n\nShould you also do \"\\d+ tespub_seq0\" here? Otherwise what was the\npoint of defining the seq0 sequence being in this test?\n\n~~~\n\n18.\nMaybe there are missing test cases for different syntax combinations like:\n\nFOR ALL TABLES, SEQUENCES\nFOR ALL SEQUENCES, TABLES\n\nNote that the current list logic of this patch even considers my\nfollowing bogus statement syntax is OK.\n\ntest_pub=# CREATE PUBLICATION pub_silly FOR ALL TABLES, SEQUENCES,\nTABLES, TABLES, TABLES, SEQUENCES;\nCREATE PUBLICATION\ntest_pub=#\n\n======\n99.\nPlease also refer to the attached nitpicks patch which implements all\nthe cosmetic issues identified above as NITPICKS.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 1 Jul 2024 17:27:22 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 26 Jun 2024 at 14:41, Peter Smith <[email protected]> wrote:\n>\n> Here are my initial review comments for the first patch v20240625-0001.\n>\n> ======\n> General\n>\n> 6.\n> I saw that RelationNeedsWAL() is called 2 times. It may make no sense,\n> but is it possible to assign that to a variable 1st time so you don't\n> need to call it 2nd time within the critical section?\n>\n\nI felt this is ok, we do similarly in other places also like\nfill_seq_fork_with_data function in the same file.\n\nI have fixed the other comments and merged the nitpicks changes. The\nattached patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 2 Jul 2024 15:09:59 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Here are my comments for patch v20240702-0001\n\nThey are all cosmetic and/or typos. Apart from these the 0001 patch LGTM.\n\n======\ndoc/src/sgml/func.sgml\n\nSection 9.17. Sequence Manipulation Functions\n\npg_sequence_state:\nnitpick - typo /whethere/whether/\nnitpick - reworded slightly using a ChatGPT suggestion. 
(YMMV, so it\nis fine also if you prefer the current wording)\n\n======\nsrc/backend/commands/sequence.c\n\nSetSequenceLastValue:\nnitpick - typo in function comment /diffrent/different/\n\npg_sequence_state:\nnitpick - function comment wording: /page LSN/the page LSN/\nnitpick - moved some comment details about 'lsn_ret' into the function header\nnitpick - rearranged variable assignments to have consistent order\nwith the values\nnitpick - tweaked comments\nnitpick - typo /whethere/whether/\n\n======\n99.\nPlease see the attached diffs patch which implements all those\nnitpicks mentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 3 Jul 2024 12:54:18 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, 1 Jul 2024 at 12:57, Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for the patch v20240625-0002\n>\n> ======\n> Commit Message\n>\n> 1.\n> This commit enhances logical replication by enabling the inclusion of all\n> sequences in publications. This improvement facilitates seamless\n> synchronization of sequence data during operations such as\n> CREATE SUBSCRIPTION, REFRESH PUBLICATION, and REFRESH PUBLICATION SEQUENCES.\n>\n> ~\n>\n> Isn't this description getting ahead of the functionality a bit? For\n> example, it talks about operations like REFRESH PUBLICATION SEQUENCES\n> but AFAIK that syntax does not exist just yet.\n>\n> ~~~\n>\n> 2.\n> The commit message should mention that you are only introducing new\n> syntax for \"FOR ALL SEQUENCES\" here, but syntax for \"FOR SEQUENCE\" is\n> being deferred to some later patch. Without such a note it is not\n> clear why the gram.y syntax and docs seemed only half done.\n>\n> ======\n> doc/src/sgml/ref/create_publication.sgml\n>\n> 3.\n> <varlistentry id=\"sql-createpublication-params-for-all-tables\">\n> <term><literal>FOR ALL TABLES</literal></term>\n> + <term><literal>FOR ALL SEQUENCES</literal></term>\n> <listitem>\n> <para>\n> - Marks the publication as one that replicates changes for all tables in\n> - the database, including tables created in the future.\n> + Marks the publication as one that replicates changes for all tables or\n> + sequences in the database, including tables created in the future.\n>\n> It might be better here to keep descriptions for \"ALL TABLES\" and \"ALL\n> SEQUENCES\" separated, otherwise the wording does not quite seem\n> appropriate for sequences (e.g. where it says \"including tables\n> created in the future\").\n>\n> ~~~\n>\n> NITPICK - missing spaces\n> NITPICK - removed Oxford commas since previously there were none\n>\n> ~~~\n>\n> 4.\n> + If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,\n> + <literal>FOR ALL SEQUENCES</literal>,or <literal>FOR TABLES IN\n> SCHEMA</literal>\n> + are not specified, then the publication starts out with an empty set of\n> + tables. That is useful if tables or schemas are to be added later.\n>\n> It seems like \"FOR ALL SEQUENCES\" is out of place since it is jammed\n> between other clauses referring to TABLES. Would it be better to\n> mention SEQUENCES last in the list?\n>\n> ~~~\n>\n> 5.\n> + rights on the table. 
The <command>FOR ALL TABLES</command>,\n> + <command>FOR ALL SEQUENCES</command>, and\n> <command>FOR TABLES IN SCHEMA</command> clauses require the invoking\n>\n> ditto of #4 above.\n>\n> ======\n> src/backend/catalog/pg_publication.c\n>\n> GetAllSequencesPublicationRelations:\n>\n> NITPICK - typo /relation/relations/\n>\n> ======\n> src/backend/commands/publicationcmds.c\n>\n> 6.\n> + foreach(lc, stmt->for_all_objects)\n> + {\n> + char *val = strVal(lfirst(lc));\n> +\n> + if (strcmp(val, \"tables\") == 0)\n> + for_all_tables = true;\n> + else if (strcmp(val, \"sequences\") == 0)\n> + for_all_sequences = true;\n> + }\n>\n> Consider the foreach_ptr macro to slightly simplify this code.\n> Actually, this whole logic seems cumbersome -- can’t the parser assign\n> flags automatically. Please see my more detailed comment #10 below\n> about this in gram.y\n>\n> ~~~\n>\n> 7.\n> /* FOR ALL TABLES requires superuser */\n> - if (stmt->for_all_tables && !superuser())\n> + if (for_all_tables && !superuser())\n> ereport(ERROR,\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> errmsg(\"must be superuser to create FOR ALL TABLES publication\")));\n>\n> + /* FOR ALL SEQUENCES requires superuser */\n> + if (for_all_sequences && !superuser())\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"must be superuser to create FOR ALL SEQUENCES publication\")));\n> +\n>\n> The current code is easy to read, but I wonder if it should try harder\n> to share common code, or at least a common translatable message like\n> \"must be superuser to create %s publication\".\n>\n> ~~~\n>\n> 8.\n> - else\n> +\n> + /*\n> + * If the publication might have either tables or sequences (directly or\n> + * through a schema), process that.\n> + */\n> + if (!for_all_tables || !for_all_sequences)\n>\n> I did not understand why this code cannot just say \"else\" like before,\n> because the direct or through-schema syntax cannot be specified at the\n> same time as \"FOR ALL ...\", so why is the more complicated condition\n> necessary? Also, the similar code in AlterPublicationOptions() was not\n> changed to be like this.\n>\n> ======\n> src/backend/parser/gram.y\n>\n> 9. comment\n>\n> *\n> * CREATE PUBLICATION FOR ALL TABLES [WITH options]\n> *\n> + * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]\n> + *\n> * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]\n>\n> The comment is not quite correct because actually you are allowing\n> simultaneous FOR ALL TABLES, SEQUENCES. It should be more like:\n>\n> CREATE PUBLICATION FOR ALL pub_obj_type [,...] [WITH options]\n>\n> pub_obj_type is one of:\n> TABLES\n> SEQUENCES\n>\n> ~~~\n>\n> 10.\n> +pub_obj_type: TABLES\n> + { $$ = (Node *) makeString(\"tables\"); }\n> + | SEQUENCES\n> + { $$ = (Node *) makeString(\"sequences\"); }\n> + ;\n> +\n> +pub_obj_type_list: pub_obj_type\n> + { $$ = list_make1($1); }\n> + | pub_obj_type_list ',' pub_obj_type\n> + { $$ = lappend($1, $3); }\n> + ;\n>\n> IIUC the only thing you need is a flag to say if FOR ALL TABLE is in\n> effect and another flag to say if FOR ALL SEQUENCES is in effect. 
So,\n> It seemed clunky to build up a temporary list of \"tables\" or\n> \"sequences\" strings here, which is subsequently scanned by\n> CreatePublication to be turned back into booleans.\n>\n> Can't we just change the CreatePublicationStmt field to have:\n>\n> A) a 'for_all_types' bitmask instead of a list:\n> 0x0000 means FOR ALL is not specified\n> 0x0001 means ALL TABLES\n> 0x0010 means ALL SEQUENCES\n>\n> Or, B) have 2 boolean fields ('for_all_tables' and 'for_all_sequences')\n>\n> ...where the gram.y code can be written to assign the flag/s values directly?\n>\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 11.\n> if (pubinfo->puballtables)\n> appendPQExpBufferStr(query, \" FOR ALL TABLES\");\n>\n> + if (pubinfo->puballsequences)\n> + appendPQExpBufferStr(query, \" FOR ALL SEQUENCES\");\n> +\n>\n> Hmm. Is that correct? It looks like a possible bug, because if both\n> flags are true it will give invalid syntax like \"FOR ALL TABLES FOR\n> ALL SEQUENCES\" instead of \"FOR ALL TABLES, SEQUENCES\"\n>\n> ======\n> src/bin/pg_dump/t/002_pg_dump.pl\n>\n> 12.\n> This could also try the test scenario of both FOR ALL being\n> simultaneously set (\"FOR ALL TABLES, SEQUENCES\") to check for bugs\n> like the suspected one in dump.c review comment #11 above.\n>\n> ======\n> src/bin/psql/describe.c\n>\n> 13.\n> + if (pset.sversion >= 170000)\n> + printfPQExpBuffer(&buf,\n> + \"SELECT pubname AS \\\"%s\\\",\\n\"\n> + \" pg_catalog.pg_get_userbyid(pubowner) AS \\\"%s\\\",\\n\"\n> + \" puballtables AS \\\"%s\\\",\\n\"\n> + \" puballsequences AS \\\"%s\\\",\\n\"\n> + \" pubinsert AS \\\"%s\\\",\\n\"\n> + \" pubupdate AS \\\"%s\\\",\\n\"\n> + \" pubdelete AS \\\"%s\\\"\",\n> + gettext_noop(\"Name\"),\n> + gettext_noop(\"Owner\"),\n> + gettext_noop(\"All tables\"),\n> + gettext_noop(\"All sequences\"),\n> + gettext_noop(\"Inserts\"),\n> + gettext_noop(\"Updates\"),\n> + gettext_noop(\"Deletes\"));\n> + else\n> + printfPQExpBuffer(&buf,\n> + \"SELECT pubname AS \\\"%s\\\",\\n\"\n> + \" pg_catalog.pg_get_userbyid(pubowner) AS \\\"%s\\\",\\n\"\n> + \" puballtables AS \\\"%s\\\",\\n\"\n> + \" pubinsert AS \\\"%s\\\",\\n\"\n> + \" pubupdate AS \\\"%s\\\",\\n\"\n> + \" pubdelete AS \\\"%s\\\"\",\n> + gettext_noop(\"Name\"),\n> + gettext_noop(\"Owner\"),\n> + gettext_noop(\"All tables\"),\n> + gettext_noop(\"Inserts\"),\n> + gettext_noop(\"Updates\"),\n> + gettext_noop(\"Deletes\"));\n> +\n>\n> IMO this should be coded differently so that only the\n> \"puballsequences\" column is guarded by the (pset.sversion >= 170000),\n> and everything else is the same as before. This suggested way would\n> also be consistent with the existing code version checks (e.g. for\n> \"pubtruncate\" or for \"pubviaroot\").\n>\n> ~~~\n>\n> NITPICK - Add blank lines\n> NITPICK - space in \"ncols ++\"\n>\n> ======\n> src/bin/psql/tab-complete.c\n>\n> 14.\n> Hmm. When I tried this, it didn't seem to be working properly.\n>\n> For example \"CREATE PUBLICATION pub1 FOR ALL\" only completes with\n> \"TABLES\" but not \"SEQUENCES\".\n> For example \"CREATE PUBLICATION pub1 FOR ALL SEQ\" doesn't complete\n> \"SEQUENCES\" properly\n>\n> ======\n> src/include/catalog/pg_publication.h\n>\n> NITPICK - move the extern to be adjacent to others like it.\n>\n> ======\n> src/include/nodes/parsenodes.h\n>\n> 15.\n> - bool for_all_tables; /* Special publication for all tables in db */\n> + List *for_all_objects; /* Special publication for all objects in\n> + * db */\n> } CreatePublicationStmt;\n>\n> I felt this List logic is a bit strange. 
See my comment #10 in gram.y\n> for more details.\n>\n> ~~~\n>\n> 16.\n> - bool for_all_tables; /* Special publication for all tables in db */\n> + List *for_all_objects; /* Special publication for all objects in\n> + * db */\n>\n> Ditto comment #15 in AlterPublicationStmt\n>\n> ======\n> src/test/regress/sql/publication.sql\n>\n> 17.\n> +CREATE SEQUENCE testpub_seq0;\n> +CREATE SEQUENCE pub_test.testpub_seq1;\n> +\n> +SET client_min_messages = 'ERROR';\n> +CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;\n> +RESET client_min_messages;\n> +\n> +SELECT pubname, puballtables, puballsequences FROM pg_publication\n> WHERE pubname = 'testpub_forallsequences';\n> +\\d+ pub_test.testpub_seq1\n>\n> Should you also do \"\\d+ tespub_seq0\" here? Otherwise what was the\n> point of defining the seq0 sequence being in this test?\n>\n> ~~~\n>\n> 18.\n> Maybe there are missing test cases for different syntax combinations like:\n>\n> FOR ALL TABLES, SEQUENCES\n> FOR ALL SEQUENCES, TABLES\n>\n> Note that the current list logic of this patch even considers my\n> following bogus statement syntax is OK.\n>\n> test_pub=# CREATE PUBLICATION pub_silly FOR ALL TABLES, SEQUENCES,\n> TABLES, TABLES, TABLES, SEQUENCES;\n> CREATE PUBLICATION\n> test_pub=#\n>\n> ======\n> 99.\n> Please also refer to the attached nitpicks patch which implements all\n> the cosmetic issues identified above as NITPICKS.\n\nThank you for your feedback. I have addressed all the comments in the\nattached patch.\n\nRegards,\nVignesh", "msg_date": "Wed, 3 Jul 2024 20:09:24 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 3 Jul 2024 at 08:24, Peter Smith <[email protected]> wrote:\n>\n> Here are my comments for patch v20240702-0001\n>\n> They are all cosmetic and/or typos. Apart from these the 0001 patch LGTM.\n>\n> ======\n> doc/src/sgml/func.sgml\n>\n> Section 9.17. Sequence Manipulation Functions\n>\n> pg_sequence_state:\n> nitpick - typo /whethere/whether/\n> nitpick - reworded slightly using a ChatGPT suggestion. (YMMV, so it\n> is fine also if you prefer the current wording)\n>\n> ======\n> src/backend/commands/sequence.c\n>\n> SetSequenceLastValue:\n> nitpick - typo in function comment /diffrent/different/\n>\n> pg_sequence_state:\n> nitpick - function comment wording: /page LSN/the page LSN/\n> nitpick - moved some comment details about 'lsn_ret' into the function header\n> nitpick - rearranged variable assignments to have consistent order\n> with the values\n> nitpick - tweaked comments\n> nitpick - typo /whethere/whether/\n>\n> ======\n> 99.\n> Please see the attached diffs patch which implements all those\n> nitpicks mentioned above.\n\nThank you for your feedback. I have addressed all the comments in the\nv20240703 version patch attached at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm0mSSrvHNRnC67f0HWMpoLW9UzxGVXimhwbRtKjE7Aa-Q%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 3 Jul 2024 20:10:55 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh. 
Here are my comments for the latest patch v20240703-0001.\n\n======\ndoc/src/sgml/func.sgml\n\nnitpick - /lsn/LSN/ (all other doc pages I found use uppercase for this acronym)\n\n======\nsrc/backend/commands/sequence.c\n\nnitpick - /lsn/LSN/\n\n======\nPlease see attached nitpicks diff.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 4 Jul 2024 11:09:42 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 4 Jul 2024 at 06:40, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh. Here are my comments for the latest patch v20240703-0001.\n>\n> ======\n> doc/src/sgml/func.sgml\n>\n> nitpick - /lsn/LSN/ (all other doc pages I found use uppercase for this acronym)\n>\n> ======\n> src/backend/commands/sequence.c\n>\n> nitpick - /lsn/LSN/\n\nThanks for the comments, the attached patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Thu, 4 Jul 2024 09:05:20 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Here are my review comments for the patch v20240703-0002\n\n======\ndoc/src/sgml/ref/create_publication.sgml\n\nnitpick - consider putting the \"FOR ALL SEQUENCES\" para last, because\neventually when more sequence syntax is added IMO it will be better to\ndescribe all the TABLES together, and then describe all the SEQUENCES\ntogether.\n\nnitpick - /synchronizing changes/synchronizes changes/\n\nQuestion: Was there a reason you chose wording \"synchronizes changes\"\ninstead of having same \"replicates changes\" wording of FOR ALL TABLES?\n\n======\nsrc/backend/catalog/system_views.sql\n\n1.\nShould there be some new test for the view? Otherwise, AFAICT this\npatch has no tests that will exercise the new function\npg_get_publication_sequences.\n\n======\nsrc/backend/commands/publicationcmds.c\n\n2.\n+ errmsg(\"must be superuser to create FOR ALL %s publication\",\n+ stmt->for_all_tables ? 
\"TABLES\" : \"SEQUENCES\")));\n\nnitpick - the combined error message may be fine, but I think\ntranslators will prefer the substitution to be the full \"FOR ALL\nTABLES\" and \"FOR ALL SEQUENCES\" instead of just the keywords that are\ndifferent.\n\n======\nsrc/backend/parser/gram.y\n\n3.\nSome of these new things maybe could be named better?\n\n'preprocess_allpubobjtype_list' => 'preprocess_pub_all_objtype_list'\n\n'AllPublicationObjSpec *allpublicationobjectspec;' =>\n'PublicationAllObjSpec *publicationallobjectspec;'\n\n(I didn't include these in nitpicks diffs because you probably have\nbetter ideas than I do for good names)\n\n~~~\n\nnitpick - typo in comment /SCHEMAS/SEQUENCES/\n\npreprocess_allpubobjtype_list:\nnitpick - typo /allbjects_list/all_objects_list/\nnitpick - simplify /allpubob/obj/\nnitpick - add underscores in the enums\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n4.\n+ if (pubinfo->puballtables || pubinfo->puballsequences)\n+ {\n+ appendPQExpBufferStr(query, \" FOR ALL\");\n+ if (pubinfo->puballtables && pubinfo->puballsequences)\n+ appendPQExpBufferStr(query, \" TABLES, SEQUENCES\");\n+ else if (pubinfo->puballtables)\n+ appendPQExpBufferStr(query, \" TABLES\");\n+ else\n+ appendPQExpBufferStr(query, \" SEQUENCES\");\n+ }\n\nnitpick - it seems over-complicated; See nitpicks diff for my suggestion.\n\n======\nsrc/include/nodes/parsenodes.h\n\nnitpick - put underscores in the enum values\n\n~~\n\n5.\n- bool for_all_tables; /* Special publication for all tables in db */\n+ List *for_all_objects; /* Special publication for all objects in\n+ * db */\n\nIs this OK? Saying \"for all objects\" seemed misleading.\n\n======\nsrc/test/regress/sql/publication.sql\n\nnitpick - some small changes to comments, e.g. writing keywords in uppercase\n\n~~~\n\n6.\nI asked this before in a previous review [1-#17] -- I didn't\nunderstand the point of the sequence 'testpub_seq0' since nobody seems\nto be doing anything with it. Should it just be removed? Or is there a\nmissing test case to use it?\n\n~~~\n\n7.\nOther things to consider:\n\n(I didn't include these in my attached diff)\n\n* could use a single CREATE SEQUENCE stmt instead of multiple\n\n* could use a single DROP PUBLICATION stmt instead of multiple\n\n* shouldn't all publication names ideally have a 'regress_' prefix?\n\n======\n99.\nPlease refer to the attached nitpicks diff which has implementation\nfor the nitpicks cited above.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPvrk75vSDkaXJVmhhZuuqQSY98btWJV%3DBMZAnyTtKRB4g%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 4 Jul 2024 17:13:54 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "The latest (v20240704) patch 0001 LGTM\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 4 Jul 2024 17:44:30 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh.\n\nAfter applying the v20240703-0003 patch, I was always getting errors\nwhen running the subscription TAP tests.\n\n# +++ tap check in src/test/subscription +++\nt/001_rep_changes.pl ............... ok\nt/002_types.pl ..................... ok\nt/003_constraints.pl ............... ok\nt/004_sync.pl ...................... ok\nt/005_encoding.pl .................. ok\nt/006_rewrite.pl ................... 
ok\nt/007_ddl.pl ....................... 3/?\n# Failed test 'Alter subscription set publication throws warning for\nnon-existent publication'\n# at t/007_ddl.pl line 67.\nBailout called. Further testing stopped: pg_ctl stop failed\n# Tests were run but no plan was declared and done_testing() was not seen.\nFAILED--Further testing stopped: pg_ctl stop failed\nmake: *** [check] Error 255\n\n~~~\n\nThe publisher log shows an Assert TRAP occurred:\n\n2024-07-04 18:15:40.089 AEST [745] mysub1 LOG: statement: SELECT\nDISTINCT s.schemaname, s.sequencename\n FROM pg_catalog.pg_publication_sequences s\nWHERE s.pubname IN ('mypub', 'non_existent_pub', 'non_existent_pub1',\n'non_existent_pub2')\nTRAP: failed Assert(\"IsA(list, OidList)\"), File:\n\"../../../src/include/nodes/pg_list.h\", Line: 323, PID: 745\n\n~~~\n\nA debugging backtrace looks like below:\n\nCore was generated by `postgres: publisher: walsender postgres\npostgres [local] SELECT '.\nProgram terminated with signal 6, Aborted.\n#0 0x00007f36f44f02c7 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install\nglibc-2.17-260.el7_6.6.x86_64 pcre-8.32-17.el7.x86_64\n(gdb) bt\n#0 0x00007f36f44f02c7 in raise () from /lib64/libc.so.6\n#1 0x00007f36f44f19b8 in abort () from /lib64/libc.so.6\n#2 0x0000000000bb8be1 in ExceptionalCondition (conditionName=0xc7aa6c\n\"IsA(list, OidList)\",\n fileName=0xc7aa10 \"../../../src/include/nodes/pg_list.h\",\nlineNumber=323) at assert.c:66\n#3 0x00000000005f2c57 in list_nth_oid (list=0x27948f0, n=0) at\n../../../src/include/nodes/pg_list.h:323\n#4 0x00000000005f5491 in pg_get_publication_sequences\n(fcinfo=0x2796a00) at pg_publication.c:1334\n#5 0x0000000000763d10 in ExecMakeTableFunctionResult\n(setexpr=0x27b2fd8, econtext=0x27b2ef8, argContext=0x2796900,\n...\n\nSomething goes wrong indexing into that 'sequences' list.\n\n1329 funcctx = SRF_PERCALL_SETUP();\n1330 sequences = (List *) funcctx->user_fctx;\n1331\n1332 if (funcctx->call_cntr < list_length(sequences))\n1333 {\n1334 Oid relid = list_nth_oid(sequences, funcctx->call_cntr);\n1335\n1336 SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));\n1337 }\n\n======\n\nPerhaps now it is time to create a CF entry for this thread because\nthe cfbot could have detected the error earlier.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 5 Jul 2024 14:15:36 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 4 Jul 2024 at 12:44, Peter Smith <[email protected]> wrote:\n>\n> Here are my review comments for the patch v20240703-0002\n>\n> ======\n> doc/src/sgml/ref/create_publication.sgml\n>\n>\n> Question: Was there a reason you chose wording \"synchronizes changes\"\n> instead of having same \"replicates changes\" wording of FOR ALL TABLES?\n\nSince at this point we are only supporting sync of sequences, there\nare no incremental changes being replicated to subscribers. I thought\nsynchronization is better suited here.\n\n> ======\n> src/backend/catalog/system_views.sql\n>\n> 1.\n> Should there be some new test for the view? Otherwise, AFAICT this\n> patch has no tests that will exercise the new function\n> pg_get_publication_sequences.\n\npg_publication_sequences view uses pg_get_publication_sequences which\nwill be tested with 3rd patch while creating subscription/refreshing\npublication sequences. 
I felt it is ok not to have a test here.\n\n> 5.\n> - bool for_all_tables; /* Special publication for all tables in db */\n> + List *for_all_objects; /* Special publication for all objects in\n> + * db */\n>\n> Is this OK? Saying \"for all objects\" seemed misleading.\n\nThis change is not required, reverting it.\n\n> 6.\n> I asked this before in a previous review [1-#17] -- I didn't\n> understand the point of the sequence 'testpub_seq0' since nobody seems\n> to be doing anything with it. Should it just be removed? Or is there a\n> missing test case to use it?\n\nSince we are having all sequences published I wanted to have a\nsequence in another schema also. Adding describe for it too.\n\n> ~~~\n>\n> 7.\n> Other things to consider:\n>\n> (I didn't include these in my attached diff)\n>\n> * could use a single CREATE SEQUENCE stmt instead of multiple\n\nCREATE SEQUENCE does not support specifying multiple sequences in one\nstatement, skipping this.\n\nThe rest of the comments are fixed, the attached v20240705 version\npatch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Fri, 5 Jul 2024 17:28:26 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 5 Jul 2024 at 09:46, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh.\n>\n> After applying the v20240703-0003 patch, I was always getting errors\n> when running the subscription TAP tests.\n>\n> # +++ tap check in src/test/subscription +++\n> t/001_rep_changes.pl ............... ok\n> t/002_types.pl ..................... ok\n> t/003_constraints.pl ............... ok\n> t/004_sync.pl ...................... ok\n> t/005_encoding.pl .................. ok\n> t/006_rewrite.pl ................... ok\n> t/007_ddl.pl ....................... 3/?\n> # Failed test 'Alter subscription set publication throws warning for\n> non-existent publication'\n> # at t/007_ddl.pl line 67.\n> Bailout called. 
Further testing stopped: pg_ctl stop failed\n> # Tests were run but no plan was declared and done_testing() was not seen.\n> FAILED--Further testing stopped: pg_ctl stop failed\n> make: *** [check] Error 255\n>\n> ~~~\n>\n> The publisher log shows an Assert TRAP occurred:\n>\n> 2024-07-04 18:15:40.089 AEST [745] mysub1 LOG: statement: SELECT\n> DISTINCT s.schemaname, s.sequencename\n> FROM pg_catalog.pg_publication_sequences s\n> WHERE s.pubname IN ('mypub', 'non_existent_pub', 'non_existent_pub1',\n> 'non_existent_pub2')\n> TRAP: failed Assert(\"IsA(list, OidList)\"), File:\n> \"../../../src/include/nodes/pg_list.h\", Line: 323, PID: 745\n>\n> ~~~\n>\n> A debugging backtrace looks like below:\n>\n> Core was generated by `postgres: publisher: walsender postgres\n> postgres [local] SELECT '.\n> Program terminated with signal 6, Aborted.\n> #0 0x00007f36f44f02c7 in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: debuginfo-install\n> glibc-2.17-260.el7_6.6.x86_64 pcre-8.32-17.el7.x86_64\n> (gdb) bt\n> #0 0x00007f36f44f02c7 in raise () from /lib64/libc.so.6\n> #1 0x00007f36f44f19b8 in abort () from /lib64/libc.so.6\n> #2 0x0000000000bb8be1 in ExceptionalCondition (conditionName=0xc7aa6c\n> \"IsA(list, OidList)\",\n> fileName=0xc7aa10 \"../../../src/include/nodes/pg_list.h\",\n> lineNumber=323) at assert.c:66\n> #3 0x00000000005f2c57 in list_nth_oid (list=0x27948f0, n=0) at\n> ../../../src/include/nodes/pg_list.h:323\n> #4 0x00000000005f5491 in pg_get_publication_sequences\n> (fcinfo=0x2796a00) at pg_publication.c:1334\n> #5 0x0000000000763d10 in ExecMakeTableFunctionResult\n> (setexpr=0x27b2fd8, econtext=0x27b2ef8, argContext=0x2796900,\n> ...\n>\n> Something goes wrong indexing into that 'sequences' list.\n>\n> 1329 funcctx = SRF_PERCALL_SETUP();\n> 1330 sequences = (List *) funcctx->user_fctx;\n> 1331\n> 1332 if (funcctx->call_cntr < list_length(sequences))\n> 1333 {\n> 1334 Oid relid = list_nth_oid(sequences, funcctx->call_cntr);\n> 1335\n> 1336 SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));\n> 1337 }\n\nI was not able to reproduce this issue after several runs, but looks\nlike sequences need to be initialized here.\n\n> Perhaps now it is time to create a CF entry for this thread because\n> the cfbot could have detected the error earlier.\n\nI have added a commitfest entry for the same at [1].\n\nThe v20240705 version patch attached at [2] has the change for the same.\n\n[1] - https://commitfest.postgresql.org/49/5111/\n[2] - https://www.postgresql.org/message-id/CALDaNm3WvLUesGq54JagEkbBh4CBfMoT84Rw7HjL8KML_BSzPw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 5 Jul 2024 17:35:34 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, Jul 5, 2024 at 9:58 PM vignesh C <[email protected]> wrote:\n>\n> On Thu, 4 Jul 2024 at 12:44, Peter Smith <[email protected]> wrote:\n> >\n> > 1.\n> > Should there be some new test for the view? Otherwise, AFAICT this\n> > patch has no tests that will exercise the new function\n> > pg_get_publication_sequences.\n>\n> pg_publication_sequences view uses pg_get_publication_sequences which\n> will be tested with 3rd patch while creating subscription/refreshing\n> publication sequences. I felt it is ok not to have a test here.\n>\n\nOTOH, if there had been such a test here then the (\"sequence = NIL\")\nbug in patch 0002 code would have been caught earlier in patch 0002\ntesting instead of later in patch 0003 testing. 
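Just to illustrate the kind of test I had in mind -- this is only a sketch, and I am assuming the view exposes pubname/schemaname/sequencename the same way the query in the earlier TAP failure referenced them -- something like the following in publication.sql would have exercised pg_get_publication_sequences already during the patch 0002 testing:\n\n-- hypothetical regression test; relies on the FOR ALL SEQUENCES\n-- publication and sequences created earlier in publication.sql\nSELECT pubname, schemaname, sequencename\nFROM pg_catalog.pg_publication_sequences\nWHERE pubname = 'testpub_forallsequences'\nORDER BY schemaname, sequencename;\n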
In general, I think\neach patch should be self-contained w.r.t. to testing all of its new\ncode, but if you think another test here is overkill then I am fine\nwith that too.\n\n//////////\n\nMeanwhile, here are my review comments for patch v20240705-0002\n\n======\ndoc/src/sgml/ref/create_publication.sgml\n\n1.\nThe CREATE PUBLICATION page has many examples showing many different\ncombinations of syntax. I think it would not hurt to add another one\nshowing SEQUENCES being used.\n\n======\nsrc/backend/commands/publicationcmds.c\n\n2.\n+ if (form->puballsequences && !superuser_arg(newOwnerId))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"permission denied to change owner of publication \\\"%s\\\"\",\n+ NameStr(form->pubname)),\n+ errhint(\"The owner of a FOR ALL SEQUENCES publication must be a\nsuperuser.\")));\n\nYou might consider combining this with the previous error in the same\nway that the \"FOR ALL TABLES\" and \"FOR ALL SEQUENCES\" errors were\ncombined in CreatePublication. The result would be less code. But, I\nalso think your current code is fine, so I am just putting this out as\nan idea in case you prefer it.\n\n======\nsrc/backend/parser/gram.y\n\nnitpick - added a space in the comment\nnitpick - changed the call order slightly because $6 comes before $7\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n3. getPublications\n\n- if (fout->remoteVersion >= 130000)\n+ if (fout->remoteVersion >= 170000)\n\nThis should be 180000.\n\n======\nsrc/bin/psql/describe.c\n\n4. describeOneTableDetails\n\n+ /* print any publications */\n+ if (pset.sversion >= 170000)\n+ {\n\nThis should be 180000.\n\n~~~\n\ndescribeOneTableDetails:\nnitpick - removed a redundant \"else\"\nnitpick - simplified the \"Publications:\" header logic slightly\n\n~~~\n\n5. listPublications\n\n+ if (pset.sversion >= 170000)\n+ appendPQExpBuffer(&buf,\n+ \",\\n puballsequences AS \\\"%s\\\"\",\n+ gettext_noop(\"All sequences\"));\n\nThis should be 180000.\n\n~~~\n\n6. describePublications\n\n+ has_pubsequence = (pset.sversion >= 170000);\n\nThis should be 180000.\n\n~\n\nnitpick - remove some blank lines for consistency with nearby code\n\n======\nsrc/include/nodes/parsenodes.h\n\nnitpick - minor change to comment for PublicationAllObjType\nnitpick - the meanings of the enums are self-evident; I didn't think\ncomments were very useful\n\n======\nsrc/test/regress/sql/publication.sql\n\n7.\nI think it will also be helpful to arrange for a SEQUENCE to be\npublished by *multiple* publications. This would test that they get\nlisted as expected in the \"Publications:\" part of the \"describe\" (\\d+)\nfor the sequence.\n\n======\n99.\nPlease also see the attached diffs patch which implements any nitpicks\nmentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 10 Jul 2024 14:03:44 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Here are a few comments for patch v20240705-0003.\n\n(This is a WIP. 
I have only looked at the docs so far.)\n\n======\ndoc/src/sgml/config.sgml\n\nnitpick - max_logical_replication_workers: /and sequence\nsynchornization worker/and a sequence synchornization worker/\n\n======\ndoc/src/sgml/logical-replication.sgml\n\nnitpick - max_logical_replication_workers: re-order list of workers to\nbe consistent with other docs 1-apply,2-parallel,3-tablesync,4-seqsync\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n1.\nIIUC the existing \"REFRESH PUBLICATION\" command will fetch and sync\nall new sequences, etc., and/or remove old ones no longer in the\npublication. But current docs do not say anything at all about\nsequences here. It should say something about sequence behaviour.\n\n~~~\n\n2.\nFor the existing \"REFRESH PUBLICATION\" there is a sub-option\n\"copy_data=true/false\". Won't this need some explanation about how it\nbehaves for sequences? Or will there be another option\n\"copy_sequences=true/false\".\n\n~~~\n\n3.\nIIUC the main difference between REFRESH PUBLICATION and REFRESH\nPUBLICATION SEQUENCES is that the 2nd command will try synchronize\nwith all the *existing* sequences to bring them to the same point as\non the publisher, but otherwise, they are the same command. If that is\ncorrect understanding I don't think that distinction is made very\nclear in the current docs.\n\n~~~\n\nnitpick - the synopsis is misplaced. It should not be between ENABLE\nand DISABLE. I moved it. Also, it should say \"REFRESH PUBLICATION\nSEQUENCES\" because that is how the new syntax is defined in gram.y\n\nnitpick - REFRESH SEQUENCES. Renamed to \"REFRESH PUBLICATION\nSEQUENCES\". And, shouldn't \"from the publisher\" say \"with the\npublisher\"?\n\nnitpick - changed the varlistentry \"id\".\n\n======\n99.\nPlease also see the attached diffs patch which implements any nitpicks\nmentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 10 Jul 2024 18:16:03 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh. Here are the rest of my comments for patch v20240705-0003.\n\n(Apologies for the length of this post; but it was unavoidable due to\nthis being the 1st review of a very large large 1700-line patch)\n\n======\nsrc/backend/catalog/pg_subscription.c\n\n1. GetSubscriptionSequences\n\n+/*\n+ * Get the sequences for the subscription.\n+ *\n+ * The returned list is palloc'ed in the current memory context.\n+ */\n\nIs that comment right? The palloc seems to be done in\nCacheMemoryContext, not in the current context.\n\n~\n\n2.\nThe code is very similar to the other function\nGetSubscriptionRelations(). In fact I did not understand how the 2\nfunctions know what they are returning:\n\nE.g. how does GetSubscriptionRelations not return sequences too?\nE.g. how does GetSubscriptionSequences not return relations too?\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nCreateSubscription:\nnitpick - put the sequence logic *after* the relations logic because\nthat is the order that seems used everywhere else.\n\n~~~\n\n3. 
AlterSubscription_refresh\n\n- logicalrep_worker_stop(sub->oid, relid);\n+ /* Stop the worker if relation kind is not sequence*/\n+ if (relkind != RELKIND_SEQUENCE)\n+ logicalrep_worker_stop(sub->oid, relid);\n\nCan you give more reasons in the comment why skip the stop for sequence worker?\n\n~\n\nnitpick - period and space in the comment\n\n~~~\n\n4.\n for (off = 0; off < remove_rel_len; off++)\n {\n if (sub_remove_rels[off].state != SUBREL_STATE_READY &&\n- sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)\n+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE &&\n+ get_rel_relkind(sub_remove_rels[off].relid) != RELKIND_SEQUENCE)\n {\nWould this new logic perhaps be better written as:\n\nif (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)\n continue;\n\n~~~\n\nAlterSubscription_refreshsequences:\nnitpick - rename AlterSubscription_refresh_sequences\n\n~\n5.\nThere is significant code overlap between the existing\nAlterSubscription_refresh and the new function\nAlterSubscription_refreshsequences. I wonder if it is better to try to\ncombine the logic and just pass another parameter to\nAlterSubscription_refresh saying to update the existing sequences if\nnecessary. Particularly since the AlterSubscription_refresh is already\ntweaked to work for sequences. Of course, the resulting combined\nfunction would be large and complex, but maybe that would still be\nbetter than having giant slabs of nearly identical cut/paste code.\nThoughts?\n\n~~~\n\ncheck_publications_origin:\nnitpick - move variable declarations\n~~~\n\nfetch_sequence_list:\nnitpick - change /tablelist/seqlist/\nnitpick - tweak the spaces of the SQL for alignment (similar to\nfetch_table_list)\n\n~\n\n6.\n+ \" WHERE s.pubname IN (\");\n+ first = true;\n+ foreach_ptr(String, pubname, publications)\n+ {\n+ if (first)\n+ first = false;\n+ else\n+ appendStringInfoString(&cmd, \", \");\n+\n+ appendStringInfoString(&cmd, quote_literal_cstr(pubname->sval));\n+ }\n+ appendStringInfoChar(&cmd, ')');\n\nIMO this can be written much better by using get_publications_str()\nfunction to do all this list work.\n\n======\nsrc/backend/replication/logical/launcher.c\n\n7. logicalrep_worker_find\n\n/*\n * Walks the workers array and searches for one that matches given\n * subscription id and relid.\n *\n * We are only interested in the leader apply worker or table sync worker.\n */\n\nThe above function comment (not in the patch 0003) is stale because\nthis AFAICT this is also going to return sequence workers if it finds\none.\n\n~~~\n\n8. logicalrep_sequence_sync_worker_find\n\n+/*\n+ * Walks the workers array and searches for one that matches given\n+ * subscription id.\n+ *\n+ * We are only interested in the sequence sync worker.\n+ */\n+LogicalRepWorker *\n+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)\n\nThere are other similar functions for walking the workers array to\nsearch for a worker. Instead of having different functions for\ndifferent cases, wouldn't it be cleaner to combine these into a single\nfunction, where you pass a parameter (e.g. a mask of worker types that\nyou are interested in finding)?\n\n~\n\nnitpick - declare a for loop variable 'i'\n\n~~~\n\n9. logicalrep_apply_worker_find\n\n+static LogicalRepWorker *\n+logicalrep_apply_worker_find(Oid subid, bool only_running)\n\nAll the other find* functions assume the lock is already held\n(Assert(LWLockHeldByMe(LogicalRepWorkerLock));). But this one is\ndifferent. 
IMO it might be better to acquire the lock in the caller to\nmake all the find* functions look the same. Anyway, that will help to\ncombine everything into 1 \"find\" worker as suggested in the previous\nreview comment #8.\n\n~\n\nnitpick - declare a for loop variable 'i'\nnitpick - removed unnecessary parens in condition.\n\n~~~\n\n10. logicalrep_worker_launch\n\n/*----------\n* Sanity checks:\n* - must be valid worker type\n* - tablesync workers are only ones to have relid\n* - parallel apply worker is the only kind of subworker\n*/\n\nThe above code-comment (not in the 0003 patch) seems stale. This\nshould now also mention sequence sync workers, right?\n\n~~~\n\n11.\n- Assert(is_tablesync_worker == OidIsValid(relid));\n+ Assert(is_tablesync_worker == OidIsValid(relid) ||\nis_sequencesync_worker == OidIsValid(relid));\n\nIIUC there is only a single sequence sync worker for handling all the\nsequences. So, what does the 'relid' actually mean here when there are\nmultiple sequences?\n\n~~~\n\n12. logicalrep_seqsyncworker_failuretime\n\n+/*\n+ * Set the sequence sync worker failure time\n+ *\n+ * Called on sequence sync worker failure exit.\n+ */\n\n12a.\nThe comment should be improved to make it more clear that the failure\ntime of the sync worker information is stored with the *apply* worker.\nSee also other review comments in this post about this area -- perhaps\nall this can be removed?\n\n~\n\n12b.\nCurious if this had to be a separate exit handler or if may this could\nhave been handled by the existing logicalrep_worker_onexit handler.\nSee also other review comments int this post about this area --\nperhaps all this can be removed?\n\n======\n.../replication/logical/sequencesync.c\n\n13. fetch_sequence_data\n\n13a.\nThe function comment has no explanation of what exactly the returned\nvalue means. It seems like it is what you will assign as 'last_value'\non the subscriber-side.\n\n~\n\n13b.\nSome of the table functions like this are called like\n'fetch_remote_table_info()'. Maybe it is better to do similar here\n(e.g. include the word \"remote\" in the function name).\n\n~\n\n14.\nThe reason for the addition logic \"(last_value + log_cnt)\" is not\nobvious. I am guessing it might be related to code from\n'nextval_internal' (fetch = log = fetch + SEQ_LOG_VALS;) but it is\ncomplicated. It is unfortunate that the field 'log_cnt' seems hardly\ncommented anywhere at all.\n\nAlso, I am not 100% sure if I trust the logic in the first place. The\ncaller of this function is doing:\nsequence_value = fetch_sequence_data(conn, remoteid, &lsn);\n/* sets the sequence with sequence_value */\nSetSequenceLastValue(RelationGetRelid(rel), sequence_value);\n\nWon't that mean you can get to a situation where subscriber-side\nresult of lastval('s') can be *ahead* from lastval('s') on the\npublisher? That doesn't seem good.\n\n~~~\n\ncopy_sequence:\n\nnitpick - ERROR message. Reword \"for table...\" to be more like the 2nd\nerror message immediately below.\nnitpick - /RelationGetRelationName(rel)/relname/\nnitpick - moved the Assert for 'relkind' to be nearer the assignment.\n\n~\n\n15.\n+ /*\n+ * Logical replication of sequences is based on decoding WAL records,\n+ * describing the \"next\" state of the sequence the current state in the\n+ * relfilenode is yet to reach. 
But during the initial sync we read the\n+ * current state, so we need to reconstruct the WAL record logged when we\n+ * started the current batch of sequence values.\n+ *\n+ * Otherwise we might get duplicate values (on subscriber) if we failed\n+ * over right after the sync.\n+ */\n+ sequence_value = fetch_sequence_data(conn, remoteid, &lsn);\n+\n+ /* sets the sequence with sequence_value */\n+ SetSequenceLastValue(RelationGetRelid(rel), sequence_value);\n\n(This is related to some earlier review comment #14 above). IMO all\nthis tricky commentary belongs in the function header of\n\"fetch_sequence_data\", where it should be describing that function's\nreturn value.\n\n~~~\n\nLogicalRepSyncSequences:\nnitpick - declare void param\nnitpick indentation\nnitpick - wrapping\nnitpick - /sequencerel/sequence_rel/\nnitpick - blank lines\n\n~\n\n16.\n+ if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid,\nfalse) == RLS_ENABLED)\n+ ereport(ERROR,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"user \\\"%s\\\" cannot replicate into relation with row-level\nsecurity enabled: \\\"%s\\\"\",\n+ GetUserNameFromId(GetUserId(), true),\n+ RelationGetRelationName(sequencerel)));\n\nThis should be reworded to refer to sequences instead of relations. Maybe like:\nuser \\\"%s\\\" cannot replicate into sequence \\\"%s\\\" with row-level\nsecurity enabled\"\n\n~\n\n17.\nThe Calculations involving the BATCH size seem a bit tricky.\ne.g. in 1st place it is doing: (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)\ne.g. in 2nd place it is doing: (next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0)\n\nMaybe this batch logic can be simplified somehow using a bool variable\nfor the calculation?\n\nAlso, where does the number 100 come from? Why not 1000? Why not 10?\nWhy have batching at all? Maybe there should be some comment to\ndescribe the reason and the chosen value.\n\n~\n\n18.\n+ next_seq = curr_seq + 1;\n+ if (((next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) || next_seq == seq_count)\n+ {\n+ /* LOG all the sequences synchronized during current batch. */\n+ int i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);\n+ for (; i <= curr_seq; i++)\n+ {\n+ SubscriptionRelState *done_seq;\n+ done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences, i));\n+ ereport(LOG,\n+ errmsg(\"logical replication synchronization for subscription \\\"%s\\\",\nsequence \\\"%s\\\" has finished\",\n+ get_subscription_name(subid, false), get_rel_name(done_seq->relid)));\n+ }\n+\n+ CommitTransactionCommand();\n+ }\n+\n+ curr_seq++;\n\nI feel this batching logic needs more comments describing what you are\ndoing here.\n\n~~~\n\nSequencesyncWorkerMain:\nnitpick - spaces in the function comment\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n19. finish_sync_worker\n\n-finish_sync_worker(void)\n+finish_sync_worker(bool istable)\n\nIMO, for better readability (here and in the callers) the new\nparameter should be the enum LogicalRepWorkerType. 
Since we have that\nenum, might as well make good use of it.\n\n~\n\nnitpick - /sequences synchronization worker/sequence synchronization worker/\nnitpick - comment tweak\n\n~\n\n20.\n+ char relkind;\n+\n+ if (!started_tx)\n+ {\n+ StartTransactionCommand();\n+ started_tx = true;\n+ }\n+\n+ relkind = get_rel_relkind(rstate->relid);\n+ if (relkind == RELKIND_SEQUENCE)\n+ continue;\n\nI am wondering is it possible to put the relkind check can come\n*before* the TX code here, because in case there are *only* sequences\nthen maybe every would be skipped and there would have been no need\nfor any TX at all in the first place.\n\n~~~\n\nprocess_syncing_sequences_for_apply:\n\nnitpick - fix typo and slight reword function header comment. Also\n/last start time/last failure time/\nnitpick - tweak comments\nnitpick - blank lines\n\n~\n\n21.\n+ if (!started_tx)\n+ {\n+ StartTransactionCommand();\n+ started_tx = true;\n+ }\n+\n+ relkind = get_rel_relkind(rstate->relid);\n+ if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)\n+ continue;\n\nWondering (like in review comment #20) if it is possible to swap those\nbecause maybe there was no reason for any TX if the other condition\nwould always continue.\n\n~~~\n\n22.\n+ if (nsyncworkers < max_sync_workers_per_subscription)\n+ {\n+ TimestampTz now = GetCurrentTimestamp();\n+ if (!MyLogicalRepWorker->sequencesync_failure_time ||\n+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,\n+ now, wal_retrieve_retry_interval))\n+ {\n+ MyLogicalRepWorker->sequencesync_failure_time = 0;\n\nIt seems to me that storing 'sequencesync_failure_time' logic may be\nunnecessarily complicated. Can't the same \"throttling\" be achieved by\nstoring the synchronization worker 'start time' instead of 'fail\ntime', in which case then you won't have to mess around with\nconsidering if the sync worker failed or just exited normally etc? You\nmight also be able to remove all the\nlogicalrep_seqsyncworker_failuretime() exit handler code.\n\n~~~\n\nprocess_syncing_tables:\nnitpick - let's process tables before sequences (because all other\ncode is generally in this same order)\nnitpick - removed some excessive comments about code that is not\nsupposed to happen\n\n======\nsrc/backend/replication/logical/worker.c\n\nshould_apply_changes_for_rel:\nnitpick - IMO there were excessive comments for something that is not\ngoing to happen\n\n~~~\n\n23. InitializeLogRepWorker\n\n/*\n * Common initialization for leader apply worker, parallel apply worker and\n * tablesync worker.\n *\n * Initialize the database connection, in-memory subscription and necessary\n * config options.\n */\n\nThat comment (not part of patch 0003) is stale; it should now mention\nthe sequence sync worker as well, right?\n\n~\n\nnitpick - Tweak plural /sequences sync worker/sequence sync worker/\n\n~~~\n\n24. SetupApplyOrSyncWorker\n\n/* Common function to setup the leader apply or tablesync worker. 
*/\n\nThat comment (not part of patch 0003) is stale; it should now mention\nthe sequence sync worker as well, right?\n\n======\nsrc/include/nodes/parsenodes.h\n\n25.\n ALTER_SUBSCRIPTION_ADD_PUBLICATION,\n ALTER_SUBSCRIPTION_DROP_PUBLICATION,\n ALTER_SUBSCRIPTION_REFRESH,\n+ ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,\n\nFor consistency with your new enum it would be better to also change\nthe existing enum name ALTER_SUBSCRIPTION_REFRESH ==>\nALTER_SUBSCRIPTION_REFRESH_PUBLICATION.\n\n======\nsrc/include/replication/logicalworker.h\n\nnitpick - IMO should change the function name\n/SequencesyncWorkerMain/SequenceSyncWorkerMain/, and in passing make\nthe same improvement to the TablesyncWorkerMain function name.\n\n======\nsrc/include/replication/worker_internal.h\n\n26.\n WORKERTYPE_PARALLEL_APPLY,\n+ WORKERTYPE_SEQUENCESYNC,\n } LogicalRepWorkerType;\n\nAFAIK the enum order should not matter here so it would be better to\nput the WORKERTYPE_SEQUENCESYNC directly after the\nWORKERTYPE_TABLESYNC to keep the similar things together.\n\n~\n\nnitpick - IMO change the macro name\n/isSequencesyncWorker/isSequenceSyncWorker/, and in passing make the\nsame improvement to the isTablesyncWorker macro name.\n\n======\nsrc/test/subscription/t/034_sequences.pl\n\nnitpick - Copyright year\nnitpick - Modify the \"Create subscriber node\" comment for consistency\nnitpick - Modify comments slightly for the setup structure parts\nnitpick - Add or remove various blank lines\nnitpick - Since you have sequences 's2' and 's3', IMO it makes more\nsense to call the original sequence 's1' instead of just 's'\nnitpick - Rearrange so the CREATE PUBLICATION/SUBSCRIPTION can stay together\nnitpick - Modified some comment styles to clearly delineate all the\nmain \"TEST\" scenarios\nnitpick - In the REFRESH PUBLICATION test the create new sequence and\nupdate existing can be combined (like you do in a later test).\nnitpick - Changed some of the test messages for REFRESH PUBLICATION\nwhich seemed wrong\nnitpick - Added another test for 's1' in REFRESH PUBLICATION SEQUENCES\nnitpick - Changed some of the test messages for REFRESH PUBLICATION\nSEQUENCES which seemed wrong\n\n~\n\n27.\nIIUC the preferred practice is to give these test object names a\n'regress_' prefix.\n\n~\n\n28.\n+# Check the data on subscriber\n+$result = $node_subscriber->safe_psql(\n+ 'postgres', qq(\n+ SELECT * FROM s;\n+));\n+\n+is($result, '132|0|t', 'initial test data replicated');\n\n28a.\nMaybe it is better to say \"SELECT last_value, log_cnt, is_called\"\ninstead of \"SELECT *\" ?\nNote - this is in a couple of places.\n\n~\n\n28b.\nCan you explain why the expected sequence value its 132, because\nAFAICT you only called nextval('s') 100 times, so why isn't it 100?\nMy guess is that it seems to be related to code in \"nextval_internal\"\n(fetch = log = fetch + SEQ_LOG_VALS;) but it kind of defies\nexpectations of the test, so if it really is correct then it needs\ncommentary.\n\nActually, I found other regression test code that deals with this:\n-- log_cnt can be higher if there is a checkpoint just at the right\n-- time, so just test for the expected range\nSELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM\nfoo_seq_new;\n\nDo you have to do something similar? Or is this a bug? 
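If it does turn out to be just the checkpoint-timing effect, then perhaps the new\nTAP test can borrow the same trick. A rough sketch (untested; sequence 's' is the\none from your test, but the exact range to accept is my assumption):\n\n-- log_cnt can be higher if a checkpoint happens at just the right time,\n-- so only check that it falls within an expected range rather than an exact value.\nSELECT last_value, log_cnt IN (0, 32) AS log_cnt_ok, is_called FROM s;\n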
See my other\nreview comments for function fetch_sequence_data in sequencesync.c\n\n======\n99.\nPlease also see the attached diffs patch which implements any nitpicks\nmentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 12 Jul 2024 12:52:25 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi,\n\nI was reading back through this thread to find out how the proposed new\ncommand for refreshing sequences, came about. The patch 0705 introduces a\nnew command syntax for ALTER SUBSCRIPTION ... REFRESH SEQUENCES\n\nSo now there are 2 forms of subscription refresh.\n\n#1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option [=\nvalue] [, ... ] ) ]\n\n#2. ALTER SUBSCRIPTION name REFRESH SEQUENCES\n\n~~~~\n\nIMO, that separation seems complicated. It leaves many questions like:\n* It causes a bit of initial confusion. e.g. When I saw the REFRESH\nSEQUENCES I first assumed that was needed because sequences were\nnot covered by the existing REFRESH PUBLICATION\n* Why wasn't command #2 called ALTER SUBSCRIPTION REFRESH PUBLICATION\nSEQUENCES? E.g. missing keyword PUBLICATION. It seems inconsistent.\n* I expect sequence values can become stale pretty much immediately after\ncommand #1, so the user will want to use command #2 anyway...\n* ... but if command #2 also does add/remove changed sequences same as\ncommand #1 then what benefit was there of having the command #1 for\nsequences?\n* There is a separation of sequences (from tables) in command #2 but there\nis no separation currently possible in command #1. It seemed inconsistent.\n\n~~~\n\nIIUC some of the goals I saw in the thread are to:\n* provide a way to fetch and refresh sequences that also keeps behaviors\n(e.g. copy_data etc.) consistent with the refresh of subscription tables\n* provide a way to fetch and refresh *only* sequences\n\nI felt you could just enhance the existing refresh command syntax (command\n#1), instead of introducing a new one it would be simpler and it would\nstill meet those same objectives.\n\nSynopsis:\nALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES | SEQUENCES | ALL] [\nWITH ( refresh_option [= value] [, ... ] ) ]\n\nMy only change is the introduction of the optional \"[TABLES | SEQUENCES |\nALL]\" clause.\n\nI believe that can do everything your current patch does, plus more:\n* Can refresh *only* TABLES if that is what you want (current patch 0705\ncannot do this)\n* Can refresh *only* SEQUENCES (same as current patch 0705 command #2)\n* Has better integration with refresh options like \"copy_data\" (current\npatch 0705 command #2 doesn't have options)\n* Existing REFRESH PUBLICATION syntax still works as-is. 
You can decide\nlater what is PG18 default if the \"[TABLES | SEQUENCES | ALL]\" is omitted.\n\n~~~\n\nMore examples using proposed syntax.\n\nex1.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = false)\n- same as PG17 functionality for ALTER SUBSCRIPTION sub REFRESH PUBLICATION\nWITH (copy_data = false)\n\nex2.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = true)\n- same as PG17 functionality for ALTER SUBSCRIPTION sub REFRESH PUBLICATION\nWITH (copy_data = true)\n\nex3.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy data =\nfalse)\n- this adds/removes only sequences to pg_subscription_rel but doesn't\nupdate their sequence values\n\nex4.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy data = true)\n- this adds/removes only sequences to pg_subscription_rel and also updates\nall sequence values.\n- this is equivalent behaviour of what your current 0705 patch is doing for\ncommand #2, ALTER SUBSCRIPTION sub REFRESH SEQUENCES\n\nex5.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL WITH (copy_data = false)\n- this is equivalent behaviour of what your current 0705 patch is doing for\ncommand #1, ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data =\nfalse)\n\nex6.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL WITH (copy_data = true)\n- this adds/removes tables and sequences and updates all table initial data\nsequence values.- I think it is equivalent to your current 0705 patch doing\ncommand #1 ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data =\ntrue), followed by another command #2 ALTER SUBSCRIPTION sub REFRESH\nSEQUENCES\n\nex7.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES\n- Because default copy_data is true you do not need to specify options, so\nthis is the same behaviour as your current 0705 patch command #2, ALTER\nSUBSCRIPTION sub REFRESH SEQUENCES.\n\n~~~\n\nI hope this post was able to demonstrate that by enhancing the existing\ncommand:\n- it is less tricky to understand the separate command distinctions\n- there is more functionality/flexibility possible\n- there is better integration with the refresh options like copy_data\n- behaviour for tables/sequences is more consistent\n\nAnyway, it is just my opinion. Maybe there are some pitfalls I'm unaware of.\n\nThoughts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nHi, I was reading back through this thread to find out how the proposed new command for refreshing sequences,  came about. The patch 0705 introduces a new command syntax for ALTER SUBSCRIPTION ... REFRESH SEQUENCESSo now there are 2 forms of subscription refresh.#1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option [= value] [, ... ] ) ]#2. ALTER SUBSCRIPTION name REFRESH SEQUENCES~~~~IMO, that separation seems complicated. It leaves many questions like:* It causes a bit of initial confusion. e.g. When I saw the REFRESH SEQUENCES I first assumed that was needed because sequences were not covered by the existing REFRESH PUBLICATION* Why wasn't command #2 called ALTER SUBSCRIPTION REFRESH PUBLICATION SEQUENCES? E.g. missing keyword PUBLICATION. It seems inconsistent.* I expect sequence values can become stale pretty much immediately after command #1, so the user will want to use command #2 anyway...* ... 
but if command #2 also does add/remove changed sequences same as command #1 then what benefit was there of having the command #1 for sequences?* There is a separation of sequences (from tables) in command #2 but there is no separation currently possible in command #1. It seemed inconsistent.~~~IIUC some of the goals I saw in the thread are to:* provide a way to fetch and refresh sequences that also keeps behaviors (e.g. copy_data etc.) consistent with the refresh of subscription tables* provide a way to fetch and refresh *only* sequencesI felt you could just enhance the existing refresh command syntax (command #1), instead of introducing a new one it would be simpler and it would still meet those same objectives. Synopsis:ALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES | SEQUENCES | ALL] [ WITH ( refresh_option [= value] [, ... ] ) ]My only change is the introduction of the optional \"[TABLES | SEQUENCES | ALL]\" clause.I believe that can do everything your current patch does, plus more:* Can refresh *only* TABLES if that is what you want (current patch 0705 cannot do this)* Can refresh *only* SEQUENCES (same as current patch 0705 command #2)* Has better integration with refresh options like \"copy_data\" (current patch 0705 command #2 doesn't have options)* Existing REFRESH PUBLICATION syntax still works as-is. You can decide later what is PG18 default if the \"[TABLES | SEQUENCES | ALL]\" is omitted.~~~More examples using proposed syntax.ex1.ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = false)- same as PG17 functionality for ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = false)ex2.ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = true)- same as PG17 functionality for ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = true)ex3.ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy data = false)- this adds/removes only sequences to pg_subscription_rel but doesn't update their sequence valuesex4.ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy data = true)- this adds/removes only sequences to pg_subscription_rel and also updates all sequence values.- this is equivalent behaviour of what your current 0705 patch is doing for command #2, ALTER SUBSCRIPTION sub REFRESH SEQUENCESex5.ALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL WITH (copy_data = false)- this is equivalent behaviour of what your current 0705 patch is doing for command #1, ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = false)ex6.ALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL WITH (copy_data = true)- this adds/removes tables and sequences and updates all table initial data sequence values.- I think it is equivalent to your current 0705 patch doingcommand #1 ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = true), followed by another command #2 ALTER SUBSCRIPTION sub REFRESH SEQUENCESex7.ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES- Because default copy_data is true you do not need to specify options, so this is the same behaviour as your current 0705 patch command #2, ALTER SUBSCRIPTION sub REFRESH SEQUENCES.~~~I hope this post was able to demonstrate that by enhancing the existing command:- it is less tricky to understand the separate command distinctions- there is more functionality/flexibility possible- there is better integration with the refresh options like copy_data- behaviour for tables/sequences is more consistentAnyway, it is just my opinion. 
Maybe there are some pitfalls I'm unaware of.Thoughts?======Kind Regards,Peter Smith.Fujitsu Australia", "msg_date": "Tue, 16 Jul 2024 10:30:22 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 10 Jul 2024 at 09:34, Peter Smith <[email protected]> wrote:\n>\n> On Fri, Jul 5, 2024 at 9:58 PM vignesh C <[email protected]> wrote:\n> >\n> > On Thu, 4 Jul 2024 at 12:44, Peter Smith <[email protected]> wrote:\n> > >\n> > > 1.\n> > > Should there be some new test for the view? Otherwise, AFAICT this\n> > > patch has no tests that will exercise the new function\n> > > pg_get_publication_sequences.\n> >\n> > pg_publication_sequences view uses pg_get_publication_sequences which\n> > will be tested with 3rd patch while creating subscription/refreshing\n> > publication sequences. I felt it is ok not to have a test here.\n> >\n>\n> OTOH, if there had been such a test here then the (\"sequence = NIL\")\n> bug in patch 0002 code would have been caught earlier in patch 0002\n> testing instead of later in patch 0003 testing. In general, I think\n> each patch should be self-contained w.r.t. to testing all of its new\n> code, but if you think another test here is overkill then I am fine\n> with that too.\n\nMoved these changes to 0003 patch where it is actually required.\n\n> //////////\n>\n> Meanwhile, here are my review comments for patch v20240705-0002\n\nAll the comments are fixed and the attached v20240720 version patch\nhas the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Sat, 20 Jul 2024 20:36:21 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 10 Jul 2024 at 13:46, Peter Smith <[email protected]> wrote:\n>\n> Here are a few comments for patch v20240705-0003.\n>\n> (This is a WIP. I have only looked at the docs so far.)\n>\n> ======\n> doc/src/sgml/config.sgml\n>\n> nitpick - max_logical_replication_workers: /and sequence\n> synchornization worker/and a sequence synchornization worker/\n>\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> nitpick - max_logical_replication_workers: re-order list of workers to\n> be consistent with other docs 1-apply,2-parallel,3-tablesync,4-seqsync\n>\n> ======\n> doc/src/sgml/ref/alter_subscription.sgml\n>\n> 1.\n> IIUC the existing \"REFRESH PUBLICATION\" command will fetch and sync\n> all new sequences, etc., and/or remove old ones no longer in the\n> publication. But current docs do not say anything at all about\n> sequences here. It should say something about sequence behaviour.\n>\n> ~~~\n>\n> 2.\n> For the existing \"REFRESH PUBLICATION\" there is a sub-option\n> \"copy_data=true/false\". Won't this need some explanation about how it\n> behaves for sequences? Or will there be another option\n> \"copy_sequences=true/false\".\n>\n> ~~~\n>\n> 3.\n> IIUC the main difference between REFRESH PUBLICATION and REFRESH\n> PUBLICATION SEQUENCES is that the 2nd command will try synchronize\n> with all the *existing* sequences to bring them to the same point as\n> on the publisher, but otherwise, they are the same command. If that is\n> correct understanding I don't think that distinction is made very\n> clear in the current docs.\n>\n> ~~~\n>\n> nitpick - the synopsis is misplaced. It should not be between ENABLE\n> and DISABLE. I moved it. 
Also, it should say \"REFRESH PUBLICATION\n> SEQUENCES\" because that is how the new syntax is defined in gram.y\n>\n> nitpick - REFRESH SEQUENCES. Renamed to \"REFRESH PUBLICATION\n> SEQUENCES\". And, shouldn't \"from the publisher\" say \"with the\n> publisher\"?\n>\n> nitpick - changed the varlistentry \"id\".\n>\n> ======\n> 99.\n> Please also see the attached diffs patch which implements any nitpicks\n> mentioned above.\n\nAll these comments are handled in the v20240720 version patch attached at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm2vuO7Ya4QVTZKR9jY_mkFFcE_hKUJiXx4KUknPgGFjSg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 20 Jul 2024 20:38:18 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 12 Jul 2024 at 08:22, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh. Here are the rest of my comments for patch v20240705-0003.\n> ======\n> src/backend/catalog/pg_subscription.c\n>\n> 1. GetSubscriptionSequences\n>\n> +/*\n> + * Get the sequences for the subscription.\n> + *\n> + * The returned list is palloc'ed in the current memory context.\n> + */\n>\n> Is that comment right? The palloc seems to be done in\n> CacheMemoryContext, not in the current context.\n\nThis function is removed and GetSubscriptionRelations is being used instead.\n\n> ~\n>\n> 2.\n> The code is very similar to the other function\n> GetSubscriptionRelations(). In fact I did not understand how the 2\n> functions know what they are returning:\n>\n> E.g. how does GetSubscriptionRelations not return sequences too?\n> E.g. how does GetSubscriptionSequences not return relations too?\n\nGetSubscriptionRelations can be used, so removed the\nGetSubscriptionSequences function.\n\n>\n> 3. AlterSubscription_refresh\n>\n> - logicalrep_worker_stop(sub->oid, relid);\n> + /* Stop the worker if relation kind is not sequence*/\n> + if (relkind != RELKIND_SEQUENCE)\n> + logicalrep_worker_stop(sub->oid, relid);\n>\n> Can you give more reasons in the comment why skip the stop for sequence worker?\n>\n> ~\n>\n> nitpick - period and space in the comment\n>\n> ~~~\n>\n> 8. logicalrep_sequence_sync_worker_find\n>\n> +/*\n> + * Walks the workers array and searches for one that matches given\n> + * subscription id.\n> + *\n> + * We are only interested in the sequence sync worker.\n> + */\n> +LogicalRepWorker *\n> +logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)\n>\n> There are other similar functions for walking the workers array to\n> search for a worker. Instead of having different functions for\n> different cases, wouldn't it be cleaner to combine these into a single\n> function, where you pass a parameter (e.g. a mask of worker types that\n> you are interested in finding)?\n\nI will address this in a future version once the patch has become more stable.\n> ~~~\n>\n> 11.\n> - Assert(is_tablesync_worker == OidIsValid(relid));\n> + Assert(is_tablesync_worker == OidIsValid(relid) ||\n> is_sequencesync_worker == OidIsValid(relid));\n>\n> IIUC there is only a single sequence sync worker for handling all the\n> sequences. So, what does the 'relid' actually mean here when there are\n> multiple sequences?\n\nSequence sync workers will not have relid, modified the assert.\n\n> ~~~\n>\n> 12. 
logicalrep_seqsyncworker_failuretime\n> 12b.\n> Curious if this had to be a separate exit handler or if may this could\n> have been handled by the existing logicalrep_worker_onexit handler.\n> See also other review comments int this post about this area --\n> perhaps all this can be removed?\n\nThis function cannot be combined with logicalrep_worker_onexit as this\nfunction should be called only in failure case and this exit handler\nshould be removed in case of success case.\n\nThis cannot be removed because of the following reason:\nConsider the following situation: a sequence sync worker starts and\nthen encounters a failure while syncing sequences. At the same time, a\nuser initiates a \"refresh publication sequences\" operation. Given only\nthe start time, it's not possible to distinguish whether the sequence\nsync worker failed or completed successfully. This is because the\n\"refresh publication sequences\" operation would have re-added the\nsequences, making it unclear whether the sync worker's failure or\nsuccess occurred.\n\n>\n> 14.\n> The reason for the addition logic \"(last_value + log_cnt)\" is not\n> obvious. I am guessing it might be related to code from\n> 'nextval_internal' (fetch = log = fetch + SEQ_LOG_VALS;) but it is\n> complicated. It is unfortunate that the field 'log_cnt' seems hardly\n> commented anywhere at all.\n>\n> Also, I am not 100% sure if I trust the logic in the first place. The\n> caller of this function is doing:\n> sequence_value = fetch_sequence_data(conn, remoteid, &lsn);\n> /* sets the sequence with sequence_value */\n> SetSequenceLastValue(RelationGetRelid(rel), sequence_value);\n>\n> Won't that mean you can get to a situation where subscriber-side\n> result of lastval('s') can be *ahead* from lastval('s') on the\n> publisher? That doesn't seem good.\n\nAdded comments for \"last_value + log_cnt\"\nYes it can be ahead in subscribers. This will happen because every\nchange of the sequence is not wal logged. It is WAL logged once in\nSEQ_LOG_VALS. This was discussed earlier and the sequence value being\nahead was ok.\nhttps://www.postgresql.org/message-id/CA%2BTgmoaVLiKDD5vr1bzL-rxhMA37KCS_2xrqjbKVwGyqK%2BPCXQ%40mail.gmail.com\n\n> 15.\n> + /*\n> + * Logical replication of sequences is based on decoding WAL records,\n> + * describing the \"next\" state of the sequence the current state in the\n> + * relfilenode is yet to reach. But during the initial sync we read the\n> + * current state, so we need to reconstruct the WAL record logged when we\n> + * started the current batch of sequence values.\n> + *\n> + * Otherwise we might get duplicate values (on subscriber) if we failed\n> + * over right after the sync.\n> + */\n> + sequence_value = fetch_sequence_data(conn, remoteid, &lsn);\n> +\n> + /* sets the sequence with sequence_value */\n> + SetSequenceLastValue(RelationGetRelid(rel), sequence_value);\n>\n> (This is related to some earlier review comment #14 above). IMO all\n> this tricky commentary belongs in the function header of\n> \"fetch_sequence_data\", where it should be describing that function's\n> return value.\n\nMoved it to fetch_sequence_data where pg_sequence_state is called to\navoid any confusion\n\n> 17.\n> Also, where does the number 100 come from? Why not 1000? Why not 10?\n> Why have batching at all? Maybe there should be some comment to\n> describe the reason and the chosen value.\n\nAdded a comment for this. I will do one round of testing with few\nvalues and see if this value needs to be changed. 
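For that comparison I plan to create a large number of sequences on the\npublisher, roughly like this (just a sketch; the count of 1000 and the\nregress_ prefix are placeholders):\n\n-- create many sequences so that the batching code path is exercised\nDO $$\nBEGIN\n  FOR i IN 1..1000 LOOP\n    EXECUTE format('CREATE SEQUENCE regress_seq_%s', i);\n  END LOOP;\nEND $$;\n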
I will share it\nlater.\n\n> 20.\n> + char relkind;\n> +\n> + if (!started_tx)\n> + {\n> + StartTransactionCommand();\n> + started_tx = true;\n> + }\n> +\n> + relkind = get_rel_relkind(rstate->relid);\n> + if (relkind == RELKIND_SEQUENCE)\n> + continue;\n>\n> I am wondering is it possible to put the relkind check can come\n> *before* the TX code here, because in case there are *only* sequences\n> then maybe every would be skipped and there would have been no need\n> for any TX at all in the first place.\n\nWe need to start the transaction before calling get_rel_relkind, else\nit will assert in SearchCatCacheInternal. So Skipping this.\n\n> 21.\n> + if (!started_tx)\n> + {\n> + StartTransactionCommand();\n> + started_tx = true;\n> + }\n> +\n> + relkind = get_rel_relkind(rstate->relid);\n> + if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)\n> + continue;\n>\n> Wondering (like in review comment #20) if it is possible to swap those\n> because maybe there was no reason for any TX if the other condition\n> would always continue.\n\nAs transaction is required before calling get_rel_relkind, this cannot\nbe changed. So skipping this.\n\n> ~~~\n>\n> 22.\n> + if (nsyncworkers < max_sync_workers_per_subscription)\n> + {\n> + TimestampTz now = GetCurrentTimestamp();\n> + if (!MyLogicalRepWorker->sequencesync_failure_time ||\n> + TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,\n> + now, wal_retrieve_retry_interval))\n> + {\n> + MyLogicalRepWorker->sequencesync_failure_time = 0;\n>\n> It seems to me that storing 'sequencesync_failure_time' logic may be\n> unnecessarily complicated. Can't the same \"throttling\" be achieved by\n> storing the synchronization worker 'start time' instead of 'fail\n> time', in which case then you won't have to mess around with\n> considering if the sync worker failed or just exited normally etc? You\n> might also be able to remove all the\n> logicalrep_seqsyncworker_failuretime() exit handler code.\n\nConsider the following situation: a sequence sync worker starts and\nthen encounters a failure while syncing sequences. At the same time, a\nuser initiates a \"refresh publication sequences\" operation. Given only\nthe start time, it's not possible to distinguish whether the sequence\nsync worker failed or completed successfully. This is because the\n\"refresh publication sequences\" operation would have re-added the\nsequences, making it unclear whether the sync worker's failure or\nsuccess occurred.\n\n> 28b.\n> Can you explain why the expected sequence value its 132, because\n> AFAICT you only called nextval('s') 100 times, so why isn't it 100?\n> My guess is that it seems to be related to code in \"nextval_internal\"\n> (fetch = log = fetch + SEQ_LOG_VALS;) but it kind of defies\n> expectations of the test, so if it really is correct then it needs\n> commentary.\n\nI felt adding comments for one of the tests should be enough, So I did\nnot add the comment for all of the tests.\n\n> Actually, I found other regression test code that deals with this:\n> -- log_cnt can be higher if there is a checkpoint just at the right\n> -- time, so just test for the expected range\n> SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM\n> foo_seq_new;\n>\n> Do you have to do something similar? Or is this a bug? 
See my other\n> review comments for function fetch_sequence_data in sequencesync.c\n\nThe comments in nextval_internal says:\n * If this is the first nextval after a checkpoint, we must force a new\n * WAL record to be written anyway, else replay starting from the\n * checkpoint would fail to advance the sequence past the logged values.\n * In this case we may as well fetch extra values.\n\nI have increased the checkpoint for this test, so this issue will not occur.\n\nAll the other comments were fixed and the same is available in the\nv20240720 version attached at [1].\n\n[1] - https://www.postgresql.org/message-id/CALDaNm2vuO7Ya4QVTZKR9jY_mkFFcE_hKUJiXx4KUknPgGFjSg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 20 Jul 2024 20:48:16 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Here are some review comments for patch v20240720-0002.\n\n======\n1. Commit message:\n\n1a.\nThe commit message is stale. It is still referring to functions and\nviews that have been moved to patch 0003.\n\n1b.\n\"ALL SEQUENCES\" is not a command. It is a clause of the CREATE\nPUBLICATION command.\n\n======\ndoc/src/sgml/ref/create_publication.sgml\n\nnitpick - publication name in the example /allsequences/all_sequences/\n\n======\nsrc/bin/psql/describe.c\n\n2. describeOneTableDetails\n\nAlthough it's not the fault of this patch, this patch propagates the\nconfusion of 'result' versus 'res'. Basically, I did not understand\nthe need for the variable 'result'. There is already a \"PGResult\n*res\", and unless I am mistaken we can just keep re-using that instead\nof introducing a 2nd variable having almost the same name and purpose.\n\n~\n\nnitpick - comment case\nnitpick - rearrange comment\n\n======\nsrc/test/regress/expected/publication.out\n\n(see publication.sql)\n\n======\nsrc/test/regress/sql/publication.sql\n\nnitpick - tweak comment\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 23 Jul 2024 15:32:50 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 16 Jul 2024 at 06:00, Peter Smith <[email protected]> wrote:\n>\n> Hi,\n>\n> I was reading back through this thread to find out how the proposed new command for refreshing sequences, came about. The patch 0705 introduces a new command syntax for ALTER SUBSCRIPTION ... REFRESH SEQUENCES\n>\n> So now there are 2 forms of subscription refresh.\n>\n> #1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option [= value] [, ... ] ) ]\n\nThis is correct.\n\n> #2. ALTER SUBSCRIPTION name REFRESH SEQUENCES\n\nThis is not correct, it is actually \"ALTER SUBSCRIPTION name REFRESH\nPUBLICATION SEQUENCES\"\n\n> ~~~~\n>\n> IMO, that separation seems complicated. It leaves many questions like:\n> * It causes a bit of initial confusion. e.g. When I saw the REFRESH SEQUENCES I first assumed that was needed because sequences were not covered by the existing REFRESH PUBLICATION\n> * Why wasn't command #2 called ALTER SUBSCRIPTION REFRESH PUBLICATION SEQUENCES? E.g. missing keyword PUBLICATION. 
It seems inconsistent.\n\nThis is not correct, the existing implementation uses the key word\nPUBLICATION, the actual syntax is:\n\"ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES\"\n\n> * I expect sequence values can become stale pretty much immediately after command #1, so the user will want to use command #2 anyway...\n\nYes\n\n> * ... but if command #2 also does add/remove changed sequences same as command #1 then what benefit was there of having the command #1 for sequences?\n> * There is a separation of sequences (from tables) in command #2 but there is no separation currently possible in command #1. It seemed inconsistent.\n\nThis can be enhanced if required. It is not included as of now because\nI'm not sure if there is such a use case in case of tables.\n\n> ~~~\n>\n> IIUC some of the goals I saw in the thread are to:\n> * provide a way to fetch and refresh sequences that also keeps behaviors (e.g. copy_data etc.) consistent with the refresh of subscription tables\n> * provide a way to fetch and refresh *only* sequences\n>\n> I felt you could just enhance the existing refresh command syntax (command #1), instead of introducing a new one it would be simpler and it would still meet those same objectives.\n>\n> Synopsis:\n> ALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES | SEQUENCES | ALL] [ WITH ( refresh_option [= value] [, ... ] ) ]\n>\n> My only change is the introduction of the optional \"[TABLES | SEQUENCES | ALL]\" clause.\n>\n> I believe that can do everything your current patch does, plus more:\n> * Can refresh *only* TABLES if that is what you want (current patch 0705 cannot do this)\n> * Can refresh *only* SEQUENCES (same as current patch 0705 command #2)\n> * Has better integration with refresh options like \"copy_data\" (current patch 0705 command #2 doesn't have options)\n> * Existing REFRESH PUBLICATION syntax still works as-is. 
You can decide later what is PG18 default if the \"[TABLES | SEQUENCES | ALL]\" is omitted.\n>\n> ~~~\n>\n> More examples using proposed syntax.\n>\n> ex1.\n> ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = false)\n> - same as PG17 functionality for ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = false)\n>\n> ex2.\n> ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = true)\n> - same as PG17 functionality for ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = true)\n>\n> ex3.\n> ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy data = false)\n> - this adds/removes only sequences to pg_subscription_rel but doesn't update their sequence values\n>\n> ex4.\n> ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy data = true)\n> - this adds/removes only sequences to pg_subscription_rel and also updates all sequence values.\n> - this is equivalent behaviour of what your current 0705 patch is doing for command #2, ALTER SUBSCRIPTION sub REFRESH SEQUENCES\n>\n> ex5.\n> ALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL WITH (copy_data = false)\n> - this is equivalent behaviour of what your current 0705 patch is doing for command #1, ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = false)\n>\n> ex6.\n> ALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL WITH (copy_data = true)\n> - this adds/removes tables and sequences and updates all table initial data sequence values.- I think it is equivalent to your current 0705 patch doing\n> command #1 ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = true), followed by another command #2 ALTER SUBSCRIPTION sub REFRESH SEQUENCES\n>\n> ex7.\n> ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES\n> - Because default copy_data is true you do not need to specify options, so this is the same behaviour as your current 0705 patch command #2, ALTER SUBSCRIPTION sub REFRESH SEQUENCES.\n\nI felt ex:4 is equivalent to command #2 \"ALTER SUBSCRIPTION name\nREFRESH PUBLICATION SEQUENCES\" and ex:3 just updates the\npg_subscription_rel. But I'm not seeing an equivalent for \"ALTER\nSUBSCRIPTION name REFRESH PUBLICATION with (copy_data = true)\" which\nwill identify and remove the stale entries and add entries/synchronize\nthe sequences for the newly added sequences in the publisher.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 23 Jul 2024 15:04:42 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi, here are some review comments for patch v20240720-0003.\n\nThis review is a WIP. This post is only about the docs (*.sgml) of patch 0003.\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n1. REFRESH PUBLICATION and copy_data\nnitpicks:\n- IMO the \"synchronize the sequence data\" info was misleading because\nsynchronization should only occur when copy_data=true.\n- I also felt it was strange to mention pg_subscription_rel for\nsequences, but not for tables. I modified this part too.\n- Then I moved the information about re/synchronization of sequences\ninto the \"copy_data\" part.\n- And added another link to ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES\n\nAnyway, in summary, I have updated this page quite a lot according to\nmy understanding. Please take a look at the attached nitpick for my\nsuggestions.\n\nnitpick - /The supported options are:/The only supported option is:/\n\n~~~\n\n2. 
REFRESH PUBLICATION SEQUENCES\nnitpick - tweaked the wording\nnitpicK - typo /syncronizes/synchronizes/\n\n======\n3. catalogs.sgml\n\nIMO something is missing in Section \"1.55. pg_subscription_rel\".\n\nCurrently, this page only talks of relations/tables, but I think it\nshould mention \"sequences\" here too, particularly since now we are\nlinking to here from ALTER SUBSCRIPTION when talking about sequences.\n\n======\n99.\nPlease see the attached diffs patch which implements any nitpicks\nmentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 24 Jul 2024 13:46:52 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jul 24, 2024 at 9:17 AM Peter Smith <[email protected]> wrote:\n>\n\nI had a look at patches v20240720* (considering these as the latest\none) and tried to do some basic testing (WIP). Few comments:\n\n1)\nI see 'last_value' is updated wrongly after create-sub. Steps:\n\n-----------\npub:\nCREATE SEQUENCE myseq0 INCREMENT 5 START 100;\nSELECT nextval('myseq0');\nSELECT nextval('myseq0');\n--last_value on pub is 105\nselect * from pg_sequences;\ncreate publication pub1 for all tables, sequences;\n\nSub:\nCREATE SEQUENCE myseq0 INCREMENT 5 START 100;\ncreate subscription sub1 connection 'dbname=postgres host=localhost\nuser=shveta port=5433' publication pub1;\n\n--check 'r' state is reached\nselect pc.relname, pr.srsubstate, pr.srsublsn from pg_subscription_rel\npr, pg_class pc where (pr.srrelid = pc.oid);\n\n--check 'last_value', it shows some random value as 136\nselect * from pg_sequences;\n-----------\n\n2)\nI can use 'for all sequences' only with 'for all tables' and can not\nuse it with the below. Shouldn't it be allowed?\n\ncreate publication pub2 for tables in schema public, for all sequences;\ncreate publication pub2 for table t1, for all sequences;\n\n3)\npreprocess_pub_all_objtype_list():\nDo we need 'alltables_specified' and 'allsequences_specified' ? Can't\nwe make a repetition check using *alltables and *allsequences?\n\n4) patch02's commit msg says : 'Additionally, a new system view,\npg_publication_sequences, has been introduced'\nBut it is not part of patch002.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 24 Jul 2024 11:52:49 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 24 Jul 2024 at 09:17, Peter Smith <[email protected]> wrote:\n>\n> Hi, here are some review comments for patch v20240720-0003.\n>\n> This review is a WIP. This post is only about the docs (*.sgml) of patch 0003.\n>\n> 3. catalogs.sgml\n>\n> IMO something is missing in Section \"1.55. 
pg_subscription_rel\".\n>\n> Currently, this page only talks of relations/tables, but I think it\n> should mention \"sequences\" here too, particularly since now we are\n> linking to here from ALTER SUBSCRIPTION when talking about sequences.\n\nModified it to mention sequences too.\n\nI have merged the rest of the nitpicks suggested by you.\nThe attached v20240724 version patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 24 Jul 2024 12:07:01 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 23 Jul 2024 at 11:03, Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for patch v20240720-0002.\n>\n> ======\n> 1. Commit message:\n>\n> 1a.\n> The commit message is stale. It is still referring to functions and\n> views that have been moved to patch 0003.\n\nModified\n\n> 1b.\n> \"ALL SEQUENCES\" is not a command. It is a clause of the CREATE\n> PUBLICATION command.\n\nModified\n\n> src/bin/psql/describe.c\n>\n> 2. describeOneTableDetails\n>\n> Although it's not the fault of this patch, this patch propagates the\n> confusion of 'result' versus 'res'. Basically, I did not understand\n> the need for the variable 'result'. There is already a \"PGResult\n> *res\", and unless I am mistaken we can just keep re-using that instead\n> of introducing a 2nd variable having almost the same name and purpose.\n\nThis is intentional, we cannot clear res as it will be used in many\nplaces of printTable like in\nprintTable->print_aligned_text->pg_wcssize which was earlier stored\nfrom printTableAddCell calls.\n\nThe rest of the nitpicks comments were merged.\nThe v20240724 version patch attached at [1] has the changes for the same.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm1uncevCSMqo5Nk%3DtqqV_o3KNH_jwp8URiGop_nPC8BTg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 24 Jul 2024 12:10:01 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Jul 24, 2024 at 11:52 AM shveta malik <[email protected]> wrote:\n>\n> On Wed, Jul 24, 2024 at 9:17 AM Peter Smith <[email protected]> wrote:\n> >\n>\n> I had a look at patches v20240720* (considering these as the latest\n> one) and tried to do some basic testing (WIP). Few comments:\n>\n> 1)\n> I see 'last_value' is updated wrongly after create-sub. Steps:\n>\n> -----------\n> pub:\n> CREATE SEQUENCE myseq0 INCREMENT 5 START 100;\n> SELECT nextval('myseq0');\n> SELECT nextval('myseq0');\n> --last_value on pub is 105\n> select * from pg_sequences;\n> create publication pub1 for all tables, sequences;\n>\n> Sub:\n> CREATE SEQUENCE myseq0 INCREMENT 5 START 100;\n> create subscription sub1 connection 'dbname=postgres host=localhost\n> user=shveta port=5433' publication pub1;\n>\n> --check 'r' state is reached\n> select pc.relname, pr.srsubstate, pr.srsublsn from pg_subscription_rel\n> pr, pg_class pc where (pr.srrelid = pc.oid);\n>\n> --check 'last_value', it shows some random value as 136\n> select * from pg_sequences;\n\nOkay, I see that in fetch_remote_sequence_data(), we are inserting\n'last_value + log_cnt' fetched from remote as 'last_val' on subscriber\nand thus leading to above behaviour. I did not understand why this is\ndone? 
This may result into issue when we insert data into a table with\nidentity column on subscriber (whose internal sequence is replicated);\nthe identity column in this case will end up having wrong value.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 24 Jul 2024 14:50:29 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 24 Jul 2024 at 11:53, shveta malik <[email protected]> wrote:\n>\n> On Wed, Jul 24, 2024 at 9:17 AM Peter Smith <[email protected]> wrote:\n> >\n>\n> I had a look at patches v20240720* (considering these as the latest\n> one) and tried to do some basic testing (WIP). Few comments:\n>\n> 1)\n> I see 'last_value' is updated wrongly after create-sub. Steps:\n>\n> -----------\n> pub:\n> CREATE SEQUENCE myseq0 INCREMENT 5 START 100;\n> SELECT nextval('myseq0');\n> SELECT nextval('myseq0');\n> --last_value on pub is 105\n> select * from pg_sequences;\n> create publication pub1 for all tables, sequences;\n>\n> Sub:\n> CREATE SEQUENCE myseq0 INCREMENT 5 START 100;\n> create subscription sub1 connection 'dbname=postgres host=localhost\n> user=shveta port=5433' publication pub1;\n>\n> --check 'r' state is reached\n> select pc.relname, pr.srsubstate, pr.srsublsn from pg_subscription_rel\n> pr, pg_class pc where (pr.srrelid = pc.oid);\n>\n> --check 'last_value', it shows some random value as 136\n> select * from pg_sequences;\n\nEarlier I was setting sequence value with the value of publisher +\nlog_cnt, that is why the difference is there. On further thinking\nsince we are not supporting incremental replication of sequences, so\nno plugin usage is involved which requires the special decoding last\nvalue and log_count. I felt we can use the exact sequence last value\nand log count to generate the similar sequence value. So Now I have\nchanged it to get the last_value and log_count from the publisher and\nset it to the same values.\n\n>\n> 2)\n> I can use 'for all sequences' only with 'for all tables' and can not\n> use it with the below. Shouldn't it be allowed?\n>\n> create publication pub2 for tables in schema public, for all sequences;\n> create publication pub2 for table t1, for all sequences;\n\nI feel this can be added as part of a later version while supporting\n\"add/drop/set sequence and add/drop/set sequences in schema\" once the\npatch is stable.\n\n> 3)\n> preprocess_pub_all_objtype_list():\n> Do we need 'alltables_specified' and 'allsequences_specified' ? Can't\n> we make a repetition check using *alltables and *allsequences?\n\nModified\n\n> 4) patch02's commit msg says : 'Additionally, a new system view,\n> pg_publication_sequences, has been introduced'\n> But it is not part of patch002.\n\nThis is removed now\n\nThe attached v20240725 version patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Thu, 25 Jul 2024 08:52:26 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jul 25, 2024 at 9:06 AM vignesh C <[email protected]> wrote:\n>\n> The attached v20240725 version patch has the changes for the same.\n\nThank You for addressing the comments. Please review below issues:\n\n1) Sub ahead of pub due to wrong initial sync of last_value for\nnon-incremented sequences. Steps at [1]\n2) Sequence's min value is not honored on sub during replication. 
Steps at [2]\n\n[1]:\n-----------\non PUB:\nCREATE SEQUENCE myseq001 INCREMENT 5 START 100;\nSELECT * from pg_sequences; -->shows last_val as NULL\n\non SUB:\nCREATE SEQUENCE myseq001 INCREMENT 5 START 100;\nSELECT * from pg_sequences; -->correctly shows last_val as NULL\nALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\nSELECT * from pg_sequences; -->wrongly updates last_val to 100; it is\nstill NULL on Pub.\n\nThus , SELECT nextval('myseq001') on pub gives 100, while on sub gives 105.\n-----------\n\n\n[2]:\n-----------\nPub:\nCREATE SEQUENCE myseq0 INCREMENT 5 START 10;\nSELECT * from pg_sequences;\n\nSub:\nCREATE SEQUENCE myseq0 INCREMENT 5 MINVALUE 100;\n\nPub:\nSELECT nextval('myseq0');\nSELECT nextval('myseq0');\n\nSub:\nALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\n--check 'last_value', it is 15 while min_value is 100\nSELECT * from pg_sequences;\n-----------\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 25 Jul 2024 12:08:19 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi, here are more review comments for patch v20240720-0003.\n\n======\nsrc/backend/catalog/pg_subscription.c\n\n(Numbers are starting at #4 because this is a continuation of the docs review)\n\n4. GetSubscriptionRelations\n\nnitpick - rearranged the function header comment\n\n~\n\n5.\nTBH, I'm thinking that just passing 2 parameters:\n- bool get_tables\n- bool get_sequences\nwhere one or both can be true, would have resulted in simpler code,\ninstead of introducing this new enum SubscriptionRelKind.\n\n~\n\n6.\nThe 'not_all_relations' parameter/logic feels really awkward. IMO it\nneeds a better name and reverse the meaning to remove all the \"nots\".\n\nFor example, commenting it and calling it like below could be much simpler.\n\n'all_relations'\nIf returning sequences, if all_relations=true get all sequences,\notherwise only get sequences that are in 'init' state.\nIf returning tables, if all_relation=true get all tables, otherwise\nonly get tables that have not reached 'READY' state.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nAlterSubscription_refresh:\n\nnitpick - this function comment is difficult to understand. I've\nrearranged it a bit but it could still do with some further\nimprovement.\nnitpick - move some code comments\nnitpick - I adjusted the \"stop worker\" comment slightly. Please check\nit is still correct.\nnitpick - add a blank line\n\n~\n\n7.\nThe logic seems over-complicated. For example, why is the sequence\nlist *always* fetched, but the tables list is only sometimes fetched?\nFurthermore, this 'refresh_all_sequences' parameter seems to have a\nstrange interference with tables (e.g. even though it is possible to\nrefresh all tables and sequences at the same time). It is as if the\nmeaning is 'refresh_publication_sequences' yet it is not called that\n(???)\n\nThese gripes may be related to my other thread [1] about the new ALTER\nsyntax. (I feel that there should be the ability to refresh ALL TABLES\nor ALL SEQUENCES independently if the user wants to). IIUC, it would\nsimplify this function logic as well as being more flexible. 
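In other words, something along these lines (this is only the syntax proposed in\nthat other thread -- it is not what the current patch implements):\n\nALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES;\nALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES;\nALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL;\n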
Anyway, I\nwill leave the discussion about syntax to that other thread.\n\n~\n\n8.\n+ if (relkind != RELKIND_SEQUENCE)\n+ logicalrep_worker_stop(sub->oid, relid);\n\n /*\n * For READY state, we would have already dropped the\n * tablesync origin.\n */\n- if (state != SUBREL_STATE_READY)\n+ if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)\n\nIt might be better to have a single \"if (relkind != RELKIND_SEQUENCE)\"\nhere and combine both of these codes under that.\n\n~\n\n9.\n ereport(DEBUG1,\n- (errmsg_internal(\"table \\\"%s.%s\\\" removed from subscription \\\"%s\\\"\",\n+ (errmsg_internal(\"%s \\\"%s.%s\\\" removed from subscription \\\"%s\\\"\",\n+ get_namespace_name(get_rel_namespace(relid)),\n+ get_rel_name(relid),\n+ sub->name,\n+ get_rel_relkind(relid) == RELKIND_SEQUENCE ? \"sequence\" : \"table\")));\n\nIIUC prior conDitions mean get_rel_relkind(relid) == RELKIND_SEQUENCE\nwill be impossible here.\n\n~~~\n\n10. AlterSubscription\n\n+ PreventInTransactionBlock(isTopLevel, \"ALTER SUBSCRIPTION ...\nREFRESH PUBLICATION SEQUENCES\");\n\nIIUC the docs page for ALTER SUBSCRIPTION was missing this information\nabout \"REFRESH PUBLICATION SEQUENCES\" in transactions. Docs need more\nupdates.\n\n======\nsrc/backend/replication/logical/launcher.c\n\nlogicalrep_worker_find:\nnitpick - tweak comment to say \"or\" instead of \"and\"\n\n~~~\n\n11.\n+/*\n+ * Return the pid of the apply worker for one that matches given\n+ * subscription id.\n+ */\n+static LogicalRepWorker *\n+logicalrep_apply_worker_find(Oid subid, bool only_running)\n\nThe function comment is wrong. This is not returning a PID.\n\n~~~\n\n12.\n+ if (is_sequencesync_worker)\n+ Assert(!OidIsValid(relid));\n\nShould we the Assert to something more like:\nAssert(!is_sequencesync_worker || !OidIsValid(relid));\n\nOtherwise, in NODEBUG current code will compile into an empty\ncondition statement, which is a bit odd.\n\n~~~\n\nlogicalrep_seqsyncworker_failuretime:\nnitpick - tweak function comment\nnitpick - add blank line\n\n======\n.../replication/logical/sequencesync.c\n\n13. fetch_remote_sequence_data\n\nThe \"current state\" mentioned in the function comment is a bit vague.\nCan't tell from this comment what it is returning without looking\ndeeper into the function code.\n\n~\n\nnitpick - typo \"scenarios\" in comment\n\n~~~\n\ncopy_sequence:\nnitpick - typo \"withe\" in function comment\nnitpick - typo /retreived/retrieved/\nnitpick - add/remove blank lines\n\n~~~\n\nLogicalRepSyncSequences:\nnitpick - move a comment.\nnitpick - remove blank line\n\n14.\n+ /*\n+ * Verify whether the current batch of sequences is synchronized or if\n+ * there are no remaining sequences to synchronize.\n+ */\n+ if ((((curr_seq + 1) % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||\n+ (curr_seq + 1) == seq_count)\n\n\nAll this \"curr_seq + 1\" maths seems unnecessarily tricky. Can't we\njust increment the cur_seq? before this calculation?\n\n~\n\nnitpick - simplify the comment about batching\nnitpick - added a comment to the commit\n\n======\nsrc/backend/replication/logical/tablesync.c\n\nfinish_sync_worker:\nnitpick - added an Assert so the if/else is less risky.\nnitpick - modify the comment about failure time when it is a clean exit\n\n~~~\n\n15. process_syncing_sequences_for_apply\n\n+ /* We need up-to-date sync state info for subscription sequences here. 
*/\n+ FetchTableStates(&started_tx, SUB_REL_KIND_ALL);\n\nShould that say SUB_REL_KIND_SEQUENCE?\n\n~\n\n16.\n+ /*\n+ * If there are free sync worker slot(s), start a new sequence\n+ * sync worker, and break from the loop.\n+ */\n+ if (nsyncworkers < max_sync_workers_per_subscription)\n\nShould this \"if\" have some \"else\" code to log a warning if we have run\nout of free workers? Otherwise, how will the user know that the system\nmay need tuning?\n\n~~~\n\n17. FetchTableStates\n\n /* Fetch all non-ready tables. */\n- rstates = GetSubscriptionRelations(MySubscription->oid, true);\n+ rstates = GetSubscriptionRelations(MySubscription->oid, rel_type, true);\n\nThis feels risky. IMO there needs to be some prior Assert about the\nrel_type. For example, if it happened to be SUB_REL_KIND_SEQUENCE then\nthis function code doesn't seem to make sense.\n\n~~~\n\n======\nsrc/backend/replication/logical/worker.c\n\n18. SetupApplyOrSyncWorker\n\n+\n+ if (isSequenceSyncWorker(MyLogicalRepWorker))\n+ before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);\n\nProbably that should be using macro am_sequencesync_worker(), right?\n\n======\nsrc/include/catalog/pg_subscription_rel.h\n\n19.\n+typedef enum\n+{\n+ SUB_REL_KIND_TABLE,\n+ SUB_REL_KIND_SEQUENCE,\n+ SUB_REL_KIND_ALL,\n+} SubscriptionRelKind;\n+\n\nI was not sure how helpful this is; it might not be needed. e.g. see\nreview comment for GetSubscriptionRelations\n\n~~~\n\n20.\n+extern List *GetSubscriptionRelations(Oid subid, SubscriptionRelKind reltype,\n+ bool not_ready);\n\nThere is a mismatch with the ‘not_ready’ parameter name here and in\nthe function implementation\n\n======\nsrc/test/subscription/t/034_sequences.pl\n\nnitpick - removed a blank line\n\n======\n99.\nPlease also see the attached diffs patch which implements all the\nnitpicks mentioned above.\n\n======\n[1] syntax - https://www.postgresql.org/message-id/CAHut%2BPuFH1OCj-P1UKoRQE2X4-0zMG%2BN1V7jdn%3DtOQV4RNbAbw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 25 Jul 2024 17:24:28 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Jul 25, 2024 at 12:08 PM shveta malik <[email protected]> wrote:\n>\n> On Thu, Jul 25, 2024 at 9:06 AM vignesh C <[email protected]> wrote:\n> >\n> > The attached v20240725 version patch has the changes for the same.\n>\n> Thank You for addressing the comments. Please review below issues:\n>\n> 1) Sub ahead of pub due to wrong initial sync of last_value for\n> non-incremented sequences. Steps at [1]\n> 2) Sequence's min value is not honored on sub during replication. 
Steps at [2]\n\nOne more issue:\n3) Sequence datatype's range is not honored on sub during\nreplication, while it is honored for tables.\n\n\nBehaviour for tables:\n---------------------\nPub: create table tab1( i integer);\nSub: create table tab1( i smallint);\n\nPub: insert into tab1 values(generate_series(1, 32768));\n\nError on sub:\n2024-07-25 10:38:06.446 IST [178680] ERROR: value \"32768\" is out of\nrange for type smallint\n\n---------------------\nBehaviour for sequences:\n---------------------\n\nPub:\nCREATE SEQUENCE myseq_i as integer INCREMENT 10000 START 1;\n\nSub:\nCREATE SEQUENCE myseq_i as smallint INCREMENT 10000 START 1;\n\nPub:\nSELECT nextval('myseq_i');\nSELECT nextval('myseq_i');\nSELECT nextval('myseq_i');\nSELECT nextval('myseq_i');\nSELECT nextval('myseq_i'); -->brings value to 40001\n\nSub:\nALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\nSELECT * from pg_sequences; -->last_val reached till 40001, while the\nrange is till 32767.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 25 Jul 2024 15:40:58 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Here are some review comments for latest patch v20240725-0002\n\n======\ndoc/src/sgml/ref/create_publication.sgml\n\nnitpick - tweak to the description of the example.\n\n======\nsrc/backend/parser/gram.y\n\npreprocess_pub_all_objtype_list:\nnitpick - typo \"allbjects_list\"\nnitpick - reword function header\nnitpick - /alltables/all_tables/\nnitpick - /allsequences/all_sequences/\nnitpick - I think code is safe as-is because makeNode internally does\npalloc0, but OTOH adding Assert would be nicer just to remove any\ndoubts.\n\n======\nsrc/bin/psql/describe.c\n\n1.\n+ /* Print any publications */\n+ if (pset.sversion >= 180000)\n+ {\n+ int tuples = 0;\n\nNo need to assign value 0 here, because this will be unconditionally\nassigned before use anyway.\n\n~~~~\n\n2. describePublications\n\n has_pubviaroot = (pset.sversion >= 130000);\n+ has_pubsequence = (pset.sversion >= 18000);\n\nThat's a bug! Should be 180000, not 18000.\n\n======\n\nAnd, please see the attached diffs patch, which implements the\nnitpicks mentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 26 Jul 2024 12:33:55 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh,\n\nThere are still pending changes from my previous review of the\n0720-0003 patch [1], but here are some new review comments for your\nlatest patch v20240525-0003.\n\n======\ndoc/src/sgml/catalogs.sgml\n\nnitpick - fix plurals and tweak the description.\n\n~~~\n\n1.\n <para>\n- This catalog only contains tables known to the subscription after running\n- either <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link> or\n- <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION\n... REFRESH\n+ This catalog only contains tables and sequences known to the subscription\n+ after running either\n+ <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>\n+ or <link linkend=\"sql-altersubscription\"><command>ALTER\nSUBSCRIPTION ... 
REFRESH\n PUBLICATION</command></link>.\n </para>\n\nShouldn't this mention \"REFRESH PUBLICATION SEQUENCES\" too?\n\n======\nsrc/backend/commands/sequence.c\n\nSetSequenceLastValue:\nnitpick - maybe change: /log_cnt/new_log_cnt/ for consistency with the\nother parameter, and to emphasise the old log_cnt is overwritten\n\n======\nsrc/backend/replication/logical/sequencesync.c\n\n2.\n+/*\n+ * fetch_remote_sequence_data\n+ *\n+ * Fetch sequence data (current state) from the remote node, including\n+ * the latest sequence value from the publisher and the Page LSN for the\n+ * sequence.\n+ */\n+static int64\n+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid,\n+ int64 *log_cnt, XLogRecPtr *lsn)\n\n2a.\nNow you are also returning the 'log_cnt' but that is not mentioned by\nthe function comment.\n\n~\n\n2b.\nIs it better to name these returned by-ref ptrs like 'ret_log_cnt',\nand 'ret_lsn' to emphasise they are output variables? YMMV.\n\n~~~\n\n3.\n+ /* Process the sequence. */\n+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);\n+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))\n\nThis will have one-and-only-one tuple for the discovered sequence,\nwon't it? So, why is this a while loop?\n\n======\nsrc/include/commands/sequence.h\n\nnitpick - maybe change: /log_cnt/new_log_cnt/ (same as earlier in this post)\n\n======\nsrc/test/subscription/t/034_sequences.pl\n\n4.\nQ. Should we be suspicious that log_cnt changes from '32' to '31', or\nis there a valid explanation? It smells like some calculation is\noff-by-one, but without debugging I can't tell if it is right or\nwrong.\n\n======\nPlease also see the attached diffs patch, which implements the\nnitpicks mentioned above.\n\n======\n[1] 0720-0003 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPsfsfzyBrmo8E43qFMp9_bmen2tuCsNYN8sX%3Dfa86SdfA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 26 Jul 2024 16:16:30 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 25 Jul 2024 at 12:54, Peter Smith <[email protected]> wrote:\n>\n> Hi, here are more review comments for patch v20240720-0003.\n> 7.\n> The logic seems over-complicated. For example, why is the sequence\n> list *always* fetched, but the tables list is only sometimes fetched?\n> Furthermore, this 'refresh_all_sequences' parameter seems to have a\n> strange interference with tables (e.g. even though it is possible to\n> refresh all tables and sequences at the same time). It is as if the\n> meaning is 'refresh_publication_sequences' yet it is not called that\n> (???)\n>\n> These gripes may be related to my other thread [1] about the new ALTER\n> syntax. (I feel that there should be the ability to refresh ALL TABLES\n> or ALL SEQUENCES independently if the user wants to). IIUC, it would\n> simplify this function logic as well as being more flexible. Anyway, I\n> will leave the discussion about syntax to that other thread.\n\n1) ALTER SUBCRIPTION ... REFRESH PUBLICATION\nThis command will refresh both tables and sequences. It will remove\nstale tables and sequences and include newly added tables and\nsequences.\n2) ALTER SUBCRIPTION ... REFRESH PUBLICATION SEQUENCES\nThis command will refresh only sequences. 
It will remove stale\nsequences and synchronize all sequences including the existing\nsequences.\nSo the table will be fetched only for the first command.\nI have changed refresh_publication_sequences parameter to tables,\nsequences, all_relations with this the function should be easier to\nunderstand and remove any confusions.\n\n> ~\n>\n> 9.\n> ereport(DEBUG1,\n> - (errmsg_internal(\"table \\\"%s.%s\\\" removed from subscription \\\"%s\\\"\",\n> + (errmsg_internal(\"%s \\\"%s.%s\\\" removed from subscription \\\"%s\\\"\",\n> + get_namespace_name(get_rel_namespace(relid)),\n> + get_rel_name(relid),\n> + sub->name,\n> + get_rel_relkind(relid) == RELKIND_SEQUENCE ? \"sequence\" : \"table\")));\n>\n> IIUC prior conDitions mean get_rel_relkind(relid) == RELKIND_SEQUENCE\n> will be impossible here.\n\nConsider a scenario where logical replication is setup with sequences\nseq1, seq2.\nNow drop sequence seq1 and do \"ALTER SUBSCRIPTION sub REFRESH PUBLICATION\"\nIt will hit this code to generate the log:\nDEBUG: sequence \"public.seq1\" removed from subscription \"test1\"\n\n> ======\n> .../replication/logical/sequencesync.c\n>\n> 13. fetch_remote_sequence_data\n>\n> The \"current state\" mentioned in the function comment is a bit vague.\n> Can't tell from this comment what it is returning without looking\n> deeper into the function code.\n\nAdded more comments to clarify.\n\n> ~~~\n>\n> 15. process_syncing_sequences_for_apply\n>\n> + /* We need up-to-date sync state info for subscription sequences here. */\n> + FetchTableStates(&started_tx, SUB_REL_KIND_ALL);\n>\n> Should that say SUB_REL_KIND_SEQUENCE?\n\nWe cannot pass SUB_REL_KIND_SEQUENCE here because the\npg_subscription_rel table is shared between sequences and tables. As\nchanges to either sequences or relations can affect the validity of\nrelation states, we update both table_states_not_ready and\nsequence_states_not_ready simultaneously to ensure consistency, rather\nthan updating them separately. I have removed the relation kind\nparameter now. Fetch tables is called to fetch all tables and\nsequences before calling process_syncing_tables_for_apply and\nprocess_syncing_sequences_for_apply now.\n\n> ~\n>\n> 16.\n> + /*\n> + * If there are free sync worker slot(s), start a new sequence\n> + * sync worker, and break from the loop.\n> + */\n> + if (nsyncworkers < max_sync_workers_per_subscription)\n>\n> Should this \"if\" have some \"else\" code to log a warning if we have run\n> out of free workers? Otherwise, how will the user know that the system\n> may need tuning?\n\nI felt no need to log here else we will get a lot of log messages\nwhich might not be required. Similar logic is used for tablesync to in\nprocess_syncing_tables_for_apply.\n\n> ~~~\n>\n> 17. FetchTableStates\n>\n> /* Fetch all non-ready tables. */\n> - rstates = GetSubscriptionRelations(MySubscription->oid, true);\n> + rstates = GetSubscriptionRelations(MySubscription->oid, rel_type, true);\n>\n> This feels risky. IMO there needs to be some prior Assert about the\n> rel_type. For example, if it happened to be SUB_REL_KIND_SEQUENCE then\n> this function code doesn't seem to make sense.\n\nThe pg_subscription_rel table is shared between sequences and tables.\nAs changes to either sequences or relations can affect the validity of\nrelation states, we update both table_states_not_ready and\nsequence_states_not_ready simultaneously to ensure consistency, rather\nthan updating them separately. 
This will update both tables and\nsequences that should be synced.\n\nThe rest of the comments are fixed. The attached v20240729 version\npatch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Mon, 29 Jul 2024 11:19:39 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 26 Jul 2024 at 08:04, Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for latest patch v20240725-0002\n>\n> ======\n> doc/src/sgml/ref/create_publication.sgml\n>\n> nitpick - tweak to the description of the example.\n>\n> ======\n> src/backend/parser/gram.y\n>\n> preprocess_pub_all_objtype_list:\n> nitpick - typo \"allbjects_list\"\n> nitpick - reword function header\n> nitpick - /alltables/all_tables/\n> nitpick - /allsequences/all_sequences/\n> nitpick - I think code is safe as-is because makeNode internally does\n> palloc0, but OTOH adding Assert would be nicer just to remove any\n> doubts.\n>\n> ======\n> src/bin/psql/describe.c\n>\n> 1.\n> + /* Print any publications */\n> + if (pset.sversion >= 180000)\n> + {\n> + int tuples = 0;\n>\n> No need to assign value 0 here, because this will be unconditionally\n> assigned before use anyway.\n>\n> ~~~~\n>\n> 2. describePublications\n>\n> has_pubviaroot = (pset.sversion >= 130000);\n> + has_pubsequence = (pset.sversion >= 18000);\n>\n> That's a bug! Should be 180000, not 18000.\n>\n> ======\n>\n> And, please see the attached diffs patch, which implements the\n> nitpicks mentioned above.\n\nThese are handled in the v20240729 version attached at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm3SucGGLe-B-a_aqWNWQZ-yfxFTiAA0JyP-SwX4jq9Y3A%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 29 Jul 2024 11:22:06 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 26 Jul 2024 at 11:46, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh,\n>\n> There are still pending changes from my previous review of the\n> 0720-0003 patch [1], but here are some new review comments for your\n> latest patch v20240525-0003.\n> 2b.\n> Is it better to name these returned by-ref ptrs like 'ret_log_cnt',\n> and 'ret_lsn' to emphasise they are output variables? YMMV.\n\nI felt this is ok as we have mentioned in function header too\n\n> ======\n> src/test/subscription/t/034_sequences.pl\n>\n> 4.\n> Q. Should we be suspicious that log_cnt changes from '32' to '31', or\n> is there a valid explanation? It smells like some calculation is\n> off-by-one, but without debugging I can't tell if it is right or\n> wrong.\n\n It works like this: for every 33 nextval we will get log_cnt as 0. So\nfor 33 * 6(198) log_cnt will be 0, then for 199 log_cnt will be 32 and\nfor 200 log_cnt will be 31. 
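Laid out as a small table (assuming the prefetched amount stays at 32,
i.e. SEQ_LOG_VALS, and just extending the numbers above):

 nextval call #   :   1    2    3  ...  33   34   35  ...  66  ...  198  199  200
 log_cnt after it :  32   31   30  ...   0   32   31  ...   0  ...    0   32   31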
This pattern repeats, so this is ok.\n\nThese are handled in the v20240729 version attached at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm3SucGGLe-B-a_aqWNWQZ-yfxFTiAA0JyP-SwX4jq9Y3A%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 29 Jul 2024 11:24:25 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 25 Jul 2024 at 12:08, shveta malik <[email protected]> wrote:\n>\n> On Thu, Jul 25, 2024 at 9:06 AM vignesh C <[email protected]> wrote:\n> >\n> > The attached v20240725 version patch has the changes for the same.\n>\n> Thank You for addressing the comments. Please review below issues:\n>\n> 1) Sub ahead of pub due to wrong initial sync of last_value for\n> non-incremented sequences. Steps at [1]\n> 2) Sequence's min value is not honored on sub during replication. Steps at [2]\n>\n> [1]:\n> -----------\n> on PUB:\n> CREATE SEQUENCE myseq001 INCREMENT 5 START 100;\n> SELECT * from pg_sequences; -->shows last_val as NULL\n>\n> on SUB:\n> CREATE SEQUENCE myseq001 INCREMENT 5 START 100;\n> SELECT * from pg_sequences; -->correctly shows last_val as NULL\n> ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\n> SELECT * from pg_sequences; -->wrongly updates last_val to 100; it is\n> still NULL on Pub.\n>\n> Thus , SELECT nextval('myseq001') on pub gives 100, while on sub gives 105.\n> -----------\n>\n>\n> [2]:\n> -----------\n> Pub:\n> CREATE SEQUENCE myseq0 INCREMENT 5 START 10;\n> SELECT * from pg_sequences;\n>\n> Sub:\n> CREATE SEQUENCE myseq0 INCREMENT 5 MINVALUE 100;\n>\n> Pub:\n> SELECT nextval('myseq0');\n> SELECT nextval('myseq0');\n>\n> Sub:\n> ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\n> --check 'last_value', it is 15 while min_value is 100\n> SELECT * from pg_sequences;\n\nThanks for reporting this, these issues are fixed in the attached\nv20240730_2 version patch.\n\nRegards,\nVignesh", "msg_date": "Mon, 29 Jul 2024 16:17:35 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 25 Jul 2024 at 15:41, shveta malik <[email protected]> wrote:\n>\n> On Thu, Jul 25, 2024 at 12:08 PM shveta malik <[email protected]> wrote:\n> >\n> > On Thu, Jul 25, 2024 at 9:06 AM vignesh C <[email protected]> wrote:\n> > >\n> > > The attached v20240725 version patch has the changes for the same.\n> >\n> > Thank You for addressing the comments. Please review below issues:\n> >\n> > 1) Sub ahead of pub due to wrong initial sync of last_value for\n> > non-incremented sequences. Steps at [1]\n> > 2) Sequence's min value is not honored on sub during replication. 
Steps at [2]\n>\n> One more issue:\n> 3) Sequence datatype's range is not honored on sub during\n> replication, while it is honored for tables.\n>\n>\n> Behaviour for tables:\n> ---------------------\n> Pub: create table tab1( i integer);\n> Sub: create table tab1( i smallint);\n>\n> Pub: insert into tab1 values(generate_series(1, 32768));\n>\n> Error on sub:\n> 2024-07-25 10:38:06.446 IST [178680] ERROR: value \"32768\" is out of\n> range for type smallint\n>\n> ---------------------\n> Behaviour for sequences:\n> ---------------------\n>\n> Pub:\n> CREATE SEQUENCE myseq_i as integer INCREMENT 10000 START 1;\n>\n> Sub:\n> CREATE SEQUENCE myseq_i as smallint INCREMENT 10000 START 1;\n>\n> Pub:\n> SELECT nextval('myseq_i');\n> SELECT nextval('myseq_i');\n> SELECT nextval('myseq_i');\n> SELECT nextval('myseq_i');\n> SELECT nextval('myseq_i'); -->brings value to 40001\n>\n> Sub:\n> ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\n> SELECT * from pg_sequences; -->last_val reached till 40001, while the\n> range is till 32767.\n\nThis issue is addressed in the v20240730_2 version patch attached at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm3%2BXzHAbgyn8gmbBLK5goyv_uyGgHEsTQxRZ8bVk6nAEg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 29 Jul 2024 16:18:33 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh,\n\nHere are my review comments for your latest 0730_2* patches.\n\nPatch v20240730_2-0001 looks good to me.\n\nPatch v20240730_2-0002 looks good to me.\n\nMy comments for the v20240730_2-0003 patch are below:\n\n//////////\n\nGENERAL\n\n1. Inconsistent terms\n\nI've noticed there are many variations of how the sequence sync worker is known:\n- \"sequencesync worker\"\n- \"sequence sync worker\"\n- \"sequence-sync worker\"\n- \"sequence synchronization worker\"\n- more?\n\nWe must settle on some standardized name.\n\nAFAICT we generally use \"table synchronization worker\" in the docs,\nand \"tablesync worker\" in the code and comments. IMO, we should do\nsame as that for sequences -- e.g. \"sequence synchronization worker\"\nin the docs, and \"sequencesync worker\" in the code and comments.\n\n======\ndoc/src/sgml/catalogs.sgml\n\nnitpick - the links should jump directly to REFRESH PUBLICATION or\nREFRESH PUBLICATION SEQUENCES. Currently they go to the top of the\nALTER SUBSCRIPTION page which is not as useful.\n\n======\nsrc/backend/commands/sequence.c\n\ndo_setval:\nnitpick - minor wording in the function header\nnitpick - change some param names to more closely resemble the fields\nthey get assigned to (/logcnt/log_cnt/, /iscalled/is_called/)\n\n~\n\n2.\n seq->is_called = iscalled;\n- seq->log_cnt = 0;\n+ seq->log_cnt = (logcnt == SEQ_LOG_CNT_INVALID) ? 0: logcnt;\n\nThe logic here for SEQ_LOG_CNT_INVALID seemed strange. Why not just\n#define SEQ_LOG_CNT_INVALID as 0 in the first place if that is what\nyou will assign for invalid? Then you won't need to do anything here\nexcept seq->log_cnt = log_cnt;\n\n======\nsrc/backend/catalog/pg_subscription.c\n\nHasSubscriptionRelations:\nnitpick - I think the comment \"If even a single tuple exists...\" is\nnot quite accurate. e.g. 
It also has to be the right kind of tuple.\n\n~~\n\nGetSubscriptionRelations:\nnitpick - Give more description in the function header about the other\nparameters.\nnitpick - I felt that a better name for 'all_relations' is all_states.\nBecause in my mind *all relations* sounds more like when both\n'all_tables' and 'all_sequences' are true.\nnitpick - IMO add an Assert to be sure something is being fetched.\nAssert(get_tables || get_sequences);\nnitpick - Rephrase the \"skip the tables\" and \"skip the sequences\"\ncomments to be more aligned with the code condition.\n\n~\n\n3.\n- if (not_ready)\n+ /* Get the relations that are not in ready state */\n+ if (get_tables && !all_relations)\n ScanKeyInit(&skey[nkeys++],\n Anum_pg_subscription_rel_srsubstate,\n BTEqualStrategyNumber, F_CHARNE,\n CharGetDatum(SUBREL_STATE_READY));\n+ /* Get the sequences that are in init state */\n+ else if (get_sequences && !all_relations)\n+ ScanKeyInit(&skey[nkeys++],\n+ Anum_pg_subscription_rel_srsubstate,\n+ BTEqualStrategyNumber, F_CHAREQ,\n+ CharGetDatum(SUBREL_STATE_INIT));\n\nThis is quite tricky, using multiple flags (get_tables and\nget_sequences) in such a way. It might even be a bug -- e.g. Is the\n'else' keyword correct? Otherwise, when both get_tables and\nget_sequences are true, and all_relations is false, then the sequence\npart wouldn't even get executed (???).\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nCreateSubscription:\nnitpick - let's move the 'tables' declaration to be beside the\n'sequences' var for consistency. (in passing move other vars too)\nnitpick - it's not strictly required for the patch, but let's change\nthe 'tables' loop to be consistent with the new sequences loop.\n\n~~~\n\n4. AlterSubscription_refresh\n\nMy first impression (from the function comment) is that these function\nparameters are a bit awkward. For example,\n- It says: If 'copy_data' parameter is true, the function will set\nthe state to \"init\"; otherwise, it will set the state to \"ready\".\n- It also says: \"If 'all_relations' is true, mark all objects with\n\"init\" state...\"\nThose statements seem to clash. e.g. if copy_data is false but\nall_relations is true, then what (???)\n\n~\n\nnitpick - tweak function comment wording.\nnitpick - introduce a 'relkind' variable to avoid multiple calls of\nget_rel_relkind(relid)\nnitpick - use an existing 'relkind' variable instead of calling\nget_rel_relkind(relid);\nnitpick - add another comment about skipping (for dropping tablesync slots)\n\n~\n\n5.\n+ /*\n+ * If all the relations should be re-synchronized, then set the\n+ * state to init for re-synchronization. This is currently\n+ * supported only for sequences.\n+ */\n+ else if (all_relations)\n+ {\n+ ereport(DEBUG1,\n+ (errmsg_internal(\"sequence \\\"%s.%s\\\" of subscription \\\"%s\\\" set to\nINIT state\",\n get_namespace_name(get_rel_namespace(relid)),\n get_rel_name(relid),\n sub->name)));\n+ UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,\n+ InvalidXLogRecPtr);\n\n(This is a continuation of my doubts regarding 'all_relations' in the\nprevious review comment #4 above)\n\nHere are some more questions about it:\n\n~\n\n5a. Why is this an 'else' of the !bsearch? It needs more explanation\nwhat this case means.\n\n~\n\n5b. Along with more description, it might be better to reverse the\n!bsearch condition, so this ('else') code is not so distantly\nseparated from the condition.\n\n~\n\n5c. Saying \"only supported for sequences\" seems strange: e.g. 
what\nwould it even mean to \"re-synchronize\" tables? They would all have to\nbe truncated first -- so if re-sync for tables has no meaning maybe\nthe parameter is misnamed and should just be 'resync_all_sequences' or\nsimilar? In any case, an Assert here might be good.\n\n======\nsrc/backend/replication/logical/launcher.c\n\nlogicalrep_worker_find:\n\nnitpick - I feel the function comment \"We are only interested in...\"\nis now redundant since you are passing the exact worker type you want.\nnitpick - I added an Assert for the types you are expecting to look for\nnitpick - The comment \"Search for attached worker...\" is stale now\nbecause there are more search criteria\nnitpick - IMO the \"Skip parallel apply workers.\" code is no longer\nneeded now that you are matching the worker type.\n\n~~~\n\n6. logicalrep_worker_launch\n\n * - must be valid worker type\n * - tablesync workers are only ones to have relid\n * - parallel apply worker is the only kind of subworker\n+ * - sequencesync workers will not have relid\n */\n Assert(wtype != WORKERTYPE_UNKNOWN);\n Assert(is_tablesync_worker == OidIsValid(relid));\n Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));\n+ Assert(!is_sequencesync_worker || !OidIsValid(relid));\n\nOn further reflection, is that added comment and added Assert even\nneeded? I think they can be removed because saying \"tablesync workers\nare only ones to have relid\" seems to already cover what we needed to\nsay/assert.\n\n~~~\n\nlogicalrep_worker_stop:\nnitpick - /type/wtype/ for readability\n\n~~~\n\n7.\n/*\n * Count the number of registered (not necessarily running) sync workers\n * for a subscription.\n */\nint\nlogicalrep_sync_worker_count(Oid subid)\n\n~\n\nI thought this function should count the sequencesync worker as well.\n\n======\n.../replication/logical/sequencesync.c\n\nfetch_remote_sequence_data:\nnitpick - tweaked function comment\nnitpick - /value/last_value/ for readability\n\n~\n\n8.\n+ *lsn = DatumGetInt64(slot_getattr(slot, 4, &isnull));\n+ Assert(!isnull);\n\nShould that be DatumGetUInt64?\n\n~~~\n\ncopy_sequence:\nnitpick - tweak function header.\nnitpick - renamed the sequence vars for consistency, and declared them\nall together.\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n9.\n void\n invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)\n {\n- table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;\n+ relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;\n }\n\nI assume you changed the 'table_states_validity' name because this is\nno longer exclusively for tables. 
So, should the function name also be\nsimilarly changed?\n\n~~~\n\nprocess_syncing_sequences_for_apply:\nnitpick - tweaked the function comment\nnitpick - cannot just say \"if there is not one already.\" a sequence\nsyn worker might not even be needed.\nnitpick - added blank line for readability\n\n~\n\n10.\n+ if (syncworker)\n+ {\n+ /* Now safe to release the LWLock */\n+ LWLockRelease(LogicalRepWorkerLock);\n+ break;\n+ }\n+ else\n+ {\n\nThis 'else' can be removed if you wish to pull back all the indentation.\n\n~~~\n\n11.\nprocess_syncing_tables(XLogRecPtr current_lsn)\n\nIs the function name still OK given that is is now also syncing for sequences?\n\n~~~\n\nFetchTableStates:\nnitpick - Reworded some of the function comment\nnitpick - Function comment is stale because it is still referring to\nthe function parameter which this patch removed.\nnitpick - tweak a comment\n\n======\nsrc/include/commands/sequence.h\n\n12.\n+#define SEQ_LOG_CNT_INVALID (-1)\n\nSee a previous review comment (#2 above) where I wondered why not use\nvalue 0 for this.\n\n~~~\n\n13.\n extern void SequenceChangePersistence(Oid relid, char newrelpersistence);\n extern void DeleteSequenceTuple(Oid relid);\n extern void ResetSequence(Oid seq_relid);\n+extern void do_setval(Oid relid, int64 next, bool iscalled, int64 logcnt);\n extern void ResetSequenceCaches(void);\n\ndo_setval() was an OK function name when it was static, but as an\nexposed API it seems like a terrible name. IMO rename it to something\nlike 'SetSequence' to match the other API functions nearby.\n\n~\n\nnitpick - same change to the parameter names as suggested for the\nimplementation.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 31 Jul 2024 17:25:39 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:\n>\n> On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n> >\n> >\n> >\n> > On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n> >>\n> >> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n> >> [...]\n> >> A new catalog table, pg_subscription_seq, has been introduced for\n> >> mapping subscriptions to sequences. Additionally, the sequence LSN\n> >> (Log Sequence Number) is stored, facilitating determination of\n> >> sequence changes occurring before or after the returned sequence\n> >> state.\n> >\n> >\n> > Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n> > something.\n>\n> We'll require the lsn because the sequence LSN informs the user that\n> it has been synchronized up to the LSN in pg_subscription_seq. Since\n> we are not supporting incremental sync, the user will be able to\n> identify if he should run refresh sequences or not by checking the lsn\n> of the pg_subscription_seq and the lsn of the sequence(using\n> pg_sequence_state added) in the publisher.\n\nHow the user will know from seq's lsn that he needs to run refresh.\nlsn indicates page_lsn and thus the sequence might advance on pub\nwithout changing lsn and thus lsn may look the same on subscriber even\nthough a sequence-refresh is needed. 
Am I missing something here?\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 31 Jul 2024 14:38:46 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Sat, 20 Jul 2024 at 20:48, vignesh C <[email protected]> wrote:\n>\n> On Fri, 12 Jul 2024 at 08:22, Peter Smith <[email protected]> wrote:\n> >\n> > Hi Vignesh. Here are the rest of my comments for patch v20240705-0003.\n> > ======\n> >\n> > 8. logicalrep_sequence_sync_worker_find\n> >\n> > +/*\n> > + * Walks the workers array and searches for one that matches given\n> > + * subscription id.\n> > + *\n> > + * We are only interested in the sequence sync worker.\n> > + */\n> > +LogicalRepWorker *\n> > +logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)\n> >\n> > There are other similar functions for walking the workers array to\n> > search for a worker. Instead of having different functions for\n> > different cases, wouldn't it be cleaner to combine these into a single\n> > function, where you pass a parameter (e.g. a mask of worker types that\n> > you are interested in finding)?\n\nThis is fixed in the v20240730_2 version attached at [1].\n\n> > 17.\n> > Also, where does the number 100 come from? Why not 1000? Why not 10?\n> > Why have batching at all? Maybe there should be some comment to\n> > describe the reason and the chosen value.\n\nI had run some tests with 10/100 and 1000 sequences per batch for\n10000 sequences. The results for it:\n10 per batch - 4.94 seconds\n100 per batch - 4.87 seconds\n1000 per batch - 4.53 seconds\n\nThere is not much time difference between each of them. Currently, it\nis set to 100, which seems fine since it will not generate a lot of\ntransactions. Additionally, the locks on the sequences will be\nperiodically released during the commit transaction.\n\nI had used the test from the attached patch by changing\nmax_sequences_sync_per_batch to 10/100/100 in 035_sequences.pl to\nverify this.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm3%2BXzHAbgyn8gmbBLK5goyv_uyGgHEsTQxRZ8bVk6nAEg%40mail.gmail.com\n\nRegards,\nVignesh", "msg_date": "Wed, 31 Jul 2024 15:03:15 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh,\n\nI have a question about the subscriber-side behaviour of currval().\n\n======\n\nAFAIK it is normal for currval() to give error is nextval() has not\nyet been called [1]\n\nFor example.\ntest_pub=# create sequence s1;\nCREATE SEQUENCE\ntest_pub=# select * from currval('s1');\n2024-08-01 07:42:48.619 AEST [24131] ERROR: currval of sequence \"s1\"\nis not yet defined in this session\n2024-08-01 07:42:48.619 AEST [24131] STATEMENT: select * from currval('s1');\nERROR: currval of sequence \"s1\" is not yet defined in this session\ntest_pub=# select * from nextval('s1');\n nextval\n---------\n 1\n(1 row)\n\ntest_pub=# select * from currval('s1');\n currval\n---------\n 1\n(1 row)\n\ntest_pub=#\n\n~~~\n\nOTOH, I was hoping to be able to use currval() at the subscriber=side\nto see the current sequence value after issuing ALTER .. REFRESH\nPUBLICATION SEQUENCES.\n\nUnfortunately, it has the same behaviour where currval() cannot be\nused without nextval(). 
But, on the subscriber, you probably never\nwant to do an explicit nextval() independently of the publisher.\n\nIs this currently a bug, or maybe a quirk that should be documented?\n\nFor example:\n\nPublisher\n==========\n\ntest_pub=# create sequence s1;\nCREATE SEQUENCE\ntest_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;\nCREATE PUBLICATION\ntest_pub=# select * from nextval('s1');\n nextval\n---------\n 1\n(1 row)\n\ntest_pub=# select * from nextval('s1');\n nextval\n---------\n 2\n(1 row)\n\ntest_pub=# select * from nextval('s1');\n nextval\n---------\n 3\n(1 row)\n\ntest_pub=#\n\nSubscriber\n==========\n\n(Notice currval() always gives an error unless nextval() is used prior).\n\ntest_sub=# create sequence s1;\nCREATE SEQUENCE\ntest_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'\nPUBLICATION pub1;\n2024-08-01 07:51:06.955 AEST [24325] WARNING: subscriptions created\nby regression test cases should have names starting with \"regress_\"\nWARNING: subscriptions created by regression test cases should have\nnames starting with \"regress_\"\nNOTICE: created replication slot \"sub1\" on publisher\nCREATE SUBSCRIPTION\ntest_sub=# 2024-08-01 07:51:07.023 AEST [4211] LOG: logical\nreplication apply worker for subscription \"sub1\" has started\n2024-08-01 07:51:07.037 AEST [4213] LOG: logical replication sequence\nsynchronization worker for subscription \"sub1\" has started\n2024-08-01 07:51:07.063 AEST [4213] LOG: logical replication\nsynchronization for subscription \"sub1\", sequence \"s1\" has finished\n2024-08-01 07:51:07.063 AEST [4213] LOG: logical replication sequence\nsynchronization worker for subscription \"sub1\" has finished\n\ntest_sub=# SELECT * FROM currval('s1');\n2024-08-01 07:51:19.688 AEST [24325] ERROR: currval of sequence \"s1\"\nis not yet defined in this session\n2024-08-01 07:51:19.688 AEST [24325] STATEMENT: SELECT * FROM currval('s1');\nERROR: currval of sequence \"s1\" is not yet defined in this session\ntest_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\nALTER SUBSCRIPTION\ntest_sub=# 2024-08-01 07:51:35.298 AEST [4993] LOG: logical\nreplication sequence synchronization worker for subscription \"sub1\"\nhas started\n\ntest_sub=# 2024-08-01 07:51:35.321 AEST [4993] LOG: logical\nreplication synchronization for subscription \"sub1\", sequence \"s1\" has\nfinished\n2024-08-01 07:51:35.321 AEST [4993] LOG: logical replication sequence\nsynchronization worker for subscription \"sub1\" has finished\n\ntest_sub=#\ntest_sub=# SELECT * FROM currval('s1');\n2024-08-01 07:51:41.438 AEST [24325] ERROR: currval of sequence \"s1\"\nis not yet defined in this session\n2024-08-01 07:51:41.438 AEST [24325] STATEMENT: SELECT * FROM currval('s1');\nERROR: currval of sequence \"s1\" is not yet defined in this session\ntest_sub=#\ntest_sub=# SELECT * FROM nextval('s1');\n nextval\n---------\n 4\n(1 row)\n\ntest_sub=# SELECT * FROM currval('s1');\n currval\n---------\n 4\n(1 row)\n\ntest_sub=#\n\n======\n[1] https://www.postgresql.org/docs/current/functions-sequence.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Thu, 1 Aug 2024 08:27:38 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh,\n\nI noticed that when replicating sequences (using the latest patches\n0730_2*) the subscriber-side checks the *existence* of the sequence,\nbut apparently it is not checking other sequence attributes.\n\nFor example, 
consider:\n\nPublisher: \"CREATE SEQUENCE s1 START 1 INCREMENT 2;\" should be a\nsequence of only odd numbers.\nSubscriber: \"CREATE SEQUENCE s1 START 2 INCREMENT 2;\" should be a\nsequence of only even numbers.\n\nBecause the names match, currently the patch allows replication of the\ns1 sequence. I think that might lead to unexpected results on the\nsubscriber. IMO it might be safer to report ERROR unless the sequences\nmatch properly (i.e. not just a name check).\n\nBelow is a demonstration the problem:\n\n==========\nPublisher:\n==========\n\n(publisher sequence is odd numbers)\n\ntest_pub=# create sequence s1 start 1 increment 2;\nCREATE SEQUENCE\ntest_pub=# select * from nextval('s1');\n nextval\n---------\n 1\n(1 row)\n\ntest_pub=# select * from nextval('s1');\n nextval\n---------\n 3\n(1 row)\n\ntest_pub=# select * from nextval('s1');\n nextval\n---------\n 5\n(1 row)\n\ntest_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;\nCREATE PUBLICATION\ntest_pub=#\n\n==========\nSubscriber:\n==========\n\n(subscriber sequence is even numbers)\n\ntest_sub=# create sequence s1 start 2 increment 2;\nCREATE SEQUENCE\ntest_sub=# SELECT * FROM nextval('s1');\n nextval\n---------\n 2\n(1 row)\n\ntest_sub=# SELECT * FROM nextval('s1');\n nextval\n---------\n 4\n(1 row)\n\ntest_sub=# SELECT * FROM nextval('s1');\n nextval\n---------\n 6\n(1 row)\n\ntest_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'\nPUBLICATION pub1;\n2024-08-01 08:43:04.198 AEST [24325] WARNING: subscriptions created\nby regression test cases should have names starting with \"regress_\"\nWARNING: subscriptions created by regression test cases should have\nnames starting with \"regress_\"\nNOTICE: created replication slot \"sub1\" on publisher\nCREATE SUBSCRIPTION\ntest_sub=# 2024-08-01 08:43:04.294 AEST [26240] LOG: logical\nreplication apply worker for subscription \"sub1\" has started\n2024-08-01 08:43:04.309 AEST [26244] LOG: logical replication\nsequence synchronization worker for subscription \"sub1\" has started\n2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication\nsynchronization for subscription \"sub1\", sequence \"s1\" has finished\n2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication\nsequence synchronization worker for subscription \"sub1\" has finished\n\n(after the CREATE SUBSCRIPTION we are getting replicated odd values\nfrom the publisher, even though the subscriber side sequence was\nsupposed to be even numbers)\n\ntest_sub=# SELECT * FROM nextval('s1');\n nextval\n---------\n 7\n(1 row)\n\ntest_sub=# SELECT * FROM nextval('s1');\n nextval\n---------\n 9\n(1 row)\n\ntest_sub=# SELECT * FROM nextval('s1');\n nextval\n---------\n 11\n(1 row)\n\n(Looking at the description you would expect odd values for this\nsequence to be impossible)\n\ntest_sub=# \\dS+ s1\n Sequence \"public.s1\"\n Type | Start | Minimum | Maximum | Increment | Cycles? 
| Cache\n--------+-------+---------+---------------------+-----------+---------+-------\n bigint | 2 | 1 | 9223372036854775807 | 2 | no | 1\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 1 Aug 2024 08:55:13 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Aug 1, 2024 at 9:26 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Jul 29, 2024 at 4:17 PM vignesh C <[email protected]> wrote:\n> >\n> > Thanks for reporting this, these issues are fixed in the attached\n> > v20240730_2 version patch.\n> >\n\nI was reviewing the design of patch003, and I have a query. Do we need\nto even start an apply worker and create replication slot when\nsubscription created is for 'sequences only'? IIUC, currently logical\nreplication apply worker is the one launching sequence-sync worker\nwhenever needed. I think it should be the launcher doing this job and\nthus apply worker may even not be needed for current functionality of\nsequence sync? Going forward when we implement incremental sync of\nsequences, then we may have apply worker started but now it is not\nneeded.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 2 Aug 2024 14:24:45 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, Aug 2, 2024 at 2:24 PM shveta malik <[email protected]> wrote:\n>\n> On Thu, Aug 1, 2024 at 9:26 AM shveta malik <[email protected]> wrote:\n> >\n> > On Mon, Jul 29, 2024 at 4:17 PM vignesh C <[email protected]> wrote:\n> > >\n> > > Thanks for reporting this, these issues are fixed in the attached\n> > > v20240730_2 version patch.\n> > >\n>\n> I was reviewing the design of patch003, and I have a query. Do we need\n> to even start an apply worker and create replication slot when\n> subscription created is for 'sequences only'? IIUC, currently logical\n> replication apply worker is the one launching sequence-sync worker\n> whenever needed. I think it should be the launcher doing this job and\n> thus apply worker may even not be needed for current functionality of\n> sequence sync? Going forward when we implement incremental sync of\n> sequences, then we may have apply worker started but now it is not\n> needed.\n>\n\nAlso, can we please mention the state change and 'who does what' atop\nsequencesync.c file similar to what we have atop tablesync.c file\notherwise it is difficult to figure out the flow.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 2 Aug 2024 14:33:32 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 31 Jul 2024 at 12:56, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh,\n>\n> Here are my review comments for your latest 0730_2* patches.\n>\n> Patch v20240730_2-0001 looks good to me.\n>\n> Patch v20240730_2-0002 looks good to me.\n>\n> My comments for the v20240730_2-0003 patch are below:\n> ~~~\n>\n> 4. AlterSubscription_refresh\n>\n> My first impression (from the function comment) is that these function\n> parameters are a bit awkward. For example,\n> - It says: If 'copy_data' parameter is true, the function will set\n> the state to \"init\"; otherwise, it will set the state to \"ready\".\n> - It also says: \"If 'all_relations' is true, mark all objects with\n> \"init\" state...\"\n> Those statements seem to clash. e.g. 
if copy_data is false but\n> all_relations is true, then what (???)\n\nall_relations will be true only for \"ALTER SUBSCRIPTION ... REFRESH\nPUBLICATION SEQUENCES\". With option is not supported along with this\ncommand so copy_data with false option is not possible here. Added an\nassert for this.\n>\n> 8.\n> + *lsn = DatumGetInt64(slot_getattr(slot, 4, &isnull));\n> + Assert(!isnull);\n>\n> Should that be DatumGetUInt64?\n\nIt should be DatumGetLSN here.\n\nThe rest of the comments are fixed. The attached v20240805 version\npatch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Mon, 5 Aug 2024 09:59:54 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 1 Aug 2024 at 03:33, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh,\n>\n> I have a question about the subscriber-side behaviour of currval().\n>\n> ======\n>\n> AFAIK it is normal for currval() to give error is nextval() has not\n> yet been called [1]\n>\n> For example.\n> test_pub=# create sequence s1;\n> CREATE SEQUENCE\n> test_pub=# select * from currval('s1');\n> 2024-08-01 07:42:48.619 AEST [24131] ERROR: currval of sequence \"s1\"\n> is not yet defined in this session\n> 2024-08-01 07:42:48.619 AEST [24131] STATEMENT: select * from currval('s1');\n> ERROR: currval of sequence \"s1\" is not yet defined in this session\n> test_pub=# select * from nextval('s1');\n> nextval\n> ---------\n> 1\n> (1 row)\n>\n> test_pub=# select * from currval('s1');\n> currval\n> ---------\n> 1\n> (1 row)\n>\n> test_pub=#\n>\n> ~~~\n>\n> OTOH, I was hoping to be able to use currval() at the subscriber=side\n> to see the current sequence value after issuing ALTER .. REFRESH\n> PUBLICATION SEQUENCES.\n>\n> Unfortunately, it has the same behaviour where currval() cannot be\n> used without nextval(). But, on the subscriber, you probably never\n> want to do an explicit nextval() independently of the publisher.\n>\n> Is this currently a bug, or maybe a quirk that should be documented?\n\nThe currval returns the most recent value obtained from the nextval\nfunction for a given sequence within the current session. This\nfunction is specific to the session, meaning it only provides the last\nsequence value retrieved during that session. However, if you call\ncurrval before using nextval in the same session, you'll encounter an\nerror stating \"currval of the sequence is not yet defined in this\nsession.\" Meaning even in the publisher this value is only visible in\nthe current session and not in a different session. Alternatively you\ncan use the following to get the last_value of the sequence: SELECT\nlast_value FROM sequence_name. 
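For example, in the subscriber session from your report (reusing the s1
name; the values shown are just illustrative, taken after the sequence
sync has finished):

test_sub=# SELECT last_value, is_called FROM s1;
 last_value | is_called
------------+-----------
          3 | t
(1 row)

This reads the synchronized state directly from the sequence relation,
so it works from any session and, unlike nextval(), does not advance
the sequence.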
I feel this need not be documented as\nthe similar issue is present in the publisher and there is an \"SELECT\nlast_value FROM sequence_name\" to get the last_value.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 5 Aug 2024 10:14:03 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 1 Aug 2024 at 04:25, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh,\n>\n> I noticed that when replicating sequences (using the latest patches\n> 0730_2*) the subscriber-side checks the *existence* of the sequence,\n> but apparently it is not checking other sequence attributes.\n>\n> For example, consider:\n>\n> Publisher: \"CREATE SEQUENCE s1 START 1 INCREMENT 2;\" should be a\n> sequence of only odd numbers.\n> Subscriber: \"CREATE SEQUENCE s1 START 2 INCREMENT 2;\" should be a\n> sequence of only even numbers.\n>\n> Because the names match, currently the patch allows replication of the\n> s1 sequence. I think that might lead to unexpected results on the\n> subscriber. IMO it might be safer to report ERROR unless the sequences\n> match properly (i.e. not just a name check).\n>\n> Below is a demonstration the problem:\n>\n> ==========\n> Publisher:\n> ==========\n>\n> (publisher sequence is odd numbers)\n>\n> test_pub=# create sequence s1 start 1 increment 2;\n> CREATE SEQUENCE\n> test_pub=# select * from nextval('s1');\n> nextval\n> ---------\n> 1\n> (1 row)\n>\n> test_pub=# select * from nextval('s1');\n> nextval\n> ---------\n> 3\n> (1 row)\n>\n> test_pub=# select * from nextval('s1');\n> nextval\n> ---------\n> 5\n> (1 row)\n>\n> test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;\n> CREATE PUBLICATION\n> test_pub=#\n>\n> ==========\n> Subscriber:\n> ==========\n>\n> (subscriber sequence is even numbers)\n>\n> test_sub=# create sequence s1 start 2 increment 2;\n> CREATE SEQUENCE\n> test_sub=# SELECT * FROM nextval('s1');\n> nextval\n> ---------\n> 2\n> (1 row)\n>\n> test_sub=# SELECT * FROM nextval('s1');\n> nextval\n> ---------\n> 4\n> (1 row)\n>\n> test_sub=# SELECT * FROM nextval('s1');\n> nextval\n> ---------\n> 6\n> (1 row)\n>\n> test_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'\n> PUBLICATION pub1;\n> 2024-08-01 08:43:04.198 AEST [24325] WARNING: subscriptions created\n> by regression test cases should have names starting with \"regress_\"\n> WARNING: subscriptions created by regression test cases should have\n> names starting with \"regress_\"\n> NOTICE: created replication slot \"sub1\" on publisher\n> CREATE SUBSCRIPTION\n> test_sub=# 2024-08-01 08:43:04.294 AEST [26240] LOG: logical\n> replication apply worker for subscription \"sub1\" has started\n> 2024-08-01 08:43:04.309 AEST [26244] LOG: logical replication\n> sequence synchronization worker for subscription \"sub1\" has started\n> 2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication\n> synchronization for subscription \"sub1\", sequence \"s1\" has finished\n> 2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication\n> sequence synchronization worker for subscription \"sub1\" has finished\n>\n> (after the CREATE SUBSCRIPTION we are getting replicated odd values\n> from the publisher, even though the subscriber side sequence was\n> supposed to be even numbers)\n>\n> test_sub=# SELECT * FROM nextval('s1');\n> nextval\n> ---------\n> 7\n> (1 row)\n>\n> test_sub=# SELECT * FROM nextval('s1');\n> nextval\n> ---------\n> 9\n> (1 row)\n>\n> test_sub=# SELECT * FROM nextval('s1');\n> nextval\n> 
---------\n> 11\n> (1 row)\n>\n> (Looking at the description you would expect odd values for this\n> sequence to be impossible)\n>\n> test_sub=# \\dS+ s1\n> Sequence \"public.s1\"\n> Type | Start | Minimum | Maximum | Increment | Cycles? | Cache\n> --------+-------+---------+---------------------+-----------+---------+-------\n> bigint | 2 | 1 | 9223372036854775807 | 2 | no | 1\n\nEven if we check the sequence definition during the CREATE\nSUBSCRIPTION/ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES commands, there's still\na chance that the sequence definition might change after the command\nhas been executed. Currently, there's no mechanism to lock a sequence,\nand we also permit replication of table data even if the table\nstructures differ, such as mismatched data types like int and\nsmallint. I have modified it to log a warning to inform users that the\nsequence options on the publisher and subscriber are not the same and\nadvise them to ensure that the sequence definitions are consistent\nbetween both.\nThe v20240805 version patch attached at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm1Y_ot-jFRfmtwDuwmFrgSSYHjVuy28RspSopTtwzXy8w%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 5 Aug 2024 10:26:38 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 31 Jul 2024 at 14:39, shveta malik <[email protected]> wrote:\n>\n> On Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:\n> >\n> > On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n> > >\n> > >\n> > >\n> > > On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n> > >>\n> > >> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n> > >> [...]\n> > >> A new catalog table, pg_subscription_seq, has been introduced for\n> > >> mapping subscriptions to sequences. Additionally, the sequence LSN\n> > >> (Log Sequence Number) is stored, facilitating determination of\n> > >> sequence changes occurring before or after the returned sequence\n> > >> state.\n> > >\n> > >\n> > > Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n> > > something.\n> >\n> > We'll require the lsn because the sequence LSN informs the user that\n> > it has been synchronized up to the LSN in pg_subscription_seq. Since\n> > we are not supporting incremental sync, the user will be able to\n> > identify if he should run refresh sequences or not by checking the lsn\n> > of the pg_subscription_seq and the lsn of the sequence(using\n> > pg_sequence_state added) in the publisher.\n>\n> How the user will know from seq's lsn that he needs to run refresh.\n> lsn indicates page_lsn and thus the sequence might advance on pub\n> without changing lsn and thus lsn may look the same on subscriber even\n> though a sequence-refresh is needed. 
Am I missing something here?\n\nWhen a sequence is synchronized to the subscriber, the page LSN of the\nsequence from the publisher is also retrieved and stored in\npg_subscriber_rel as shown below:\n--- Publisher page lsn\npublisher=# select pg_sequence_state('seq1');\n pg_sequence_state\n--------------------\n (0/1510E38,65,1,t)\n(1 row)\n\n--- Subscriber stores the publisher's page lsn for the sequence\nsubscriber=# select * from pg_subscription_rel where srrelid = 16384;\n srsubid | srrelid | srsubstate | srsublsn\n---------+---------+------------+-----------\n 16389 | 16384 | r | 0/1510E38\n(1 row)\n\nIf changes are made to the sequence, such as performing many nextvals,\nthe page LSN will be updated. Currently the sequence values are\nprefetched for SEQ_LOG_VALS 32, so the lsn will not get updated for\nthe prefetched values, once the prefetched values are consumed the lsn\nwill get updated.\nFor example:\n--- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)\npublisher=# select pg_sequence_state('seq1');\n pg_sequence_state\n----------------------\n (0/1558CA8,143,22,t)\n(1 row)\n\nThe user can then compare this updated value with the sequence's LSN\nin pg_subscription_rel to determine when to re-synchronize the\nsequence.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 5 Aug 2024 11:04:31 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 1 Aug 2024 at 09:26, shveta malik <[email protected]> wrote:\n>\n> On Mon, Jul 29, 2024 at 4:17 PM vignesh C <[email protected]> wrote:\n> >\n> > Thanks for reporting this, these issues are fixed in the attached\n> > v20240730_2 version patch.\n> >\n>\n> Thanks for addressing the comments. Please find few comments on patch001 alone:\n>\n> Potential Bug:\n> 1) 'last_value' returned by pg_sequence_state() is wrong initially.\n>\n> postgres=# create sequence myseq5;\n> CREATE SEQUENCE\n>\n> postgres=# select * from pg_sequence_state('myseq5');\n> page_lsn | last_value | log_cnt | is_called\n> -----------+------------+---------+-----------\n> 0/1579C78 | 1 | 0 | f\n>\n> postgres=# SELECT nextval('myseq5') ;\n> nextval\n> ---------\n> 1\n>\n> postgres=# select * from pg_sequence_state('myseq5');\n> page_lsn | last_value | log_cnt | is_called\n> -----------+------------+---------+-----------\n> 0/1579FD8 | 1 | 32 | t\n>\n>\n> Both calls returned 1. First call should have returned NULL.\n\nI noticed the same behavior for selecting from a sequence:\npostgres=# select * from myseq5;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 1 | 0 | f\n(1 row)\n\npostgres=# select nextval('myseq5');\n nextval\n---------\n 1\n(1 row)\n\npostgres=# select * from myseq5;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 1 | 32 | t\n(1 row)\n\nBy default it shows the last_value as the start value for the\nsequence. 
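The same can be seen with a non-default start value (hypothetical
sequence name):

postgres=# create sequence myseq6 start 100;
CREATE SEQUENCE
postgres=# select last_value, is_called from myseq6;
 last_value | is_called
------------+-----------
        100 | f
(1 row)

Before the first nextval the sequence tuple already carries the start
value, and is_called = false is what indicates that no value has been
handed out yet.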
So this looks ok to me.\n\n> 2)\n> func.sgml:\n> a) pg_sequence_state : Don't we need to give input arg as regclass\n> like we give in nextval,setval etc?\n> b) Should 'lsn' be changed to 'page_lsn' as returned in output of\n> pg_sequence_state()\n\nModified\n\n>\n> 3)\n> read_seq_tuple() header says:\n> * lsn_ret will be set to the page LSN if the caller requested it.\n> * This allows the caller to determine which sequence changes are\n> * before/after the returned sequence state.\n>\n> How, using lsn which is page-lsn and not sequence value/change lsn,\n> does the user interpret if sequence changes are before/after the\n> returned sequence state? Can you please elaborate or amend the\n> comment?\n\nI have added this to pg_sequence_state function header in 003 patch as\nthe subscriber side changes are present here, I felt that is more apt\nto mention. This is also added in sequencesync.c file header.\n\nThe attached v20240805_2 version patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Mon, 5 Aug 2024 14:34:34 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 2 Aug 2024 at 14:24, shveta malik <[email protected]> wrote:\n>\n> On Thu, Aug 1, 2024 at 9:26 AM shveta malik <[email protected]> wrote:\n> >\n> > On Mon, Jul 29, 2024 at 4:17 PM vignesh C <[email protected]> wrote:\n> > >\n> > > Thanks for reporting this, these issues are fixed in the attached\n> > > v20240730_2 version patch.\n> > >\n>\n> I was reviewing the design of patch003, and I have a query. Do we need\n> to even start an apply worker and create replication slot when\n> subscription created is for 'sequences only'? IIUC, currently logical\n> replication apply worker is the one launching sequence-sync worker\n> whenever needed. I think it should be the launcher doing this job and\n> thus apply worker may even not be needed for current functionality of\n> sequence sync? 
Going forward when we implement incremental sync of\n> sequences, then we may have apply worker started but now it is not\n> needed.\n\nI believe the current method of having the apply worker initiate the\nsequence sync worker is advantageous for several reasons:\na) Reduces Launcher Load: This approach prevents overloading the\nlauncher, which must handle various other subscription requests.\nb) Facilitates Incremental Sync: It provides a more straightforward\npath to extend support for incremental sequence synchronization.\nc) Reuses Existing Code: It leverages the existing tablesync worker\ncode for starting the tablesync process, avoiding the need to\nduplicate code in the launcher.\nd) Simplified Code Maintenance: Centralizing sequence synchronization\nlogic within the apply worker can simplify code maintenance and\nupdates, as changes will only need to be made in one place rather than\nacross multiple components.\ne) Better Monitoring and Debugging: With sequence synchronization\nbeing handled by the apply worker, you can more effectively monitor\nand debug synchronization processes since all related operations are\nmanaged by a single component.\n\nAlso, I noticed that even when a publication has no tables, we create\nreplication slot and start apply worker.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 5 Aug 2024 14:36:37 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 2 Aug 2024 at 14:33, shveta malik <[email protected]> wrote:\n>\n> On Fri, Aug 2, 2024 at 2:24 PM shveta malik <[email protected]> wrote:\n> >\n> > On Thu, Aug 1, 2024 at 9:26 AM shveta malik <[email protected]> wrote:\n> > >\n> > > On Mon, Jul 29, 2024 at 4:17 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > Thanks for reporting this, these issues are fixed in the attached\n> > > > v20240730_2 version patch.\n> > > >\n> >\n> > I was reviewing the design of patch003, and I have a query. Do we need\n> > to even start an apply worker and create replication slot when\n> > subscription created is for 'sequences only'? IIUC, currently logical\n> > replication apply worker is the one launching sequence-sync worker\n> > whenever needed. I think it should be the launcher doing this job and\n> > thus apply worker may even not be needed for current functionality of\n> > sequence sync? 
Going forward when we implement incremental sync of\n> > sequences, then we may have apply worker started but now it is not\n> > needed.\n> >\n>\n> Also, can we please mention the state change and 'who does what' atop\n> sequencesync.c file similar to what we have atop tablesync.c file\n> otherwise it is difficult to figure out the flow.\n\nI have added this in sequencesync.c file, the changes for the same are\navailable at v20240805_2 version patch at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm1kk1MHGk3BU_XTxay%3DdR6sMHnm4TT5cmVz2f_JXkWENQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 5 Aug 2024 14:39:44 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, Aug 5, 2024 at 2:36 PM vignesh C <[email protected]> wrote:\n>\n> On Fri, 2 Aug 2024 at 14:24, shveta malik <[email protected]> wrote:\n> >\n> > On Thu, Aug 1, 2024 at 9:26 AM shveta malik <[email protected]> wrote:\n> > >\n> > > On Mon, Jul 29, 2024 at 4:17 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > Thanks for reporting this, these issues are fixed in the attached\n> > > > v20240730_2 version patch.\n> > > >\n> >\n> > I was reviewing the design of patch003, and I have a query. Do we need\n> > to even start an apply worker and create replication slot when\n> > subscription created is for 'sequences only'? IIUC, currently logical\n> > replication apply worker is the one launching sequence-sync worker\n> > whenever needed. I think it should be the launcher doing this job and\n> > thus apply worker may even not be needed for current functionality of\n> > sequence sync?\n>\n\nBut that would lead to maintaining all sequence-sync of each\nsubscription by launcher. Say there are 100 sequences per subscription\nand some of them from each subscription are failing due to some\nreasons then the launcher will be responsible for ensuring all the\nsequences are synced. I think it would be better to handle\nper-subscription work by the apply worker.\n\n>\n> Going forward when we implement incremental sync of\n> > sequences, then we may have apply worker started but now it is not\n> > needed.\n>\n> I believe the current method of having the apply worker initiate the\n> sequence sync worker is advantageous for several reasons:\n> a) Reduces Launcher Load: This approach prevents overloading the\n> launcher, which must handle various other subscription requests.\n> b) Facilitates Incremental Sync: It provides a more straightforward\n> path to extend support for incremental sequence synchronization.\n> c) Reuses Existing Code: It leverages the existing tablesync worker\n> code for starting the tablesync process, avoiding the need to\n> duplicate code in the launcher.\n> d) Simplified Code Maintenance: Centralizing sequence synchronization\n> logic within the apply worker can simplify code maintenance and\n> updates, as changes will only need to be made in one place rather than\n> across multiple components.\n> e) Better Monitoring and Debugging: With sequence synchronization\n> being handled by the apply worker, you can more effectively monitor\n> and debug synchronization processes since all related operations are\n> managed by a single component.\n>\n> Also, I noticed that even when a publication has no tables, we create\n> replication slot and start apply worker.\n>\n\nAs far as I understand slots and origins are primarily required for\nincremental sync. Would it be used only for sequence-sync cases? 
If\nnot then we can avoid creating those. I agree that it would add some\ncomplexity to the code with sequence-specific checks, so we can create\na top-up patch for this if required and evaluate its complexity versus\nthe benefit it produces.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Aug 2024 17:27:55 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, Aug 5, 2024 at 11:04 AM vignesh C <[email protected]> wrote:\n>\n> On Wed, 31 Jul 2024 at 14:39, shveta malik <[email protected]> wrote:\n> >\n> > On Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n> > > >\n> > > >\n> > > >\n> > > > On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n> > > >>\n> > > >> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n> > > >> [...]\n> > > >> A new catalog table, pg_subscription_seq, has been introduced for\n> > > >> mapping subscriptions to sequences. Additionally, the sequence LSN\n> > > >> (Log Sequence Number) is stored, facilitating determination of\n> > > >> sequence changes occurring before or after the returned sequence\n> > > >> state.\n> > > >\n> > > >\n> > > > Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n> > > > something.\n> > >\n> > > We'll require the lsn because the sequence LSN informs the user that\n> > > it has been synchronized up to the LSN in pg_subscription_seq. Since\n> > > we are not supporting incremental sync, the user will be able to\n> > > identify if he should run refresh sequences or not by checking the lsn\n> > > of the pg_subscription_seq and the lsn of the sequence(using\n> > > pg_sequence_state added) in the publisher.\n> >\n> > How the user will know from seq's lsn that he needs to run refresh.\n> > lsn indicates page_lsn and thus the sequence might advance on pub\n> > without changing lsn and thus lsn may look the same on subscriber even\n> > though a sequence-refresh is needed. Am I missing something here?\n>\n> When a sequence is synchronized to the subscriber, the page LSN of the\n> sequence from the publisher is also retrieved and stored in\n> pg_subscriber_rel as shown below:\n> --- Publisher page lsn\n> publisher=# select pg_sequence_state('seq1');\n> pg_sequence_state\n> --------------------\n> (0/1510E38,65,1,t)\n> (1 row)\n>\n> --- Subscriber stores the publisher's page lsn for the sequence\n> subscriber=# select * from pg_subscription_rel where srrelid = 16384;\n> srsubid | srrelid | srsubstate | srsublsn\n> ---------+---------+------------+-----------\n> 16389 | 16384 | r | 0/1510E38\n> (1 row)\n>\n> If changes are made to the sequence, such as performing many nextvals,\n> the page LSN will be updated. Currently the sequence values are\n> prefetched for SEQ_LOG_VALS 32, so the lsn will not get updated for\n> the prefetched values, once the prefetched values are consumed the lsn\n> will get updated.\n> For example:\n> --- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)\n> publisher=# select pg_sequence_state('seq1');\n> pg_sequence_state\n> ----------------------\n> (0/1558CA8,143,22,t)\n> (1 row)\n>\n> The user can then compare this updated value with the sequence's LSN\n> in pg_subscription_rel to determine when to re-synchronize the\n> sequence.\n\nThanks for the details. 
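To spell out the comparison being described above in concrete terms (only a sketch; pg_sequence_state() is the new function added by this patch series, so the exact output shape is whatever the patch defines):

-- on the publisher
SELECT pg_sequence_state('seq1');

-- on the subscriber
SELECT srsublsn FROM pg_subscription_rel WHERE srrelid = 'seq1'::regclass;

If the publisher's page_lsn has moved past the stored srsublsn, a refresh is clearly needed.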
But I was referring to the case where we are\nin between pre-fetched values on publisher (say at 25th value), while\non subscriber we are slightly behind (say at 15th value), but page-lsn\nwill be the same on both. Since the subscriber is behind, a\nsequence-refresh is needed on sub, but by looking at lsn (which is\nsame), one can not say that for sure. Let me know if I have\nmisunderstood it.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 5 Aug 2024 18:05:13 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, Aug 5, 2024 at 5:28 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Aug 5, 2024 at 2:36 PM vignesh C <[email protected]> wrote:\n> >\n> > On Fri, 2 Aug 2024 at 14:24, shveta malik <[email protected]> wrote:\n> > >\n> > > On Thu, Aug 1, 2024 at 9:26 AM shveta malik <[email protected]> wrote:\n> > > >\n> > > > On Mon, Jul 29, 2024 at 4:17 PM vignesh C <[email protected]> wrote:\n> > > > >\n> > > > > Thanks for reporting this, these issues are fixed in the attached\n> > > > > v20240730_2 version patch.\n> > > > >\n> > >\n> > > I was reviewing the design of patch003, and I have a query. Do we need\n> > > to even start an apply worker and create replication slot when\n> > > subscription created is for 'sequences only'? IIUC, currently logical\n> > > replication apply worker is the one launching sequence-sync worker\n> > > whenever needed. I think it should be the launcher doing this job and\n> > > thus apply worker may even not be needed for current functionality of\n> > > sequence sync?\n> >\n>\n> But that would lead to maintaining all sequence-sync of each\n> subscription by launcher. Say there are 100 sequences per subscription\n> and some of them from each subscription are failing due to some\n> reasons then the launcher will be responsible for ensuring all the\n> sequences are synced. I think it would be better to handle\n> per-subscription work by the apply worker.\n\nI thought we can give that task to sequence-sync worker. Once sequence\nsync worker is started by launcher, it keeps on syncing until all the\nsequences are synced (even failed ones) and then exits only after all\nare synced; instead of apply worker starting it multiple times for\nfailed sequences. Launcher to start sequence sync worker when signaled\nby 'alter-sub refresh seq'.\nBut after going through details given by Vignesh in [1], I also see\nthe benefits of using apply worker for this task. Since apply worker\nis already looping and doing that for table-sync, we can reuse the\nsame code for sequence sync and maintenance will be easy. 
So looks\nokay if we go with existing apply worker design.\n\n[1]: https://www.postgresql.org/message-id/CALDaNm1KO8f3Fj%2BRHHXM%3DUSGwOcW242M1jHee%3DX_chn2ToiCpw%40mail.gmail.com\n\n>\n> >\n> > Going forward when we implement incremental sync of\n> > > sequences, then we may have apply worker started but now it is not\n> > > needed.\n> >\n> > I believe the current method of having the apply worker initiate the\n> > sequence sync worker is advantageous for several reasons:\n> > a) Reduces Launcher Load: This approach prevents overloading the\n> > launcher, which must handle various other subscription requests.\n> > b) Facilitates Incremental Sync: It provides a more straightforward\n> > path to extend support for incremental sequence synchronization.\n> > c) Reuses Existing Code: It leverages the existing tablesync worker\n> > code for starting the tablesync process, avoiding the need to\n> > duplicate code in the launcher.\n> > d) Simplified Code Maintenance: Centralizing sequence synchronization\n> > logic within the apply worker can simplify code maintenance and\n> > updates, as changes will only need to be made in one place rather than\n> > across multiple components.\n> > e) Better Monitoring and Debugging: With sequence synchronization\n> > being handled by the apply worker, you can more effectively monitor\n> > and debug synchronization processes since all related operations are\n> > managed by a single component.\n> >\n> > Also, I noticed that even when a publication has no tables, we create\n> > replication slot and start apply worker.\n> >\n>\n> As far as I understand slots and origins are primarily required for\n> incremental sync. Would it be used only for sequence-sync cases? If\n> not then we can avoid creating those. I agree that it would add some\n> complexity to the code with sequence-specific checks, so we can create\n> a top-up patch for this if required and evaluate its complexity versus\n> the benefit it produces.\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n", "msg_date": "Tue, 6 Aug 2024 08:49:23 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Aug 6, 2024 at 8:49 AM shveta malik <[email protected]> wrote:\n>\n\nDo we need some kind of coordination between table sync and sequence\nsync for internally generated sequences? Lets say we have an identity\ncolumn with a 'GENERATED ALWAYS' sequence. When the sequence is synced\nto subscriber, subscriber can also do an insert to table (extra one)\nincrementing the sequence and then when publisher performs an insert,\napply worker will blindly copy that row to sub's table making identity\ncolumn's duplicate entries.\n\nCREATE TABLE color ( color_id INT GENERATED ALWAYS AS\nIDENTITY,color_name VARCHAR NOT NULL);\n\nPub: insert into color(color_name) values('red');\n\nSub: perform sequence refresh and check 'r' state is reached, then do insert:\ninsert into color(color_name) values('yellow');\n\nPub: insert into color(color_name) values('blue');\n\nAfter above, data on Pub: (1, 'red') ;(2, 'blue'),\n\nAfter above, data on Sub: (1, 'red') ;(2, 'yellow'); (2, 'blue'),\n\nIdentity column has duplicate values. Should the apply worker error\nout while inserting such a row to the table? 
Or it is not in the\nscope of this project?\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 6 Aug 2024 09:28:34 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Aug 6, 2024 at 8:49 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Aug 5, 2024 at 5:28 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Aug 5, 2024 at 2:36 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Fri, 2 Aug 2024 at 14:24, shveta malik <[email protected]> wrote:\n> > > >\n> > > > On Thu, Aug 1, 2024 at 9:26 AM shveta malik <[email protected]> wrote:\n> > > > >\n> > > > > On Mon, Jul 29, 2024 at 4:17 PM vignesh C <[email protected]> wrote:\n> > > > > >\n> > > > > > Thanks for reporting this, these issues are fixed in the attached\n> > > > > > v20240730_2 version patch.\n> > > > > >\n> > > >\n> > > > I was reviewing the design of patch003, and I have a query. Do we need\n> > > > to even start an apply worker and create replication slot when\n> > > > subscription created is for 'sequences only'? IIUC, currently logical\n> > > > replication apply worker is the one launching sequence-sync worker\n> > > > whenever needed. I think it should be the launcher doing this job and\n> > > > thus apply worker may even not be needed for current functionality of\n> > > > sequence sync?\n> > >\n> >\n> > But that would lead to maintaining all sequence-sync of each\n> > subscription by launcher. Say there are 100 sequences per subscription\n> > and some of them from each subscription are failing due to some\n> > reasons then the launcher will be responsible for ensuring all the\n> > sequences are synced. I think it would be better to handle\n> > per-subscription work by the apply worker.\n>\n> I thought we can give that task to sequence-sync worker. Once sequence\n> sync worker is started by launcher, it keeps on syncing until all the\n> sequences are synced (even failed ones) and then exits only after all\n> are synced; instead of apply worker starting it multiple times for\n> failed sequences. Launcher to start sequence sync worker when signaled\n> by 'alter-sub refresh seq'.\n> But after going through details given by Vignesh in [1], I also see\n> the benefits of using apply worker for this task. Since apply worker\n> is already looping and doing that for table-sync, we can reuse the\n> same code for sequence sync and maintenance will be easy. So looks\n> okay if we go with existing apply worker design.\n>\n\nFair enough. However, I was wondering whether apply_worker should exit\nafter syncing all sequences for a sequence-only subscription or should\nit be there for future commands that can refresh the subscription and\nadd additional tables or sequences?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 6 Aug 2024 09:54:35 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Aug 6, 2024 at 9:54 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Aug 6, 2024 at 8:49 AM shveta malik <[email protected]> wrote:\n> >\n> > > > > I was reviewing the design of patch003, and I have a query. Do we need\n> > > > > to even start an apply worker and create replication slot when\n> > > > > subscription created is for 'sequences only'? IIUC, currently logical\n> > > > > replication apply worker is the one launching sequence-sync worker\n> > > > > whenever needed. 
I think it should be the launcher doing this job and\n> > > > > thus apply worker may even not be needed for current functionality of\n> > > > > sequence sync?\n> > > >\n> > >\n> > > But that would lead to maintaining all sequence-sync of each\n> > > subscription by launcher. Say there are 100 sequences per subscription\n> > > and some of them from each subscription are failing due to some\n> > > reasons then the launcher will be responsible for ensuring all the\n> > > sequences are synced. I think it would be better to handle\n> > > per-subscription work by the apply worker.\n> >\n> > I thought we can give that task to sequence-sync worker. Once sequence\n> > sync worker is started by launcher, it keeps on syncing until all the\n> > sequences are synced (even failed ones) and then exits only after all\n> > are synced; instead of apply worker starting it multiple times for\n> > failed sequences. Launcher to start sequence sync worker when signaled\n> > by 'alter-sub refresh seq'.\n> > But after going through details given by Vignesh in [1], I also see\n> > the benefits of using apply worker for this task. Since apply worker\n> > is already looping and doing that for table-sync, we can reuse the\n> > same code for sequence sync and maintenance will be easy. So looks\n> > okay if we go with existing apply worker design.\n> >\n>\n> Fair enough. However, I was wondering whether apply_worker should exit\n> after syncing all sequences for a sequence-only subscription\n\nIf apply worker exits, then on next sequence-refresh, we need a way to\nwake-up launcher to start apply worker which then will start\ntable-sync worker. Instead, won't it be better if the launcher starts\ntable-sync worker directly without the need of apply worker being\npresent (which I stated earlier).\n\n> or should\n> it be there for future commands that can refresh the subscription and\n> add additional tables or sequences?\n\nIf we stick with apply worker starting table sync worker when needed\nby continuously checking seq-sync states ('i'/'r'), then IMO, it is\nbetter that apply-worker stays. But if we want apply-worker to exit\nand start only when needed, then why not to start sequence-sync worker\ndirectly for seq-only subscriptions?\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 6 Aug 2024 10:24:18 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 6 Aug 2024 at 10:24, shveta malik <[email protected]> wrote:\n>\n> On Tue, Aug 6, 2024 at 9:54 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Aug 6, 2024 at 8:49 AM shveta malik <[email protected]> wrote:\n> > >\n> > > > > > I was reviewing the design of patch003, and I have a query. Do we need\n> > > > > > to even start an apply worker and create replication slot when\n> > > > > > subscription created is for 'sequences only'? IIUC, currently logical\n> > > > > > replication apply worker is the one launching sequence-sync worker\n> > > > > > whenever needed. I think it should be the launcher doing this job and\n> > > > > > thus apply worker may even not be needed for current functionality of\n> > > > > > sequence sync?\n> > > > >\n> > > >\n> > > > But that would lead to maintaining all sequence-sync of each\n> > > > subscription by launcher. 
Say there are 100 sequences per subscription\n> > > > and some of them from each subscription are failing due to some\n> > > > reasons then the launcher will be responsible for ensuring all the\n> > > > sequences are synced. I think it would be better to handle\n> > > > per-subscription work by the apply worker.\n> > >\n> > > I thought we can give that task to sequence-sync worker. Once sequence\n> > > sync worker is started by launcher, it keeps on syncing until all the\n> > > sequences are synced (even failed ones) and then exits only after all\n> > > are synced; instead of apply worker starting it multiple times for\n> > > failed sequences. Launcher to start sequence sync worker when signaled\n> > > by 'alter-sub refresh seq'.\n> > > But after going through details given by Vignesh in [1], I also see\n> > > the benefits of using apply worker for this task. Since apply worker\n> > > is already looping and doing that for table-sync, we can reuse the\n> > > same code for sequence sync and maintenance will be easy. So looks\n> > > okay if we go with existing apply worker design.\n> > >\n> >\n> > Fair enough. However, I was wondering whether apply_worker should exit\n> > after syncing all sequences for a sequence-only subscription\n>\n> If apply worker exits, then on next sequence-refresh, we need a way to\n> wake-up launcher to start apply worker which then will start\n> table-sync worker. Instead, won't it be better if the launcher starts\n> table-sync worker directly without the need of apply worker being\n> present (which I stated earlier).\n\nI favour the current design because it ensures the system remains\nextendable for future incremental sequence synchronization. If the\nlauncher were responsible for starting the sequence sync worker, it\nwould add extra load that could hinder its ability to service other\nsubscriptions and complicate the design for supporting incremental\nsync of sequences. Additionally, this approach offers the other\nbenefits mentioned in [1].\n\n> > or should\n> > it be there for future commands that can refresh the subscription and\n> > add additional tables or sequences?\n>\n> If we stick with apply worker starting table sync worker when needed\n> by continuously checking seq-sync states ('i'/'r'), then IMO, it is\n> better that apply-worker stays. But if we want apply-worker to exit\n> and start only when needed, then why not to start sequence-sync worker\n> directly for seq-only subscriptions?\n\nThere is a risk that sequence synchronization might fail if the\nsequence value from the publisher falls outside the defined minvalue\nor maxvalue range. The apply worker must be active to determine\nwhether to initiate the sequence sync worker after the\nwal_retrieve_retry_interval period. Typically, publications consisting\nsolely of sequences are uncommon. 
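As a rough illustration of the out-of-range case mentioned above (this assumes the sync worker applies the remote value in a setval()-like way, so treat it only as a sketch):

-- publisher
CREATE SEQUENCE s2 MAXVALUE 1000000;
SELECT setval('s2', 500000);

-- subscriber
CREATE SEQUENCE s2 MAXVALUE 100;
-- applying the publisher's value would fail with something like:
-- ERROR:  setval: value 500000 is out of bounds for sequence "s2" (1..100)

which is why the apply worker has to stick around and retry after wal_retrieve_retry_interval, even though, as noted, sequence-only publications are rare.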
However, if a user wishes to use\nsuch publications, they can disable the subscription if necessary and\nre-enable it when a sequence refresh is needed.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm1KO8f3Fj%2BRHHXM%3DUSGwOcW242M1jHee%3DX_chn2ToiCpw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 6 Aug 2024 12:44:17 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Here are some review comments for the patch v20240805_2-0003.\n\n======\ndoc/src/sgml/catalogs.sgml\n\nnitpick - removed the word \"either\"\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\nI felt the discussions about \"how to handle warnings\" are a bit scattered:\ne.g.1 - ALTER SUBSCRIPTION REFRESH PUBLICATION copy data referred to\nCREATE SUBSCRIPTION copy data.\ne.g.2 - ALTER SUBSCRIPTION REFRESH explains what to do, but now the\nexplanation is in 2 places.\ne.g.3 - CREATE SUBSCRIPTION copy data explains what to do (again), but\nIMO it belongs better in the common \"Notes\" part\n\nFYI, I've moved all the information to one place (in the CREATE\nSUBSCRIPTION \"Notes\") and others refer to this central place. See the\nattached nitpicks diff.\n\nREFRESH PUBLICATION copy_data\nnitpick - now refers to CREATE SUBSCRIPTION \"Notes\". I also moved it\nto be nearer to the other sequence stuff.\n\nREFRESH PUBLICATION SEQUENCES:\nnitpick - now refers to CREATE SUBSCRIPTION \"Notes\".\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\nREFRESH PUBLICATION copy_data\nnitpick - now refers to CREATE SUBSCRIPTION \"Notes\"\n\nNotes:\nnitpick - the explanation of, and what to do about sequence WARNINGS,\nis moved to here\n\n======\nsrc/backend/commands/sequence.c\n\npg_sequence_state:\nnitpick - I just moved the comment in pg_sequence_state() to below the\nNOTE, which talks about \"page LSN\".\n\n======\nsrc/backend/catalog/pg_subscription.c\n\n1. HasSubscriptionRelations\n\nShould function 'HasSubscriptionRelations' be renamed to\n'HasSubscriptionTables'?\n\n~~~\n\nGetSubscriptionRelations:\nnitpick - tweak some \"skip\" comments.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n2. CreateSubscription\n\n tables = fetch_table_list(wrconn, publications);\n- foreach(lc, tables)\n+ foreach_ptr(RangeVar, rv, tables)\n+ {\n+ Oid relid;\n+\n+ relid = RangeVarGetRelid(rv, AccessShareLock, false);\n+\n+ /* Check for supported relkind. */\n+ CheckSubscriptionRelkind(get_rel_relkind(relid),\n+ rv->schemaname, rv->relname);\n+\n+ AddSubscriptionRelState(subid, relid, table_state,\n+ InvalidXLogRecPtr, true);\n+ }\n+\n+ /* Add the sequences in init state */\n+ sequences = fetch_sequence_list(wrconn, publications);\n+ foreach_ptr(RangeVar, rv, sequences)\n\nThese 2 loops (first for tables and then for sequences) seem to be\nexecuting the same code. If you wanted you could combine the lists\nup-front, and then have one code loop instead of 2. It would mean less\ncode. OTOH, maybe the current code is more readable? I am not sure\nwhat is best, so just bringing this to your attention.\n\n~~~\n\nAlterSubscription_refresh:\nnitpick = typo /indicating tha/indicating that/\n\n~~~\n\n3. 
fetch_sequence_list\n\n+ appendStringInfoString(&cmd, \"SELECT DISTINCT n.nspname, c.relname,\ns.seqtypid, s.seqmin, s.seqmax, s.seqstart, s.seqincrement,\ns.seqcycle\"\n+ \" FROM pg_publication p, LATERAL\npg_get_publication_sequences(p.pubname::text) gps(relid),\"\n+ \" pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace JOIN\npg_sequence s ON c.oid = s.seqrelid\"\n+ \" WHERE c.oid = gps.relid AND p.pubname IN (\");\n+ get_publications_str(publications, &cmd, true);\n+ appendStringInfoChar(&cmd, ')');\n\nPlease wrap this better to make the SQL more readable.\n\n~~\n\n4.\n+ if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||\n+ seqform->seqmax != seqmax || seqform->seqstart != seqstart ||\n+ seqform->seqincrement != seqincrement ||\n+ seqform->seqcycle != seqcycle)\n+ ereport(WARNING,\n+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"Sequence option in remote and local is not same for \\\"%s.%s\\\"\",\n+ get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid)),\n+ errhint(\"Alter/Re-create the sequence using the same options as in remote.\"));\n\n4a.\nAre these really known as \"options\"? Or should they be called\n\"sequence parameters\", or something else, like \"sequence attributes\"?\n\n4a.\nIs there a way to give more helpful information by identifying what\nwas different in the log? OTOH, maybe it would become too messy if\nthere were multiple differences...\n\n======\nsrc/backend/replication/logical/launcher.c\n\n5. logicalrep_sync_worker_count\n\n- if (isTablesyncWorker(w) && w->subid == subid)\n+ if ((isTableSyncWorker(w) || isSequenceSyncWorker(w)) &&\n+ w->subid == subid)\n\nYou could micro-optimize this -- it may be more efficient to write the\ncondition the other way around.\n\nSUGGESTION\nif (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))\n\n======\n.../replication/logical/sequencesync.c\n\nFile header comment:\nnitpick - there seems a large cut/paste mistake (the first 2\nparagraphs are almost the same).\nnitpick - reworded with the help of Chat-GPT for slightly better\nclarity. Also fixed a couple of typos.\nnitpick - it mentioned MAX_SEQUENCES_SYNC_PER_BATCH several times so I\nchanged the wording of one of them\n\n~~~\n\nfetch_remote_sequence_data:\nnitpick - all other params have the same name as sequence members, so\nchange the parameter name /lsn/page_lsn/\n\n~\n\ncopy_sequence:\nnitpick - rename var /seq_lsn/seq_page_lsn/\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n6. process_syncing_sequences_for_apply\n\n+ * If a sequencesync worker is running already, there is no need to start a new\n+ * one; the existing sequencesync worker will synchronize all the sequences. If\n+ * there are still any sequences to be synced after the sequencesync worker\n+ * exited, then a new sequencesync worker can be started in the next iteration.\n+ * To prevent starting the sequencesync worker at a high frequency after a\n+ * failure, we store its last failure time. We start the sync worker for the\n+ * same relation after waiting at least wal_retrieve_retry_interval.\n\nWhy is it talking about \"We start the sync worker for the same\nrelation ...\". The sequencesync_failuretime is per sync worker, not\nper relation. 
And, I don't see any 'same relation' check in the code.\n\n======\nsrc/include/catalog/pg_subscription_rel.h\n\nGetSubscriptionRelations:\nnitpick - changed parameter name /all_relations/all_states/\n\n======\nsrc/test/subscription/t/034_sequences.pl\n\nnitpick - add some ########## comments to highlight the main test\nparts to make it easier to read.\nnitpick - fix typo /syned/synced/\n\n7. More test cases?\nIIUC you can also get a sequence mismatch warning during \"ALTER ...\nREFRESH PUBLICATION\", and \"CREATE SUBSCRIPTION\". So, should those be\ntested also?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 6 Aug 2024 19:08:00 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, 5 Aug 2024 at 18:05, shveta malik <[email protected]> wrote:\n>\n> On Mon, Aug 5, 2024 at 11:04 AM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 31 Jul 2024 at 14:39, shveta malik <[email protected]> wrote:\n> > >\n> > > On Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n> > > > >\n> > > > >\n> > > > >\n> > > > > On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n> > > > >>\n> > > > >> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n> > > > >> [...]\n> > > > >> A new catalog table, pg_subscription_seq, has been introduced for\n> > > > >> mapping subscriptions to sequences. Additionally, the sequence LSN\n> > > > >> (Log Sequence Number) is stored, facilitating determination of\n> > > > >> sequence changes occurring before or after the returned sequence\n> > > > >> state.\n> > > > >\n> > > > >\n> > > > > Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n> > > > > something.\n> > > >\n> > > > We'll require the lsn because the sequence LSN informs the user that\n> > > > it has been synchronized up to the LSN in pg_subscription_seq. Since\n> > > > we are not supporting incremental sync, the user will be able to\n> > > > identify if he should run refresh sequences or not by checking the lsn\n> > > > of the pg_subscription_seq and the lsn of the sequence(using\n> > > > pg_sequence_state added) in the publisher.\n> > >\n> > > How the user will know from seq's lsn that he needs to run refresh.\n> > > lsn indicates page_lsn and thus the sequence might advance on pub\n> > > without changing lsn and thus lsn may look the same on subscriber even\n> > > though a sequence-refresh is needed. Am I missing something here?\n> >\n> > When a sequence is synchronized to the subscriber, the page LSN of the\n> > sequence from the publisher is also retrieved and stored in\n> > pg_subscriber_rel as shown below:\n> > --- Publisher page lsn\n> > publisher=# select pg_sequence_state('seq1');\n> > pg_sequence_state\n> > --------------------\n> > (0/1510E38,65,1,t)\n> > (1 row)\n> >\n> > --- Subscriber stores the publisher's page lsn for the sequence\n> > subscriber=# select * from pg_subscription_rel where srrelid = 16384;\n> > srsubid | srrelid | srsubstate | srsublsn\n> > ---------+---------+------------+-----------\n> > 16389 | 16384 | r | 0/1510E38\n> > (1 row)\n> >\n> > If changes are made to the sequence, such as performing many nextvals,\n> > the page LSN will be updated. 
Currently the sequence values are\n> > prefetched for SEQ_LOG_VALS 32, so the lsn will not get updated for\n> > the prefetched values, once the prefetched values are consumed the lsn\n> > will get updated.\n> > For example:\n> > --- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)\n> > publisher=# select pg_sequence_state('seq1');\n> > pg_sequence_state\n> > ----------------------\n> > (0/1558CA8,143,22,t)\n> > (1 row)\n> >\n> > The user can then compare this updated value with the sequence's LSN\n> > in pg_subscription_rel to determine when to re-synchronize the\n> > sequence.\n>\n> Thanks for the details. But I was referring to the case where we are\n> in between pre-fetched values on publisher (say at 25th value), while\n> on subscriber we are slightly behind (say at 15th value), but page-lsn\n> will be the same on both. Since the subscriber is behind, a\n> sequence-refresh is needed on sub, but by looking at lsn (which is\n> same), one can not say that for sure. Let me know if I have\n> misunderstood it.\n\nYes, at present, if the value is within the pre-fetched range, we\ncannot distinguish it solely using the page_lsn. However, the\npg_sequence_state function also provides last_value and log_cnt, which\ncan be used to handle these specific cases.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 6 Aug 2024 17:12:27 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Aug 6, 2024 at 5:13 PM vignesh C <[email protected]> wrote:\n>\n> On Mon, 5 Aug 2024 at 18:05, shveta malik <[email protected]> wrote:\n> >\n> > On Mon, Aug 5, 2024 at 11:04 AM vignesh C <[email protected]> wrote:\n> > >\n> > > On Wed, 31 Jul 2024 at 14:39, shveta malik <[email protected]> wrote:\n> > > >\n> > > > On Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:\n> > > > >\n> > > > > On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n> > > > > >\n> > > > > >\n> > > > > >\n> > > > > > On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n> > > > > >>\n> > > > > >> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n> > > > > >> [...]\n> > > > > >> A new catalog table, pg_subscription_seq, has been introduced for\n> > > > > >> mapping subscriptions to sequences. Additionally, the sequence LSN\n> > > > > >> (Log Sequence Number) is stored, facilitating determination of\n> > > > > >> sequence changes occurring before or after the returned sequence\n> > > > > >> state.\n> > > > > >\n> > > > > >\n> > > > > > Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n> > > > > > something.\n> > > > >\n> > > > > We'll require the lsn because the sequence LSN informs the user that\n> > > > > it has been synchronized up to the LSN in pg_subscription_seq. Since\n> > > > > we are not supporting incremental sync, the user will be able to\n> > > > > identify if he should run refresh sequences or not by checking the lsn\n> > > > > of the pg_subscription_seq and the lsn of the sequence(using\n> > > > > pg_sequence_state added) in the publisher.\n> > > >\n> > > > How the user will know from seq's lsn that he needs to run refresh.\n> > > > lsn indicates page_lsn and thus the sequence might advance on pub\n> > > > without changing lsn and thus lsn may look the same on subscriber even\n> > > > though a sequence-refresh is needed. 
Am I missing something here?\n> > >\n> > > When a sequence is synchronized to the subscriber, the page LSN of the\n> > > sequence from the publisher is also retrieved and stored in\n> > > pg_subscriber_rel as shown below:\n> > > --- Publisher page lsn\n> > > publisher=# select pg_sequence_state('seq1');\n> > > pg_sequence_state\n> > > --------------------\n> > > (0/1510E38,65,1,t)\n> > > (1 row)\n> > >\n> > > --- Subscriber stores the publisher's page lsn for the sequence\n> > > subscriber=# select * from pg_subscription_rel where srrelid = 16384;\n> > > srsubid | srrelid | srsubstate | srsublsn\n> > > ---------+---------+------------+-----------\n> > > 16389 | 16384 | r | 0/1510E38\n> > > (1 row)\n> > >\n> > > If changes are made to the sequence, such as performing many nextvals,\n> > > the page LSN will be updated. Currently the sequence values are\n> > > prefetched for SEQ_LOG_VALS 32, so the lsn will not get updated for\n> > > the prefetched values, once the prefetched values are consumed the lsn\n> > > will get updated.\n> > > For example:\n> > > --- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)\n> > > publisher=# select pg_sequence_state('seq1');\n> > > pg_sequence_state\n> > > ----------------------\n> > > (0/1558CA8,143,22,t)\n> > > (1 row)\n> > >\n> > > The user can then compare this updated value with the sequence's LSN\n> > > in pg_subscription_rel to determine when to re-synchronize the\n> > > sequence.\n> >\n> > Thanks for the details. But I was referring to the case where we are\n> > in between pre-fetched values on publisher (say at 25th value), while\n> > on subscriber we are slightly behind (say at 15th value), but page-lsn\n> > will be the same on both. Since the subscriber is behind, a\n> > sequence-refresh is needed on sub, but by looking at lsn (which is\n> > same), one can not say that for sure. Let me know if I have\n> > misunderstood it.\n>\n> Yes, at present, if the value is within the pre-fetched range, we\n> cannot distinguish it solely using the page_lsn.\n>\n\nThis makes sense to me.\n\n>\n> However, the\n> pg_sequence_state function also provides last_value and log_cnt, which\n> can be used to handle these specific cases.\n>\n\nBTW, can we document all these steps for users to know when to refresh\nthe sequences, if not already documented?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Aug 2024 08:09:09 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh,\n\nThis is mostly a repeat of my previous mail from a while ago [1] but\nincludes some corrections, answers, and more examples. I'm going to\ntry to persuade one last time because the current patch is becoming\nstable, so I wanted to revisit this syntax proposal before it gets too\nlate to change anything.\n\nIf there is some problem with the proposed idea please let me know\nbecause I can see only the advantages and no disadvantages of doing it\nthis way.\n\n~~~\n\nThe current patchset offers two forms of subscription refresh:\n1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option\n[= value] [, ... ] ) ]\n2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES\n\nSince 'copy_data' is the only supported refresh_option, really it is more like:\n1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( copy_data [=\ntrue|false] ) ]\n2. 
ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES\n\n~~~\n\nI proposed previously that instead of having 2 commands for refreshing\nsubscriptions we should have a single refresh command:\n\nALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES|SEQUENCES] [ WITH\n( copy_data [= true|false] ) ]\n\nWhy?\n\n- IMO it is less confusing than having 2 commands that both refresh\nsequences in slightly different ways\n\n- It is more flexible because apart from refreshing everything, a user\ncan choose to refresh only tables or only sequences if desired; IMO\nmore flexibility is always good.\n\n- There is no loss of functionality from the current implementation\nAFAICT. You can still say \"ALTER SUBSCRIPTION sub REFRESH PUBLICATION\nSEQUENCES\" exactly the same as the patchset allows.\n\n- The implementation code will become simpler. For example, the\ncurrent implementation of AlterSubscription_refresh(...) includes the\n(hacky?) 'resync_all_sequences' parameter and has an overcomplicated\nrelationship with other parameters as demonstrated by the assertions\nbelow. IMO using the prosed syntax means this coding will become not\nonly simpler, but shorter too.\n+ /* resync_all_sequences cannot be specified with refresh_tables */\n+ Assert(!(resync_all_sequences && refresh_tables));\n+\n+ /* resync_all_sequences cannot be specified with copy_data as false */\n+ Assert(!(resync_all_sequences && !copy_data));\n\n~~~\n\nSo, to continue this proposal, let the meaning of 'copy_data' for\nSEQUENCES be as follows:\n\n- when copy_data == false: it means don't copy data (i.e. don't\nsynchronize anything). Add/remove sequences from pg_subscriber_rel as\nneeded.\n\n- when copy_data == true: it means to copy data (i.e. synchronize) for\nall sequences. Add/remove sequences from pg_subscriber_rel as needed)\n\n\n~~~\n\nEXAMPLES using the proposed syntax:\n\nRefreshing TABLES only...\n\nex1.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = false)\n- same as PG17 functionality for \"ALTER SUBSCRIPTION sub REFRESH\nPUBLICATION WITH (copy_data = false)\"\n\nex2.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = true)\n- same as PG17 functionality for \"ALTER SUBSCRIPTION sub REFRESH\nPUBLICATION WITH (copy_data = true)\"\n\nex3. (using default copy_data)\nALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES\n- same as ex2.\n\n~\n\nRefreshing SEQUENCES only...\n\nex4.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy data = false)\n- this adds/removes only sequences to pg_subscription_rel but doesn't\nupdate the sequence values\n\nex5.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy data = true)\n- this adds/removes only sequences to pg_subscription_rel and also\nupdates (synchronizes) all sequence values.\n- same functionality as \"ALTER SUBSCRIPTION sub REFRESH PUBLICATION\nSEQUENCES\" in your current patchset\n\nex6. 
(using default copy_data)\nALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES\n- same as ex5.\n- note, that this command has the same syntax and functionality as the\ncurrent patchset\n\n~~~\n\nWhen no object_type is specified it has intuitive meaning to refresh\nboth TABLES and SEQUENCES...\n\nex7.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = false)\n- For tables, it is the same as the PG17 functionality\n- For sequences it includes the same behaviour of ex4.\n\nex8.\nALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = true)\n- For tables, it is the same as the PG17 functionality\n- For sequences it includes the same behaviour of ex5.\n- There is one subtle difference from the current patchset because\nthis proposal will synchronize *all* sequences instead of only new\nones. But, this is a good thing. The current documentation is\ncomplicated by having to explain the differences between REFRESH\nPUBLICATION and REFRESH PUBLICATION SEQUENCES. The current patchset\nalso raises questions like how the user chooses whether to use\n\"REFRESH PUBLICATION SEQUENCES\" versus \"REFRESH PUBLICATION WITH\n(copy_data=true)\". OTHO, the proposed syntax eliminates ambiguity.\n\nex9. (using default copy_data)\nALTER SUBSCRIPTION sub REFRESH PUBLICATION\n- same as ex8\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPuFH1OCj-P1UKoRQE2X4-0zMG%2BN1V7jdn%3DtOQV4RNbAbw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 7 Aug 2024 14:42:00 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, Aug 5, 2024 at 10:26 AM vignesh C <[email protected]> wrote:\n>\n> On Thu, 1 Aug 2024 at 04:25, Peter Smith <[email protected]> wrote:\n> >\n> > Hi Vignesh,\n> >\n> > I noticed that when replicating sequences (using the latest patches\n> > 0730_2*) the subscriber-side checks the *existence* of the sequence,\n> > but apparently it is not checking other sequence attributes.\n> >\n> > For example, consider:\n> >\n> > Publisher: \"CREATE SEQUENCE s1 START 1 INCREMENT 2;\" should be a\n> > sequence of only odd numbers.\n> > Subscriber: \"CREATE SEQUENCE s1 START 2 INCREMENT 2;\" should be a\n> > sequence of only even numbers.\n> >\n> > Because the names match, currently the patch allows replication of the\n> > s1 sequence. I think that might lead to unexpected results on the\n> > subscriber. IMO it might be safer to report ERROR unless the sequences\n> > match properly (i.e. 
not just a name check).\n> >\n> > Below is a demonstration the problem:\n> >\n> > ==========\n> > Publisher:\n> > ==========\n> >\n> > (publisher sequence is odd numbers)\n> >\n> > test_pub=# create sequence s1 start 1 increment 2;\n> > CREATE SEQUENCE\n> > test_pub=# select * from nextval('s1');\n> > nextval\n> > ---------\n> > 1\n> > (1 row)\n> >\n> > test_pub=# select * from nextval('s1');\n> > nextval\n> > ---------\n> > 3\n> > (1 row)\n> >\n> > test_pub=# select * from nextval('s1');\n> > nextval\n> > ---------\n> > 5\n> > (1 row)\n> >\n> > test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;\n> > CREATE PUBLICATION\n> > test_pub=#\n> >\n> > ==========\n> > Subscriber:\n> > ==========\n> >\n> > (subscriber sequence is even numbers)\n> >\n> > test_sub=# create sequence s1 start 2 increment 2;\n> > CREATE SEQUENCE\n> > test_sub=# SELECT * FROM nextval('s1');\n> > nextval\n> > ---------\n> > 2\n> > (1 row)\n> >\n> > test_sub=# SELECT * FROM nextval('s1');\n> > nextval\n> > ---------\n> > 4\n> > (1 row)\n> >\n> > test_sub=# SELECT * FROM nextval('s1');\n> > nextval\n> > ---------\n> > 6\n> > (1 row)\n> >\n> > test_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'\n> > PUBLICATION pub1;\n> > 2024-08-01 08:43:04.198 AEST [24325] WARNING: subscriptions created\n> > by regression test cases should have names starting with \"regress_\"\n> > WARNING: subscriptions created by regression test cases should have\n> > names starting with \"regress_\"\n> > NOTICE: created replication slot \"sub1\" on publisher\n> > CREATE SUBSCRIPTION\n> > test_sub=# 2024-08-01 08:43:04.294 AEST [26240] LOG: logical\n> > replication apply worker for subscription \"sub1\" has started\n> > 2024-08-01 08:43:04.309 AEST [26244] LOG: logical replication\n> > sequence synchronization worker for subscription \"sub1\" has started\n> > 2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication\n> > synchronization for subscription \"sub1\", sequence \"s1\" has finished\n> > 2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication\n> > sequence synchronization worker for subscription \"sub1\" has finished\n> >\n> > (after the CREATE SUBSCRIPTION we are getting replicated odd values\n> > from the publisher, even though the subscriber side sequence was\n> > supposed to be even numbers)\n> >\n> > test_sub=# SELECT * FROM nextval('s1');\n> > nextval\n> > ---------\n> > 7\n> > (1 row)\n> >\n> > test_sub=# SELECT * FROM nextval('s1');\n> > nextval\n> > ---------\n> > 9\n> > (1 row)\n> >\n> > test_sub=# SELECT * FROM nextval('s1');\n> > nextval\n> > ---------\n> > 11\n> > (1 row)\n> >\n> > (Looking at the description you would expect odd values for this\n> > sequence to be impossible)\n\nI see that for such even sequences, user can still do 'setval' to a\nodd number and then nextval will keep on returning odd value.\n\npostgres=# SELECT nextval('s1');\n 6\n\npostgres=SELECT setval('s1', 43);\n 43\n\npostgres=# SELECT nextval('s1');\n 45\n\n> > test_sub=# \\dS+ s1\n> > Sequence \"public.s1\"\n> > Type | Start | Minimum | Maximum | Increment | Cycles? | Cache\n> > --------+-------+---------+---------------------+-----------+---------+-------\n> > bigint | 2 | 1 | 9223372036854775807 | 2 | no | 1\n>\n> Even if we check the sequence definition during the CREATE\n> SUBSCRIPTION/ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES commands, there's still\n> a chance that the sequence definition might change after the command\n> has been executed. 
Currently, there's no mechanism to lock a sequence,\n> and we also permit replication of table data even if the table\n> structures differ, such as mismatched data types like int and\n> smallint. I have modified it to log a warning to inform users that the\n> sequence options on the publisher and subscriber are not the same and\n> advise them to ensure that the sequence definitions are consistent\n> between both.\n> The v20240805 version patch attached at [1] has the changes for the same.\n> [1] - https://www.postgresql.org/message-id/CALDaNm1Y_ot-jFRfmtwDuwmFrgSSYHjVuy28RspSopTtwzXy8w%40mail.gmail.com\n\nThe behavior for applying is no different from setval. Having said\nthat, I agree that sequence definition can change even after the\nsubscription creation, but earlier we were not syncing sequences and\nthus the value of a particular sequence was going to remain in the\nrange/pattern defined by its attributes unless user sets it manually\nusing setval. But now, it is being changed in the background without\nuser's knowledge.\nThe table case is different. In case of table replication, if we have\nCHECK constraint or say primary-key etc, then the value which violates\nthese constraints will never be inserted to a table even during\nreplication on sub. For sequences, parameters (MIN,MAX, START,\nINCREMENT) can be considered similar to check-constraints, the only\ndifference is during apply, we are still overriding these and copying\npub's value. May be such inconsistencies detection can be targeted\nlater in next project. But for the time being, it will be good to add\na 'caveat' section in doc mentioning all such cases. The scope of this\nproject should be clearly documented.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 7 Aug 2024 10:27:09 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 6 Aug 2024 at 14:38, Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for the patch v20240805_2-0003.\n>\n> 4a.\n> Is there a way to give more helpful information by identifying what\n> was different in the log? OTOH, maybe it would become too messy if\n> there were multiple differences...\n\nI had considered this while implementing and did not implement it to\nprint each of the parameters because it will be too messy. I felt\nexisting is better.\n\n>\n> 7. More test cases?\n> IIUC you can also get a sequence mismatch warning during \"ALTER ...\n> REFRESH PUBLICATION\", and \"CREATE SUBSCRIPTION\". So, should those be\n> tested also?\n\nSince it won't add any extra coverage, I feel no need to add this test.\n\nThe remaining comments have been addressed, and the changes are\nincluded in the attached v20240807 version patch.\n\nRegards,\nVignesh", "msg_date": "Wed, 7 Aug 2024 13:45:39 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 6 Aug 2024 at 09:28, shveta malik <[email protected]> wrote:\n>\n> On Tue, Aug 6, 2024 at 8:49 AM shveta malik <[email protected]> wrote:\n> >\n>\n> Do we need some kind of coordination between table sync and sequence\n> sync for internally generated sequences? Lets say we have an identity\n> column with a 'GENERATED ALWAYS' sequence. 
When the sequence is synced\n> to subscriber, subscriber can also do an insert to table (extra one)\n> incrementing the sequence and then when publisher performs an insert,\n> apply worker will blindly copy that row to sub's table making identity\n> column's duplicate entries.\n>\n> CREATE TABLE color ( color_id INT GENERATED ALWAYS AS\n> IDENTITY,color_name VARCHAR NOT NULL);\n>\n> Pub: insert into color(color_name) values('red');\n>\n> Sub: perform sequence refresh and check 'r' state is reached, then do insert:\n> insert into color(color_name) values('yellow');\n>\n> Pub: insert into color(color_name) values('blue');\n>\n> After above, data on Pub: (1, 'red') ;(2, 'blue'),\n>\n> After above, data on Sub: (1, 'red') ;(2, 'yellow'); (2, 'blue'),\n>\n> Identity column has duplicate values. Should the apply worker error\n> out while inserting such a row to the table? Or it is not in the\n> scope of this project?\n\nThis behavior is documented at [1]:\nSequence data is not replicated. The data in serial or identity\ncolumns backed by sequences will of course be replicated as part of\nthe table, but the sequence itself would still show the start value on\nthe subscriber.\n\nThis behavior is because of the above logical replication restriction.\nSo the behavior looks ok to me and I feel this is not part of the\nscope of this project. I have updated this documentation section here\nto mention sequences can be updated using ALTER SEQUENCE ... REFRESH\nPUBLICATION SEQUENCES at v20240807 version patch attached at [2].\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-restrictions.html\n[2] - https://www.postgresql.org/message-id/CALDaNm01Z6Oo9osGMFTOoyTR1kVoyh1rEvZ%2B6uJn-ZymV%3D0dbQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 7 Aug 2024 13:55:36 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, 5 Aug 2024 at 17:28, Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Aug 5, 2024 at 2:36 PM vignesh C <[email protected]> wrote:\n> >\n> > On Fri, 2 Aug 2024 at 14:24, shveta malik <[email protected]> wrote:\n> > >\n> > > On Thu, Aug 1, 2024 at 9:26 AM shveta malik <[email protected]> wrote:\n> > > >\n> > > > On Mon, Jul 29, 2024 at 4:17 PM vignesh C <[email protected]> wrote:\n> > > > >\n> > > > > Thanks for reporting this, these issues are fixed in the attached\n> > > > > v20240730_2 version patch.\n> > > > >\n> > >\n> > > I was reviewing the design of patch003, and I have a query. Do we need\n> > > to even start an apply worker and create replication slot when\n> > > subscription created is for 'sequences only'? IIUC, currently logical\n> > > replication apply worker is the one launching sequence-sync worker\n> > > whenever needed. I think it should be the launcher doing this job and\n> > > thus apply worker may even not be needed for current functionality of\n> > > sequence sync?\n> >\n>\n> But that would lead to maintaining all sequence-sync of each\n> subscription by launcher. Say there are 100 sequences per subscription\n> and some of them from each subscription are failing due to some\n> reasons then the launcher will be responsible for ensuring all the\n> sequences are synced. 
I think it would be better to handle\n> per-subscription work by the apply worker.\n>\n> >\n> > Going forward when we implement incremental sync of\n> > > sequences, then we may have apply worker started but now it is not\n> > > needed.\n> >\n> > I believe the current method of having the apply worker initiate the\n> > sequence sync worker is advantageous for several reasons:\n> > a) Reduces Launcher Load: This approach prevents overloading the\n> > launcher, which must handle various other subscription requests.\n> > b) Facilitates Incremental Sync: It provides a more straightforward\n> > path to extend support for incremental sequence synchronization.\n> > c) Reuses Existing Code: It leverages the existing tablesync worker\n> > code for starting the tablesync process, avoiding the need to\n> > duplicate code in the launcher.\n> > d) Simplified Code Maintenance: Centralizing sequence synchronization\n> > logic within the apply worker can simplify code maintenance and\n> > updates, as changes will only need to be made in one place rather than\n> > across multiple components.\n> > e) Better Monitoring and Debugging: With sequence synchronization\n> > being handled by the apply worker, you can more effectively monitor\n> > and debug synchronization processes since all related operations are\n> > managed by a single component.\n> >\n> > Also, I noticed that even when a publication has no tables, we create\n> > replication slot and start apply worker.\n> >\n>\n> As far as I understand slots and origins are primarily required for\n> incremental sync. Would it be used only for sequence-sync cases? If\n> not then we can avoid creating those. I agree that it would add some\n> complexity to the code with sequence-specific checks, so we can create\n> a top-up patch for this if required and evaluate its complexity versus\n> the benefit it produces.\n\nI have added a XXX todo comments in the v20240807 version patch\nattached at [1]. I will handle this as a separate patch once the\ncurrent patch is stable.\n[1] - https://www.postgresql.org/message-id/CALDaNm01Z6Oo9osGMFTOoyTR1kVoyh1rEvZ%2B6uJn-ZymV%3D0dbQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 7 Aug 2024 13:59:03 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 7 Aug 2024 at 08:09, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Aug 6, 2024 at 5:13 PM vignesh C <[email protected]> wrote:\n> >\n> > On Mon, 5 Aug 2024 at 18:05, shveta malik <[email protected]> wrote:\n> > >\n> > > On Mon, Aug 5, 2024 at 11:04 AM vignesh C <[email protected]> wrote:\n> > > >\n> > > > On Wed, 31 Jul 2024 at 14:39, shveta malik <[email protected]> wrote:\n> > > > >\n> > > > > On Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:\n> > > > > >\n> > > > > > On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n> > > > > > >\n> > > > > > >\n> > > > > > >\n> > > > > > > On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n> > > > > > >>\n> > > > > > >> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n> > > > > > >> [...]\n> > > > > > >> A new catalog table, pg_subscription_seq, has been introduced for\n> > > > > > >> mapping subscriptions to sequences. 
Additionally, the sequence LSN\n> > > > > > >> (Log Sequence Number) is stored, facilitating determination of\n> > > > > > >> sequence changes occurring before or after the returned sequence\n> > > > > > >> state.\n> > > > > > >\n> > > > > > >\n> > > > > > > Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n> > > > > > > something.\n> > > > > >\n> > > > > > We'll require the lsn because the sequence LSN informs the user that\n> > > > > > it has been synchronized up to the LSN in pg_subscription_seq. Since\n> > > > > > we are not supporting incremental sync, the user will be able to\n> > > > > > identify if he should run refresh sequences or not by checking the lsn\n> > > > > > of the pg_subscription_seq and the lsn of the sequence(using\n> > > > > > pg_sequence_state added) in the publisher.\n> > > > >\n> > > > > How the user will know from seq's lsn that he needs to run refresh.\n> > > > > lsn indicates page_lsn and thus the sequence might advance on pub\n> > > > > without changing lsn and thus lsn may look the same on subscriber even\n> > > > > though a sequence-refresh is needed. Am I missing something here?\n> > > >\n> > > > When a sequence is synchronized to the subscriber, the page LSN of the\n> > > > sequence from the publisher is also retrieved and stored in\n> > > > pg_subscriber_rel as shown below:\n> > > > --- Publisher page lsn\n> > > > publisher=# select pg_sequence_state('seq1');\n> > > > pg_sequence_state\n> > > > --------------------\n> > > > (0/1510E38,65,1,t)\n> > > > (1 row)\n> > > >\n> > > > --- Subscriber stores the publisher's page lsn for the sequence\n> > > > subscriber=# select * from pg_subscription_rel where srrelid = 16384;\n> > > > srsubid | srrelid | srsubstate | srsublsn\n> > > > ---------+---------+------------+-----------\n> > > > 16389 | 16384 | r | 0/1510E38\n> > > > (1 row)\n> > > >\n> > > > If changes are made to the sequence, such as performing many nextvals,\n> > > > the page LSN will be updated. Currently the sequence values are\n> > > > prefetched for SEQ_LOG_VALS 32, so the lsn will not get updated for\n> > > > the prefetched values, once the prefetched values are consumed the lsn\n> > > > will get updated.\n> > > > For example:\n> > > > --- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)\n> > > > publisher=# select pg_sequence_state('seq1');\n> > > > pg_sequence_state\n> > > > ----------------------\n> > > > (0/1558CA8,143,22,t)\n> > > > (1 row)\n> > > >\n> > > > The user can then compare this updated value with the sequence's LSN\n> > > > in pg_subscription_rel to determine when to re-synchronize the\n> > > > sequence.\n> > >\n> > > Thanks for the details. But I was referring to the case where we are\n> > > in between pre-fetched values on publisher (say at 25th value), while\n> > > on subscriber we are slightly behind (say at 15th value), but page-lsn\n> > > will be the same on both. Since the subscriber is behind, a\n> > > sequence-refresh is needed on sub, but by looking at lsn (which is\n> > > same), one can not say that for sure. 
Let me know if I have\n> > > misunderstood it.\n> >\n> > Yes, at present, if the value is within the pre-fetched range, we\n> > cannot distinguish it solely using the page_lsn.\n> >\n>\n> This makes sense to me.\n>\n> >\n> > However, the\n> > pg_sequence_state function also provides last_value and log_cnt, which\n> > can be used to handle these specific cases.\n> >\n>\n> BTW, can we document all these steps for users to know when to refresh\n> the sequences, if not already documented?\n\nThis has been documented in the v20240807 version attached at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm01Z6Oo9osGMFTOoyTR1kVoyh1rEvZ%2B6uJn-ZymV%3D0dbQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 7 Aug 2024 14:00:34 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh, Here are my v20240807-0003 review comments.\n\n======\n1. GENERAL DOCS.\n\nIMO the replication of SEQUENCES is a big enough topic that it\ndeserves to have its own section in the docs chapter 31 [1].\n\nSome of the create/alter subscription docs content would stay where it\nis in, but a new chapter would just tie everything together better. It\ncould also serve as a better place to describe the other sequence\nreplication content like:\n(a) getting a WARNING for mismatched sequences and how to handle it.\n(b) how can the user know when a subscription refresh is required to\n(re-)synchronise sequences\n(c) pub/sub examples\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n2. Restrictions\n\nSequence data is not replicated. The data in serial or identity\ncolumns backed by sequences will of course be replicated as part of\nthe table, but the sequence itself would still show the start value on\nthe subscriber. If the subscriber is used as a read-only database,\nthen this should typically not be a problem. If, however, some kind of\nswitchover or failover to the subscriber database is intended, then\nthe sequences would need to be updated to the latest values, either by\nexecuting ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES or by\ncopying the current data from the publisher (perhaps using pg_dump) or\nby determining a sufficiently high value from the tables themselves.\n\n~\n\n2a.\nThe paragraph starts by saying \"Sequence data is not replicated.\". It\nseems wrong now. Doesn't that need rewording or removing?\n\n~\n\n2b.\nShould the info \"If, however, some kind of switchover or failover...\"\nbe mentioned in the \"Logical Replication Failover\" section [2],\ninstead of here?\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n3.\nSequence values may occasionally become out of sync due to updates in\nthe publisher. To verify this, compare the\npg_subscription_rel.srsublsn on the subscriber with the page_lsn\nobtained from the pg_sequence_state for the sequence on the publisher.\nIf the sequence is still using prefetched values, the page_lsn will\nnot be updated. In such cases, you will need to directly compare the\nsequences and execute REFRESH PUBLICATION SEQUENCES if required.\n\n~\n\n3a.\nThis whole paragraph may be better put in the new chapter that was\nsuggested earlier in review comment #1.\n\n~\n\n3b.\nIs it only \"Occasionally\"? I expected subscriber-side sequences could\nbecome stale quite often.\n\n~\n\n3c.\nIs this advice very useful? It's saying if the LSN is different then\nthe sequence is out of date, but if the LSN is not different then you\ncannot tell. 
Why not ignore LSN altogether and just advise the user to\ndirectly compare the sequences in the first place?\n\n======\n\nAlso, there are more minor suggestions in the attached nitpicks diff.\n\n======\n[1] https://www.postgresql.org/docs/current/logical-replication.html\n[2] file:///usr/local/pg_oss/share/doc/postgresql/html/logical-replication-failover.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 8 Aug 2024 13:00:28 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Aug 7, 2024 at 10:12 AM Peter Smith <[email protected]> wrote:\n>\n> This is mostly a repeat of my previous mail from a while ago [1] but\n> includes some corrections, answers, and more examples. I'm going to\n> try to persuade one last time because the current patch is becoming\n> stable, so I wanted to revisit this syntax proposal before it gets too\n> late to change anything.\n>\n> If there is some problem with the proposed idea please let me know\n> because I can see only the advantages and no disadvantages of doing it\n> this way.\n>\n> ~~~\n>\n> The current patchset offers two forms of subscription refresh:\n> 1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option\n> [= value] [, ... ] ) ]\n> 2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES\n>\n> Since 'copy_data' is the only supported refresh_option, really it is more like:\n> 1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( copy_data [=\n> true|false] ) ]\n> 2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES\n>\n> ~~~\n>\n> I proposed previously that instead of having 2 commands for refreshing\n> subscriptions we should have a single refresh command:\n>\n> ALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES|SEQUENCES] [ WITH\n> ( copy_data [= true|false] ) ]\n>\n> Why?\n>\n> - IMO it is less confusing than having 2 commands that both refresh\n> sequences in slightly different ways\n>\n> - It is more flexible because apart from refreshing everything, a user\n> can choose to refresh only tables or only sequences if desired; IMO\n> more flexibility is always good.\n>\n> - There is no loss of functionality from the current implementation\n> AFAICT. You can still say \"ALTER SUBSCRIPTION sub REFRESH PUBLICATION\n> SEQUENCES\" exactly the same as the patchset allows.\n>\n> ~~~\n>\n> So, to continue this proposal, let the meaning of 'copy_data' for\n> SEQUENCES be as follows:\n>\n> - when copy_data == false: it means don't copy data (i.e. don't\n> synchronize anything). Add/remove sequences from pg_subscriber_rel as\n> needed.\n>\n> - when copy_data == true: it means to copy data (i.e. synchronize) for\n> all sequences. Add/remove sequences from pg_subscriber_rel as needed)\n>\n\nI find overloading the copy_data option more confusing than adding a\nnew variant for REFRESH. To make it clear, we can even think of\nextending the command as ALTER SUBSCRIPTION name REFRESH PUBLICATION\nALL SEQUENCES or something like that. 
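For example (hypothetical syntax only, with sub1 as a placeholder\nsubscription name), alongside the form already in the patch set:\n\nALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;     -- as in the current patch\nALTER SUBSCRIPTION sub1 REFRESH PUBLICATION ALL SEQUENCES; -- possible clearer spelling\n\n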
I don't know where there is a\nneed or not but one can imagine extending it as ALTER SUBSCRIPTION\nname REFRESH PUBLICATION SEQUENCES [<seq_name_1>, <seq_name_2>, ..].\nThis will allow to selectively refresh the sequences.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 8 Aug 2024 09:25:04 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Aug 8, 2024 at 1:55 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Aug 7, 2024 at 10:12 AM Peter Smith <[email protected]> wrote:\n> >\n> > This is mostly a repeat of my previous mail from a while ago [1] but\n> > includes some corrections, answers, and more examples. I'm going to\n> > try to persuade one last time because the current patch is becoming\n> > stable, so I wanted to revisit this syntax proposal before it gets too\n> > late to change anything.\n> >\n> > If there is some problem with the proposed idea please let me know\n> > because I can see only the advantages and no disadvantages of doing it\n> > this way.\n> >\n> > ~~~\n> >\n> > The current patchset offers two forms of subscription refresh:\n> > 1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option\n> > [= value] [, ... ] ) ]\n> > 2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES\n> >\n> > Since 'copy_data' is the only supported refresh_option, really it is more like:\n> > 1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( copy_data [=\n> > true|false] ) ]\n> > 2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES\n> >\n> > ~~~\n> >\n> > I proposed previously that instead of having 2 commands for refreshing\n> > subscriptions we should have a single refresh command:\n> >\n> > ALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES|SEQUENCES] [ WITH\n> > ( copy_data [= true|false] ) ]\n> >\n> > Why?\n> >\n> > - IMO it is less confusing than having 2 commands that both refresh\n> > sequences in slightly different ways\n> >\n> > - It is more flexible because apart from refreshing everything, a user\n> > can choose to refresh only tables or only sequences if desired; IMO\n> > more flexibility is always good.\n> >\n> > - There is no loss of functionality from the current implementation\n> > AFAICT. You can still say \"ALTER SUBSCRIPTION sub REFRESH PUBLICATION\n> > SEQUENCES\" exactly the same as the patchset allows.\n> >\n> > ~~~\n> >\n> > So, to continue this proposal, let the meaning of 'copy_data' for\n> > SEQUENCES be as follows:\n> >\n> > - when copy_data == false: it means don't copy data (i.e. don't\n> > synchronize anything). Add/remove sequences from pg_subscriber_rel as\n> > needed.\n> >\n> > - when copy_data == true: it means to copy data (i.e. synchronize) for\n> > all sequences. Add/remove sequences from pg_subscriber_rel as needed)\n> >\n>\n> I find overloading the copy_data option more confusing than adding a\n> new variant for REFRESH. To make it clear, we can even think of\n> extending the command as ALTER SUBSCRIPTION name REFRESH PUBLICATION\n> ALL SEQUENCES or something like that. I don't know where there is a\n> need or not but one can imagine extending it as ALTER SUBSCRIPTION\n> name REFRESH PUBLICATION SEQUENCES [<seq_name_1>, <seq_name_2>, ..].\n> This will allow to selectively refresh the sequences.\n>\n\nBut, I haven't invented a new overloading for \"copy_data\" option\n(meaning \"synchronize\") for sequences. 
The current patchset already\ninterprets copy_data exactly this way.\n\nFor example, below are patch 0003 results:\n\nALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data=false)\n- this will add/remove new sequences in pg_subscription_rel, but it\nwill *not* synchronize the new sequence\n\nALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data=true)\n- this will add/remove new sequences in pg_subscription_rel, and it\n*will* synchronize the new sequence\n\n~\n\nI only proposed that copy_data should apply to *all* sequences, not\njust new ones.\n\n======\nKind Regards.\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Thu, 8 Aug 2024 15:38:48 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Aug 7, 2024 at 1:45 PM vignesh C <[email protected]> wrote:\n>\n>\n> The remaining comments have been addressed, and the changes are\n> included in the attached v20240807 version patch.\n\nThanks for addressing the comment. Please find few comments for v20240807 :\n\npatch002:\n1)\ncreate_publication.sgml:\n\n--I think it will be good to add another example for both tables and sequences:\nCREATE PUBLICATION all_sequences FOR ALL TABLES, SEQUENCES;\nI was trying FOR ALL TABLES, FOR ALL SEQUENCES; but I think it is not\nthe correct way, so good to have the correct way mentioned in one\nexample.\n\npatch003:\n2)\n\n * The page_lsn allows the user to determine if the sequence has been updated\n * since the last synchronization with the subscriber. This is done by\n * comparing the current page_lsn with the value stored in pg_subscription_rel\n * from the last synchronization.\n */\nDatum\npg_sequence_state(PG_FUNCTION_ARGS)\n\n--This information is still incomplete. Maybe we should mention the\nother attribute name as well which helps to determine this.\n\n3)\nShall process_syncing_sequences_for_apply() be moved to sequencesync.c\n\n4)\nWould it be better to give a single warning for all unequal sequences\n(comma separated list of sequenec names?)\n\npostgres=# create subscription sub1 connection '....' publication pub1;\nWARNING: Sequence parameter in remote and local is not same for \"public.myseq2\"\nHINT: Alter/Re-create the sequence using the same parameter as in remote.\nWARNING: Sequence parameter in remote and local is not same for \"public.myseq0\"\nHINT: Alter/Re-create the sequence using the same parameter as in remote.\nWARNING: Sequence parameter in remote and local is not same for \"public.myseq4\"\nHINT: Alter/Re-create the sequence using the same parameter as in remote.\n\n\n5)\nIIUC, sequencesync_failure_time is changed by multiple processes.\nSeq-sync worker sets it before exiting on failure, while apply worker\nresets it. Also, the applied worker reads it at a few places. Shall it\nbe accessed using LogicalRepWorkerLock?\n\n6)\nprocess_syncing_sequences_for_apply():\n\n--I feel MyLogicalRepWorker->sequencesync_failure_time should be reset\nto 0 after we are sure that logicalrep_worker_launch() has launched\nthe worker without any error. But not sure what could be the clean way\nto do it? If we move it after logicalrep_worker_launch() call, there\nare chances that seq-sync worker has started and failed already and\nhas set this failure time which will then be mistakenly reset by apply\nworker. 
Also moving it inside logicalrep_worker_launch() does not seem\na good way.\n\n7)\nsequencesync.c\nPostgreSQL logical replication: initial sequence synchronization\n\n--Since it is called by REFRESH also. So shall we remove 'initial'?\n\n8)\n/*\n* Process any tables that are being synchronized in parallel and\n* any newly added relations.\n*/\nprocess_syncing_relations(last_received);\n\n--I did not understand the comment very well. Why are we using 2\nseparate words 'tables' and 'relations'? I feel we should have\nmentioned sequences too in the comment.\n\n\n9)\nlogical-replication.sgml: Sequence data is not replicated.\n\n--I feel we should rephrase this line now to indicate that it could be\nreplicated by the new options.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 8 Aug 2024 12:20:59 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, Aug 8, 2024 at 11:09 AM Peter Smith <[email protected]> wrote:\n>\n> But, I haven't invented a new overloading for \"copy_data\" option\n> (meaning \"synchronize\") for sequences. The current patchset already\n> interprets copy_data exactly this way.\n>\n> For example, below are patch 0003 results:\n>\n> ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data=false)\n> - this will add/remove new sequences in pg_subscription_rel, but it\n> will *not* synchronize the new sequence\n>\n> ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data=true)\n> - this will add/remove new sequences in pg_subscription_rel, and it\n> *will* synchronize the new sequence\n>\n> ~\n>\n> I only proposed that copy_data should apply to *all* sequences, not\n> just new ones.\n>\n\nI don't like this difference because for tables, it would *not*\nconsider syncing already the existing tables whereas for sequences it\nwould consider syncing existing ones. We previously discussed adding a\nnew option like copy_all_sequences instead of adding a new variant of\ncommand but that has its own set of problems, so we agreed to proceed\nwith a new variant. See [1] ( ...Good point. And I understood that the\nREFRESH PUBLICATION SEQUENCES command would be helpful when users want\nto synchronize sequences between two nodes before upgrading.).\n\nHaving said that, if others also prefer to use copy_data for this\npurpose with a different meaning of this option w.r.t tables and\nsequences then we can still consider it.\n\n[1] - https://www.postgresql.org/message-id/CAD21AoAAszSeHNRha4HND8b9XyzNrx6jbA7t3Mbe%2BfH4hNRj9A%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 8 Aug 2024 14:57:20 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 8 Aug 2024 at 08:30, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh, Here are my v20240807-0003 review comments.\n>\n> 2a.\n> The paragraph starts by saying \"Sequence data is not replicated.\". It\n> seems wrong now. Doesn't that need rewording or removing?\n\nChanged it to incremental sequence changes.\n\n> ~\n>\n> 2b.\n> Should the info \"If, however, some kind of switchover or failover...\"\n> be mentioned in the \"Logical Replication Failover\" section [2],\n> instead of here?\n\nI think mentioning this here is appropriate. The other section focuses\nmore on how logical replication can proceed with a new primary. 
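For example (sub1 being a placeholder subscription name), before a\nplanned switchover the user can simply run:\n\nALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\n\n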
Once\nthe logical replication setup is complete, sequences can be refreshed\nat any time.\n\nRest of the comments are fixed, the attached v20240808 version patch\nhas the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Thu, 8 Aug 2024 21:22:15 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 7 Aug 2024 at 10:27, shveta malik <[email protected]> wrote:\n>\n> On Mon, Aug 5, 2024 at 10:26 AM vignesh C <[email protected]> wrote:\n> >\n> > On Thu, 1 Aug 2024 at 04:25, Peter Smith <[email protected]> wrote:\n> > >\n> > > Hi Vignesh,\n> > >\n> > > I noticed that when replicating sequences (using the latest patches\n> > > 0730_2*) the subscriber-side checks the *existence* of the sequence,\n> > > but apparently it is not checking other sequence attributes.\n> > >\n> > > For example, consider:\n> > >\n> > > Publisher: \"CREATE SEQUENCE s1 START 1 INCREMENT 2;\" should be a\n> > > sequence of only odd numbers.\n> > > Subscriber: \"CREATE SEQUENCE s1 START 2 INCREMENT 2;\" should be a\n> > > sequence of only even numbers.\n> > >\n> > > Because the names match, currently the patch allows replication of the\n> > > s1 sequence. I think that might lead to unexpected results on the\n> > > subscriber. IMO it might be safer to report ERROR unless the sequences\n> > > match properly (i.e. not just a name check).\n> > >\n> > > Below is a demonstration the problem:\n> > >\n> > > ==========\n> > > Publisher:\n> > > ==========\n> > >\n> > > (publisher sequence is odd numbers)\n> > >\n> > > test_pub=# create sequence s1 start 1 increment 2;\n> > > CREATE SEQUENCE\n> > > test_pub=# select * from nextval('s1');\n> > > nextval\n> > > ---------\n> > > 1\n> > > (1 row)\n> > >\n> > > test_pub=# select * from nextval('s1');\n> > > nextval\n> > > ---------\n> > > 3\n> > > (1 row)\n> > >\n> > > test_pub=# select * from nextval('s1');\n> > > nextval\n> > > ---------\n> > > 5\n> > > (1 row)\n> > >\n> > > test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;\n> > > CREATE PUBLICATION\n> > > test_pub=#\n> > >\n> > > ==========\n> > > Subscriber:\n> > > ==========\n> > >\n> > > (subscriber sequence is even numbers)\n> > >\n> > > test_sub=# create sequence s1 start 2 increment 2;\n> > > CREATE SEQUENCE\n> > > test_sub=# SELECT * FROM nextval('s1');\n> > > nextval\n> > > ---------\n> > > 2\n> > > (1 row)\n> > >\n> > > test_sub=# SELECT * FROM nextval('s1');\n> > > nextval\n> > > ---------\n> > > 4\n> > > (1 row)\n> > >\n> > > test_sub=# SELECT * FROM nextval('s1');\n> > > nextval\n> > > ---------\n> > > 6\n> > > (1 row)\n> > >\n> > > test_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'\n> > > PUBLICATION pub1;\n> > > 2024-08-01 08:43:04.198 AEST [24325] WARNING: subscriptions created\n> > > by regression test cases should have names starting with \"regress_\"\n> > > WARNING: subscriptions created by regression test cases should have\n> > > names starting with \"regress_\"\n> > > NOTICE: created replication slot \"sub1\" on publisher\n> > > CREATE SUBSCRIPTION\n> > > test_sub=# 2024-08-01 08:43:04.294 AEST [26240] LOG: logical\n> > > replication apply worker for subscription \"sub1\" has started\n> > > 2024-08-01 08:43:04.309 AEST [26244] LOG: logical replication\n> > > sequence synchronization worker for subscription \"sub1\" has started\n> > > 2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication\n> > > synchronization for subscription \"sub1\", sequence \"s1\" has finished\n> > > 
2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication\n> > > sequence synchronization worker for subscription \"sub1\" has finished\n> > >\n> > > (after the CREATE SUBSCRIPTION we are getting replicated odd values\n> > > from the publisher, even though the subscriber side sequence was\n> > > supposed to be even numbers)\n> > >\n> > > test_sub=# SELECT * FROM nextval('s1');\n> > > nextval\n> > > ---------\n> > > 7\n> > > (1 row)\n> > >\n> > > test_sub=# SELECT * FROM nextval('s1');\n> > > nextval\n> > > ---------\n> > > 9\n> > > (1 row)\n> > >\n> > > test_sub=# SELECT * FROM nextval('s1');\n> > > nextval\n> > > ---------\n> > > 11\n> > > (1 row)\n> > >\n> > > (Looking at the description you would expect odd values for this\n> > > sequence to be impossible)\n>\n> I see that for such even sequences, user can still do 'setval' to a\n> odd number and then nextval will keep on returning odd value.\n>\n> postgres=# SELECT nextval('s1');\n> 6\n>\n> postgres=SELECT setval('s1', 43);\n> 43\n>\n> postgres=# SELECT nextval('s1');\n> 45\n>\n> > > test_sub=# \\dS+ s1\n> > > Sequence \"public.s1\"\n> > > Type | Start | Minimum | Maximum | Increment | Cycles? | Cache\n> > > --------+-------+---------+---------------------+-----------+---------+-------\n> > > bigint | 2 | 1 | 9223372036854775807 | 2 | no | 1\n> >\n> > Even if we check the sequence definition during the CREATE\n> > SUBSCRIPTION/ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER\n> > SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES commands, there's still\n> > a chance that the sequence definition might change after the command\n> > has been executed. Currently, there's no mechanism to lock a sequence,\n> > and we also permit replication of table data even if the table\n> > structures differ, such as mismatched data types like int and\n> > smallint. I have modified it to log a warning to inform users that the\n> > sequence options on the publisher and subscriber are not the same and\n> > advise them to ensure that the sequence definitions are consistent\n> > between both.\n> > The v20240805 version patch attached at [1] has the changes for the same.\n> > [1] - https://www.postgresql.org/message-id/CALDaNm1Y_ot-jFRfmtwDuwmFrgSSYHjVuy28RspSopTtwzXy8w%40mail.gmail.com\n>\n> The behavior for applying is no different from setval. Having said\n> that, I agree that sequence definition can change even after the\n> subscription creation, but earlier we were not syncing sequences and\n> thus the value of a particular sequence was going to remain in the\n> range/pattern defined by its attributes unless user sets it manually\n> using setval. But now, it is being changed in the background without\n> user's knowledge.\n> The table case is different. In case of table replication, if we have\n> CHECK constraint or say primary-key etc, then the value which violates\n> these constraints will never be inserted to a table even during\n> replication on sub. For sequences, parameters (MIN,MAX, START,\n> INCREMENT) can be considered similar to check-constraints, the only\n> difference is during apply, we are still overriding these and copying\n> pub's value. May be such inconsistencies detection can be targeted\n> later in next project. But for the time being, it will be good to add\n> a 'caveat' section in doc mentioning all such cases. 
The scope of this\n> project should be clearly documented.\n\nI have added a Caveats section and mentioned it.\nThe changes for the same are available at v20240808 version attached at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm1QQK_Pgx35LrJGuRxBzzYSO8rm1YGJF4w8hYc3Gm%2B5NQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 8 Aug 2024 21:25:03 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh, I reviewed the latest v20240808-0003 patch.\n\nAttached are my minor change suggestions.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 9 Aug 2024 10:20:43 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, Aug 7, 2024 at 2:00 PM vignesh C <[email protected]> wrote:\n>\n> On Wed, 7 Aug 2024 at 08:09, Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Aug 6, 2024 at 5:13 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Mon, 5 Aug 2024 at 18:05, shveta malik <[email protected]> wrote:\n> > > >\n> > > > On Mon, Aug 5, 2024 at 11:04 AM vignesh C <[email protected]> wrote:\n> > > > >\n> > > > > On Wed, 31 Jul 2024 at 14:39, shveta malik <[email protected]> wrote:\n> > > > > >\n> > > > > > On Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:\n> > > > > > >\n> > > > > > > On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n> > > > > > > >\n> > > > > > > >\n> > > > > > > >\n> > > > > > > > On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n> > > > > > > >>\n> > > > > > > >> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n> > > > > > > >> [...]\n> > > > > > > >> A new catalog table, pg_subscription_seq, has been introduced for\n> > > > > > > >> mapping subscriptions to sequences. Additionally, the sequence LSN\n> > > > > > > >> (Log Sequence Number) is stored, facilitating determination of\n> > > > > > > >> sequence changes occurring before or after the returned sequence\n> > > > > > > >> state.\n> > > > > > > >\n> > > > > > > >\n> > > > > > > > Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n> > > > > > > > something.\n> > > > > > >\n> > > > > > > We'll require the lsn because the sequence LSN informs the user that\n> > > > > > > it has been synchronized up to the LSN in pg_subscription_seq. Since\n> > > > > > > we are not supporting incremental sync, the user will be able to\n> > > > > > > identify if he should run refresh sequences or not by checking the lsn\n> > > > > > > of the pg_subscription_seq and the lsn of the sequence(using\n> > > > > > > pg_sequence_state added) in the publisher.\n> > > > > >\n> > > > > > How the user will know from seq's lsn that he needs to run refresh.\n> > > > > > lsn indicates page_lsn and thus the sequence might advance on pub\n> > > > > > without changing lsn and thus lsn may look the same on subscriber even\n> > > > > > though a sequence-refresh is needed. 
Am I missing something here?\n> > > > >\n> > > > > When a sequence is synchronized to the subscriber, the page LSN of the\n> > > > > sequence from the publisher is also retrieved and stored in\n> > > > > pg_subscriber_rel as shown below:\n> > > > > --- Publisher page lsn\n> > > > > publisher=# select pg_sequence_state('seq1');\n> > > > > pg_sequence_state\n> > > > > --------------------\n> > > > > (0/1510E38,65,1,t)\n> > > > > (1 row)\n> > > > >\n> > > > > --- Subscriber stores the publisher's page lsn for the sequence\n> > > > > subscriber=# select * from pg_subscription_rel where srrelid = 16384;\n> > > > > srsubid | srrelid | srsubstate | srsublsn\n> > > > > ---------+---------+------------+-----------\n> > > > > 16389 | 16384 | r | 0/1510E38\n> > > > > (1 row)\n> > > > >\n> > > > > If changes are made to the sequence, such as performing many nextvals,\n> > > > > the page LSN will be updated. Currently the sequence values are\n> > > > > prefetched for SEQ_LOG_VALS 32, so the lsn will not get updated for\n> > > > > the prefetched values, once the prefetched values are consumed the lsn\n> > > > > will get updated.\n> > > > > For example:\n> > > > > --- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)\n> > > > > publisher=# select pg_sequence_state('seq1');\n> > > > > pg_sequence_state\n> > > > > ----------------------\n> > > > > (0/1558CA8,143,22,t)\n> > > > > (1 row)\n> > > > >\n> > > > > The user can then compare this updated value with the sequence's LSN\n> > > > > in pg_subscription_rel to determine when to re-synchronize the\n> > > > > sequence.\n> > > >\n> > > > Thanks for the details. But I was referring to the case where we are\n> > > > in between pre-fetched values on publisher (say at 25th value), while\n> > > > on subscriber we are slightly behind (say at 15th value), but page-lsn\n> > > > will be the same on both. Since the subscriber is behind, a\n> > > > sequence-refresh is needed on sub, but by looking at lsn (which is\n> > > > same), one can not say that for sure. Let me know if I have\n> > > > misunderstood it.\n> > >\n> > > Yes, at present, if the value is within the pre-fetched range, we\n> > > cannot distinguish it solely using the page_lsn.\n> > >\n> >\n> > This makes sense to me.\n> >\n> > >\n> > > However, the\n> > > pg_sequence_state function also provides last_value and log_cnt, which\n> > > can be used to handle these specific cases.\n> > >\n> >\n> > BTW, can we document all these steps for users to know when to refresh\n> > the sequences, if not already documented?\n>\n> This has been documented in the v20240807 version attached at [1].\n> [1] - https://www.postgresql.org/message-id/CALDaNm01Z6Oo9osGMFTOoyTR1kVoyh1rEvZ%2B6uJn-ZymV%3D0dbQ%40mail.gmail.com\n>\n\nVignesh, I looked at the patch dated 240808, but I could not find\nthese steps. Are you referring to the section ' Examples:\nSynchronizing Sequences Between Publisher and Subscriber' in doc\npatch004? 
If not, please point me to the concerned section.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 9 Aug 2024 12:13:00 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh, here are my review comments for the sequences docs patch\nv20240808-0004.\n\n======\ndoc/src/sgml/logical-replication.sgml\n\nThe new section content looked good.\n\nJust some nitpicks including:\n- renamed the section \"Replicating Sequences\"\n- added missing mention about how to publish sequences\n- rearranged the subscription commands into a more readable list\n- some sect2 titles were very long; I shortened them.\n- added <warning> markup for the sequence definition advice\n- other minor rewording and typo fixes\n\n~\n\n1.\nIMO the \"Caveats\" section can be removed.\n- the advice to avoid changing the sequence definition is already\ngiven earlier in the \"Sequence Definition Mismatches\" section\n- the limitation of \"incremental synchronization\" is already stated in\nthe logical replication \"Limitations\" section\n- (FYI, I removed it already in my nitpicks attachment)\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\nnitpick - I reversed the paragraphs to keep the references in a natural order.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nOn Fri, Aug 9, 2024 at 1:52 AM vignesh C <[email protected]> wrote:\n>\n> On Thu, 8 Aug 2024 at 08:30, Peter Smith <[email protected]> wrote:\n> >\n> > Hi Vignesh, Here are my v20240807-0003 review comments.\n> >\n> > 2a.\n> > The paragraph starts by saying \"Sequence data is not replicated.\". It\n> > seems wrong now. Doesn't that need rewording or removing?\n>\n> Changed it to incremental sequence changes.\n>\n> > ~\n> >\n> > 2b.\n> > Should the info \"If, however, some kind of switchover or failover...\"\n> > be mentioned in the \"Logical Replication Failover\" section [2],\n> > instead of here?\n>\n> I think mentioning this here is appropriate. The other section focuses\n> more on how logical replication can proceed with a new primary. Once\n> the logical replication setup is complete, sequences can be refreshed\n> at any time.\n>\n> Rest of the comments are fixed, the attached v20240808 version patch\n> has the changes for the same.\n>\n> Regards,\n> Vignesh", "msg_date": "Fri, 9 Aug 2024 17:10:08 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 8 Aug 2024 at 12:21, shveta malik <[email protected]> wrote:\n>\n> On Wed, Aug 7, 2024 at 1:45 PM vignesh C <[email protected]> wrote:\n> >\n> >\n> > The remaining comments have been addressed, and the changes are\n> > included in the attached v20240807 version patch.\n>\n> Thanks for addressing the comment. Please find few comments for v20240807 :\n>\n> patch003:\n> 2)\n>\n> * The page_lsn allows the user to determine if the sequence has been updated\n> * since the last synchronization with the subscriber. This is done by\n> * comparing the current page_lsn with the value stored in pg_subscription_rel\n> * from the last synchronization.\n> */\n> Datum\n> pg_sequence_state(PG_FUNCTION_ARGS)\n>\n> --This information is still incomplete. 
Maybe we should mention the\n> other attribute name as well which helps to determine this.\n\nI have removed this comment now as suggesting that users use\npg_sequence_state and sequence when page_lsn seems complex, the same\ncan be achieved by comparing the sequence values from a single\nstatement instead of a couple of statements. Peter had felt this would\nbe easier based on comment 3c at [1].\n\n> 5)\n> IIUC, sequencesync_failure_time is changed by multiple processes.\n> Seq-sync worker sets it before exiting on failure, while apply worker\n> resets it. Also, the applied worker reads it at a few places. Shall it\n> be accessed using LogicalRepWorkerLock?\n\nIf sequenceApply worker is already running, apply worker will not\naccess sequencesync_failure_time. Only if sequence sync worker is not\nrunning apply worker will access sequencesync_failure_time in the\nbelow code. I feel no need to use LogicalRepWorkerLock in this case.\n\n...\nsyncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,\nInvalidOid, WORKERTYPE_SEQUENCESYNC,\ntrue);\nif (syncworker)\n{\n/* Now safe to release the LWLock */\nLWLockRelease(LogicalRepWorkerLock);\nbreak;\n}\n\n/*\n* Count running sync workers for this subscription, while we have the\n* lock.\n*/\nnsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);\n\n/* Now safe to release the LWLock */\nLWLockRelease(LogicalRepWorkerLock);\n\n/*\n* If there are free sync worker slot(s), start a new sequence sync\n* worker, and break from the loop.\n*/\nif (nsyncworkers < max_sync_workers_per_subscription)\n{\nTimestampTz now = GetCurrentTimestamp();\n\nif (!MyLogicalRepWorker->sequencesync_failure_time ||\nTimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,\n now, wal_retrieve_retry_interval))\n{\nMyLogicalRepWorker->sequencesync_failure_time = 0;\n\nlogicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,\nMyLogicalRepWorker->dbid,\nMySubscription->oid,\nMySubscription->name,\nMyLogicalRepWorker->userid,\nInvalidOid,\nDSM_HANDLE_INVALID);\nbreak;\n}\n}\n...\n\n> 6)\n> process_syncing_sequences_for_apply():\n>\n> --I feel MyLogicalRepWorker->sequencesync_failure_time should be reset\n> to 0 after we are sure that logicalrep_worker_launch() has launched\n> the worker without any error. But not sure what could be the clean way\n> to do it? If we move it after logicalrep_worker_launch() call, there\n> are chances that seq-sync worker has started and failed already and\n> has set this failure time which will then be mistakenly reset by apply\n> worker. Also moving it inside logicalrep_worker_launch() does not seem\n> a good way.\n\nI felt we can keep it in the existing way to keep it consistent with\ntable sync worker restart like in process_syncing_tables_for_apply.\n\nThe rest of the comments are fixed. 
The rest of the comments are\nfixed in the v20240809 version patch attached.\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPvaq%3D0xsDWdVQ-kdjRa8Az%2BvgiMFTvT2E2nR3N-47TO8A%40mail.gmail.com\n\nRegards,\nVignesh", "msg_date": "Fri, 9 Aug 2024 18:42:41 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 9 Aug 2024 at 12:13, shveta malik <[email protected]> wrote:\n>\n> On Wed, Aug 7, 2024 at 2:00 PM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 7 Aug 2024 at 08:09, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Tue, Aug 6, 2024 at 5:13 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > On Mon, 5 Aug 2024 at 18:05, shveta malik <[email protected]> wrote:\n> > > > >\n> > > > > On Mon, Aug 5, 2024 at 11:04 AM vignesh C <[email protected]> wrote:\n> > > > > >\n> > > > > > On Wed, 31 Jul 2024 at 14:39, shveta malik <[email protected]> wrote:\n> > > > > > >\n> > > > > > > On Mon, Jun 10, 2024 at 5:00 PM vignesh C <[email protected]> wrote:\n> > > > > > > >\n> > > > > > > > On Mon, 10 Jun 2024 at 12:24, Amul Sul <[email protected]> wrote:\n> > > > > > > > >\n> > > > > > > > >\n> > > > > > > > >\n> > > > > > > > > On Sat, Jun 8, 2024 at 6:43 PM vignesh C <[email protected]> wrote:\n> > > > > > > > >>\n> > > > > > > > >> On Wed, 5 Jun 2024 at 14:11, Amit Kapila <[email protected]> wrote:\n> > > > > > > > >> [...]\n> > > > > > > > >> A new catalog table, pg_subscription_seq, has been introduced for\n> > > > > > > > >> mapping subscriptions to sequences. Additionally, the sequence LSN\n> > > > > > > > >> (Log Sequence Number) is stored, facilitating determination of\n> > > > > > > > >> sequence changes occurring before or after the returned sequence\n> > > > > > > > >> state.\n> > > > > > > > >\n> > > > > > > > >\n> > > > > > > > > Can't it be done using pg_depend? It seems a bit excessive unless I'm missing\n> > > > > > > > > something.\n> > > > > > > >\n> > > > > > > > We'll require the lsn because the sequence LSN informs the user that\n> > > > > > > > it has been synchronized up to the LSN in pg_subscription_seq. Since\n> > > > > > > > we are not supporting incremental sync, the user will be able to\n> > > > > > > > identify if he should run refresh sequences or not by checking the lsn\n> > > > > > > > of the pg_subscription_seq and the lsn of the sequence(using\n> > > > > > > > pg_sequence_state added) in the publisher.\n> > > > > > >\n> > > > > > > How the user will know from seq's lsn that he needs to run refresh.\n> > > > > > > lsn indicates page_lsn and thus the sequence might advance on pub\n> > > > > > > without changing lsn and thus lsn may look the same on subscriber even\n> > > > > > > though a sequence-refresh is needed. 
Am I missing something here?\n> > > > > >\n> > > > > > When a sequence is synchronized to the subscriber, the page LSN of the\n> > > > > > sequence from the publisher is also retrieved and stored in\n> > > > > > pg_subscriber_rel as shown below:\n> > > > > > --- Publisher page lsn\n> > > > > > publisher=# select pg_sequence_state('seq1');\n> > > > > > pg_sequence_state\n> > > > > > --------------------\n> > > > > > (0/1510E38,65,1,t)\n> > > > > > (1 row)\n> > > > > >\n> > > > > > --- Subscriber stores the publisher's page lsn for the sequence\n> > > > > > subscriber=# select * from pg_subscription_rel where srrelid = 16384;\n> > > > > > srsubid | srrelid | srsubstate | srsublsn\n> > > > > > ---------+---------+------------+-----------\n> > > > > > 16389 | 16384 | r | 0/1510E38\n> > > > > > (1 row)\n> > > > > >\n> > > > > > If changes are made to the sequence, such as performing many nextvals,\n> > > > > > the page LSN will be updated. Currently the sequence values are\n> > > > > > prefetched for SEQ_LOG_VALS 32, so the lsn will not get updated for\n> > > > > > the prefetched values, once the prefetched values are consumed the lsn\n> > > > > > will get updated.\n> > > > > > For example:\n> > > > > > --- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)\n> > > > > > publisher=# select pg_sequence_state('seq1');\n> > > > > > pg_sequence_state\n> > > > > > ----------------------\n> > > > > > (0/1558CA8,143,22,t)\n> > > > > > (1 row)\n> > > > > >\n> > > > > > The user can then compare this updated value with the sequence's LSN\n> > > > > > in pg_subscription_rel to determine when to re-synchronize the\n> > > > > > sequence.\n> > > > >\n> > > > > Thanks for the details. But I was referring to the case where we are\n> > > > > in between pre-fetched values on publisher (say at 25th value), while\n> > > > > on subscriber we are slightly behind (say at 15th value), but page-lsn\n> > > > > will be the same on both. Since the subscriber is behind, a\n> > > > > sequence-refresh is needed on sub, but by looking at lsn (which is\n> > > > > same), one can not say that for sure. Let me know if I have\n> > > > > misunderstood it.\n> > > >\n> > > > Yes, at present, if the value is within the pre-fetched range, we\n> > > > cannot distinguish it solely using the page_lsn.\n> > > >\n> > >\n> > > This makes sense to me.\n> > >\n> > > >\n> > > > However, the\n> > > > pg_sequence_state function also provides last_value and log_cnt, which\n> > > > can be used to handle these specific cases.\n> > > >\n> > >\n> > > BTW, can we document all these steps for users to know when to refresh\n> > > the sequences, if not already documented?\n> >\n> > This has been documented in the v20240807 version attached at [1].\n> > [1] - https://www.postgresql.org/message-id/CALDaNm01Z6Oo9osGMFTOoyTR1kVoyh1rEvZ%2B6uJn-ZymV%3D0dbQ%40mail.gmail.com\n> >\n>\n> Vignesh, I looked at the patch dated 240808, but I could not find\n> these steps. Are you referring to the section ' Examples:\n> Synchronizing Sequences Between Publisher and Subscriber' in doc\n> patch004? If not, please point me to the concerned section.\n\nI'm referring to the \"Refreshing Stale Sequences\" part in the\nv20240809 version patch attached at [1] which only mentions directly\ncomparing the sequence values.. 
I have removed the reference to\npg_sequence_state now as suggesting that users use pg_sequence_state\nand sequence when page_lsn seems complex, the same can be achieved by\ncomparing the sequence values from a single statement instead of a\ncouple of statements. Peter had felt this would be easier based on\ncomment 3c at [1].\n\n[1] - https://www.postgresql.org/message-id/CALDaNm0LJCtGoBCO6DFY-RDjR8vxapW3W1f7%3D-LSQx%3DXYjqU%3Dw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 9 Aug 2024 18:48:04 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 9 Aug 2024 at 05:51, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh, I reviewed the latest v20240808-0003 patch.\n>\n> Attached are my minor change suggestions.\n\nThanks, these changes are merged in the v20240809 version posted at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm0LJCtGoBCO6DFY-RDjR8vxapW3W1f7%3D-LSQx%3DXYjqU%3Dw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 9 Aug 2024 18:49:16 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 9 Aug 2024 at 12:40, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh, here are my review comments for the sequences docs patch\n> v20240808-0004.\n>\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> The new section content looked good.\n>\n> Just some nitpicks including:\n> - renamed the section \"Replicating Sequences\"\n> - added missing mention about how to publish sequences\n> - rearranged the subscription commands into a more readable list\n> - some sect2 titles were very long; I shortened them.\n> - added <warning> markup for the sequence definition advice\n> - other minor rewording and typo fixes\n\nI have retained the caveats section for now, I will think more and\nremove it if required in the next version.\n\nThe rest of the comments are fixed in the v20240809 version patch\nattached at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm0LJCtGoBCO6DFY-RDjR8vxapW3W1f7%3D-LSQx%3DXYjqU%3Dw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 9 Aug 2024 18:51:41 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh,\n\nv20240809-0001. No comments.\nv20240809-0002. See below.\nv20240809-0003. See below.\nv20240809-0004. No comments.\n\n//////////\n\nHere are my review comments for patch v20240809-0002.\n\nnit - Tweak wording in new docs example, because a publication only\npublishes the sequences; it doesn't \"synchronize\" anything.\n\n//////////\n\nHere are my review comments for patch v20240809-0003.\n\nfetch_sequence_list:\nnit - move comment\nnit - minor rewording for parameter WARNING message\n\n======\n.../replication/logical/sequencesync.c\nsrc/backend/replication/logical/tablesync.c\n\n1.\nCurrently the declaration 'sequence_states_not_ready' list seems\nbackwards. IMO it makes more sense for the declaration to be in\nsequencesync.c, and the extern in the tablesync.c. 
(please also see\nreview comment #3 below which might affect this too).\n\n~~~\n\n2.\n static bool\n-FetchTableStates(bool *started_tx)\n+FetchTableStates(void)\n {\n- static bool has_subrels = false;\n-\n- *started_tx = false;\n+ static bool has_subtables = false;\n+ bool started_tx = false;\n\nMaybe give the explanation why 'has_subtables' is declared static here.\n\n~~~\n\n3.\nI am not sure that it was an improvement to move the\nprocess_syncing_sequences_for_apply() function into the\nsequencesync.c. Calling the sequence code from the tablesync code\nstill looks strange. OTOH, I see why you don't want to leave it in\ntablesync.c.\n\nPerhaps it would be better to refactor/move all following functions\nback to the (apply) worker.c instead:\n- process_syncing_relations\n- process_syncing_sequences_for_apply(void)\n- process_syncing_tables_for_apply(void)\n\nActually, now that there are 2 kinds of 'sync' workers, maybe you\nshould introduce a new module (e.g. 'commonsync.c' or\n'syncworker.c...), where you can put functions such as\nprocess_syncing_relations() plus any other code common to both\ntablesync and sequencesync. That might make more sense then having one\ncall to the other.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 12 Aug 2024 13:20:04 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh,\n\nI noticed it is not currently possible (there is no syntax way to do\nit) to ALTER an existing publication so that it will publish\nSEQUENCES.\n\nIsn't that a limitation? Why?\n\nFor example,. Why should users be prevented from changing a FOR ALL\nTABLES publication into a FOR ALL TABLES, SEQUENCES one?\n\nSimilarly, there are other combinations not possible\nDROP ALL SEQUENCES from a publication that is FOR ALL TABLES, SEQUENCES\nDROP ALL TABLES from a publication that is FOR ALL TABLES, SEQUENCES\nADD ALL TABLES to a publication that is FOR ALL SEQUENCES\n...\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 12 Aug 2024 14:28:55 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh,\n\nI found that when 2 subscriptions are both subscribing to a\npublication publishing sequences, an ERROR occurs on refresh.\n\n======\n\nPublisher:\n----------\n\ntest_pub=# create publication pub1 for all sequences;\n\nSubscriber:\n-----------\n\ntest_sub=# create subscription sub1 connection 'dbname=test_pub'\npublication pub1;\n\ntest_sub=# create subscription sub2 connection 'dbname=test_pub'\npublication pub1;\n\ntest_sub=# alter subscription sub1 refresh publication sequences;\n2024-08-12 15:04:04.947 AEST [7306] LOG: sequence \"public.seq1\" of\nsubscription \"sub1\" set to INIT state\n2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription\nsub1 refresh publication sequences;\n2024-08-12 15:04:04.947 AEST [7306] LOG: sequence \"public.seq1\" of\nsubscription \"sub1\" set to INIT state\n2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription\nsub1 refresh publication sequences;\n2024-08-12 15:04:04.947 AEST [7306] ERROR: tuple already updated by self\n2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription\nsub1 refresh publication sequences;\nERROR: tuple already updated by self\n\ntest_sub=# alter subscription sub2 refresh publication sequences;\n2024-08-12 15:04:30.427 AEST [7306] 
LOG: sequence \"public.seq1\" of\nsubscription \"sub2\" set to INIT state\n2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription\nsub2 refresh publication sequences;\n2024-08-12 15:04:30.427 AEST [7306] LOG: sequence \"public.seq1\" of\nsubscription \"sub2\" set to INIT state\n2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription\nsub2 refresh publication sequences;\n2024-08-12 15:04:30.427 AEST [7306] ERROR: tuple already updated by self\n2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription\nsub2 refresh publication sequences;\nERROR: tuple already updated by self\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 12 Aug 2024 15:09:47 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, 12 Aug 2024 at 08:50, Peter Smith <[email protected]> wrote:\n>\n> ~~~\n>\n> 3.\n> I am not sure that it was an improvement to move the\n> process_syncing_sequences_for_apply() function into the\n> sequencesync.c. Calling the sequence code from the tablesync code\n> still looks strange. OTOH, I see why you don't want to leave it in\n> tablesync.c.\n>\n> Perhaps it would be better to refactor/move all following functions\n> back to the (apply) worker.c instead:\n> - process_syncing_relations\n> - process_syncing_sequences_for_apply(void)\n> - process_syncing_tables_for_apply(void)\n>\n> Actually, now that there are 2 kinds of 'sync' workers, maybe you\n> should introduce a new module (e.g. 'commonsync.c' or\n> 'syncworker.c...), where you can put functions such as\n> process_syncing_relations() plus any other code common to both\n> tablesync and sequencesync. That might make more sense then having one\n> call to the other.\n\nI created syncutils.c to consolidate code that supports worker\nsynchronization, table synchronization, and sequence synchronization.\nWhile it may not align exactly with your suggestion, I included\nfunctions like finish_sync_worker, invalidate_syncing_relation_states,\nFetchRelationStates, and process_syncing_relations in this new file. 
I\nbelieve this organization will make the code easier to review.\n\nThe rest of the comments are also fixed in the attached v20240812\nversion patch attached.\n\nRegards,\nVignesh", "msg_date": "Mon, 12 Aug 2024 18:36:26 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, 12 Aug 2024 at 10:40, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh,\n>\n> I found that when 2 subscriptions are both subscribing to a\n> publication publishing sequences, an ERROR occurs on refresh.\n>\n> ======\n>\n> Publisher:\n> ----------\n>\n> test_pub=# create publication pub1 for all sequences;\n>\n> Subscriber:\n> -----------\n>\n> test_sub=# create subscription sub1 connection 'dbname=test_pub'\n> publication pub1;\n>\n> test_sub=# create subscription sub2 connection 'dbname=test_pub'\n> publication pub1;\n>\n> test_sub=# alter subscription sub1 refresh publication sequences;\n> 2024-08-12 15:04:04.947 AEST [7306] LOG: sequence \"public.seq1\" of\n> subscription \"sub1\" set to INIT state\n> 2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription\n> sub1 refresh publication sequences;\n> 2024-08-12 15:04:04.947 AEST [7306] LOG: sequence \"public.seq1\" of\n> subscription \"sub1\" set to INIT state\n> 2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription\n> sub1 refresh publication sequences;\n> 2024-08-12 15:04:04.947 AEST [7306] ERROR: tuple already updated by self\n> 2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription\n> sub1 refresh publication sequences;\n> ERROR: tuple already updated by self\n>\n> test_sub=# alter subscription sub2 refresh publication sequences;\n> 2024-08-12 15:04:30.427 AEST [7306] LOG: sequence \"public.seq1\" of\n> subscription \"sub2\" set to INIT state\n> 2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription\n> sub2 refresh publication sequences;\n> 2024-08-12 15:04:30.427 AEST [7306] LOG: sequence \"public.seq1\" of\n> subscription \"sub2\" set to INIT state\n> 2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription\n> sub2 refresh publication sequences;\n> 2024-08-12 15:04:30.427 AEST [7306] ERROR: tuple already updated by self\n> 2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription\n> sub2 refresh publication sequences;\n> ERROR: tuple already updated by self\n\nThis issue is fixed in the v20240812 version attached at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm3hS58W0RTbgsMTk-YvXwt956uabA%3DkYfLGUs3uRNC2Qg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 12 Aug 2024 18:37:21 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, 12 Aug 2024 at 09:59, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh,\n>\n> I noticed it is not currently possible (there is no syntax way to do\n> it) to ALTER an existing publication so that it will publish\n> SEQUENCES.\n>\n> Isn't that a limitation? Why?\n>\n> For example,. Why should users be prevented from changing a FOR ALL\n> TABLES publication into a FOR ALL TABLES, SEQUENCES one?\n>\n> Similarly, there are other combinations not possible\n> DROP ALL SEQUENCES from a publication that is FOR ALL TABLES, SEQUENCES\n> DROP ALL TABLES from a publication that is FOR ALL TABLES, SEQUENCES\n> ADD ALL TABLES to a publication that is FOR ALL SEQUENCES\n\nYes, this should be addressed. 
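Something along the lines of the following is what would be needed\n(purely hypothetical syntax, nothing is implemented for this yet; pub1\nis a placeholder publication name):\n\nALTER PUBLICATION pub1 ADD ALL SEQUENCES;\nALTER PUBLICATION pub1 DROP ALL SEQUENCES;\n\n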
However, I'll defer it until the\ncurrent set of patches is finalized and all comments have been\nresolved.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 12 Aug 2024 18:40:58 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh, Here are my review comments for latest v20240812* patchset:\n\npatch v20240812-0001. No comments.\npatch v20240812-0002. Fixed docs.LGTM\npatch v20240812-0003. This is new refactoring. See below.\npatch v20240812-0004. (was 0003). See below.\npatch v20240812-0005. (was 0004). No comments.\n\n//////\n\npatch v20240812-0003.\n\n3.1. GENERAL\n\nHmm. I am guessing this was provided as a separate patch to aid review\nby showing that existing functions are moved? OTOH you can't really\njudge this patch properly without already knowing details of what will\ncome next in the sequencesync. i.e. As a *standalone* patch without\nthe sequencesync.c the refactoring doesn't make much sense.\n\nMaybe it is OK later to combine patches 0003 and 0004. Alternatively,\nkeep this patch separated but give greater emphasis in the comment\nheader to say this patch only exists separately in order to help the\nreview.\n\n======\nCommit message\n\n3.2.\nReorganized tablesync code to generate a syncutils file which will\nhelp in sequence synchronization worker code.\n\n~\n\n\"generate\" ??\n\n======\nsrc/backend/replication/logical/syncutils.c\n\n3.3. \"common code\" ??\n\nFYI - There are multiple code comments mentioning \"common code...\"\nwhich, in the absence of the sequencesync worker (which comes in the\nnext patch), have nothing \"common\" about them at all. Fixing them and\nthen fixing them again in the next patch might cause unnecessary code\nchurn, but OTOH they aren't correct as-is either. I have left them\nalone for now.\n\n~\n\n3.4. function names\n\nWith the re-shuffling that this patch does, and changing several from\nstatic to not-static, should the function names remain as they are?\nThey look random to me.\n- finish_sync_worker(void)\n- invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)\n- FetchTableStates(bool *started_tx)\n- process_syncing_tables(XLogRecPtr current_lsn)\n\nI think using a consistent naming convention would be better. e.g.\nSyncFinishWorker\nSyncInvalidateTableStates\nSyncFetchTableStates\nSyncProcessTables\n\n~~~\n\nnit - file header comment\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n3.5.\n-static void\n+void\n process_syncing_tables_for_sync(XLogRecPtr current_lsn)\n\n-static void\n+void\n process_syncing_tables_for_apply(XLogRecPtr current_lsn)\n\nSince these functions are no longer static should those function names\nbe changed to use the CamelCase convention for non-static API?\n\n//////////\n\npatch v20240812-0004.\n\n======\nsrc/backend/replication/logical/syncutils.c\n\nnit - file header comment (made same as patch 0003)\n\n~\n\nFetchRelationStates:\nnit - IIUC sequence states are only INIT -> READY. 
So the comments in\nthis function dont need to specifically talk about sequence INIT\nstate.\n\n======\nsrc/backend/utils/misc/guc_tables.c\n\n4.1.\n {\"max_sync_workers_per_subscription\",\n PGC_SIGHUP,\n REPLICATION_SUBSCRIBERS,\n- gettext_noop(\"Maximum number of table synchronization workers per\nsubscription.\"),\n+ gettext_noop(\"Maximum number of relation synchronization workers per\nsubscription.\"),\n NULL,\n },\n\nI was wondering if \"relation synchronization workers\" is meaningful to\nthe user because that seems like new terminology.\nMaybe it should say \"... of table + sequence synchronization workers...\"\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 13 Aug 2024 13:49:26 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, Aug 12, 2024 at 11:07 PM vignesh C <[email protected]> wrote:\n>\n> On Mon, 12 Aug 2024 at 10:40, Peter Smith <[email protected]> wrote:\n> >\n> > Hi Vignesh,\n> >\n> > I found that when 2 subscriptions are both subscribing to a\n> > publication publishing sequences, an ERROR occurs on refresh.\n> >\n> > ======\n> >\n> > Publisher:\n> > ----------\n> >\n> > test_pub=# create publication pub1 for all sequences;\n> >\n> > Subscriber:\n> > -----------\n> >\n> > test_sub=# create subscription sub1 connection 'dbname=test_pub'\n> > publication pub1;\n> >\n> > test_sub=# create subscription sub2 connection 'dbname=test_pub'\n> > publication pub1;\n> >\n> > test_sub=# alter subscription sub1 refresh publication sequences;\n> > 2024-08-12 15:04:04.947 AEST [7306] LOG: sequence \"public.seq1\" of\n> > subscription \"sub1\" set to INIT state\n> > 2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription\n> > sub1 refresh publication sequences;\n> > 2024-08-12 15:04:04.947 AEST [7306] LOG: sequence \"public.seq1\" of\n> > subscription \"sub1\" set to INIT state\n> > 2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription\n> > sub1 refresh publication sequences;\n> > 2024-08-12 15:04:04.947 AEST [7306] ERROR: tuple already updated by self\n> > 2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription\n> > sub1 refresh publication sequences;\n> > ERROR: tuple already updated by self\n> >\n> > test_sub=# alter subscription sub2 refresh publication sequences;\n> > 2024-08-12 15:04:30.427 AEST [7306] LOG: sequence \"public.seq1\" of\n> > subscription \"sub2\" set to INIT state\n> > 2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription\n> > sub2 refresh publication sequences;\n> > 2024-08-12 15:04:30.427 AEST [7306] LOG: sequence \"public.seq1\" of\n> > subscription \"sub2\" set to INIT state\n> > 2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription\n> > sub2 refresh publication sequences;\n> > 2024-08-12 15:04:30.427 AEST [7306] ERROR: tuple already updated by self\n> > 2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription\n> > sub2 refresh publication sequences;\n> > ERROR: tuple already updated by self\n>\n> This issue is fixed in the v20240812 version attached at [1].\n> [1] - https://www.postgresql.org/message-id/CALDaNm3hS58W0RTbgsMTk-YvXwt956uabA%3DkYfLGUs3uRNC2Qg%40mail.gmail.com\n>\n\nYes, I confirmed it is now fixed. 
Thanks!\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 13 Aug 2024 14:03:44 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh,\n\nI have been using the latest patchset, trying a few things using many\n(1000) sequences.\n\nHere are some observations, plus some suggestions for consideration.\n\n~~~~~\n\nOBSERVATION #1\n\nWhen 1000s of sequences are refreshed using REFRESH PUBLICATION\nSEQUENCES the logging is excessive. For example, since there is only\none sequencesync worker why does it need to broadcast that it is\n\"finished\" separately for every single sequence. That is giving 1000s\nof lines of logs which don't seem to be of much interest to a user.\n\n...\n2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0918\" has\nfinished\n2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0919\" has\nfinished\n2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0920\" has\nfinished\n2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0921\" has\nfinished\n2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0922\" has\nfinished\n2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0923\" has\nfinished\n...\n\nPerhaps just LOG when each \"batch\" is completed, but the individual\nsequence finished logs can just be DEBUG information?\n\n~~~~~\n\nOBSERVATION #2\n\nWhen 1000s of sequences are refreshed (set to INIT) then there are\n1000s of logs like below:\n\n...\n2024-08-13 16:13:57.873 AEST [10301] LOG: sequence \"public.seq_0698\"\nof subscription \"sub3\" set to INIT state\n2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription\nsub3 refresh publication sequences;\n2024-08-13 16:13:57.873 AEST [10301] LOG: sequence \"public.seq_0699\"\nof subscription \"sub3\" set to INIT state\n2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription\nsub3 refresh publication sequences;\n2024-08-13 16:13:57.873 AEST [10301] LOG: sequence \"public.seq_0700\"\nof subscription \"sub3\" set to INIT state\n2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription\nsub3 refresh publication sequences;\n2024-08-13 16:13:57.873 AEST [10301] LOG: sequence \"public.seq_0701\"\nof subscription \"sub3\" set to INIT state\n2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription\nsub3 refresh publication sequences;\n2024-08-13 16:13:57.874 AEST [10301] LOG: sequence \"public.seq_0702\"\nof subscription \"sub3\" set to INIT state\n2024-08-13 16:13:57.874 AEST [10301] STATEMENT: alter subscription\nsub3 refresh publication sequences;\n...\n\nI felt that showing the STATEMENT for all of these is overkill. How\nabout changing that ereport LOG so it does not emit the statement 1000\ntimes? 
Or, maybe you can implement it as a \"dynamic\" log that emits\nthe STATEMENT if there are only a few logs a few times but skips it\nfor the next 995 logs.\n\n~~~~~\n\nOBSERVATION #3\n\nThe WARNING about mismatched sequences currently looks like this:\n\n2024-08-13 16:41:45.496 AEST [10301] WARNING: Parameters differ for\nremote and local sequences \"public.seq_0999\"\n2024-08-13 16:41:45.496 AEST [10301] HINT: Alter/Re-create the\nsequence using the same parameter as in remote.\n\nAlthough you could probably deduce it from nearby logs, I think it\nmight be more helpful to also identify the subscription name in this\nWARNING message. Otherwise, if there are many publications the user\nmay have no idea where the mismatched \"remote\" is coming from.\n\n~~~~\n\nOBSERVATION #4\n\nWhen 1000s of sequences are refreshed then there are 1000s of\nassociated logs. But (given there is only one sequencesync worker)\nthose logs are not always the order that I was expecting to see them.\n\ne.g.\n...\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0885\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0887\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0888\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0889\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0890\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0906\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0566\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0568\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0569\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0570\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0571\" has\nfinished\n2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\nsynchronization for subscription \"sub3\", sequence \"seq_0582\" has\nfinished\n...\n\nIs there a way to refresh sequences in a more natural (e.g.\nalphabetical) order to make these logs more readable?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 13 Aug 2024 17:00:53 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 13 Aug 2024 at 09:19, Peter Smith <[email protected]> wrote:\n>\n> 3.1. GENERAL\n>\n> Hmm. I am guessing this was provided as a separate patch to aid review\n> by showing that existing functions are moved? OTOH you can't really\n> judge this patch properly without already knowing details of what will\n> come next in the sequencesync. i.e. 
As a *standalone* patch without\n> the sequencesync.c the refactoring doesn't make much sense.\n>\n> Maybe it is OK later to combine patches 0003 and 0004. Alternatively,\n> keep this patch separated but give greater emphasis in the comment\n> header to say this patch only exists separately in order to help the\n> review.\n\nI have kept this patch only to show that this patch as such has no\ncode changes. If we move this to the next patch it will be difficult\nfor reviewers to know which is new code and which is old code. During\ncommit we can merge this with the next one. I felt it is better to add\nit in the commit message instead of comment header so updated the\ncommit message.\n\n> ======\n> src/backend/replication/logical/syncutils.c\n>\n> 3.3. \"common code\" ??\n>\n> FYI - There are multiple code comments mentioning \"common code...\"\n> which, in the absence of the sequencesync worker (which comes in the\n> next patch), have nothing \"common\" about them at all. Fixing them and\n> then fixing them again in the next patch might cause unnecessary code\n> churn, but OTOH they aren't correct as-is either. I have left them\n> alone for now.\n\nWe can ignore this as this will get merged to the next one. If you\nhave any comments you can give it on top of the next(0004) patch.\n\n> ~\n>\n> 3.4. function names\n>\n> With the re-shuffling that this patch does, and changing several from\n> static to not-static, should the function names remain as they are?\n> They look random to me.\n> - finish_sync_worker(void)\n> - invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)\n> - FetchTableStates(bool *started_tx)\n> - process_syncing_tables(XLogRecPtr current_lsn)\n>\n> I think using a consistent naming convention would be better. e.g.\n> SyncFinishWorker\n> SyncInvalidateTableStates\n> SyncFetchTableStates\n> SyncProcessTables\n\nOne advantage with keeping the existing names the same wherever\npossible will help while merging the changes to back-branches. So I'm\nnot making this change.\n\n> ~~~\n>\n> nit - file header comment\n>\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> 3.5.\n> -static void\n> +void\n> process_syncing_tables_for_sync(XLogRecPtr current_lsn)\n>\n> -static void\n> +void\n> process_syncing_tables_for_apply(XLogRecPtr current_lsn)\n>\n> Since these functions are no longer static should those function names\n> be changed to use the CamelCase convention for non-static API?\n\nOne advantage with keeping the existing names the same wherever\npossible will help while merging the changes to back-branches. 
So I'm\nnot making this change.\n\nThe rest of the comments were fixed, the attached v20240813 has the\nchanges for the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 13 Aug 2024 17:29:49 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 13 Aug 2024 at 12:31, Peter Smith <[email protected]> wrote:\n>\n> OBSERVATION #2\n>\n> When 1000s of sequences are refreshed (set to INIT) then there are\n> 1000s of logs like below:\n>\n> ...\n> 2024-08-13 16:13:57.873 AEST [10301] LOG: sequence \"public.seq_0698\"\n> of subscription \"sub3\" set to INIT state\n> 2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription\n> sub3 refresh publication sequences;\n> 2024-08-13 16:13:57.873 AEST [10301] LOG: sequence \"public.seq_0699\"\n> of subscription \"sub3\" set to INIT state\n> 2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription\n> sub3 refresh publication sequences;\n> 2024-08-13 16:13:57.873 AEST [10301] LOG: sequence \"public.seq_0700\"\n> of subscription \"sub3\" set to INIT state\n> 2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription\n> sub3 refresh publication sequences;\n> 2024-08-13 16:13:57.873 AEST [10301] LOG: sequence \"public.seq_0701\"\n> of subscription \"sub3\" set to INIT state\n> 2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription\n> sub3 refresh publication sequences;\n> 2024-08-13 16:13:57.874 AEST [10301] LOG: sequence \"public.seq_0702\"\n> of subscription \"sub3\" set to INIT state\n> 2024-08-13 16:13:57.874 AEST [10301] STATEMENT: alter subscription\n> sub3 refresh publication sequences;\n> ...\n>\n> I felt that showing the STATEMENT for all of these is overkill. How\n> about changing that ereport LOG so it does not emit the statement 1000\n> times? Or, maybe you can implement it as a \"dynamic\" log that emits\n> the STATEMENT if there are only a few logs a few times but skips it\n> for the next 995 logs.\n\nI have changed it to debug1 log level how we do for tables, so this\nwill not appear for default log level\n\n>\n> OBSERVATION #4\n>\n> When 1000s of sequences are refreshed then there are 1000s of\n> associated logs. 
But (given there is only one sequencesync worker)\n> those logs are not always the order that I was expecting to see them.\n>\n> e.g.\n> ...\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0885\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0887\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0888\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0889\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0890\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0906\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0566\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0568\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0569\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0570\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0571\" has\n> finished\n> 2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication\n> synchronization for subscription \"sub3\", sequence \"seq_0582\" has\n> finished\n> ...\n>\n> Is there a way to refresh sequences in a more natural (e.g.\n> alphabetical) order to make these logs more readable?\n\nI felt this is ok, no need to order it as it can easily be done using\nsome scripts if required from logs.\n\nThe rest of the issues were fixed, the v20240813 version patch\nattached at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm1Nr_n9SBB52L8A10Txyb4nqGJWfHUapwzM5BopvjMhjA%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 13 Aug 2024 17:33:42 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, Aug 13, 2024 at 10:00 PM vignesh C <[email protected]> wrote:\n>\n> On Tue, 13 Aug 2024 at 09:19, Peter Smith <[email protected]> wrote:\n> >\n> > 3.1. GENERAL\n> >\n> > Hmm. I am guessing this was provided as a separate patch to aid review\n> > by showing that existing functions are moved? OTOH you can't really\n> > judge this patch properly without already knowing details of what will\n> > come next in the sequencesync. i.e. As a *standalone* patch without\n> > the sequencesync.c the refactoring doesn't make much sense.\n> >\n> > Maybe it is OK later to combine patches 0003 and 0004. Alternatively,\n> > keep this patch separated but give greater emphasis in the comment\n> > header to say this patch only exists separately in order to help the\n> > review.\n>\n> I have kept this patch only to show that this patch as such has no\n> code changes. 
If we move this to the next patch it will be difficult\n> for reviewers to know which is new code and which is old code. During\n> commit we can merge this with the next one. I felt it is better to add\n> it in the commit message instead of comment header so updated the\n> commit message.\n>\n\nYes, I wrote \"comment header\" but it was a typo; I meant \"commit\nheader\". What you did looks good now. Thanks.\n\n> > ~\n> >\n> > 3.4. function names\n> >\n> > With the re-shuffling that this patch does, and changing several from\n> > static to not-static, should the function names remain as they are?\n> > They look random to me.\n> > - finish_sync_worker(void)\n> > - invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)\n> > - FetchTableStates(bool *started_tx)\n> > - process_syncing_tables(XLogRecPtr current_lsn)\n> >\n> > I think using a consistent naming convention would be better. e.g.\n> > SyncFinishWorker\n> > SyncInvalidateTableStates\n> > SyncFetchTableStates\n> > SyncProcessTables\n>\n> One advantage with keeping the existing names the same wherever\n> possible will help while merging the changes to back-branches. So I'm\n> not making this change.\n>\n\nAccording to my understanding, the logical replication code tries to\nmaintain name conventions for static functions (snake_case) and for\nnon-static functions (CamelCase) as an aid for code readability. I\nthink we should either do our best to abide by those conventions, or\nwe might as well just forget them and have a naming free-for-all.\nSince the new syncutils.c module is being introduced by this patch, my\nguess is that any future merging to back-branches will be affected\nregardless. IMO this is an ideal opportunity to try to nudge the\nfunction names in the right direction. YMMV.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 14 Aug 2024 10:34:01 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh, Here are my review comments for the latest patchset:\n\nPatch v20240813-0001. No comments\nPatch v20240813-0002. No comments\nPatch v20240813-0003. No comments\nPatch v20240813-0004. See below\nPatch v20240813-0005. No comments\n\n//////\n\nPatch v20240813-0004\n\n======\nsrc/backend/catalog/pg_subscription.\n\nGetSubscriptionRelations:\nnit - modify a condition for readability\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nfetch_sequence_list:\nnit - changed the WARNING message. /parameters differ\nbetween.../parameters differ for.../ (FYI, Chat-GPT agrees that 2nd\nway is more correct)\nnit - other minor changes to the message and hint\n\n======\n.../replication/logical/sequencesync.c\n\n1. LogicalRepSyncSequences\n\n+ ereport(DEBUG1,\n+ errmsg(\"logical replication synchronization for subscription \\\"%s\\\",\nsequence \\\"%s\\\" has finished\",\n+ get_subscription_name(subid, false), get_rel_name(done_seq->relid)));\n\nDEBUG logs should use errmsg_internal. 
(fixed also nitpicks attachment).\n\n~\n\nnit - minor change to the log message counting the batched sequences\n\n~~~\n\nprocess_syncing_sequences_for_apply:\nnit - /sequence sync worker/seqeuencesync worker/\n\n======\nsrc/backend/utils/misc/guc_tables.c\n\nnit - /max workers/maximum number of workers/ (for consistency because\nall other GUCs are verbose like this; nothing just says \"max\".)\n\n======\nsrc/test/subscription/t/034_sequences.pl\n\nnit - adjust the expected WARNING message (which was modified above)\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 14 Aug 2024 13:09:07 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 14 Aug 2024 at 08:39, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh, Here are my review comments for the latest patchset:\n>\n> Patch v20240813-0001. No comments\n> Patch v20240813-0002. No comments\n> Patch v20240813-0003. No comments\n> Patch v20240813-0004. See below\n> Patch v20240813-0005. No comments\n>\n> //////\n>\n> Patch v20240813-0004\n>\n\nThe comments have been addressed, and the patch also resolves a\nlimitation where sequence parameter changes between creating or\naltering a subscription and sequence synchronization worker syncing\nwere not detected and reported to the user. This issue is now handled\nby retrieving both the sequence value and its properties in a single\nSELECT statement. The corresponding documentation also was updated.\n\nThe attached v20240814 version patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 14 Aug 2024 18:36:11 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh, I have reviewed your latest patchset:\n\nv20240814-0001. No comments\nv20240814-0002. No comments\nv20240814-0003. No comments\nv20240814-0004. See below\nv20240814-0005. No comments\n\n//////\n\nv20240814-0004.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nCreateSubscription:\nnit - XXX comments\n\nAlterSubscription_refresh:\nnit - unnecessary parens in ereport\n\nAlterSubscription:\nnit - unnecessary parens in ereport\n\nfetch_sequence_list:\nnit - unnecessary parens in ereport\n\n======\n.../replication/logical/sequencesync.c\n\n1. fetch_remote_sequence_data\n\n+ * Returns:\n+ * - TRUE if there are discrepancies between the sequence parameters in\n+ * the publisher and subscriber.\n+ * - FALSE if the parameters match.\n+ */\n+static bool\n+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,\n+ char *nspname, char *relname, int64 *log_cnt,\n+ bool *is_called, XLogRecPtr *page_lsn,\n+ int64 *last_value)\n\nIMO it is more natural to return TRUE for good results and FALSE for\nbad ones. 
(FYI, I have implemented this reversal in the nitpicks\nattachment).\n\n~\n\nnit - swapped columns seqmin and seqmax in the SQL to fetch them in\nthe natural order\nnit - unnecessary parens in ereport\n\n~~~\n\ncopy_sequence:\nnit - update function comment to document the output parameter\nnit - Assert that *sequence_mismatch is false on entry to this function\nnit - tweak wrapping and add \\n in the SQL\nnit - unnecessary parens in ereport\n\nreport_sequence_mismatch:\nnit - modify function comment\nnit - function name changed\n/report_sequence_mismatch/report_mismatched_sequences/ (now plural\n(and more like the other one)\n\nappend_mismatched_sequences:\nnit - param name /rel/seqrel/\n\n~~~\n\n2. LogicalRepSyncSequences:\n+ Relation sequence_rel;\n+ XLogRecPtr sequence_lsn;\n+ bool sequence_mismatch;\n\nThe 'sequence_mismatch' variable must be initialized false, otherwise\nwe cannot trust it gets assigned.\n\n~\n\nLogicalRepSyncSequences:\nnit - unnecessary parens in ereport\nnit - move the for-loop variable declaration\nnit - remove a blank line\n\nprocess_syncing_sequences_for_apply:\nnit - variable declaration indent\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 15 Aug 2024 16:27:01 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 15 Aug 2024 at 11:57, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh, I have reviewed your latest patchset:\n>\n> v20240814-0001. No comments\n> v20240814-0002. No comments\n> v20240814-0003. No comments\n> v20240814-0004. See below\n> v20240814-0005. No comments\n>\n> //////\n>\n> v20240814-0004.\n\nThese comments are addressed in the v20240815 version patch attached.\n\nRegards,\nVignesh", "msg_date": "Thu, 15 Aug 2024 13:08:53 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh. I looked at the latest v20240815* patch set.\n\nI have only the following few comments for patch v20240815-0004, below.\n\n======\nCommit message.\n\nPlease see the attachment for some suggested updates.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nCreateSubscription:\nnit - fix wording in one of the XXX comments\n\n======\n.../replication/logical/sequencesync.c\n\nreport_mismatched_sequences:\nnit - param name /warning_sequences/mismatched_seqs/\n\nappend_mismatched_sequences:\nnit - param name /warning_sequences/mismatched_seqs/\n\nLogicalRepSyncSequences:\nnit - var name /warning_sequences/mismatched_seqs/\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 16 Aug 2024 14:55:56 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 16 Aug 2024 at 10:26, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh. I looked at the latest v20240815* patch set.\n>\n> I have only the following few comments for patch v20240815-0004, below.\n\nThanks, these are handled in the v20240816 version patch attached.\n\nRegards,\nVignesh", "msg_date": "Fri, 16 Aug 2024 11:08:28 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, 16 Aug 2024 at 11:08, vignesh C <[email protected]> wrote:\n>\n> On Fri, 16 Aug 2024 at 10:26, Peter Smith <[email protected]> wrote:\n> >\n> > Hi Vignesh. 
I looked at the latest v20240815* patch set.\n> >\n> > I have only the following few comments for patch v20240815-0004, below.\n>\n> Thanks, these are handled in the v20240816 version patch attached.\n\nCFBot reported one warning with the patch, here is an updated patch\nfor the same.\n\nRegards,\nVignesh", "msg_date": "Sat, 17 Aug 2024 20:40:51 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Here are my review comments for the latest patchset\n\nv20240817-0001. No changes. No comments.\nv20240817-0002. No changes. No comments.\nv20240817-0003. See below.\nv20240817-0004. See below.\nv20240817-0005. No changes. No comments.\n\n//////\n\nv20240817-0003 and 0004.\n\n(This is a repeat of the same comment as in previous reviews, but lots\nmore functions seem affected now)\n\nIIUC, the LR code tries to follow function naming conventions (e.g.\nCamelCase/snake_case for exposed/static functions respectively),\nintended to make the code more readable. But, this only works if the\nconventions are followed.\n\nNow, patches 0003 and 0004 are shuffling more and more functions\nbetween modules while changing them from static to non-static (or vice\nversa). So, the function name conventions are being violated many\ntimes. IMO these functions ought to be renamed according to their new\nmodifiers to avoid the confusion caused by ignoring the name\nconventions.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 19 Aug 2024 12:16:53 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Mon, 19 Aug 2024 at 07:47, Peter Smith <[email protected]> wrote:\n>\n> Here are my review comments for the latest patchset\n>\n> v20240817-0001. No changes. No comments.\n> v20240817-0002. No changes. No comments.\n> v20240817-0003. See below.\n> v20240817-0004. See below.\n> v20240817-0005. No changes. No comments.\n>\n> //////\n>\n> v20240817-0003 and 0004.\n>\n> (This is a repeat of the same comment as in previous reviews, but lots\n> more functions seem affected now)\n>\n> IIUC, the LR code tries to follow function naming conventions (e.g.\n> CamelCase/snake_case for exposed/static functions respectively),\n> intended to make the code more readable. But, this only works if the\n> conventions are followed.\n>\n> Now, patches 0003 and 0004 are shuffling more and more functions\n> between modules while changing them from static to non-static (or vice\n> versa). So, the function name conventions are being violated many\n> times. IMO these functions ought to be renamed according to their new\n> modifiers to avoid the confusion caused by ignoring the name\n> conventions.\n\nI have handled these in the v20240819 version patch attached.\n\nRegards,\nVignesh", "msg_date": "Mon, 19 Aug 2024 18:38:17 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh, Here are my review comments for the latest patchset\n\nv20240819-0001. No changes. No comments.\nv20240819-0002. No changes. No comments.\nv20240819-0003. See below.\nv20240819-0004. See below.\nv20240819-0005. No changes. 
No comments.\n\n///////////////////////\n\nPATCH v20240819-0003\n\n======\nsrc/backend/replication/logical/syncutils.c\n\n3.1.\n+typedef enum\n+{\n+ SYNC_RELATION_STATE_NEEDS_REBUILD,\n+ SYNC_RELATION_STATE_REBUILD_STARTED,\n+ SYNC_RELATION_STATE_VALID,\n+} SyncingRelationsState;\n+\n+static SyncingRelationsState relation_states_validity =\nSYNC_RELATION_STATE_NEEDS_REBUILD;\n\nThere is some muddle of singular/plural names here. The\ntypedef/values/var should all match:\n\ne.g. It could be like:\nSYNC_RELATION_STATE_xxx --> SYNC_RELATION_STATES_xxx\nSyncingRelationsState --> SyncRelationStates\n\nBut, a more radical change might be better.\n\ntypedef enum\n{\nRELATION_STATES_SYNC_NEEDED,\nRELATION_STATES_SYNC_STARTED,\nRELATION_STATES_SYNCED,\n} SyncRelationStates;\n\n~~~\n\n3.2. GENERAL refactoring\n\nI don't think all of the functions moved into syncutil.c truly belong there.\n\nThis new module was introduced to be for common/util functions for\ntablesync and sequencesync, but with each patchset, it has been\nsucking in more and more functions that maybe do not quite belong\nhere.\n\nFor example, AFAIK these below have logic that is *solely* for TABLES\n(not for SEQUENCES). Perhaps it was convenient to dump them here\nbecause they are statically called, but I felt they still logically\nbelong in tablesync.c:\n- process_syncing_tables_for_sync(XLogRecPtr current_lsn)\n- process_syncing_tables_for_apply(XLogRecPtr current_lsn)\n- AllTablesyncsReady(void)\n\n~~~\n\n3.3.\n+static bool\n+FetchRelationStates(bool *started_tx)\n+{\n\nIf this function can remain static then the name should change to be\nlike fetch_table_states, right?\n\n======\nsrc/include/replication/worker_internal.h\n\n3.4.\n+extern bool wait_for_relation_state_change(Oid relid, char expected_state);\n\nIf this previously static function will be exposed now (it may not\nneed to be if some other functions are returned tablesync.c) then the\nfunction name should also be changed, right?\n\n////////////////////////\n\nPATCH v20240819-0004\n\n======\nsrc/backend/replication/logical/syncutils.c\n\n4.1 GENERAL refactoring\n\n(this is similar to review comment #3.2 above)\n\nFunctions like below have logic that is *solely* for SEQUENCES (not\nfor TABLES). I felt they logically belong in sequencesync.c, not here.\n- process_syncing_sequences_for_apply(void)\n\n~~~\n\nFetchRelationStates:\nnit - the comment change about \"not-READY tables\" (instead of\nrelations) should be already in patch 0003.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 20 Aug 2024 11:56:52 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Tue, 20 Aug 2024 at 07:27, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh, Here are my review comments for the latest patchset\n>\n> v20240819-0001. No changes. No comments.\n> v20240819-0002. No changes. No comments.\n> v20240819-0003. See below.\n> v20240819-0004. See below.\n> v20240819-0005. No changes. 
No comments.\n>\n\nThese comments are handled in the v20240820 version patch attached.\n\nRegards,\nVignesh", "msg_date": "Tue, 20 Aug 2024 13:14:00 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "Hi Vignesh, Here are my only review comments for the latest patch set.\n\nv20240820-0003.\n\nnit - missing period for comment in FetchRelationStates\nnit - typo in function name 'ProcessSyncingTablesFoSync'\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 21 Aug 2024 13:02:55 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 21 Aug 2024 at 08:33, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh, Here are my only review comments for the latest patch set.\n\nThanks, these issues have been addressed in the updated version.\nAdditionally, I have fixed the pgindent problems that were reported\nand included another advantage of this design in the file header of\nthe sequencesync file.\n\nRegards,\nVignesh", "msg_date": "Wed, 21 Aug 2024 11:54:28 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Wed, 21 Aug 2024 at 11:54, vignesh C <[email protected]> wrote:\n>\n> On Wed, 21 Aug 2024 at 08:33, Peter Smith <[email protected]> wrote:\n> >\n> > Hi Vignesh, Here are my only review comments for the latest patch set.\n>\n> Thanks, these issues have been addressed in the updated version.\n> Additionally, I have fixed the pgindent problems that were reported\n> and included another advantage of this design in the file header of\n> the sequencesync file.\n\nThe patch was not applied on top of head, here is a rebased version of\nthe patches.\nI have also removed an invalidation which was not required for\nsequences and a typo.\n\nRegards,\nVignesh", "msg_date": "Fri, 20 Sep 2024 09:36:44 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Fri, Sep 20, 2024 at 9:36 AM vignesh C <[email protected]> wrote:\n>\n> On Wed, 21 Aug 2024 at 11:54, vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 21 Aug 2024 at 08:33, Peter Smith <[email protected]> wrote:\n> > >\n> > > Hi Vignesh, Here are my only review comments for the latest patch set.\n> >\n> > Thanks, these issues have been addressed in the updated version.\n> > Additionally, I have fixed the pgindent problems that were reported\n> > and included another advantage of this design in the file header of\n> > the sequencesync file.\n>\n> The patch was not applied on top of head, here is a rebased version of\n> the patches.\n> I have also removed an invalidation which was not required for\n> sequences and a typo.\n>\n\nThank You for the patches. I would like to understand srsublsn and\npage_lsn more. 
Please see the scenario below:\n\nI have a sequence:\nCREATE SEQUENCE myseq0 INCREMENT 5 START 100;\n\nAfter refresh on sub:\npostgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\nALTER SUBSCRIPTION\n\npostgres=# select * from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn\n---------+---------+------------+-----------\n 16385 | 16384 | r | 0/152F380 -->pub's page_lsn\n\n\npostgres=# select * from pg_sequence_state('myseq0');\n page_lsn | last_value | log_cnt | is_called\n-----------+------------+---------+-----------\n 0/152D830 | 105 | 31 | t -->(I am assuming 0/152D830 is\nlocal page_lsn corresponding to value-=105)\n\nNow I assume that *only* after doing next_wal for 31 times, page_lsn\nshall change. But I observe strange behaviour\n\nAfter running nextval on sub for 7 times:\npostgres=# select * from pg_sequence_state('myseq0');\n page_lsn | last_value | log_cnt | is_called\n-----------+------------+---------+-----------\n 0/152D830 | 140 | 24 | t -->correct\n\nAfter running nextval on sub for 15 more times:\npostgres=# select * from pg_sequence_state('myseq0');\n page_lsn | last_value | log_cnt | is_called\n-----------+------------+---------+-----------\n 0/152D830 | 215 | 9 | t -->correct\n(1 row)\n\nNow after running it 6 more times:\npostgres=# select * from pg_sequence_state('myseq0');\n page_lsn | last_value | log_cnt | is_called\n-----------+------------+---------+-----------\n 0/152D990 | 245 | 28 | t --> how??\n\nlast_value increased in the expected way (6*5), but page_lsn changed\nand log_cnt changed before we could complete the remaining runs as\nwell. Not sure why??\n\nNow if I do refresh again:\n\npostgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\nALTER SUBSCRIPTION\n\npostgres=# select * from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn\n---------+---------+------------+-----------\n 16385 | 16384 | r | 0/152F380-->pub's page_lsn, same as old one.\n\npostgres=# select * from pg_sequence_state('myseq0');\n page_lsn | last_value | log_cnt | is_called\n-----------+------------+---------+-----------\n 0/152DDB8 | 105 | 31 | t\n(1 row)\n\nNow, what is this page_lsn = 0/152DDB8? Should it be the one\ncorresponding to last_value=105 and thus shouldn't it match the\nprevious value of 0/152D830?\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 26 Sep 2024 11:07:25 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" }, { "msg_contents": "On Thu, 26 Sept 2024 at 11:07, shveta malik <[email protected]> wrote:\n>\n> On Fri, Sep 20, 2024 at 9:36 AM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 21 Aug 2024 at 11:54, vignesh C <[email protected]> wrote:\n> > >\n> > > On Wed, 21 Aug 2024 at 08:33, Peter Smith <[email protected]> wrote:\n> > > >\n> > > > Hi Vignesh, Here are my only review comments for the latest patch set.\n> > >\n> > > Thanks, these issues have been addressed in the updated version.\n> > > Additionally, I have fixed the pgindent problems that were reported\n> > > and included another advantage of this design in the file header of\n> > > the sequencesync file.\n> >\n> > The patch was not applied on top of head, here is a rebased version of\n> > the patches.\n> > I have also removed an invalidation which was not required for\n> > sequences and a typo.\n> >\n>\n> Thank You for the patches. I would like to understand srsublsn and\n> page_lsn more. 
Please see the scenario below:\n>\n> I have a sequence:\n> CREATE SEQUENCE myseq0 INCREMENT 5 START 100;\n>\n> After refresh on sub:\n> postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\n> ALTER SUBSCRIPTION\n>\n> postgres=# select * from pg_subscription_rel;\n> srsubid | srrelid | srsubstate | srsublsn\n> ---------+---------+------------+-----------\n> 16385 | 16384 | r | 0/152F380 -->pub's page_lsn\n>\n>\n> postgres=# select * from pg_sequence_state('myseq0');\n> page_lsn | last_value | log_cnt | is_called\n> -----------+------------+---------+-----------\n> 0/152D830 | 105 | 31 | t -->(I am assuming 0/152D830 is\n> local page_lsn corresponding to value-=105)\n>\n> Now I assume that *only* after doing next_wal for 31 times, page_lsn\n> shall change. But I observe strange behaviour\n>\n> After running nextval on sub for 7 times:\n> postgres=# select * from pg_sequence_state('myseq0');\n> page_lsn | last_value | log_cnt | is_called\n> -----------+------------+---------+-----------\n> 0/152D830 | 140 | 24 | t -->correct\n>\n> After running nextval on sub for 15 more times:\n> postgres=# select * from pg_sequence_state('myseq0');\n> page_lsn | last_value | log_cnt | is_called\n> -----------+------------+---------+-----------\n> 0/152D830 | 215 | 9 | t -->correct\n> (1 row)\n>\n> Now after running it 6 more times:\n> postgres=# select * from pg_sequence_state('myseq0');\n> page_lsn | last_value | log_cnt | is_called\n> -----------+------------+---------+-----------\n> 0/152D990 | 245 | 28 | t --> how??\n>\n> last_value increased in the expected way (6*5), but page_lsn changed\n> and log_cnt changed before we could complete the remaining runs as\n> well. Not sure why??\n\nThis can occur if a checkpoint happened at that time. The regression\ntest also has specific handling for this, as noted in a comment within\nthe sequence.sql test file:\n-- log_cnt can be higher if there is a checkpoint just at the right\n-- time\n\n> Now if I do refresh again:\n>\n> postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;\n> ALTER SUBSCRIPTION\n>\n> postgres=# select * from pg_subscription_rel;\n> srsubid | srrelid | srsubstate | srsublsn\n> ---------+---------+------------+-----------\n> 16385 | 16384 | r | 0/152F380-->pub's page_lsn, same as old one.\n>\n> postgres=# select * from pg_sequence_state('myseq0');\n> page_lsn | last_value | log_cnt | is_called\n> -----------+------------+---------+-----------\n> 0/152DDB8 | 105 | 31 | t\n> (1 row)\n>\n> Now, what is this page_lsn = 0/152DDB8? Should it be the one\n> corresponding to last_value=105 and thus shouldn't it match the\n> previous value of 0/152D830?\n\nAfter executing REFRESH PUBLICATION SEQUENCES, the publication value\nwill be resynchronized, and a new LSN will be generated and updated\nfor the publisher sequence (using the old value). Therefore, this is\nnot a concern.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 29 Sep 2024 12:34:44 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical Replication of sequences" } ]
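A minimal sketch of the checkpoint effect discussed in the exchange above, assuming a build patched with this feature (pg_sequence_state() comes from the patchset); the sequence name and values here are arbitrary illustrations, not taken from the thread:

CREATE SEQUENCE demo_seq INCREMENT 5 START 100;
SELECT nextval('demo_seq');
-- nextval() WAL-logs a batch of values ahead (SEQ_LOG_VALS = 32), so
-- page_lsn normally stays put while log_cnt counts down on later calls.
SELECT * FROM pg_sequence_state('demo_seq');
CHECKPOINT;
-- The first nextval() after a checkpoint is forced to emit a fresh WAL
-- record, so page_lsn advances and log_cnt resets even though fewer than
-- 32 values were consumed -- the "how??" case in the output quoted above.
SELECT nextval('demo_seq');
SELECT * FROM pg_sequence_state('demo_seq');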
[ { "msg_contents": "Hackers,\n\nThe behavior of the .* jpiAnyKey jsonpath selector seems incorrect.\n\n```\nselect jsonb_path_query('[1,2,3]', '$.*');\njsonb_path_query \n------------------\n(0 rows)\n\nselect jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', '$.*');\njsonb_path_query \n------------------\n[3, 4, 5]\n```\n\nThe first example might be expected, since .* is intended for object keys, but the handing of `jpiAnyKey` has a branch for unwrapping arrays. The second example, however, just seems weird: this is .*, not .**.\n\nThe attached patch fixes it by passing the next node to `executeAnyItem()` (via `executeItemUnwrapTargetArray()`) and then properly setting `jperOk` when `executeAnyItem()` finds values and there is no current (next) node.\n\nI took this approach given what appears to be the intended behavior or $* on arrays in lax mode. However, I could see an argument that .* should not apply to arrays at all. If so, I can submit a new patch removing the branch that unwraps an array with .*.\n\nBest,\n\nDavid", "msg_date": "Tue, 4 Jun 2024 12:07:00 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Tuesday, June 4, 2024, David E. Wheeler <[email protected]> wrote:\n\n> Hackers,\n>\n> The behavior of the .* jpiAnyKey jsonpath selector seems incorrect.\n>\n> ```\n> select jsonb_path_query('[1,2,3]', '$.*');\n> jsonb_path_query\n> ------------------\n> (0 rows)\n>\n> select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', '$.*');\n> jsonb_path_query\n> ------------------\n> [3, 4, 5]\n> ```\n>\n> The first example might be expected, since .* is intended for object keys,\n> but the handing of `jpiAnyKey` has a branch for unwrapping arrays. The\n> second example, however, just seems weird: this is .*, not .**.\n>\n\nThis seems to be working correctly. Lax mode causes the first array level\nto unwrap and produce new context item values. Then the wildcard member\naccessor is applied to each. Numbers don’t have members so no matches\nexist in the first example. The object in the second indeed has a single\nmember and so matches the wildcard and its value, the array, is returned.\n\nDavid J.\n\nOn Tuesday, June 4, 2024, David E. Wheeler <[email protected]> wrote:Hackers,\n\nThe behavior of the .* jpiAnyKey jsonpath selector seems incorrect.\n\n```\nselect jsonb_path_query('[1,2,3]', '$.*');\njsonb_path_query \n------------------\n(0 rows)\n\nselect jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', '$.*');\njsonb_path_query \n------------------\n[3, 4, 5]\n```\n\nThe first example might be expected, since .* is intended for object keys, but the handing of `jpiAnyKey` has a branch for unwrapping arrays. The second example, however, just seems weird: this is .*, not .**.\nThis seems to be working correctly. Lax mode causes the first array level to unwrap and produce new context item values.  Then the wildcard member accessor is applied to each.  Numbers don’t have members so no matches exist in the first example.  The object in the second indeed has a single member and so matches the wildcard and its value, the array, is returned.David J.", "msg_date": "Tue, 4 Jun 2024 09:28:18 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Jun 4, 2024, at 12:28 PM, David G. Johnston <[email protected]> wrote:\n\n> This seems to be working correctly. 
Lax mode causes the first array level to unwrap and produce new context item values. Then the wildcard member accessor is applied to each. Numbers don’t have members so no matches exist in the first example. The object in the second indeed has a single member and so matches the wildcard and its value, the array, is returned.\n\nOh FFS, unwrapping still breaks my brain. You’re right, of course. Here’s a new patch that demonstrates that behavior, since that code path is not currently represented in tests AFAICT (I would have expected to have broken it with this patch).\n\nD", "msg_date": "Tue, 4 Jun 2024 20:45:03 -0400", "msg_from": "David E. Wheeler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Jun 4, 2024, at 20:45, David E. Wheeler <[email protected]> wrote:\n\n> Here’s a new patch that demonstrates that behavior, since that code path is not currently represented in tests AFAICT (I would have expected to have broken it with this patch).\n\nCommitfest link:\n\n https://commitfest.postgresql.org/48/5017/\n\nD\n\n\n\n", "msg_date": "Wed, 5 Jun 2024 10:20:47 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Jun 4, 2024, at 20:45, David E. Wheeler <[email protected]> wrote:\n\n> Oh FFS, unwrapping still breaks my brain. You’re right, of course. Here’s a new patch that demonstrates that behavior, since that code path is not currently represented in tests AFAICT (I would have expected to have broken it with this patch).\n\nRebased and moved the new tests to the end of the file.\n\nD", "msg_date": "Fri, 7 Jun 2024 10:23:00 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Jun 7, 2024, at 10:23, David E. Wheeler <[email protected]> wrote:\n\n> Rebased and moved the new tests to the end of the file.\n\nBah, sorry, that was the previous patch. Here’s v3.\n\nD", "msg_date": "Fri, 7 Jun 2024 10:23:43 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": ">Вторник, 25 июня 2024, 11:17 +07:00 от David E. Wheeler <[email protected]>:\n> \n>On Jun 7, 2024, at 10:23, David E. Wheeler < [email protected] > wrote:\n> \n>> Rebased and moved the new tests to the end of the file.\n>Bah, sorry, that was the previous patch. Here’s v3.\n>\n>D\n>  \n \n \nHi! Looks good to me, but I have several comments.\nYour patch improves tests, but why did you change formatting in jsonpath_exec.c? What's the motivation?\n \n[1] select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'strict $.*');\nI propose adding a similar test with explicitly specified lax mode: select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'lax $.*'); to show what lax mode is set by default.\n \nOdd empty result for the test: select jsonb '[1,2,3,{\"b\": [3,4,5]}]' @? 'strict $.*';\nI expected an error like in test [1]. This behavior is not obvious to me.\n \nEverything else is cool. Thanks to the patch and the discussion above, I began to understand better how wildcards in JSON work.\nBest regards, Stepan Neretin.\n \n Вторник, 25 июня 2024, 11:17 +07:00 от David E. Wheeler <[email protected]>: On Jun 7, 2024, at 10:23, David E. 
Wheeler <[email protected]> wrote: > Rebased and moved the new tests to the end of the file.Bah, sorry, that was the previous patch. Here’s v3.D   Hi! Looks good to me, but I have several comments.Your patch improves tests, but why did you change formatting in jsonpath_exec.c? What's the motivation? [1] select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'strict $.*');I propose adding a similar test with explicitly specified lax mode: select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'lax $.*'); to show what lax mode is set by default. Odd empty result for the test: select jsonb '[1,2,3,{\"b\": [3,4,5]}]' @? 'strict $.*';I expected an error like in test [1]. This behavior is not obvious to me. Everything else is cool. Thanks to the patch and the discussion above, I began to understand better how wildcards in JSON work.Best regards, Stepan Neretin.", "msg_date": "Tue, 25 Jun 2024 07:46:25 +0300", "msg_from": "=?UTF-8?B?0KHRgtC10L/QsNC9INCd0LXRgNC10YLQuNC9?= <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IFBhdGNoIGJ1ZzogRml4IGpzb25wYXRoIC4qIG9uIEFycmF5cw==?=" }, { "msg_contents": "On Jun 25, 2024, at 12:46 AM, Степан Неретин <[email protected]> wrote:\n\n> Hi! Looks good to me, but I have several comments.\n\nThanks for your review!\n\n> Your patch improves tests, but why did you change formatting in jsonpath_exec.c? What's the motivation?\n\nIt’s not just formatting. From the commit message:\n\n> While at it, teach `executeAnyItem()` to return `jperOk` when `found`\n> exist, not because it will be used (the result and `found` are inspected\n> by different functions), but because it seems like the proper thing to\n> return from `executeAnyItem()` when considered in isolation.\n\n\nI have since realized it’s not a complete fix for the issue, and hacked around it in my Go version. Would be fine to remove that bit, but IIRC this was the only execution function that would return `jperNotFound` when it in fact adds items to the `found` list. The current implementation only looks at one or the other, so it’s not super important, but I found the inconsistency annoying and sometimes confusing.\n\n> [1] select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'strict $.*');\n> I propose adding a similar test with explicitly specified lax mode: select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'lax $.*'); to show what lax mode is set by default.\n\nVery few of the other tests do so; I can add it if it’s important for this case, though.\n\n> Odd empty result for the test: select jsonb '[1,2,3,{\"b\": [3,4,5]}]' @? 'strict $.*';\n> I expected an error like in test [1]. This behavior is not obvious to me.\n\n@? suppresses a number of errors. Perhaps I should add a variant of the error-raising query that passes the silent arg, which would also suppress the error.\n\n> Everything else is cool. Thanks to the patch and the discussion above, I began to understand better how wildcards in JSON work.\n\nYeah, it’s kind of wild TBH.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 13:48:45 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Jun 25, 2024, at 13:48, David E. Wheeler <[email protected]> wrote:\n\n> I have since realized it’s not a complete fix for the issue, and hacked around it in my Go version. 
Would be fine to remove that bit, but IIRC this was the only execution function that would return `jperNotFound` when it in fact adds items to the `found` list. The current implementation only looks at one or the other, so it’s not super important, but I found the inconsistency annoying and sometimes confusing.\n\nI’ve removed this change.\n\n>> [1] select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'strict $.*');\n>> I propose adding a similar test with explicitly specified lax mode: select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'lax $.*'); to show what lax mode is set by default.\n> \n> Very few of the other tests do so; I can add it if it’s important for this case, though.\n\nWent ahead and added lax.\n\n> @? suppresses a number of errors. Perhaps I should add a variant of the error-raising query that passes the silent arg, which would also suppress the error.\n\nAdded a variant where the silent param suppresses the error, too.\n\nV2 attached and the PR updated:\n\n https://github.com/theory/postgres/pull/4/files\n\nBest,\n\nDavid", "msg_date": "Wed, 26 Jun 2024 14:16:05 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Thu, Jun 27, 2024 at 1:16 AM David E. Wheeler <[email protected]>\nwrote:\n\n> On Jun 25, 2024, at 13:48, David E. Wheeler <[email protected]> wrote:\n>\n> > I have since realized it’s not a complete fix for the issue, and hacked\n> around it in my Go version. Would be fine to remove that bit, but IIRC this\n> was the only execution function that would return `jperNotFound` when it in\n> fact adds items to the `found` list. The current implementation only looks\n> at one or the other, so it’s not super important, but I found the\n> inconsistency annoying and sometimes confusing.\n>\n> I’ve removed this change.\n>\n> >> [1] select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'strict $.*');\n> >> I propose adding a similar test with explicitly specified lax mode:\n> select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'lax $.*'); to show what\n> lax mode is set by default.\n> >\n> > Very few of the other tests do so; I can add it if it’s important for\n> this case, though.\n>\n> Went ahead and added lax.\n>\n> > @? suppresses a number of errors. Perhaps I should add a variant of the\n> error-raising query that passes the silent arg, which would also suppress\n> the error.\n>\n> Added a variant where the silent param suppresses the error, too.\n>\n> V2 attached and the PR updated:\n>\n> https://github.com/theory/postgres/pull/4/files\n>\n> Best,\n>\n> David\n>\n>\n>\n>\nHI! Now it looks good for me.\nBest regards, Stepan Neretin.\n\nOn Thu, Jun 27, 2024 at 1:16 AM David E. Wheeler <[email protected]> wrote:On Jun 25, 2024, at 13:48, David E. Wheeler <[email protected]> wrote:\n\n> I have since realized it’s not a complete fix for the issue, and hacked around it in my Go version. Would be fine to remove that bit, but IIRC this was the only execution function that would return `jperNotFound` when it in fact adds items to the `found` list. 
The current implementation only looks at one or the other, so it’s not super important, but I found the inconsistency annoying and sometimes confusing.\n\nI’ve removed this change.\n\n>> [1] select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'strict $.*');\n>> I propose adding a similar test with explicitly specified lax mode: select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'lax $.*'); to show what lax mode is set by default.\n> \n> Very few of the other tests do so; I can add it if it’s important for this case, though.\n\nWent ahead and added lax.\n\n> @? suppresses a number of errors. Perhaps I should add a variant of the error-raising query that passes the silent arg, which would also suppress the error.\n\nAdded a variant where the silent param suppresses the error, too.\n\nV2 attached and the PR updated:\n\n  https://github.com/theory/postgres/pull/4/files\n\nBest,\n\nDavid\n\n\nHI! Now it looks good for me.Best regards, Stepan Neretin.", "msg_date": "Thu, 27 Jun 2024 11:53:14 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Thu, Jun 27, 2024 at 11:53:14AM +0700, Stepan Neretin wrote:\n> HI! Now it looks good for me.\n\nThe tests of jsonb_jsonpath.sql include a lot of patterns for @? and\njsonb_path_query with the lax and strict modes, so shouldn't these\nadditions be grouped closer to the existing tests rather than added at \nthe end of the file?\n--\nMichael", "msg_date": "Thu, 27 Jun 2024 17:17:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Jun 27, 2024, at 04:17, Michael Paquier <[email protected]> wrote:\n\n> The tests of jsonb_jsonpath.sql include a lot of patterns for @? and\n> jsonb_path_query with the lax and strict modes, so shouldn't these\n> additions be grouped closer to the existing tests rather than added at \n> the end of the file?\n\nI think you could argue that they should go with other tests for array unwrapping, though it’s kind of mixed throughout. But that’s more the bit I was testing; almost all the tests are lax, and it’s less the strict behavior to test here than the explicit behavior of unwrapping in lax mode.\n\nBut ultimately I don’t care where they go, just that we have them.\n\nD\n\n\n\n", "msg_date": "Thu, 27 Jun 2024 11:05:31 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Jun 27, 2024, at 04:17, Michael Paquier <[email protected]> wrote:\n\n> The tests of jsonb_jsonpath.sql include a lot of patterns for @? and\n> jsonb_path_query with the lax and strict modes, so shouldn't these\n> additions be grouped closer to the existing tests rather than added at \n> the end of the file?\n\nI’ve moved them closer to other tests for unwrapping behavior in the attached updated and rebased v3 patch.\n\nBest,\n\nDavid", "msg_date": "Mon, 8 Jul 2024 12:09:15 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Mon, Jul 8, 2024 at 11:09 PM David E. Wheeler <[email protected]>\nwrote:\n\n> On Jun 27, 2024, at 04:17, Michael Paquier <[email protected]> wrote:\n>\n> > The tests of jsonb_jsonpath.sql include a lot of patterns for @? 
and\n> > jsonb_path_query with the lax and strict modes, so shouldn't these\n> > additions be grouped closer to the existing tests rather than added at\n> > the end of the file?\n>\n> I’ve moved them closer to other tests for unwrapping behavior in the\n> attached updated and rebased v3 patch.\n>\n> Best,\n>\n> David\n>\n>\n>\n\nHi! Looks good to me now! Please, register a patch in CF.\nBest regards, Stepan Neretin.\n\nOn Mon, Jul 8, 2024 at 11:09 PM David E. Wheeler <[email protected]> wrote:On Jun 27, 2024, at 04:17, Michael Paquier <[email protected]> wrote:\n\n> The tests of jsonb_jsonpath.sql include a lot of patterns for @? and\n> jsonb_path_query with the lax and strict modes, so shouldn't these\n> additions be grouped closer to the existing tests rather than added at \n> the end of the file?\n\nI’ve moved them closer to other tests for unwrapping behavior in the attached updated and rebased v3 patch.\n\nBest,\n\nDavid\n\nHi! Looks good to me now! Please, register a patch in CF.Best regards, Stepan Neretin.", "msg_date": "Mon, 15 Jul 2024 18:07:48 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Jul 15, 2024, at 07:07, Stepan Neretin <[email protected]> wrote:\n\n> Hi! Looks good to me now! Please, register a patch in CF.\n> Best regards, Stepan Neretin.\n\nIt’s here:\n\n https://commitfest.postgresql.org/48/5017/\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 15 Jul 2024 10:29:32 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Mon, Jul 15, 2024 at 10:29:32AM -0400, David E. Wheeler wrote:\n> It’s here:\n> \n> https://commitfest.postgresql.org/48/5017/\n\nSorry for the delay. Finally came back to it, and applied the tests.\nThanks!\n\nIt was fun to see that HEAD was silenced with the first patch of this\nthread that tweaked the behavior with arrays.\n\nRegarding the comments, I have left them out for now. That may be a\ngood start, but it also feels like we should do a much better job\noverall with the area of jsonpath_exec.c. One thing that may help as\na start is to reorganize the routines of the file and divide them into\nsub-categories, or even go through a deeper refactoring to help\nreaders go through the existing 4.5k lines of code that are in this\nsingle file..\n--\nMichael", "msg_date": "Fri, 19 Jul 2024 14:42:35 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Jul 19, 2024, at 01:42, Michael Paquier <[email protected]> wrote:\n\n> Sorry for the delay. Finally came back to it, and applied the tests.\n> Thanks!\n\nAwesome, thank you!\n\n> It was fun to see that HEAD was silenced with the first patch of this\n> thread that tweaked the behavior with arrays.\n\nUh, what? Sorry I don’t follow.\n\n> Regarding the comments, I have left them out for now. 
That may be a\n> good start, but it also feels like we should do a much better job\n> overall with the area of jsonpath_exec.c.\n\nI put them in because it took me a bit to track down that they were among the implementors of JsonPathGetVarCallback as I was porting the code.\n\n> One thing that may help as\n> a start is to reorganize the routines of the file and divide them into\n> sub-categories, or even go through a deeper refactoring to help\n> readers go through the existing 4.5k lines of code that are in this\n> single file..\n\nAfter I got all the tests passing in the port to Go, I split it up into 14 implementation and 15 test files (with all of jsonb_jsonpath.(sql.out) in one file). Was much easier to reason about that way.\n\nhttps://github.com/theory/sqljson/tree/main/path/exec\n\nBest,\n\nDavid\n\n\n\n\n", "msg_date": "Fri, 19 Jul 2024 09:49:50 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Fri, Jul 19, 2024 at 09:49:50AM -0400, David E. Wheeler wrote:\n>> It was fun to see that HEAD was silenced with the first patch of this\n>> thread that tweaked the behavior with arrays.\n> \n> Uh, what? Sorry I don’t follow.\n\nWhat I mean is that the main regression test suite did not complain on\nyour original patch posted here:\nhttps://www.postgresql.org/message-id/A95346F9-6147-46E0-809E-532A485D71D6%40justatheory.com\n\nBut the new tests showed a difference, and that's enough for me to see\nvalue in the new tests.\n--\nMichael", "msg_date": "Mon, 22 Jul 2024 09:54:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" }, { "msg_contents": "On Jul 21, 2024, at 20:54, Michael Paquier <[email protected]> wrote:\n\n> What I mean is that the main regression test suite did not complain on\n> your original patch posted here:\n> https://www.postgresql.org/message-id/A95346F9-6147-46E0-809E-532A485D71D6%40justatheory.com\n> \n> But the new tests showed a difference, and that's enough for me to see\n> value in the new tests.\n\nOh, got it, nice!\n\nThanks,\n\nDavid\n\n\n\n", "msg_date": "Sun, 21 Jul 2024 22:18:40 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch bug: Fix jsonpath .* on Arrays" } ]
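To reproduce the behavior discussed in the thread above outside the regression suite, here is a small, hypothetical libpq client (not part of the patch) that runs the three queries in question -- lax `$.*`, strict `$.*`, and `@?` under strict mode -- and prints either the returned rows or the server error. The build command, file name, and reliance on PG* environment variables are illustrative assumptions; the expected outcomes simply restate what the thread already established: lax mode unwraps the array, strict mode raises an error, and `@?` suppresses that error.

/*
 * Illustrative sketch only: runs the queries discussed above through libpq.
 * Build with something like:
 *   cc jsonpath_demo.c -I$(pg_config --includedir) -L$(pg_config --libdir) -lpq
 * Connection parameters come from the usual PG* environment variables.
 */
#include <stdio.h>
#include <libpq-fe.h>

static void run(PGconn *conn, const char *sql)
{
    PGresult   *res = PQexec(conn, sql);

    printf("query: %s\n", sql);
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        for (int i = 0; i < PQntuples(res); i++)
            printf("  row: %s\n",
                   PQgetisnull(res, i, 0) ? "(null)" : PQgetvalue(res, i, 0));
    }
    else
        printf("  error: %s", PQresultErrorMessage(res));
    PQclear(res);
}

int main(void)
{
    PGconn     *conn = PQconnectdb("");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* lax mode: the outer array is unwrapped, .* over scalars is ignored */
    run(conn, "select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'lax $.*')");
    /* strict mode: .* applied to a non-object raises an error */
    run(conn, "select jsonb_path_query('[1,2,3,{\"b\": [3,4,5]}]', 'strict $.*')");
    /* @? suppresses that error, hence the empty result noted in the thread */
    run(conn, "select jsonb '[1,2,3,{\"b\": [3,4,5]}]' @? 'strict $.*'");

    PQfinish(conn);
    return 0;
}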
[ { "msg_contents": "I noticed that the \"Restoring database schemas in the new cluster\" part of\npg_upgrade can take a while if you have many databases, so I experimented\nwith a couple different settings to see if there are any easy ways to speed\nit up. The FILE_COPY strategy for CREATE DATABASE helped quite\nsignificantly on my laptop. For ~3k empty databases, this step went from\n~100 seconds to ~30 seconds with the attached patch. I see commit ad43a41\nmade a similar change for initdb, so there might even be an argument for\nback-patching this to v15 (where STRATEGY was introduced). One thing I\nstill need to verify is that this doesn't harm anything when there are lots\nof objects in the databases, i.e., more WAL generated during many\nconcurrent CREATE-DATABASE-induced checkpoints.\n\nThoughts?\n\n-- \nnathan", "msg_date": "Tue, 4 Jun 2024 14:39:23 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "Em ter., 4 de jun. de 2024 às 16:39, Nathan Bossart <\[email protected]> escreveu:\n\n> I noticed that the \"Restoring database schemas in the new cluster\" part of\n> pg_upgrade can take a while if you have many databases, so I experimented\n> with a couple different settings to see if there are any easy ways to speed\n> it up. The FILE_COPY strategy for CREATE DATABASE helped quite\n> significantly on my laptop. For ~3k empty databases, this step went from\n> ~100 seconds to ~30 seconds with the attached patch. I see commit ad43a41\n> made a similar change for initdb, so there might even be an argument for\n> back-patching this to v15 (where STRATEGY was introduced). One thing I\n> still need to verify is that this doesn't harm anything when there are lots\n> of objects in the databases, i.e., more WAL generated during many\n> concurrent CREATE-DATABASE-induced checkpoints.\n>\n> Thoughts?\n>\nWhy not use it too, if not binary_upgrade?\n\nelse\n{\nappendPQExpBuffer(creaQry, \"CREATE DATABASE %s WITH TEMPLATE = template0\nSTRATEGY = FILE_COPY\",\n qdatname);\n}\n\nIt seems to me that it also improves in any use.\n\nbest regards,\nRanier Vilela\n\nEm ter., 4 de jun. de 2024 às 16:39, Nathan Bossart <[email protected]> escreveu:I noticed that the \"Restoring database schemas in the new cluster\" part of\npg_upgrade can take a while if you have many databases, so I experimented\nwith a couple different settings to see if there are any easy ways to speed\nit up.  The FILE_COPY strategy for CREATE DATABASE helped quite\nsignificantly on my laptop.  For ~3k empty databases, this step went from\n~100 seconds to ~30 seconds with the attached patch.  I see commit ad43a41\nmade a similar change for initdb, so there might even be an argument for\nback-patching this to v15 (where STRATEGY was introduced).  
One thing I\nstill need to verify is that this doesn't harm anything when there are lots\nof objects in the databases, i.e., more WAL generated during many\nconcurrent CREATE-DATABASE-induced checkpoints.\n\nThoughts?Why not use it too, if not binary_upgrade?\telse\t{\t\tappendPQExpBuffer(creaQry, \"CREATE DATABASE %s WITH TEMPLATE = template0 STRATEGY = FILE_COPY\",\t\t\t\t\t\t  qdatname);\t}It seems to me that it also improves in any use.best regards,Ranier Vilela", "msg_date": "Wed, 5 Jun 2024 13:47:09 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Wed, Jun 05, 2024 at 01:47:09PM -0300, Ranier Vilela wrote:\n> Why not use it too, if not binary_upgrade?\n> \n> else\n> {\n> appendPQExpBuffer(creaQry, \"CREATE DATABASE %s WITH TEMPLATE = template0\n> STRATEGY = FILE_COPY\",\n> qdatname);\n> }\n> \n> It seems to me that it also improves in any use.\n\nWell, if that is true, and I'm not sure it is, then we should probably\nconsider changing the default STRATEGY in the server instead. I haven't\nlooked too deeply, but my assumption is that when fsync is disabled (as it\nis when restoring schemas during pg_upgrade), both checkpointing and\ncopying the template database are sufficiently fast enough to beat writing\nout all the data to WAL. Furthermore, in my test, all the databases were\nbasically empty, so I suspect that some CREATE DATABASE commands could\npiggy-back on checkpoints initiated by others. This might not be the case\nwhen there are many objects in each database, and that is a scenario I have\nyet to test.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 5 Jun 2024 12:07:58 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Wed, 5 Jun 2024 at 18:47, Ranier Vilela <[email protected]> wrote:\n>\n> Em ter., 4 de jun. de 2024 às 16:39, Nathan Bossart <[email protected]> escreveu:\n>>\n>> I noticed that the \"Restoring database schemas in the new cluster\" part of\n>> pg_upgrade can take a while if you have many databases, so I experimented\n>> with a couple different settings to see if there are any easy ways to speed\n>> it up. The FILE_COPY strategy for CREATE DATABASE helped quite\n>> significantly on my laptop. For ~3k empty databases, this step went from\n>> ~100 seconds to ~30 seconds with the attached patch. I see commit ad43a41\n>> made a similar change for initdb, so there might even be an argument for\n>> back-patching this to v15 (where STRATEGY was introduced). One thing I\n>> still need to verify is that this doesn't harm anything when there are lots\n>> of objects in the databases, i.e., more WAL generated during many\n>> concurrent CREATE-DATABASE-induced checkpoints.\n>>\n>> Thoughts?\n>\n> Why not use it too, if not binary_upgrade?\n\nBecause in the normal case (not during binary_upgrade) you don't want\nto have to generate 2 checkpoints for every created database,\nespecially not when your shared buffers are large. 
Checkpoints' costs\nscale approximately linearly with the size of shared buffers, so being\nable to skip those checkpoints (with strategy=WAL_LOG) will save a lot\nof performance in the systems where this performance impact matters\nmost.\n\n>> I noticed that the \"Restoring database schemas in the new cluster\" part of\n>> pg_upgrade can take a while if you have many databases, so I experimented\n>> with a couple different settings to see if there are any easy ways to speed\n>> it up. The FILE_COPY strategy for CREATE DATABASE helped quite\n>> significantly on my laptop. For ~3k empty databases, this step went from\n>> ~100 seconds to ~30 seconds with the attached patch.\n\nAs for \"on my laptop\", that sounds very reasonable, but could you\ncheck the performance on systems with larger shared buffer\nconfigurations? I'd imagine (but haven't checked) that binary upgrade\ntarget systems may already be using the shared_buffers from their\nsource system, which would cause a severe regression when the\nto-be-upgraded system has large shared buffers. For initdb the\ndatabase size is known in advance and shared_buffers is approximately\nempty, but the same is not (always) true when we're doing binary\nupgrades.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 5 Jun 2024 19:28:42 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Wed, Jun 05, 2024 at 07:28:42PM +0200, Matthias van de Meent wrote:\n> As for \"on my laptop\", that sounds very reasonable, but could you\n> check the performance on systems with larger shared buffer\n> configurations? I'd imagine (but haven't checked) that binary upgrade\n> target systems may already be using the shared_buffers from their\n> source system, which would cause a severe regression when the\n> to-be-upgraded system has large shared buffers. For initdb the\n> database size is known in advance and shared_buffers is approximately\n> empty, but the same is not (always) true when we're doing binary\n> upgrades.\n\nWill do. FWIW I haven't had much luck improving pg_upgrade times by\nadjusting server settings, but I also haven't explored it all that much.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 5 Jun 2024 12:45:48 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Wed, Jun 5, 2024 at 10:59 PM Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Wed, 5 Jun 2024 at 18:47, Ranier Vilela <[email protected]> wrote:\n> >\n> > Em ter., 4 de jun. de 2024 às 16:39, Nathan Bossart <[email protected]> escreveu:\n> >>\n> >> I noticed that the \"Restoring database schemas in the new cluster\" part of\n> >> pg_upgrade can take a while if you have many databases, so I experimented\n> >> with a couple different settings to see if there are any easy ways to speed\n> >> it up. The FILE_COPY strategy for CREATE DATABASE helped quite\n> >> significantly on my laptop. For ~3k empty databases, this step went from\n> >> ~100 seconds to ~30 seconds with the attached patch. I see commit ad43a41\n> >> made a similar change for initdb, so there might even be an argument for\n> >> back-patching this to v15 (where STRATEGY was introduced). 
One thing I\n> >> still need to verify is that this doesn't harm anything when there are lots\n> >> of objects in the databases, i.e., more WAL generated during many\n> >> concurrent CREATE-DATABASE-induced checkpoints.\n> >>\n> >> Thoughts?\n> >\n> > Why not use it too, if not binary_upgrade?\n>\n> Because in the normal case (not during binary_upgrade) you don't want\n> to have to generate 2 checkpoints for every created database,\n> especially not when your shared buffers are large. Checkpoints' costs\n> scale approximately linearly with the size of shared buffers, so being\n> able to skip those checkpoints (with strategy=WAL_LOG) will save a lot\n> of performance in the systems where this performance impact matters\n> most.\n\nI agree with you that we introduced the WAL_LOG strategy to avoid\nthese force checkpoints. However, in binary upgrade cases where no\noperations are happening in the system, the FILE_COPY strategy should\nbe faster.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 10:48:22 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Fri, 7 Jun 2024 at 07:18, Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Jun 5, 2024 at 10:59 PM Matthias van de Meent\n> <[email protected]> wrote:\n>>\n>> On Wed, 5 Jun 2024 at 18:47, Ranier Vilela <[email protected]> wrote:\n>>>\n>>> Why not use it too, if not binary_upgrade?\n>>\n>> Because in the normal case (not during binary_upgrade) you don't want\n>> to have to generate 2 checkpoints for every created database,\n>> especially not when your shared buffers are large. Checkpoints' costs\n>> scale approximately linearly with the size of shared buffers, so being\n>> able to skip those checkpoints (with strategy=WAL_LOG) will save a lot\n>> of performance in the systems where this performance impact matters\n>> most.\n>\n> I agree with you that we introduced the WAL_LOG strategy to avoid\n> these force checkpoints. However, in binary upgrade cases where no\n> operations are happening in the system, the FILE_COPY strategy should\n> be faster.\n\nWhile you would be correct if there were no operations happening in\nthe system, during binary upgrade we're still actively modifying\ncatalogs; and this is done with potentially many concurrent jobs. I\nthink it's not unlikely that this would impact performance.\n\nNow that I think about it, arguably, we shouldn't need to run\ncheckpoints during binary upgrade for the FILE_COPY strategy after\nwe've restored the template1 database and created a checkpoint after\nthat: All other databases use template1 as their template database,\nand the checkpoint is there mostly to guarantee the FS knows about all\nchanges in the template database before we task it with copying the\ntemplate database over to our new database, so the protections we get\nfrom more checkpoints are practically useless.\nIf such a change were implemented (i.e. 
no checkpoints for FILE_COPY\nin binary upgrade, with a single manual checkpoint after restoring\ntemplate1 in create_new_objects) I think most of my concerns with this\npatch would be alleviated.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Fri, 7 Jun 2024 08:27:37 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Fri, Jun 7, 2024 at 11:57 AM Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Fri, 7 Jun 2024 at 07:18, Dilip Kumar <[email protected]> wrote:\n> >\n> > On Wed, Jun 5, 2024 at 10:59 PM Matthias van de Meent\n> > <[email protected]> wrote:\n> >>\n> >> On Wed, 5 Jun 2024 at 18:47, Ranier Vilela <[email protected]> wrote:\n> >>>\n> >>> Why not use it too, if not binary_upgrade?\n> >>\n> >> Because in the normal case (not during binary_upgrade) you don't want\n> >> to have to generate 2 checkpoints for every created database,\n> >> especially not when your shared buffers are large. Checkpoints' costs\n> >> scale approximately linearly with the size of shared buffers, so being\n> >> able to skip those checkpoints (with strategy=WAL_LOG) will save a lot\n> >> of performance in the systems where this performance impact matters\n> >> most.\n> >\n> > I agree with you that we introduced the WAL_LOG strategy to avoid\n> > these force checkpoints. However, in binary upgrade cases where no\n> > operations are happening in the system, the FILE_COPY strategy should\n> > be faster.\n>\n> While you would be correct if there were no operations happening in\n> the system, during binary upgrade we're still actively modifying\n> catalogs; and this is done with potentially many concurrent jobs. I\n> think it's not unlikely that this would impact performance.\n\nMaybe, but generally, long checkpoints are problematic because they\ninvolve a lot of I/O, which hampers overall system performance.\nHowever, in the case of a binary upgrade, the concurrent operations\nare only performing a schema restore, not a real data restore.\nTherefore, it shouldn't have a significant impact, and the checkpoints\nshould also not do a lot of I/O during binary upgrade, right?\n\n> Now that I think about it, arguably, we shouldn't need to run\n> checkpoints during binary upgrade for the FILE_COPY strategy after\n> we've restored the template1 database and created a checkpoint after\n> that: All other databases use template1 as their template database,\n> and the checkpoint is there mostly to guarantee the FS knows about all\n> changes in the template database before we task it with copying the\n> template database over to our new database, so the protections we get\n> from more checkpoints are practically useless.\n> If such a change were implemented (i.e. no checkpoints for FILE_COPY\n> in binary upgrade, with a single manual checkpoint after restoring\n> template1 in create_new_objects) I think most of my concerns with this\n> patch would be alleviated.\n\nYeah, I think that's a valid point. The second checkpoint is to ensure\nthat the XLOG_DBASE_CREATE_FILE_COPY never gets replayed. 
However, for\nbinary upgrades, we don't need that guarantee because a checkpoint\nwill be performed during shutdown at the end of the upgrade anyway.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 13:57:49 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Fri, 7 Jun 2024 at 10:28, Dilip Kumar <[email protected]> wrote:\n>\n> On Fri, Jun 7, 2024 at 11:57 AM Matthias van de Meent\n> <[email protected]> wrote:\n>>\n>> On Fri, 7 Jun 2024 at 07:18, Dilip Kumar <[email protected]> wrote:\n>>>\n>>> On Wed, Jun 5, 2024 at 10:59 PM Matthias van de Meent\n>>> <[email protected]> wrote:\n>>>\n>>> I agree with you that we introduced the WAL_LOG strategy to avoid\n>>> these force checkpoints. However, in binary upgrade cases where no\n>>> operations are happening in the system, the FILE_COPY strategy should\n>>> be faster.\n>>\n>> While you would be correct if there were no operations happening in\n>> the system, during binary upgrade we're still actively modifying\n>> catalogs; and this is done with potentially many concurrent jobs. I\n>> think it's not unlikely that this would impact performance.\n>\n> Maybe, but generally, long checkpoints are problematic because they\n> involve a lot of I/O, which hampers overall system performance.\n> However, in the case of a binary upgrade, the concurrent operations\n> are only performing a schema restore, not a real data restore.\n> Therefore, it shouldn't have a significant impact, and the checkpoints\n> should also not do a lot of I/O during binary upgrade, right?\n\nMy primary concern isn't the IO, but the O(shared_buffers) that we\nhave to go through during a checkpoint. As I mentioned upthread, it is\nreasonably possible the new cluster is already setup with a good\nfraction of the old system's shared_buffers configured. Every\ncheckpoint has to scan all those buffers, which IMV can get (much)\nmore expensive than the IO overhead caused by the WAL_LOG strategy. It\nmay be a baseless fear as I haven't done the performance benchmarks\nfor this, but I wouldn't be surprised if shared_buffers=8GB would\nmeasurably impact the upgrade performance in the current patch (vs the\ndefault 128MB).\n\nI'll note that the documentation for upgrading with pg_upgrade has the\nstep for updating postgresql.conf / postgresql.auto.conf only after\npg_upgrade has run already, but that may not be how it's actually\nused: after all, we don't have full control in this process, the user\nis the one who provides the new cluster with initdb.\n\n>> If such a change were implemented (i.e. no checkpoints for FILE_COPY\n>> in binary upgrade, with a single manual checkpoint after restoring\n>> template1 in create_new_objects) I think most of my concerns with this\n>> patch would be alleviated.\n>\n> Yeah, I think that's a valid point. The second checkpoint is to ensure\n> that the XLOG_DBASE_CREATE_FILE_COPY never gets replayed. 
However, for\n> binary upgrades, we don't need that guarantee because a checkpoint\n> will be performed during shutdown at the end of the upgrade anyway.\n\nIndeed.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 7 Jun 2024 11:10:25 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Fri, Jun 7, 2024 at 2:40 PM Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Fri, 7 Jun 2024 at 10:28, Dilip Kumar <[email protected]> wrote:\n> >\n> > On Fri, Jun 7, 2024 at 11:57 AM Matthias van de Meent\n> > <[email protected]> wrote:\n> >>\n> >> On Fri, 7 Jun 2024 at 07:18, Dilip Kumar <[email protected]> wrote:\n> >>>\n> >>> On Wed, Jun 5, 2024 at 10:59 PM Matthias van de Meent\n> >>> <[email protected]> wrote:\n> >>>\n> >>> I agree with you that we introduced the WAL_LOG strategy to avoid\n> >>> these force checkpoints. However, in binary upgrade cases where no\n> >>> operations are happening in the system, the FILE_COPY strategy should\n> >>> be faster.\n> >>\n> >> While you would be correct if there were no operations happening in\n> >> the system, during binary upgrade we're still actively modifying\n> >> catalogs; and this is done with potentially many concurrent jobs. I\n> >> think it's not unlikely that this would impact performance.\n> >\n> > Maybe, but generally, long checkpoints are problematic because they\n> > involve a lot of I/O, which hampers overall system performance.\n> > However, in the case of a binary upgrade, the concurrent operations\n> > are only performing a schema restore, not a real data restore.\n> > Therefore, it shouldn't have a significant impact, and the checkpoints\n> > should also not do a lot of I/O during binary upgrade, right?\n>\n> My primary concern isn't the IO, but the O(shared_buffers) that we\n> have to go through during a checkpoint. As I mentioned upthread, it is\n> reasonably possible the new cluster is already setup with a good\n> fraction of the old system's shared_buffers configured. Every\n> checkpoint has to scan all those buffers, which IMV can get (much)\n> more expensive than the IO overhead caused by the WAL_LOG strategy. It\n> may be a baseless fear as I haven't done the performance benchmarks\n> for this, but I wouldn't be surprised if shared_buffers=8GB would\n> measurably impact the upgrade performance in the current patch (vs the\n> default 128MB).\n\nOkay, that's a valid point.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:47:21 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Fri, Jun 07, 2024 at 11:10:25AM +0200, Matthias van de Meent wrote:\n> My primary concern isn't the IO, but the O(shared_buffers) that we\n> have to go through during a checkpoint. As I mentioned upthread, it is\n> reasonably possible the new cluster is already setup with a good\n> fraction of the old system's shared_buffers configured. Every\n> checkpoint has to scan all those buffers, which IMV can get (much)\n> more expensive than the IO overhead caused by the WAL_LOG strategy. 
It\n> may be a baseless fear as I haven't done the performance benchmarks\n> for this, but I wouldn't be surprised if shared_buffers=8GB would\n> measurably impact the upgrade performance in the current patch (vs the\n> default 128MB).\n\nI did a handful of benchmarks on an r5.24xlarge that seem to prove your\npoint. The following are the durations of the pg_restore step of\npg_upgrade:\n\n* 10k empty databases, 128MB shared_buffers\n WAL_LOG: 1m 01s\n FILE_COPY: 0m 22s\n\n* 10k empty databases, 100GB shared_buffers\n WAL_LOG: 2m 03s\n FILE_COPY: 5m 08s\n\n* 2.5k databases with 10k tables each, 128MB shared_buffers\n WAL_LOG: 17m 20s\n FILE_COPY: 16m 44s\n\n* 2.5k databases with 10k tables each, 100GB shared_buffers\n WAL_LOG: 16m 39s\n FILE_COPY: 15m 21s\n\nI was surprised with the last result, but there's enough other stuff\nhappening during such a test that I hesitate to conclude much.\n\n> I'll note that the documentation for upgrading with pg_upgrade has the\n> step for updating postgresql.conf / postgresql.auto.conf only after\n> pg_upgrade has run already, but that may not be how it's actually\n> used: after all, we don't have full control in this process, the user\n> is the one who provides the new cluster with initdb.\n\nGood point. I think it's clear that FILE_COPY is not necessarily a win in\nall cases for pg_upgrade.\n\n>>> If such a change were implemented (i.e. no checkpoints for FILE_COPY\n>>> in binary upgrade, with a single manual checkpoint after restoring\n>>> template1 in create_new_objects) I think most of my concerns with this\n>>> patch would be alleviated.\n>>\n>> Yeah, I think that's a valid point. The second checkpoint is to ensure\n>> that the XLOG_DBASE_CREATE_FILE_COPY never gets replayed. However, for\n>> binary upgrades, we don't need that guarantee because a checkpoint\n>> will be performed during shutdown at the end of the upgrade anyway.\n> \n> Indeed.\n\nIt looks like pg_dump always uses template0, so AFAICT we don't even need\nthe suggested manual checkpoint after restoring template1.\n\nI do like the idea of skipping a bunch of unnecessary operations in binary\nupgrade mode, since it'll help me in my goal of speeding up pg_upgrade.\nBut I'm a bit hesitant to get too fancy here and to introduce a bunch of\nnew \"if (IsBinaryUpgrade)\" checks if the gains in the field won't be all\nthat exciting. However, we've already sprinkled such checks quite\nliberally, so maybe I'm being too cautious...\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 10 Jun 2024 21:01:40 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Tue, 11 Jun 2024 at 04:01, Nathan Bossart <[email protected]> wrote:\n>\n> On Fri, Jun 07, 2024 at 11:10:25AM +0200, Matthias van de Meent wrote:\n> > My primary concern isn't the IO, but the O(shared_buffers) that we\n> > have to go through during a checkpoint. As I mentioned upthread, it is\n> > reasonably possible the new cluster is already setup with a good\n> > fraction of the old system's shared_buffers configured. Every\n> > checkpoint has to scan all those buffers, which IMV can get (much)\n> > more expensive than the IO overhead caused by the WAL_LOG strategy. 
It\n> > may be a baseless fear as I haven't done the performance benchmarks\n> > for this, but I wouldn't be surprised if shared_buffers=8GB would\n> > measurably impact the upgrade performance in the current patch (vs the\n> > default 128MB).\n>\n> I did a handful of benchmarks on an r5.24xlarge that seem to prove your\n> point. The following are the durations of the pg_restore step of\n> pg_upgrade:\n>\n> * 10k empty databases, 128MB shared_buffers\n> WAL_LOG: 1m 01s\n> FILE_COPY: 0m 22s\n>\n> * 10k empty databases, 100GB shared_buffers\n> WAL_LOG: 2m 03s\n> FILE_COPY: 5m 08s\n>\n> * 2.5k databases with 10k tables each, 128MB shared_buffers\n> WAL_LOG: 17m 20s\n> FILE_COPY: 16m 44s\n>\n> * 2.5k databases with 10k tables each, 100GB shared_buffers\n> WAL_LOG: 16m 39s\n> FILE_COPY: 15m 21s\n>\n> I was surprised with the last result, but there's enough other stuff\n> happening during such a test that I hesitate to conclude much.\n\nIf you still have the test data set up, could you test the attached\npatch (which does skip the checkpoints in FILE_COPY mode during binary\nupgrades)?\n\n>>>> If such a change were implemented (i.e. no checkpoints for FILE_COPY\n>>>> in binary upgrade, with a single manual checkpoint after restoring\n>>>> template1 in create_new_objects) I think most of my concerns with this\n>>>> patch would be alleviated.\n>>>\n>>> Yeah, I think that's a valid point. The second checkpoint is to ensure\n>>> that the XLOG_DBASE_CREATE_FILE_COPY never gets replayed. However, for\n>>> binary upgrades, we don't need that guarantee because a checkpoint\n>>> will be performed during shutdown at the end of the upgrade anyway.\n>>\n>> Indeed.\n>\n> It looks like pg_dump always uses template0, so AFAICT we don't even need\n> the suggested manual checkpoint after restoring template1.\n\nThanks for reminding me. It seems I misunderstood the reason why we\nfirst process template1 in create_new_objects, as I didn't read the\ncomments thoroughly enough.\n\n> I do like the idea of skipping a bunch of unnecessary operations in binary\n> upgrade mode, since it'll help me in my goal of speeding up pg_upgrade.\n> But I'm a bit hesitant to get too fancy here and to introduce a bunch of\n> new \"if (IsBinaryUpgrade)\" checks if the gains in the field won't be all\n> that exciting. However, we've already sprinkled such checks quite\n> liberally, so maybe I'm being too cautious...\n\nHmm, yes. From an IO perspective I think this could be an improvement,\nbut let's check the numbers first.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Tue, 11 Jun 2024 10:39:51 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Tue, Jun 11, 2024 at 10:39:51AM +0200, Matthias van de Meent wrote:\n> On Tue, 11 Jun 2024 at 04:01, Nathan Bossart <[email protected]> wrote:\n>> I did a handful of benchmarks on an r5.24xlarge that seem to prove your\n>> point. 
The following are the durations of the pg_restore step of\n>> pg_upgrade:\n>>\n>> * 10k empty databases, 128MB shared_buffers\n>> WAL_LOG: 1m 01s\n>> FILE_COPY: 0m 22s\n>>\n>> * 10k empty databases, 100GB shared_buffers\n>> WAL_LOG: 2m 03s\n>> FILE_COPY: 5m 08s\n>>\n>> * 2.5k databases with 10k tables each, 128MB shared_buffers\n>> WAL_LOG: 17m 20s\n>> FILE_COPY: 16m 44s\n>>\n>> * 2.5k databases with 10k tables each, 100GB shared_buffers\n>> WAL_LOG: 16m 39s\n>> FILE_COPY: 15m 21s\n>>\n>> I was surprised with the last result, but there's enough other stuff\n>> happening during such a test that I hesitate to conclude much.\n> \n> If you still have the test data set up, could you test the attached\n> patch (which does skip the checkpoints in FILE_COPY mode during binary\n> upgrades)?\n\nWith your patch, I see the following:\n\n* 10k empty databases, 128MB shared_buffers: 0m 27s\n* 10k empty databases, 100GB shared_buffers: 1m 44s\n\nI believe the reason the large buffer cache test is still quite a bit\nslower is due to the truncation of pg_largeobject (specifically its call to\nDropRelationsAllBuffers()). This TRUNCATE command was added in commit\nbbe08b8.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 11 Jun 2024 11:03:41 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Tue, Jun 11, 2024 at 10:39:51AM +0200, Matthias van de Meent wrote:\n> On Tue, 11 Jun 2024 at 04:01, Nathan Bossart <[email protected]> wrote:\n>> It looks like pg_dump always uses template0, so AFAICT we don't even need\n>> the suggested manual checkpoint after restoring template1.\n> \n> Thanks for reminding me. It seems I misunderstood the reason why we\n> first process template1 in create_new_objects, as I didn't read the\n> comments thoroughly enough.\n\nActually, I think you are right that we need a manual checkpoint, except I\nthink we need it to be after prepare_new_globals(). set_frozenxids()\nconnects to each database (including template0) and updates a bunch of\npg_class rows, and we probably want those on disk before we start copying\nthe files to create all the user's databases.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 12 Jun 2024 09:41:01 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Wed, Jun 12, 2024 at 09:41:01AM -0500, Nathan Bossart wrote:\n> Actually, I think you are right that we need a manual checkpoint, except I\n> think we need it to be after prepare_new_globals(). set_frozenxids()\n> connects to each database (including template0) and updates a bunch of\n> pg_class rows, and we probably want those on disk before we start copying\n> the files to create all the user's databases.\n\nHere is an updated patch.\n\n-- \nnathan", "msg_date": "Fri, 14 Jun 2024 16:29:13 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Fri, 14 Jun 2024 at 23:29, Nathan Bossart <[email protected]> wrote:\n>\n> On Wed, Jun 12, 2024 at 09:41:01AM -0500, Nathan Bossart wrote:\n> > Actually, I think you are right that we need a manual checkpoint, except I\n> > think we need it to be after prepare_new_globals(). 
set_frozenxids()\n> > connects to each database (including template0) and updates a bunch of\n> > pg_class rows, and we probably want those on disk before we start copying\n> > the files to create all the user's databases.\n\nGood catch, I hadn't thought of that.\n\n> Here is an updated patch.\n\nLGTM.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 19 Jun 2024 15:07:42 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Fri, Jun 14, 2024 at 5:29 PM Nathan Bossart <[email protected]> wrote:\n> On Wed, Jun 12, 2024 at 09:41:01AM -0500, Nathan Bossart wrote:\n> > Actually, I think you are right that we need a manual checkpoint, except I\n> > think we need it to be after prepare_new_globals(). set_frozenxids()\n> > connects to each database (including template0) and updates a bunch of\n> > pg_class rows, and we probably want those on disk before we start copying\n> > the files to create all the user's databases.\n>\n> Here is an updated patch.\n\nOK, I have a (probably) stupid question. The comment says:\n\n+ * In binary upgrade mode, we can skip this checkpoint because neither of\n+ * these problems applies: we don't ever replay the WAL generated during\n+ * pg_upgrade, and we don't concurrently modify template0 (not to mention\n+ * that trying to take a backup during pg_upgrade is pointless).\n\nBut what happens if the system crashes during pg_upgrade? Does this\npatch make things worse than they are today? And should we care?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jun 2024 09:17:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Wed, Jun 19, 2024 at 09:17:00AM -0400, Robert Haas wrote:\n> OK, I have a (probably) stupid question. The comment says:\n> \n> + * In binary upgrade mode, we can skip this checkpoint because neither of\n> + * these problems applies: we don't ever replay the WAL generated during\n> + * pg_upgrade, and we don't concurrently modify template0 (not to mention\n> + * that trying to take a backup during pg_upgrade is pointless).\n> \n> But what happens if the system crashes during pg_upgrade? Does this\n> patch make things worse than they are today? And should we care?\n\nMy understanding is that you basically have to restart the upgrade from\nscratch if that happens. I suppose there could be a problem if you try to\nuse the half-upgraded cluster after a crash, but I imagine you have a good\nchance of encountering other problems if you do that, too. So I don't\nthink we care...\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 19 Jun 2024 08:37:17 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Wed, 19 Jun 2024 at 15:17, Robert Haas <[email protected]> wrote:\n>\n> On Fri, Jun 14, 2024 at 5:29 PM Nathan Bossart <[email protected]> wrote:\n> > On Wed, Jun 12, 2024 at 09:41:01AM -0500, Nathan Bossart wrote:\n> > > Actually, I think you are right that we need a manual checkpoint, except I\n> > > think we need it to be after prepare_new_globals(). 
set_frozenxids()\n> > > connects to each database (including template0) and updates a bunch of\n> > > pg_class rows, and we probably want those on disk before we start copying\n> > > the files to create all the user's databases.\n> >\n> > Here is an updated patch.\n>\n> OK, I have a (probably) stupid question. The comment says:\n>\n> + * In binary upgrade mode, we can skip this checkpoint because neither of\n> + * these problems applies: we don't ever replay the WAL generated during\n> + * pg_upgrade, and we don't concurrently modify template0 (not to mention\n> + * that trying to take a backup during pg_upgrade is pointless).\n>\n> But what happens if the system crashes during pg_upgrade? Does this\n> patch make things worse than they are today? And should we care?\n\nAs Nathan just said, AFAIK we don't have a way to resume progress from\na crashed pg_upgrade, which implies you currently have to start over\nwith a fresh dbinit.\nThis patch wouldn't change that for the worse, it'd only add one more\nprocess that depends on that behaviour.\n\nMaybe in the future that'll change if pg_upgrade and pg_dump are made\nsmart enough to resume their restore progress across forceful\ndisconnects, but for now this patch seems like a nice performance\nimprovement.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 19 Jun 2024 15:38:52 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "On Wed, Jun 19, 2024 at 08:37:17AM -0500, Nathan Bossart wrote:\n> My understanding is that you basically have to restart the upgrade from\n> scratch if that happens. I suppose there could be a problem if you try to\n> use the half-upgraded cluster after a crash, but I imagine you have a good\n> chance of encountering other problems if you do that, too. So I don't\n> think we care...\n\nIt's never been assumed that it would be safe to redo a\npg_upgradeafter a crash on a cluster initdb'd for the upgrade, so I\ndon't think we need to care about that, as well.\n\nOne failure I suspect would quickly be faced is OIDs getting reused\nagain as these are currently kept consistent.\n--\nMichael", "msg_date": "Thu, 20 Jun 2024 13:29:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" }, { "msg_contents": "Committed.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 8 Jul 2024 16:22:23 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade" } ]
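For anyone who wants to reproduce the kind of timings quoted in this thread, here is a rough, hypothetical benchmark sketch (not part of any patch) that creates a batch of empty databases with a chosen CREATE DATABASE strategy over libpq and reports the elapsed wall-clock time. The database count, naming scheme, and use of a single connection are arbitrary choices for illustration; the numbers discussed above came from pg_upgrade's pg_restore step, not from a loop like this.

/*
 * Illustrative only: time N CREATE DATABASE commands with a given strategy.
 * Usage: ./createdb_bench [FILE_COPY|WAL_LOG] [ndbs]
 * Connection parameters come from the PG* environment variables.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <libpq-fe.h>

int main(int argc, char **argv)
{
    const char *strategy = (argc > 1) ? argv[1] : "FILE_COPY";
    int         ndbs = (argc > 2) ? atoi(argv[2]) : 100;
    PGconn     *conn = PQconnectdb("");
    time_t      start;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    start = time(NULL);
    for (int i = 0; i < ndbs; i++)
    {
        char        sql[256];
        PGresult   *res;

        snprintf(sql, sizeof(sql),
                 "CREATE DATABASE bench_db_%d TEMPLATE template0 STRATEGY = %s",
                 i, strategy);
        res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "%s", PQresultErrorMessage(res));
        PQclear(res);
    }

    printf("created %d databases with STRATEGY = %s in %ld seconds\n",
           ndbs, strategy, (long) (time(NULL) - start));
    PQfinish(conn);
    return 0;
}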
[ { "msg_contents": "Hi,\n\nI am implementing my own JIT plugin (based on Cranelift) for PostgreSQL \nto learn more about the JIT and noticed an API change in PostgreSQL 17.\n\nWhen Heikki made the resource owners extensible in commit \nb8bff07daa85c837a2747b4d35cd5a27e73fb7b2 the API for JIT plugins changed \nwhen ResourceOwnerForgetJIT() was moved from the generic JIT code to the \nLLVM specific JIT code so now the resowner field of the context is only \nused by the code of the LLVM plugin.\n\nMaybe a bit late in the release cycle but should we make the resowner \nfield specific to the LLVM code too now that we already are breaking the \nAPI? I personally do not like having a LLVM JIT specific field in the \ncommon struct. Code is easier to understand if things are local. Granted \nmost JIT engines will likely need similar infrastructure but just \nproviding the struct field and nothing else does not seem very helpful.\n\nSee the attached patch.\n\nAndreas", "msg_date": "Wed, 5 Jun 2024 10:19:01 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": true, "msg_subject": "Should we move the resowner field from JitContext to LLVMJitContext?" }, { "msg_contents": "> On 5 Jun 2024, at 10:19, Andreas Karlsson <[email protected]> wrote:\n\n> When Heikki made the resource owners extensible in commit b8bff07daa85c837a2747b4d35cd5a27e73fb7b2 the API for JIT plugins changed when ResourceOwnerForgetJIT() was moved from the generic JIT code to the LLVM specific JIT code so now the resowner field of the context is only used by the code of the LLVM plugin.\n> \n> Maybe a bit late in the release cycle but should we make the resowner field specific to the LLVM code too now that we already are breaking the API? I personally do not like having a LLVM JIT specific field in the common struct. Code is easier to understand if things are local. Granted most JIT engines will likely need similar infrastructure but just providing the struct field and nothing else does not seem very helpful.\n\nI'm inclined to agree, given that the code for handling the resowner is private\nto the LLVM implementation it makes sense for the resowner to be as well. A\nfuture JIT implementation will likely need a ResourceOwner, but it might just\nas well need two or none.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 16:19:55 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we move the resowner field from JitContext to\n LLVMJitContext?" }, { "msg_contents": "On 01/07/2024 17:19, Daniel Gustafsson wrote:\n>> On 5 Jun 2024, at 10:19, Andreas Karlsson <[email protected]> wrote:\n> \n>> When Heikki made the resource owners extensible in commit b8bff07daa85c837a2747b4d35cd5a27e73fb7b2 the API for JIT plugins changed when ResourceOwnerForgetJIT() was moved from the generic JIT code to the LLVM specific JIT code so now the resowner field of the context is only used by the code of the LLVM plugin.\n>>\n>> Maybe a bit late in the release cycle but should we make the resowner field specific to the LLVM code too now that we already are breaking the API? I personally do not like having a LLVM JIT specific field in the common struct. Code is easier to understand if things are local. 
Granted most JIT engines will likely need similar infrastructure but just providing the struct field and nothing else does not seem very helpful.\n> \n> I'm inclined to agree, given that the code for handling the resowner is private\n> to the LLVM implementation it makes sense for the resowner to be as well. A\n> future JIT implementation will likely need a ResourceOwner, but it might just\n> as well need two or none.\n\nCommitted, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 19 Jul 2024 10:46:51 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we move the resowner field from JitContext to\n LLVMJitContext?" } ]
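For readers who have not looked at the headers involved, the following standalone sketch illustrates the shape of the change discussed in this thread. It is not the actual contents of jit.h or llvmjit.h: the real structs carry more fields, and ResourceOwner is stubbed out here only so the snippet compiles on its own.

/*
 * Stub so this sketch is self-contained; in the backend this type comes
 * from utils/resowner.h.
 */
typedef struct ResourceOwnerData *ResourceOwner;

/*
 * Provider-neutral context: after the change it no longer carries the
 * ResourceOwner, since only the LLVM provider ever used it.
 */
typedef struct JitContext
{
    int         flags;          /* JIT flags requested for the query */
    /* ... instrumentation counters elided ... */
} JitContext;

/*
 * LLVM-specific context: the ResourceOwner moves here, next to the code in
 * llvmjit.c that registers and forgets it.
 */
typedef struct LLVMJitContext
{
    JitContext  base;
    ResourceOwner resowner;     /* registered and forgotten only by llvmjit.c */
    /* ... module list, emission state, ... elided ... */
} LLVMJitContext;

int main(void)
{
    LLVMJitContext ctx = {0};

    return ctx.base.flags;      /* just keeps the sketch compilable */
}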
[ { "msg_contents": "While working on my patchset for protocol changes I realized that the\nStartupMessage/SSLRequest/GSSENCRequest was not shown correctly in the\ntracing output of libpq. With this change these messages are now shown\ncorrectly in the tracing output.\n\nTo test you can add a PQreset(conn) call to the start of the\ntest_cancel function in\nsrc/test/modules/libpq_pipeline/libpq_pipeline.c.\n\nAnd then run:\nninja -C build all install-quiet &&\nbuild/src/test/modules/libpq_pipeline/libpq_pipeline cancel\n'port=5432' -t test.trace\n\nAnd then look at the top of test.trace", "msg_date": "Wed, 5 Jun 2024 13:52:59 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "libpq: Trace StartupMessage/SSLRequest/GSSENCRequest correctly" } ]
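An alternative to patching libpq_pipeline.c is a tiny standalone client like the hypothetical one below: it attaches a trace file to an existing connection and then calls PQreset(), so the connection is re-established while tracing is active and the startup-phase traffic (StartupMessage/SSLRequest/GSSENCRequest) has to pass through the tracing code. The file name and the reliance on environment-variable connection parameters are arbitrary choices for illustration.

/*
 * Illustrative sketch: capture startup-phase protocol messages in a libpq
 * trace file.  Build with: cc trace_demo.c -lpq
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn     *conn = PQconnectdb("");     /* PGHOST/PGPORT/PGDATABASE, etc. */
    FILE       *trace = fopen("test.trace", "w");

    if (PQstatus(conn) != CONNECTION_OK || trace == NULL)
    {
        fprintf(stderr, "setup failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQtrace(conn, trace);
    PQsetTraceFlags(conn, PQTRACE_SUPPRESS_TIMESTAMPS);

    /*
     * Re-establish the connection while tracing is attached; with the patch,
     * the startup messages should now show up correctly at the top of
     * test.trace.
     */
    PQreset(conn);

    PQuntrace(conn);
    fclose(trace);
    PQfinish(conn);
    return 0;
}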
[ { "msg_contents": "Hello hackers,\n\nAs buildfarm shows, ssl tests are not stable enough when running via meson.\nI can see the following failures for the last 90 days:\nREL_16_STABLE:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-03-12%2023%3A15%3A50\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-03-21%2000%3A35%3A23\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-03-27%2011%3A15%3A31\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-04-16%2016%3A10%3A45\n\nmaster:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-03-08%2011%3A19%3A42\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-03-11%2022%3A23%3A28\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-03-17%2023%3A03%3A50\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-03-20%2009%3A21%3A30\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-03-20%2016%3A53%3A27\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-04-07%2012%3A25%3A03\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-04-08%2019%3A50%3A13\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-04-19%2021%3A24%3A30\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-04-22%2006%3A17%3A13\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-04-29%2023%3A27%3A15\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-04-30%2000%3A24%3A28\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-06-04%2011%3A20%3A07\n\nAll the failures are caused by the server inability to restart with a\npreviously chosen random port on TCP/IP. For example:\n2024-06-04 11:30:40.227 UTC [3373644][postmaster][:0] LOG:  starting PostgreSQL 17beta1 on x86_64-linux, compiled by \nclang-13.0.1-11, 64-bit\n2024-06-04 11:30:40.231 UTC [3373644][postmaster][:0] LOG: listening on Unix socket \"/tmp/tUmT8ItNQ2/.s.PGSQL.60362\"\n2024-06-04 11:30:40.337 UTC [3373798][startup][:0] LOG:  database system was shut down at 2024-06-04 11:21:25 UTC\n...\n2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] LOG:  starting PostgreSQL 17beta1 on x86_64-linux, compiled by \nclang-13.0.1-11, 64-bit\n2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] LOG:  could not bind IPv4 address \"127.0.0.1\": Address already in use\n2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] HINT:  Is another postmaster already running on port 60362? 
If \nnot, wait a few seconds and retry.\n2024-06-04 11:30:45.273 UTC [3376046][postmaster][:0] WARNING: could not create listen socket for \"127.0.0.1\"\n\nI've managed to reproduce the failure locally with the following change:\n--- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n@@ -149,7 +149,7 @@ INIT\n         $ENV{PGDATABASE} = 'postgres';\n\n         # Tracking of last port value assigned to accelerate free port lookup.\n-       $last_port_assigned = int(rand() * 16384) + 49152;\n+       $last_port_assigned = int(rand() * 1024) + 49152;\n\nand multiplying one of the tests.\nfor i in `seq 50`; do cp .../src/test/ssl/t/001_ssltests.pl \\\n   .../src/test/ssl/t/001_ssltests_$i.pl; \\\n   sed -E \"s|('t/001_ssltests.pl',)|\\1\\n't/001_ssltests_$i.pl',|\" -i \\\n     .../src/test/ssl/meson.build; done\n\nThen `meson test --suite ssl` fails for me as below:\n...\n26/53 postgresql:ssl / ssl/001_ssltests_26         OK 9.03s   205 subtests passed\n27/53 postgresql:ssl / ssl/001_ssltests_18         ERROR 3.55s   (exit status 255 or signal 127 SIGinvalid)\n >>> OPENSSL=/usr/bin/openssl ...\n28/53 postgresql:ssl / ssl/001_ssltests_25         OK 10.98s   205 subtests passed\n29/53 postgresql:ssl / ssl/001_ssltests_24         OK 10.84s   205 subtests passed\n\ntestrun/ssl/001_ssltests_18/log/001_ssltests_18_primary.log contains:\n2024-06-05 11:51:22.005 UTC [710541] LOG:  could not bind IPv4 address \"127.0.0.1\": Address already in use\n2024-06-05 11:51:22.005 UTC [710541] HINT:  Is another postmaster already running on port 49632? If not, wait a few \nseconds and retry.\n\n`grep '\\b49632\\b' -r testrun/` finds:\ntestrun/ssl/001_ssltests_18/log/regress_log_001_ssltests_18:# Checking port 49632\ntestrun/ssl/001_ssltests_18/log/regress_log_001_ssltests_18:# Found port 49632\ntestrun/ssl/001_ssltests_18/log/regress_log_001_ssltests_18:Connection string: port=49632 host=/tmp/sp_VLbpjJF\ntestrun/ssl/001_ssltests_18/log/001_ssltests_18_primary.log:2024-06-05 11:51:18.896 UTC [710082] LOG:  listening on Unix \nsocket \"/tmp/sp_VLbpjJF/.s.PGSQL.49632\"\ntestrun/ssl/001_ssltests_18/log/001_ssltests_18_primary.log:2024-06-05 11:51:22.005 UTC [710541] HINT:  Is another \npostmaster already running on port 49632? If not, wait a few seconds and retry.\n...\ntestrun/ssl/001_ssltests_23/log/regress_log_001_ssltests_23:# Checking port 49632\ntestrun/ssl/001_ssltests_23/log/regress_log_001_ssltests_23:# Found port 49632\ntestrun/ssl/001_ssltests_23/log/regress_log_001_ssltests_23:Connection string: port=49632 host=/tmp/3lxVDNzuGC\ntestrun/ssl/001_ssltests_23/log/001_ssltests_23_primary.log:2024-06-05 11:51:13.337 UTC [708377] LOG:  listening on Unix \nsocket \"/tmp/3lxVDNzuGC/.s.PGSQL.49632\"\ntestrun/ssl/001_ssltests_23/log/001_ssltests_23_primary.log:2024-06-05 11:51:14.333 UTC [708715] LOG:  listening on IPv4 \naddress \"127.0.0.1\", port 49632\n...\n\nAnother case (with psql using the port):\ntestrun/ssl/001_ssltests_47/log/regress_log_001_ssltests_47:# Checking port 49448\ntestrun/ssl/001_ssltests_47/log/regress_log_001_ssltests_47:# Found port 49448\ntestrun/ssl/001_ssltests_47/log/001_ssltests_47_primary.log:2024-06-05 12:20:50.178 UTC [976826] LOG:  listening on Unix \nsocket \"/tmp/GePu6gmouP/.s.PGSQL.49448\"\ntestrun/ssl/001_ssltests_47/log/001_ssltests_47_primary.log:2024-06-05 12:20:50.491 UTC [976927] HINT:  Is another \npostmaster already running on port 49448? 
If not, wait a few seconds and retry.\n...\ntestrun/ssl/001_ssltests_48/log/001_ssltests_48_primary.log:2024-06-05 12:20:50.491 UTC [976943] [unknown] LOG:  \nconnection received: host=localhost port=49448\nThe broader excerpt:\n2024-06-05 12:20:50.415 UTC [976918] [unknown] LOG:  connection received: host=localhost port=50326\n2024-06-05 12:20:50.418 UTC [976918] [unknown] LOG:  could not accept SSL connection: EOF detected\n2024-06-05 12:20:50.433 UTC [976920] [unknown] LOG:  connection received: host=localhost port=49420\n2024-06-05 12:20:50.435 UTC [976920] [unknown] LOG:  could not accept SSL connection: EOF detected\n2024-06-05 12:20:50.447 UTC [976922] [unknown] LOG:  connection received: host=localhost port=49430\n2024-06-05 12:20:50.452 UTC [976922] [unknown] LOG:  could not accept SSL connection: tlsv1 alert unknown ca\n2024-06-05 12:20:50.466 UTC [976933] [unknown] LOG:  connection received: host=localhost port=49440\n2024-06-05 12:20:50.472 UTC [976933] [unknown] LOG:  could not accept SSL connection: tlsv1 alert unknown ca\n2024-06-05 12:20:50.491 UTC [976943] [unknown] LOG:  connection received: host=localhost port=49448\n2024-06-05 12:20:50.497 UTC [976943] [unknown] LOG:  could not accept SSL connection: tlsv1 alert unknown ca\n2024-06-05 12:20:50.513 UTC [976969] [unknown] LOG:  connection received: host=localhost port=49464\n2024-06-05 12:20:50.517 UTC [976969] [unknown] LOG:  could not accept SSL connection: tlsv1 alert unknown ca\n2024-06-05 12:20:50.532 UTC [976971] [unknown] LOG:  connection received: host=localhost port=49468\n\nMaybe the ssl tests should not be considered failed in case of the TCP port\nbinding error?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 5 Jun 2024 16:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "ssl tests fail due to TCP port conflict" }, { "msg_contents": "On 2024-06-05 We 09:00, Alexander Lakhin wrote:\n>\n> Another case (with psql using the port):\n> testrun/ssl/001_ssltests_47/log/regress_log_001_ssltests_47:# Checking \n> port 49448\n> testrun/ssl/001_ssltests_47/log/regress_log_001_ssltests_47:# Found \n> port 49448\n> testrun/ssl/001_ssltests_47/log/001_ssltests_47_primary.log:2024-06-05 \n> 12:20:50.178 UTC [976826] LOG:  listening on Unix socket \n> \"/tmp/GePu6gmouP/.s.PGSQL.49448\"\n> testrun/ssl/001_ssltests_47/log/001_ssltests_47_primary.log:2024-06-05 \n> 12:20:50.491 UTC [976927] HINT:  Is another postmaster already running \n> on port 49448? 
If not, wait a few seconds and retry.\n> ...\n> testrun/ssl/001_ssltests_48/log/001_ssltests_48_primary.log:2024-06-05 \n> 12:20:50.491 UTC [976943] [unknown] LOG:  connection received: \n> host=localhost port=49448\n> The broader excerpt:\n> 2024-06-05 12:20:50.415 UTC [976918] [unknown] LOG:  connection \n> received: host=localhost port=50326\n> 2024-06-05 12:20:50.418 UTC [976918] [unknown] LOG:  could not accept \n> SSL connection: EOF detected\n> 2024-06-05 12:20:50.433 UTC [976920] [unknown] LOG:  connection \n> received: host=localhost port=49420\n> 2024-06-05 12:20:50.435 UTC [976920] [unknown] LOG:  could not accept \n> SSL connection: EOF detected\n> 2024-06-05 12:20:50.447 UTC [976922] [unknown] LOG:  connection \n> received: host=localhost port=49430\n> 2024-06-05 12:20:50.452 UTC [976922] [unknown] LOG:  could not accept \n> SSL connection: tlsv1 alert unknown ca\n> 2024-06-05 12:20:50.466 UTC [976933] [unknown] LOG:  connection \n> received: host=localhost port=49440\n> 2024-06-05 12:20:50.472 UTC [976933] [unknown] LOG:  could not accept \n> SSL connection: tlsv1 alert unknown ca\n> 2024-06-05 12:20:50.491 UTC [976943] [unknown] LOG:  connection \n> received: host=localhost port=49448\n> 2024-06-05 12:20:50.497 UTC [976943] [unknown] LOG:  could not accept \n> SSL connection: tlsv1 alert unknown ca\n> 2024-06-05 12:20:50.513 UTC [976969] [unknown] LOG:  connection \n> received: host=localhost port=49464\n> 2024-06-05 12:20:50.517 UTC [976969] [unknown] LOG:  could not accept \n> SSL connection: tlsv1 alert unknown ca\n> 2024-06-05 12:20:50.532 UTC [976971] [unknown] LOG:  connection \n> received: host=localhost port=49468\n\n\nI think I see what's going on here. It looks like it's because we start \nthe server in unix socket mode, and then switch to using TCP as well.\n\nCan you try your test with this patch applied and see if the problem \npersists? If we start in TCP mode the framework should test for a port \nclash.\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 5 Jun 2024 14:10:21 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "\nOn 2024-06-05 We 14:10, Andrew Dunstan wrote:\n>\n> On 2024-06-05 We 09:00, Alexander Lakhin wrote:\n>>\n>> Another case (with psql using the port):\n>> testrun/ssl/001_ssltests_47/log/regress_log_001_ssltests_47:# \n>> Checking port 49448\n>> testrun/ssl/001_ssltests_47/log/regress_log_001_ssltests_47:# Found \n>> port 49448\n>> testrun/ssl/001_ssltests_47/log/001_ssltests_47_primary.log:2024-06-05 \n>> 12:20:50.178 UTC [976826] LOG:  listening on Unix socket \n>> \"/tmp/GePu6gmouP/.s.PGSQL.49448\"\n>> testrun/ssl/001_ssltests_47/log/001_ssltests_47_primary.log:2024-06-05 \n>> 12:20:50.491 UTC [976927] HINT:  Is another postmaster already \n>> running on port 49448? 
If not, wait a few seconds and retry.\n>> ...\n>> testrun/ssl/001_ssltests_48/log/001_ssltests_48_primary.log:2024-06-05 \n>> 12:20:50.491 UTC [976943] [unknown] LOG:  connection received: \n>> host=localhost port=49448\n>> The broader excerpt:\n>> 2024-06-05 12:20:50.415 UTC [976918] [unknown] LOG:  connection \n>> received: host=localhost port=50326\n>> 2024-06-05 12:20:50.418 UTC [976918] [unknown] LOG:  could not accept \n>> SSL connection: EOF detected\n>> 2024-06-05 12:20:50.433 UTC [976920] [unknown] LOG:  connection \n>> received: host=localhost port=49420\n>> 2024-06-05 12:20:50.435 UTC [976920] [unknown] LOG:  could not accept \n>> SSL connection: EOF detected\n>> 2024-06-05 12:20:50.447 UTC [976922] [unknown] LOG:  connection \n>> received: host=localhost port=49430\n>> 2024-06-05 12:20:50.452 UTC [976922] [unknown] LOG:  could not accept \n>> SSL connection: tlsv1 alert unknown ca\n>> 2024-06-05 12:20:50.466 UTC [976933] [unknown] LOG:  connection \n>> received: host=localhost port=49440\n>> 2024-06-05 12:20:50.472 UTC [976933] [unknown] LOG:  could not accept \n>> SSL connection: tlsv1 alert unknown ca\n>> 2024-06-05 12:20:50.491 UTC [976943] [unknown] LOG:  connection \n>> received: host=localhost port=49448\n>> 2024-06-05 12:20:50.497 UTC [976943] [unknown] LOG:  could not accept \n>> SSL connection: tlsv1 alert unknown ca\n>> 2024-06-05 12:20:50.513 UTC [976969] [unknown] LOG:  connection \n>> received: host=localhost port=49464\n>> 2024-06-05 12:20:50.517 UTC [976969] [unknown] LOG:  could not accept \n>> SSL connection: tlsv1 alert unknown ca\n>> 2024-06-05 12:20:50.532 UTC [976971] [unknown] LOG:  connection \n>> received: host=localhost port=49468\n>\n>\n> I think I see what's going on here. It looks like it's because we \n> start the server in unix socket mode, and then switch to using TCP as \n> well.\n>\n> Can you try your test with this patch applied and see if the problem \n> persists? If we start in TCP mode the framework should test for a port \n> clash.\n>\n>\n>\n\nHmm, on closer inspection we should have reserved the port anyway.  But \nwhy is the port \"already used\" on restart? We haven't previously opened \na TCP connection on that port (except when checking if we can bind it), \nand instances should be locked against using that port.\n\n\n  ... wanders away muttering and scratching head ...\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 5 Jun 2024 15:03:00 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "Hello Andrew,\n\n05.06.2024 21:10, Andrew Dunstan wrote:\n>\n> I think I see what's going on here. It looks like it's because we start the server in unix socket mode, and then \n> switch to using TCP as well.\n>\n> Can you try your test with this patch applied and see if the problem persists? 
If we start in TCP mode the framework \n> should test for a port clash.\n>\n\nIt seems that the failure rate decreased (I guess the patch rules out the\ncase with two servers choosing the same port), but I still got:\n\n16/53 postgresql:ssl / ssl/001_ssltests_36         OK 15.25s   205 subtests passed\n17/53 postgresql:ssl / ssl/001_ssltests_30         ERROR 3.17s   (exit status 255 or signal 127 SIGinvalid)\n\n2024-06-05 19:40:37.395 UTC [414110] LOG:  starting PostgreSQL 17beta1 on x86_64-linux, compiled by gcc-13.2.1, 64-bit\n2024-06-05 19:40:37.395 UTC [414110] LOG:  could not bind IPv4 address \"127.0.0.1\": Address already in use\n2024-06-05 19:40:37.395 UTC [414110] HINT:  Is another postmaster already running on port 50072? If not, wait a few \nseconds and retry.\n\n`grep '\\b50072\\b' -r testrun/` yields:\ntestrun/ssl/001_ssltests_34/log/001_ssltests_34_primary.log:2024-06-05 19:40:37.392 UTC [414111] [unknown] LOG:  \nconnection received: host=localhost port=50072\n(a psql case)\n\nThat is, psql from the test instance 001_ssltests_34 opened a connection to\nthe test server with the client port 50072 and it made using the port by\nthe server from the test instance 001_ssltests_30 impossible.\n\nBest regards.\nAlexander\n\n\n", "msg_date": "Wed, 5 Jun 2024 23:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "\nOn 2024-06-05 We 16:00, Alexander Lakhin wrote:\n> Hello Andrew,\n>\n> 05.06.2024 21:10, Andrew Dunstan wrote:\n>>\n>> I think I see what's going on here. It looks like it's because we \n>> start the server in unix socket mode, and then switch to using TCP as \n>> well.\n>>\n>> Can you try your test with this patch applied and see if the problem \n>> persists? If we start in TCP mode the framework should test for a \n>> port clash.\n>>\n>\n> It seems that the failure rate decreased (I guess the patch rules out the\n> case with two servers choosing the same port), but I still got:\n>\n> 16/53 postgresql:ssl / ssl/001_ssltests_36         OK 15.25s   205 \n> subtests passed\n> 17/53 postgresql:ssl / ssl/001_ssltests_30         ERROR 3.17s (exit \n> status 255 or signal 127 SIGinvalid)\n>\n> 2024-06-05 19:40:37.395 UTC [414110] LOG:  starting PostgreSQL 17beta1 \n> on x86_64-linux, compiled by gcc-13.2.1, 64-bit\n> 2024-06-05 19:40:37.395 UTC [414110] LOG:  could not bind IPv4 address \n> \"127.0.0.1\": Address already in use\n> 2024-06-05 19:40:37.395 UTC [414110] HINT:  Is another postmaster \n> already running on port 50072? If not, wait a few seconds and retry.\n>\n> `grep '\\b50072\\b' -r testrun/` yields:\n> testrun/ssl/001_ssltests_34/log/001_ssltests_34_primary.log:2024-06-05 \n> 19:40:37.392 UTC [414111] [unknown] LOG:  connection received: \n> host=localhost port=50072\n> (a psql case)\n>\n> That is, psql from the test instance 001_ssltests_34 opened a \n> connection to\n> the test server with the client port 50072 and it made using the port by\n> the server from the test instance 001_ssltests_30 impossible.\n>\n\nOh. (kicks self)\n\nShould we really be allocating ephemeral server ports in the range \n41952..65535? 
Maybe we should be looking for an unallocated port \nsomewhere below 41952, and above, say, 32767, so we couldn't have a \nclient socket collision.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 5 Jun 2024 16:49:41 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "\nOn 2024-06-05 We 16:49, Andrew Dunstan wrote:\n>\n> On 2024-06-05 We 16:00, Alexander Lakhin wrote:\n>> Hello Andrew,\n>>\n>> 05.06.2024 21:10, Andrew Dunstan wrote:\n>>>\n>>> I think I see what's going on here. It looks like it's because we \n>>> start the server in unix socket mode, and then switch to using TCP \n>>> as well.\n>>>\n>>> Can you try your test with this patch applied and see if the problem \n>>> persists? If we start in TCP mode the framework should test for a \n>>> port clash.\n>>>\n>>\n>> It seems that the failure rate decreased (I guess the patch rules out \n>> the\n>> case with two servers choosing the same port), but I still got:\n>>\n>> 16/53 postgresql:ssl / ssl/001_ssltests_36         OK 15.25s 205 \n>> subtests passed\n>> 17/53 postgresql:ssl / ssl/001_ssltests_30         ERROR 3.17s (exit \n>> status 255 or signal 127 SIGinvalid)\n>>\n>> 2024-06-05 19:40:37.395 UTC [414110] LOG:  starting PostgreSQL \n>> 17beta1 on x86_64-linux, compiled by gcc-13.2.1, 64-bit\n>> 2024-06-05 19:40:37.395 UTC [414110] LOG:  could not bind IPv4 \n>> address \"127.0.0.1\": Address already in use\n>> 2024-06-05 19:40:37.395 UTC [414110] HINT:  Is another postmaster \n>> already running on port 50072? If not, wait a few seconds and retry.\n>>\n>> `grep '\\b50072\\b' -r testrun/` yields:\n>> testrun/ssl/001_ssltests_34/log/001_ssltests_34_primary.log:2024-06-05 \n>> 19:40:37.392 UTC [414111] [unknown] LOG:  connection received: \n>> host=localhost port=50072\n>> (a psql case)\n>>\n>> That is, psql from the test instance 001_ssltests_34 opened a \n>> connection to\n>> the test server with the client port 50072 and it made using the port by\n>> the server from the test instance 001_ssltests_30 impossible.\n>>\n>\n> Oh. (kicks self)\n>\n> Should we really be allocating ephemeral server ports in the range \n> 41952..65535? Maybe we should be looking for an unallocated port \n> somewhere below 41952, and above, say, 32767, so we couldn't have a \n> client socket collision.\n\n\nExcept that I see this on my Ubuntu instance:\n\n$ sudo sysctl net.ipv4.ip_local_port_range\nnet.ipv4.ip_local_port_range = 32768    60999\n\nAnd indeed I see client sockets in that range.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 5 Jun 2024 17:17:04 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-06-05 We 16:00, Alexander Lakhin wrote:\n>> That is, psql from the test instance 001_ssltests_34 opened a \n>> connection to\n>> the test server with the client port 50072 and it made using the port by\n>> the server from the test instance 001_ssltests_30 impossible.\n\n> Oh. (kicks self)\n\nD'oh.\n\n> Should we really be allocating ephemeral server ports in the range \n> 41952..65535? 
Maybe we should be looking for an unallocated port \n> somewhere below 41952, and above, say, 32767, so we couldn't have a \n> client socket collision.\n\nHmm, are there really any standards about how these port numbers\nare used?\n\nI wonder if we don't need to just be prepared to retry the whole\nthing a few times. Even if it's true that \"clients\" shouldn't\nchoose ports below 41952, we still have a small chance of failure\nagainst a non-Postgres server starting up at the wrong time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Jun 2024 17:37:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "\nOn 2024-06-05 We 17:37, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> On 2024-06-05 We 16:00, Alexander Lakhin wrote:\n>>> That is, psql from the test instance 001_ssltests_34 opened a\n>>> connection to\n>>> the test server with the client port 50072 and it made using the port by\n>>> the server from the test instance 001_ssltests_30 impossible.\n>> Oh. (kicks self)\n> D'oh.\n>\n>> Should we really be allocating ephemeral server ports in the range\n>> 41952..65535? Maybe we should be looking for an unallocated port\n>> somewhere below 41952, and above, say, 32767, so we couldn't have a\n>> client socket collision.\n> Hmm, are there really any standards about how these port numbers\n> are used?\n>\n> I wonder if we don't need to just be prepared to retry the whole\n> thing a few times. Even if it's true that \"clients\" shouldn't\n> choose ports below 41952, we still have a small chance of failure\n> against a non-Postgres server starting up at the wrong time.\n\n\nYeah, I think you're right. One thing we should do is be careful to use \nthe port as soon as possible after we have picked it, to reduce the \npossibility that something else will use it first.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 5 Jun 2024 21:23:40 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "\nOn 2024-06-05 We 16:00, Alexander Lakhin wrote:\n> Hello Andrew,\n>\n> 05.06.2024 21:10, Andrew Dunstan wrote:\n>>\n>> I think I see what's going on here. It looks like it's because we \n>> start the server in unix socket mode, and then switch to using TCP as \n>> well.\n>>\n>> Can you try your test with this patch applied and see if the problem \n>> persists? If we start in TCP mode the framework should test for a \n>> port clash.\n>>\n>\n> It seems that the failure rate decreased (I guess the patch rules out the\n> case with two servers choosing the same port), but I still got:\n>\n> 16/53 postgresql:ssl / ssl/001_ssltests_36         OK 15.25s   205 \n> subtests passed\n> 17/53 postgresql:ssl / ssl/001_ssltests_30         ERROR 3.17s (exit \n> status 255 or signal 127 SIGinvalid)\n>\n> 2024-06-05 19:40:37.395 UTC [414110] LOG:  starting PostgreSQL 17beta1 \n> on x86_64-linux, compiled by gcc-13.2.1, 64-bit\n> 2024-06-05 19:40:37.395 UTC [414110] LOG:  could not bind IPv4 address \n> \"127.0.0.1\": Address already in use\n> 2024-06-05 19:40:37.395 UTC [414110] HINT:  Is another postmaster \n> already running on port 50072? 
If not, wait a few seconds and retry.\n>\n> `grep '\\b50072\\b' -r testrun/` yields:\n> testrun/ssl/001_ssltests_34/log/001_ssltests_34_primary.log:2024-06-05 \n> 19:40:37.392 UTC [414111] [unknown] LOG:  connection received: \n> host=localhost port=50072\n> (a psql case)\n>\n> That is, psql from the test instance 001_ssltests_34 opened a \n> connection to\n> the test server with the client port 50072 and it made using the port by\n> the server from the test instance 001_ssltests_30 impossible.\n>\n>\n\nAfter sleeping on it, I still think the patch would be a good thing. \nYour torture test might still show some failures, but the buildfarm \nisn't running those, and it might be enough to eliminate or at least \nsubstantially reduce buildfarm failures by reducing to almost zero the \ntime in which a competing script might grab the port. The biggest \nproblem with the current script is apparently that we delay using the \nTCP port by starting the server in Unix socket mode, and only switch to \nusing TCP when we restart. If changing that doesn't fix the problem \nwe'll have to rethink. If this isn't the cause, though, I would expect \nto have seen similar failures from other test suites.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 6 Jun 2024 07:25:26 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "On Wed, 5 Jun 2024 at 23:37, Tom Lane <[email protected]> wrote:\n>\n> Andrew Dunstan <[email protected]> writes:\n> > On 2024-06-05 We 16:00, Alexander Lakhin wrote:\n> >> That is, psql from the test instance 001_ssltests_34 opened a\n> >> connection to\n> >> the test server with the client port 50072 and it made using the port by\n> >> the server from the test instance 001_ssltests_30 impossible.\n>\n> > Oh. (kicks self)\n>\n> D'oh.\n>\n> > Should we really be allocating ephemeral server ports in the range\n> > 41952..65535? Maybe we should be looking for an unallocated port\n> > somewhere below 41952, and above, say, 32767, so we couldn't have a\n> > client socket collision.\n>\n> Hmm, are there really any standards about how these port numbers\n> are used?\n>\n> I wonder if we don't need to just be prepared to retry the whole\n> thing a few times. Even if it's true that \"clients\" shouldn't\n> choose ports below 41952, we still have a small chance of failure\n> against a non-Postgres server starting up at the wrong time.\n\nMy suggestion would be to not touch the ephemeral port range at all\nfor these ports. In practice the ephemeral port range is used for\ncases where the operating system assigns the port, and the application\ndoesn't care whot it is. Not for when you want to get a free port, but\nwant to know in advance which one it is.\n\nFor the PgBouncer test suite we do something similar as the PG its\nperl tests do, but there we allocate a port between 10200 and 32768:\nhttps://github.com/pgbouncer/pgbouncer/blob/master/test/utils.py#L192-L215\n\nSure theoretically it's possible to hit a rare case where another\nserver starts up at the wrong time, but that chance seems way lower\nthan a client starting up at the wrong time. 
Especially since there\naren't many servers that use a port with 5 digits.\n\nAttached is a patch that updates the port numbers.", "msg_date": "Fri, 7 Jun 2024 00:02:28 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "\nOn 2024-06-06 Th 18:02, Jelte Fennema-Nio wrote:\n> On Wed, 5 Jun 2024 at 23:37, Tom Lane <[email protected]> wrote:\n>> Andrew Dunstan <[email protected]> writes:\n>>> On 2024-06-05 We 16:00, Alexander Lakhin wrote:\n>>>> That is, psql from the test instance 001_ssltests_34 opened a\n>>>> connection to\n>>>> the test server with the client port 50072 and it made using the port by\n>>>> the server from the test instance 001_ssltests_30 impossible.\n>>> Oh. (kicks self)\n>> D'oh.\n>>\n>>> Should we really be allocating ephemeral server ports in the range\n>>> 41952..65535? Maybe we should be looking for an unallocated port\n>>> somewhere below 41952, and above, say, 32767, so we couldn't have a\n>>> client socket collision.\n>> Hmm, are there really any standards about how these port numbers\n>> are used?\n>>\n>> I wonder if we don't need to just be prepared to retry the whole\n>> thing a few times. Even if it's true that \"clients\" shouldn't\n>> choose ports below 41952, we still have a small chance of failure\n>> against a non-Postgres server starting up at the wrong time.\n> My suggestion would be to not touch the ephemeral port range at all\n> for these ports. In practice the ephemeral port range is used for\n> cases where the operating system assigns the port, and the application\n> doesn't care whot it is. Not for when you want to get a free port, but\n> want to know in advance which one it is.\n>\n> For the PgBouncer test suite we do something similar as the PG its\n> perl tests do, but there we allocate a port between 10200 and 32768:\n> https://github.com/pgbouncer/pgbouncer/blob/master/test/utils.py#L192-L215\n>\n> Sure theoretically it's possible to hit a rare case where another\n> server starts up at the wrong time, but that chance seems way lower\n> than a client starting up at the wrong time. Especially since there\n> aren't many servers that use a port with 5 digits.\n>\n> Attached is a patch that updates the port numbers.\n\n\nMakes sense to me.\n\nI still think my patch to force TCP mode for the SSL test makes sense as \nwell.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 7 Jun 2024 09:55:56 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-06-06 Th 18:02, Jelte Fennema-Nio wrote:\n>> For the PgBouncer test suite we do something similar as the PG its\n>> perl tests do, but there we allocate a port between 10200 and 32768:\n>> https://github.com/pgbouncer/pgbouncer/blob/master/test/utils.py#L192-L215\n\n> Makes sense to me.\n\n> I still think my patch to force TCP mode for the SSL test makes sense as \n> well.\n\n+1 to both things. 
If that doesn't get the failure rate down to an\nacceptable level, we can look at the retry idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Jun 2024 10:25:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "Hello,\n\n07.06.2024 17:25, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> I still think my patch to force TCP mode for the SSL test makes sense as\n>> well.\n> +1 to both things. If that doesn't get the failure rate down to an\n> acceptable level, we can look at the retry idea.\n\nI'd like to add that the kerberos/001_auth test also suffers from the port\nconflict, but slightly differently. Look for example at [1]:\nkrb5kdc.log contains:\nJul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](info): setting up network...\nJul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](Error): Address already in use - Cannot bind server socket \non 127.0.0.1.55853\nJul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](Error): Failed setting up a UDP socket (for 127.0.0.1.55853)\nJul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](Error): Address already in use - Error setting up network\n\nAs far as I can see, the port for kdc is chosen by\nPostgreSQL::Test::Kerberos, via\nPostgreSQL::Test::Cluster::get_free_port(), which checks only for TCP\nport availability (with can_bind()), but not for UDP, so this increases\nthe probability of the conflict for this test (a similar failure: [2]).\nAlthough we can also find a failure with TCP: [3]\n\n(It's not clear to me, what processes can use UDP ports while testing,\nbut maybe those buildfarm animals are running on the same logical\nmachine simultaneously?)\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-07-02%2009%3A27%3A15\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-05-15%2001%3A25%3A07\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-07-04%2008%3A28%3A19\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 8 Jul 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ssl tests fail due to TCP port conflict" }, { "msg_contents": "\nOn 2024-07-08 Mo 8:00 AM, Alexander Lakhin wrote:\n> Hello,\n>\n> 07.06.2024 17:25, Tom Lane wrote:\n>> Andrew Dunstan <[email protected]> writes:\n>>> I still think my patch to force TCP mode for the SSL test makes \n>>> sense as\n>>> well.\n>> +1 to both things.  If that doesn't get the failure rate down to an\n>> acceptable level, we can look at the retry idea.\n\n\nI have push patches for both of those (i.e. start SSL test nodes in TCP \nmode and change the range of ports we allocate server ports from)\n\nI didn't see this email until after I had pushed them.\n\n\n>\n> I'd like to add that the kerberos/001_auth test also suffers from the \n> port\n> conflict, but slightly differently. 
Look for example at [1]:\n> krb5kdc.log contains:\n> Jul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](info): \n> setting up network...\n> Jul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](Error): \n> Address already in use - Cannot bind server socket on 127.0.0.1.55853\n> Jul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](Error): \n> Failed setting up a UDP socket (for 127.0.0.1.55853)\n> Jul 02 09:29:41 andres-postgres-buildfarm-v5 krb5kdc[471964](Error): \n> Address already in use - Error setting up network\n>\n> As far as I can see, the port for kdc is chosen by\n> PostgreSQL::Test::Kerberos, via\n> PostgreSQL::Test::Cluster::get_free_port(), which checks only for TCP\n> port availability (with can_bind()), but not for UDP, so this increases\n> the probability of the conflict for this test (a similar failure: [2]).\n> Although we can also find a failure with TCP: [3]\n>\n> (It's not clear to me, what processes can use UDP ports while testing,\n> but maybe those buildfarm animals are running on the same logical\n> machine simultaneously?)\n>\n> [1] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-07-02%2009%3A27%3A15\n> [2] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-05-15%2001%3A25%3A07\n> [3] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-07-04%2008%3A28%3A19\n>\n>\n\nLet's see if this persists now we are testing for free ports in a \ndifferent range, not the range usually used for ephemeral ports.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 8 Jul 2024 15:40:37 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ssl tests fail due to TCP port conflict" } ]
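The port checks discussed in this thread amount to attempting a bind() on the candidate port and treating failure as "already in use". Below is a minimal C sketch of such a probe, written only to illustrate the point raised at the end of the thread: a TCP-only check, like the existing can_bind() test, cannot notice that a UDP consumer such as krb5kdc wants the same port. The function names, and the idea of probing UDP at all, are assumptions made for this example; this is not the code in PostgreSQL::Test::Cluster.

/*
 * Illustrative sketch: can a port on 127.0.0.1 be bound for a given
 * socket type?  bind() fails with EADDRINUSE if something owns it.
 */
#include <stdbool.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static bool
probe_bind(int socktype, unsigned short port)
{
    struct sockaddr_in addr;
    int         sock;
    bool        ok;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    sock = socket(AF_INET, socktype, 0);
    if (sock < 0)
        return false;

    ok = (bind(sock, (struct sockaddr *) &addr, sizeof(addr)) == 0);
    close(sock);
    return ok;
}

static bool
port_is_free(unsigned short port)
{
    /* A TCP-only probe would miss a UDP user such as a krb5kdc instance */
    return probe_bind(SOCK_STREAM, port) && probe_bind(SOCK_DGRAM, port);
}

Even with both probes, the check is only advisory: another process can still claim the port between the probe and the server's own bind, which is why the retry idea mentioned upthread remains the fallback of last resort.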
[ { "msg_contents": "While looking at the recent bug report from Alexander Lakhin [1], I got \nannoyed by the relcache code, and RelationClearRelation in particular. I \npropose to refactor it for clarity.\n\n[1] \nhttps://www.postgresql.org/message-id/e56be7d9-14b1-664d-0bfc-00ce9772721c%40gmail.com\n\n## Patch 1\n\nThis is just a narrow fix for the reported bug [1], same as I posted on \nthat thread. Included here because I wrote the refactorings on top of \nthis patch and didn't commit it yet.\n\n\n## Patch 2: Simplify call to rebuild relcache entry for indexes\n\nTo rebuild a relcache entry that's been marked as invalid, \nRelationIdGetRelation() calls RelationReloadIndexInfo() for indexes and \nRelationClearRelation(rebuild == true) for other relations. However, \nRelationClearRelation calls RelationReloadIndexInfo() for indexes \nanyway, so RelationIdGetRelation() can just always call \nRelationClearRelation() and let RelationClearRelation() do the right \nthing to rebuild the relation, whether it's an index or something else. \nThat seems more straightforward.\n\nAlso add comments explaining how the rebuild works at index creation. \nIt's a bit special, see the comments.\n\n\n## Patch 3: Split RelationClearRelation into three different functions\n\nRelationClearRelation() is complicated. Depending on the 'rebuild' \nargument and the circumstances, like if it's called in a transaction and \nwhether the relation is an index, a nailed relation, a regular table, or \na relation dropped in the same xact, it does different things:\n\n- Remove the relation completely from the cache (rebuild == false),\n- Mark the entry as invalid (rebuild == true, but not in xact), or\n- Rebuild the entry (rebuild == true).\n\nThe callers have expectations on what they want it to do. Mostly the \ncallers with 'rebuild == false' expect the entry to be removed, and \ncallers with 'rebuild == true' expect it to be rebuilt or invalidated, \nbut there are exceptions. RelationForgetRelation() for example sets \nrd_droppedSubid and expects RelationClearRelation() to then merely \ninvalidate it, and the call from RelationIdGetRelation() expects it to \nrebuild, not merely invalidate it.\n\nI propose to split RelationClearRelation() into three functions:\n\nRelationInvalidateRelation: mark the relcache entry as invalid, so that \nit it is rebuilt on next access.\nRelationRebuildRelation: rebuild the relcache entry in-place.\nRelationClearRelation: Remove the entry from the relcache.\n\nThis moves the responsibility of deciding the right action to the \ncallers. Which they were mostly already doing. Each of those actions \nhave different preconditions, e.g. RelationRebuildRelation() can only be \ncalled in a valid transaction, and RelationClearRelation() can only be \ncalled if the reference count is zero. Splitting them makes those \npreconditions more clear, we can have assertions to document them in each.\n\n\n## RelationInitPhysicalAddr() call in RelationReloadNailed()\n\nOne question or little doubt I have: Before these patches, \nRelationReloadNailed() calls RelationInitPhysicalAddr() even when it \nleaves the relation as invalidated because we're not in a transaction or \nif the relation isn't currently in use. That's a bit surprising, I'd \nexpect it to be done when the entry is reloaded, not when it's \ninvalidated. That's how it's done for non-nailed relations. And in fact, \nfor a nailed relation, RelationInitPhysicalAddr() is called *again* when \nit's reloaded.\n\nIs it important? 
Before commit a54e1f1587, nailed non-index relations \nwere not reloaded at all, except for the call to \nRelationInitPhysicalAddr(), which seemed consistent. I think this was \nunintentional in commit a54e1f1587, or perhaps just overly defensive, as \nthere's no harm in some extra RelationInitPhysicalAddr() calls.\n\nThis patch removes that extra call from the invalidation path, but if it \nturns out to be wrong, we can easily add it to RelationInvalidateRelation.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Wed, 5 Jun 2024 16:56:33 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Relcache refactoring" }, { "msg_contents": "On Wed, Jun 5, 2024 at 9:56 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> ## Patch 3: Split RelationClearRelation into three different functions\n>\n> RelationClearRelation() is complicated. Depending on the 'rebuild'\n> argument and the circumstances, like if it's called in a transaction and\n> whether the relation is an index, a nailed relation, a regular table, or\n> a relation dropped in the same xact, it does different things:\n>\n> - Remove the relation completely from the cache (rebuild == false),\n> - Mark the entry as invalid (rebuild == true, but not in xact), or\n> - Rebuild the entry (rebuild == true).\n>\n> The callers have expectations on what they want it to do. Mostly the\n> callers with 'rebuild == false' expect the entry to be removed, and\n> callers with 'rebuild == true' expect it to be rebuilt or invalidated,\n> but there are exceptions. RelationForgetRelation() for example sets\n> rd_droppedSubid and expects RelationClearRelation() to then merely\n> invalidate it, and the call from RelationIdGetRelation() expects it to\n> rebuild, not merely invalidate it.\n>\n> I propose to split RelationClearRelation() into three functions:\n>\n> RelationInvalidateRelation: mark the relcache entry as invalid, so that\n> it it is rebuilt on next access.\n> RelationRebuildRelation: rebuild the relcache entry in-place.\n> RelationClearRelation: Remove the entry from the relcache.\n>\n> This moves the responsibility of deciding the right action to the\n> callers. Which they were mostly already doing. Each of those actions\n> have different preconditions, e.g. RelationRebuildRelation() can only be\n> called in a valid transaction, and RelationClearRelation() can only be\n> called if the reference count is zero. 
Splitting them makes those\n> preconditions more clear, we can have assertions to document them in each.\n>\n>\none minor issue.\n\nstatic void\nRelationClearRelation(Relation relation)\n{\n Assert(RelationHasReferenceCountZero(relation));\n Assert(!relation->rd_isnailed);\n\n /*\n * Relations created in the same transaction must never be removed, see\n * RelationFlushRelation.\n */\n Assert(relation->rd_createSubid == InvalidSubTransactionId);\n Assert(relation->rd_firstRelfilelocatorSubid == InvalidSubTransactionId);\n Assert(relation->rd_droppedSubid == InvalidSubTransactionId);\n\n /* Ensure it's closed at smgr level */\n RelationCloseSmgr(relation);\n\n /* Free AM cached data, if any */\n if (relation->rd_amcache)\n pfree(relation->rd_amcache);\n relation->rd_amcache = NULL;\n\n /* Mark it as invalid (just pro forma since we will free it below) */\n relation->rd_isvalid = false;\n\n /* Remove it from the hash table */\n RelationCacheDelete(relation);\n\n /* And release storage */\n RelationDestroyRelation(relation, false);\n}\n\n\ncan be simplified as\n\n\nstatic void\nRelationClearRelation(Relation relation)\n{\n ---bunch of Asserts\n\n /* first mark it as invalid */\n RelationInvalidateRelation(relation);\n\n /* Remove it from the hash table */\n RelationCacheDelete(relation);\n\n /* And release storage */\n RelationDestroyRelation(relation, false);\n}\n?\n\n\nin RelationRebuildRelation\nwe can also use RelationInvalidateRelation?\n\n\n\n * We assume that at the time we are called, we have at least AccessShareLock\n * on the target index. (Note: in the calls from RelationClearRelation,\n * this is legitimate because we know the rel has positive refcount.)\n\ncalling RelationClearRelation, rel->rd_refcnt == 0\nseems conflicted with the above comments in RelationReloadIndexInfo.\nso i am confused with the above comments.\n\n\n", "msg_date": "Mon, 23 Sep 2024 09:39:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Relcache refactoring" } ]
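To make the suggestion above concrete, here is one possible shape of the factored-out helper, derived purely from the function body quoted in the preceding message. It is a sketch of how the code could look inside src/backend/utils/cache/relcache.c, not the committed patch, and the comments simply restate what the quoted steps do.

static void
RelationInvalidateRelation(Relation relation)
{
    /* Ensure it's closed at smgr level */
    RelationCloseSmgr(relation);

    /* Free AM cached data, if any */
    if (relation->rd_amcache)
        pfree(relation->rd_amcache);
    relation->rd_amcache = NULL;

    /* Mark it invalid so that the next access rebuilds it */
    relation->rd_isvalid = false;
}

static void
RelationClearRelation(Relation relation)
{
    Assert(RelationHasReferenceCountZero(relation));
    Assert(!relation->rd_isnailed);
    /* ... remaining asserts from the quoted version go here ... */

    /* Mark it as invalid (just pro forma since we will free it below) */
    RelationInvalidateRelation(relation);

    /* Remove it from the hash table */
    RelationCacheDelete(relation);

    /* And release storage */
    RelationDestroyRelation(relation, false);
}

The asserts stay in RelationClearRelation() rather than the helper, since only the removal path requires a zero reference count and a non-nailed, non-in-xact relation.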
[ { "msg_contents": "While reviewing Daniel's pg_dump patch [0], I was initially confused\nbecause the return value of getTypes() isn't saved anywhere. Once I found\ncommit 92316a4, I realized that data was actually stored in a separate hash\ntable. In fact, many of the functions in this area don't actually need to\nreturn anything, so we can trim some code and hopefully reduce confusion a\nbit. Patch attached.\n\n[0] https://postgr.es/m/8F1F1E1D-D17B-4B33-B014-EDBCD15F3F0B%40yesql.se\n\n-- \nnathan", "msg_date": "Wed, 5 Jun 2024 10:13:57 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "small pg_dump code cleanup" }, { "msg_contents": "On Wed, Jun 5, 2024 at 11:14 AM Nathan Bossart <[email protected]>\nwrote:\n\n> In fact, many of the functions in this area don't actually need to\n\nreturn anything, so we can trim some code and hopefully reduce confusion a\n> bit. Patch attached.\n>\n\nNice cleanup! Two minor comments:\n\n(1) Names like `getXXX` for these functions suggest to me that they return\na value, rather than side-effecting. I realize some variants continue to\nreturn a value, but the majority no longer do. Perhaps a name like\nlookupXXX() or readXXX() would be clearer?\n\n(2) These functions malloc() a single ntups * sizeof(struct) allocation and\nthen index into it to fill-in each struct before entering it into the hash\ntable. It might be more straightforward to just malloc each individual\nstruct.\n\nNeil\n\nOn Wed, Jun 5, 2024 at 11:14 AM Nathan Bossart <[email protected]> wrote: In fact, many of the functions in this area don't actually need to\nreturn anything, so we can trim some code and hopefully reduce confusion a\nbit.  Patch attached.Nice cleanup! Two minor comments:(1) Names like `getXXX` for these functions suggest to me that they return a value, rather than side-effecting. I realize some variants continue to return a value, but the majority no longer do. Perhaps a name like lookupXXX() or readXXX() would be clearer?(2) These functions malloc() a single ntups * sizeof(struct) allocation and then index into it to fill-in each struct before entering it into the hash table. It might be more straightforward to just malloc each individual struct.Neil", "msg_date": "Wed, 5 Jun 2024 12:22:03 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: small pg_dump code cleanup" }, { "msg_contents": "On Wed, Jun 05, 2024 at 12:22:03PM -0400, Neil Conway wrote:\n> Nice cleanup! Two minor comments:\n\nThanks for taking a look.\n\n> (1) Names like `getXXX` for these functions suggest to me that they return\n> a value, rather than side-effecting. I realize some variants continue to\n> return a value, but the majority no longer do. Perhaps a name like\n> lookupXXX() or readXXX() would be clearer?\n\nWhat about collectXXX() to match similar functions in pg_dump.c (e.g.,\ncollectRoleNames(), collectComments(), collectSecLabels())?\n\n> (2) These functions malloc() a single ntups * sizeof(struct) allocation and\n> then index into it to fill-in each struct before entering it into the hash\n> table. It might be more straightforward to just malloc each individual\n> struct.\n\nThat'd increase the number of allocations quite significantly, but I'd be\nsurprised if that was noticeable outside of extreme scenarios. 
At the\nmoment, I'm inclined to leave these as-is for this reason and because I\ndoubt it'd result in much cleanup, but I'll yield to the majority opinion\nhere.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 5 Jun 2024 11:37:54 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: small pg_dump code cleanup" }, { "msg_contents": "On Wed, Jun 5, 2024 at 12:37 PM Nathan Bossart <[email protected]>\nwrote:\n\n> What about collectXXX() to match similar functions in pg_dump.c (e.g.,\n> collectRoleNames(), collectComments(), collectSecLabels())?\n>\n\nsgtm.\n\n\n> > (2) These functions malloc() a single ntups * sizeof(struct) allocation\n> and\n> > then index into it to fill-in each struct before entering it into the\n> hash\n> > table. It might be more straightforward to just malloc each individual\n> > struct.\n>\n> That'd increase the number of allocations quite significantly, but I'd be\n> surprised if that was noticeable outside of extreme scenarios. At the\n> moment, I'm inclined to leave these as-is for this reason and because I\n> doubt it'd result in much cleanup, but I'll yield to the majority opinion\n> here.\n>\n\nAs you say, I'd be surprised if the performance difference is noticeable.\nPersonally I don't think the marginal performance win justifies the hit to\nreadability, but I don't feel strongly about it.\n\nNeil\n\nOn Wed, Jun 5, 2024 at 12:37 PM Nathan Bossart <[email protected]> wrote:What about collectXXX() to match similar functions in pg_dump.c (e.g.,\ncollectRoleNames(), collectComments(), collectSecLabels())?sgtm. \n> (2) These functions malloc() a single ntups * sizeof(struct) allocation and\n> then index into it to fill-in each struct before entering it into the hash\n> table. It might be more straightforward to just malloc each individual\n> struct.\n\nThat'd increase the number of allocations quite significantly, but I'd be\nsurprised if that was noticeable outside of extreme scenarios.  At the\nmoment, I'm inclined to leave these as-is for this reason and because I\ndoubt it'd result in much cleanup, but I'll yield to the majority opinion\nhere.As you say, I'd be surprised if the performance difference is noticeable. Personally I don't think the marginal performance win justifies the hit to readability, but I don't feel strongly about it.Neil", "msg_date": "Wed, 5 Jun 2024 13:51:08 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: small pg_dump code cleanup" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Wed, Jun 05, 2024 at 12:22:03PM -0400, Neil Conway wrote:\n>> (1) Names like `getXXX` for these functions suggest to me that they return\n>> a value, rather than side-effecting. I realize some variants continue to\n>> return a value, but the majority no longer do. Perhaps a name like\n>> lookupXXX() or readXXX() would be clearer?\n\n> What about collectXXX() to match similar functions in pg_dump.c (e.g.,\n> collectRoleNames(), collectComments(), collectSecLabels())?\n\nPersonally I see nothing much wrong with leaving them as getXXX.\n\n>> (2) These functions malloc() a single ntups * sizeof(struct) allocation and\n>> then index into it to fill-in each struct before entering it into the hash\n>> table. It might be more straightforward to just malloc each individual\n>> struct.\n\n> That'd increase the number of allocations quite significantly, but I'd be\n> surprised if that was noticeable outside of extreme scenarios. 
At the\n> moment, I'm inclined to leave these as-is for this reason and because I\n> doubt it'd result in much cleanup, but I'll yield to the majority opinion\n> here.\n\nI think that would be quite an invasive change; it would require\nmany hundreds of edits like\n\n-\t\tfinfo[i].dobj.objType = DO_FUNC;\n+\t\tfinfo->dobj.objType = DO_FUNC;\n\nwhich aside from being tedious would create a back-patching hazard.\nSo I'm kind of -0.1 or so.\n\nAnother angle to this is that Coverity and possibly other tools tend\nto report that these functions leak these allocations, apparently\nbecause they don't notice that pointers into the allocations get\nstored in hash tables by a subroutine. I'm not sure if making this\nchange would make that worse or better. If we really want to change\nit, that might be worth checking somehow before we jump.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Jun 2024 13:58:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: small pg_dump code cleanup" }, { "msg_contents": "On Wed, Jun 05, 2024 at 01:58:54PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> On Wed, Jun 05, 2024 at 12:22:03PM -0400, Neil Conway wrote:\n>>> (2) These functions malloc() a single ntups * sizeof(struct) allocation and\n>>> then index into it to fill-in each struct before entering it into the hash\n>>> table. It might be more straightforward to just malloc each individual\n>>> struct.\n> \n>> That'd increase the number of allocations quite significantly, but I'd be\n>> surprised if that was noticeable outside of extreme scenarios. At the\n>> moment, I'm inclined to leave these as-is for this reason and because I\n>> doubt it'd result in much cleanup, but I'll yield to the majority opinion\n>> here.\n> \n> I think that would be quite an invasive change; it would require\n> many hundreds of edits like\n> \n> -\t\tfinfo[i].dobj.objType = DO_FUNC;\n> +\t\tfinfo->dobj.objType = DO_FUNC;\n> \n> which aside from being tedious would create a back-patching hazard.\n> So I'm kind of -0.1 or so.\n> \n> Another angle to this is that Coverity and possibly other tools tend\n> to report that these functions leak these allocations, apparently\n> because they don't notice that pointers into the allocations get\n> stored in hash tables by a subroutine. I'm not sure if making this\n> change would make that worse or better. If we really want to change\n> it, that might be worth checking somehow before we jump.\n\nAt the moment, I'm inclined to commit v1 once v18 development opens up. We\ncan consider any additional adjustments separately.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 11 Jun 2024 15:30:14 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: small pg_dump code cleanup" }, { "msg_contents": "> On 11 Jun 2024, at 22:30, Nathan Bossart <[email protected]> wrote:\n\n> At the moment, I'm inclined to commit v1 once v18 development opens up. We\n> can consider any additional adjustments separately.\n\nPatch LGTM and the tests pass, +1 on pushing this version.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 17:08:12 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: small pg_dump code cleanup" }, { "msg_contents": "Committed.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 2 Jul 2024 11:27:00 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: small pg_dump code cleanup" } ]
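For readers less familiar with this corner of pg_dump, the allocation pattern under discussion looks roughly like the sketch below: a single pg_malloc() of ntups entries, with a pointer to each filled-in element handed to AssignDumpId(), which stores it in the lookup hash table, so the function itself no longer needs to return anything. The struct name, function name, and query here are simplified stand-ins for the real getFuncs()/collectXXX() code, and the sketch assumes it sits inside pg_dump.c with its usual headers.

typedef struct ExampleFuncInfo
{
    DumpableObject dobj;
    /* ... per-function fields elided ... */
} ExampleFuncInfo;

static void
collectExampleFuncs(Archive *fout)
{
    PGresult   *res;
    int         ntups;
    ExampleFuncInfo *finfo;

    res = ExecuteSqlQuery(fout, "SELECT tableoid, oid, proname FROM pg_proc",
                          PGRES_TUPLES_OK);
    ntups = PQntuples(res);

    /* one allocation covering all entries, as in the existing getXXX code */
    finfo = (ExampleFuncInfo *) pg_malloc(ntups * sizeof(ExampleFuncInfo));

    for (int i = 0; i < ntups; i++)
    {
        finfo[i].dobj.objType = DO_FUNC;
        finfo[i].dobj.catId.tableoid = atooid(PQgetvalue(res, i, 0));
        finfo[i].dobj.catId.oid = atooid(PQgetvalue(res, i, 1));
        /* AssignDumpId() enters a pointer to this entry into the hash table */
        AssignDumpId(&finfo[i].dobj);
        finfo[i].dobj.name = pg_strdup(PQgetvalue(res, i, 2));
    }

    PQclear(res);
}

Switching to one allocation per struct, as floated above, would mainly change the pg_malloc() call and the finfo[i] indexing into per-element allocations; the AssignDumpId() bookkeeping would stay the same.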
[ { "msg_contents": "While working on commit b631d0149, I got a bee in my bonnet about\nhow unfriendly PL/Tcl's error CONTEXT reports are:\n\n* The context reports expose PL/Tcl's internal names for the Tcl\nprocedures it creates, which'd be fine if those names were readable.\nBut actually they're something like \"__PLTcl_proc_NNNN\", where NNNN\nis the function OID. Not only is that unintelligible, but because\nthe OIDs aren't stable this forces us to disable display of the\nCONTEXT lines in all of PL/Tcl's regression tests.\n\n* The first line of the context report (almost?) always duplicates\nthe primary error message, which is redundant and not per our\nnormal reporting style.\n\nSo attached is a patch that attempts to improve this situation.\n\nThe key question is how to avoid including function OIDs in the\nstrings that will appear in the regression test outputs. The\nanswer I propose is to start with an internal name like\n\"__PLTcl_proc_NAME\", where NAME is the function's normal SQL name,\nand then append the OID only if that function name is not unique.\nAs long as we don't create test cases that involve throwing\nerrors from duplicatively-named functions, we can show the context\nreports and still have stable regression outputs. I think this will\nimprove the user experience for regular users too.\n\nPL/Tcl wants the internal names to be all-ASCII-alphanumeric,\nwhich saves it from having to think about encoding conversion\nor quoting when inserting those names into Tcl command strings.\nWhat I did in the attached is to copy only ASCII alphanumerics\nfrom the SQL name. Perhaps it's worth working harder but\nI failed to get excited about that.\n\nA few notes:\n\n* To avoid unnecessarily appending the OID when a function is\nredefined, I modified the logic to explicitly delete the old Tcl\ncommand before checking for duplication. This is okay even if the\nfunction is currently being evaluated, because Tcl's internal\nreference counting prevents it from deleting the underlying code\nobject until it's done being executed. Really we were depending on\nthat reference counting to handle such cases already, but you wouldn't\nhave known it from our comments. I added a test case to demonstrate\nexplicitly that this works correctly.\n\n* Sadly, pltcl_trigger.sql still has to suppress the context\nreports. Although its function names are now stable, the reports\ninclude trigger argument lists, which include numeric table OIDs\nso they're unstable. I don't see a way to change that without\nbreaking API for user trigger functions.\n\n* A hazard with this plan is that the regression tests' context\nreports might turn out to be platform-dependent. I experimented\nwith Tcl 8.5 and 8.6 here and found one difference: the \"missing\nclose-brace\" error reported by our tcl_error() test case shows the\nunmatched open-brace on one version but not the other. AFAICS the\npoint of that test is just to exercise some Tcl-detected error, not\nnecessarily that exact one, so I just modified the test case to cause\na different error. 
We might find additional problems once this patch\nhits the buildfarm or gets out into the field.\n\nI'll park this in the next CF.\n\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 05 Jun 2024 13:42:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Improving PL/Tcl's error context reports" }, { "msg_contents": "On 05/06/2024 20:42, Tom Lane wrote:\n> While working on commit b631d0149, I got a bee in my bonnet about\n> how unfriendly PL/Tcl's error CONTEXT reports are:\n> \n> * The context reports expose PL/Tcl's internal names for the Tcl\n> procedures it creates, which'd be fine if those names were readable.\n> But actually they're something like \"__PLTcl_proc_NNNN\", where NNNN\n> is the function OID. Not only is that unintelligible, but because\n> the OIDs aren't stable this forces us to disable display of the\n> CONTEXT lines in all of PL/Tcl's regression tests.\n> \n> * The first line of the context report (almost?) always duplicates\n> the primary error message, which is redundant and not per our\n> normal reporting style.\n> \n> So attached is a patch that attempts to improve this situation.\n> \n> The key question is how to avoid including function OIDs in the\n> strings that will appear in the regression test outputs. The\n> answer I propose is to start with an internal name like\n> \"__PLTcl_proc_NAME\", where NAME is the function's normal SQL name,\n> and then append the OID only if that function name is not unique.\n> As long as we don't create test cases that involve throwing\n> errors from duplicatively-named functions, we can show the context\n> reports and still have stable regression outputs. I think this will\n> improve the user experience for regular users too.\n\nYes, that sounds a lot nicer.\n\nWhat happens if you rename a function? I guess the error context will \nstill print the old name, but that's pretty harmless.\n\nHmm, could we do something with tcl namespaces to allow having two \nprocedures with the same name? E.g. create a separate namespace, based \non the OID, for each procedure. I wonder how the stack trace would look \nlike then.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 4 Jul 2024 18:27:43 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving PL/Tcl's error context reports" }, { "msg_contents": "čt 4. 7. 2024 v 17:27 odesílatel Heikki Linnakangas <[email protected]>\nnapsal:\n\n> On 05/06/2024 20:42, Tom Lane wrote:\n> > While working on commit b631d0149, I got a bee in my bonnet about\n> > how unfriendly PL/Tcl's error CONTEXT reports are:\n> >\n> > * The context reports expose PL/Tcl's internal names for the Tcl\n> > procedures it creates, which'd be fine if those names were readable.\n> > But actually they're something like \"__PLTcl_proc_NNNN\", where NNNN\n> > is the function OID. Not only is that unintelligible, but because\n> > the OIDs aren't stable this forces us to disable display of the\n> > CONTEXT lines in all of PL/Tcl's regression tests.\n> >\n> > * The first line of the context report (almost?) always duplicates\n> > the primary error message, which is redundant and not per our\n> > normal reporting style.\n> >\n> > So attached is a patch that attempts to improve this situation.\n> >\n> > The key question is how to avoid including function OIDs in the\n> > strings that will appear in the regression test outputs. 
The\n> > answer I propose is to start with an internal name like\n> > \"__PLTcl_proc_NAME\", where NAME is the function's normal SQL name,\n> > and then append the OID only if that function name is not unique.\n> > As long as we don't create test cases that involve throwing\n> > errors from duplicatively-named functions, we can show the context\n> > reports and still have stable regression outputs. I think this will\n> > improve the user experience for regular users too.\n>\n> Yes, that sounds a lot nicer.\n>\n> What happens if you rename a function? I guess the error context will\n> still print the old name, but that's pretty harmless.\n>\n\nThe rename should to generate different tid, so the function will be\nrecompiled\n\n<-->/************************************************************\n<--> * If it's present, must check whether it's still up to date.\n<--> * This is needed because CREATE OR REPLACE FUNCTION can modify the\n<--> * function's pg_proc entry without changing its OID.\n<--> ************************************************************/\n<-->if (prodesc != NULL &&\n<--><-->prodesc->internal_proname != NULL &&\n<--><-->prodesc->fn_xmin == HeapTupleHeaderGetRawXmin(procTup->t_data) &&\n<--><-->ItemPointerEquals(&prodesc->fn_tid, &procTup->t_self))\n<-->{\n<--><-->/* It's still up-to-date, so we can use it */\n<--><-->ReleaseSysCache(procTup);\n<--><-->return prodesc;\n<-->}\n\n\n>\n> Hmm, could we do something with tcl namespaces to allow having two\n> procedures with the same name? E.g. create a separate namespace, based\n> on the OID, for each procedure. I wonder how the stack trace would look\n> like then.\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\n>\n>\n\nčt 4. 7. 2024 v 17:27 odesílatel Heikki Linnakangas <[email protected]> napsal:On 05/06/2024 20:42, Tom Lane wrote:\n> While working on commit b631d0149, I got a bee in my bonnet about\n> how unfriendly PL/Tcl's error CONTEXT reports are:\n> \n> * The context reports expose PL/Tcl's internal names for the Tcl\n> procedures it creates, which'd be fine if those names were readable.\n> But actually they're something like \"__PLTcl_proc_NNNN\", where NNNN\n> is the function OID.  Not only is that unintelligible, but because\n> the OIDs aren't stable this forces us to disable display of the\n> CONTEXT lines in all of PL/Tcl's regression tests.\n> \n> * The first line of the context report (almost?) always duplicates\n> the primary error message, which is redundant and not per our\n> normal reporting style.\n> \n> So attached is a patch that attempts to improve this situation.\n> \n> The key question is how to avoid including function OIDs in the\n> strings that will appear in the regression test outputs.  The\n> answer I propose is to start with an internal name like\n> \"__PLTcl_proc_NAME\", where NAME is the function's normal SQL name,\n> and then append the OID only if that function name is not unique.\n> As long as we don't create test cases that involve throwing\n> errors from duplicatively-named functions, we can show the context\n> reports and still have stable regression outputs.  I think this will\n> improve the user experience for regular users too.\n\nYes, that sounds a lot nicer.\n\nWhat happens if you rename a function? 
I guess the error context will \nstill print the old name, but that's pretty harmless.The rename should to generate different tid, so the function will be recompiled<-->/************************************************************<--> * If it's present, must check whether it's still up to date.<--> * This is needed because CREATE OR REPLACE FUNCTION can modify the<--> * function's pg_proc entry without changing its OID.<--> ************************************************************/<-->if (prodesc != NULL &&<--><-->prodesc->internal_proname != NULL &&<--><-->prodesc->fn_xmin == HeapTupleHeaderGetRawXmin(procTup->t_data) &&<--><-->ItemPointerEquals(&prodesc->fn_tid, &procTup->t_self))<-->{<--><-->/* It's still up-to-date, so we can use it */<--><-->ReleaseSysCache(procTup);<--><-->return prodesc;<-->} \n\nHmm, could we do something with tcl namespaces to allow having two \nprocedures with the same name? E.g. create a separate namespace, based \non the OID, for each procedure. I wonder how the stack trace would look \nlike then.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 4 Jul 2024 19:15:55 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving PL/Tcl's error context reports" }, { "msg_contents": "Hi\n\n\n> Hmm, could we do something with tcl namespaces to allow having two\n> procedures with the same name? E.g. create a separate namespace, based\n> on the OID, for each procedure. I wonder how the stack trace would look\n> like then.\n>\n\nI didn't do full test, but I think so tcl uses for error messages fully\nqualified name\n\n\n\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\n>\n>\n\nHi \nHmm, could we do something with tcl namespaces to allow having two \nprocedures with the same name? E.g. create a separate namespace, based \non the OID, for each procedure. I wonder how the stack trace would look \nlike then.I didn't do full test, but I think so tcl uses for error messages fully qualified name \n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 4 Jul 2024 19:30:48 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving PL/Tcl's error context reports" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> čt 4. 7. 2024 v 17:27 odesílatel Heikki Linnakangas <[email protected]>\n> napsal:\n>> What happens if you rename a function? I guess the error context will\n>> still print the old name, but that's pretty harmless.\n\n> The rename should to generate different tid, so the function will be\n> recompiled\n\nRight. With patch:\n\nregression=# create function bogus() returns int language pltcl as\n'return [expr 1 / 0]';\nCREATE FUNCTION\nregression=# select bogus();\nERROR: divide by zero\nCONTEXT: while executing\n\"expr 1 / 0\"\n (procedure \"__PLTcl_proc_bogus\" line 2)\n invoked from within\n\"__PLTcl_proc_bogus\"\nin PL/Tcl function \"bogus\"\nregression=# alter function bogus() rename to stillbogus;\nALTER FUNCTION\nregression=# select stillbogus();\nERROR: divide by zero\nCONTEXT: while executing\n\"expr 1 / 0\"\n (procedure \"__PLTcl_proc_stillbogus\" line 2)\n invoked from within\n\"__PLTcl_proc_stillbogus\"\nin PL/Tcl function \"stillbogus\"\n\n>> Hmm, could we do something with tcl namespaces to allow having two\n>> procedures with the same name? E.g. create a separate namespace, based\n>> on the OID, for each procedure. 
I wonder how the stack trace would look\n>> like then.\n\nIf the namespace depends on the OID then we still have nonreproducible\nstack traces, no? We could maybe make Tcl namespaces that match the\nSQL schema names, but that doesn't get us out of the duplication\nproblem when there are similarly-named functions with different\nargument lists.\n\nAnother idea is to make the Tcl names include the SQL schema name,\nthat is __PLTcl_proc_myschema_myfunction. That avoids needing to\nappend OIDs when the problem is functions in different schemas,\nbut it doesn't move the needle for overloaded functions. On the\nwhole I feel like that'd add verbosity without buying much.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2024 13:36:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving PL/Tcl's error context reports" }, { "msg_contents": "čt 4. 7. 2024 v 19:36 odesílatel Tom Lane <[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n> > čt 4. 7. 2024 v 17:27 odesílatel Heikki Linnakangas <[email protected]>\n> > napsal:\n> >> What happens if you rename a function? I guess the error context will\n> >> still print the old name, but that's pretty harmless.\n>\n> > The rename should to generate different tid, so the function will be\n> > recompiled\n>\n> Right. With patch:\n>\n> regression=# create function bogus() returns int language pltcl as\n> 'return [expr 1 / 0]';\n> CREATE FUNCTION\n> regression=# select bogus();\n> ERROR: divide by zero\n> CONTEXT: while executing\n> \"expr 1 / 0\"\n> (procedure \"__PLTcl_proc_bogus\" line 2)\n> invoked from within\n> \"__PLTcl_proc_bogus\"\n> in PL/Tcl function \"bogus\"\n> regression=# alter function bogus() rename to stillbogus;\n> ALTER FUNCTION\n> regression=# select stillbogus();\n> ERROR: divide by zero\n> CONTEXT: while executing\n> \"expr 1 / 0\"\n> (procedure \"__PLTcl_proc_stillbogus\" line 2)\n> invoked from within\n> \"__PLTcl_proc_stillbogus\"\n> in PL/Tcl function \"stillbogus\"\n>\n> >> Hmm, could we do something with tcl namespaces to allow having two\n> >> procedures with the same name? E.g. create a separate namespace, based\n> >> on the OID, for each procedure. I wonder how the stack trace would look\n> >> like then.\n>\n> If the namespace depends on the OID then we still have nonreproducible\n> stack traces, no? We could maybe make Tcl namespaces that match the\n> SQL schema names, but that doesn't get us out of the duplication\n> problem when there are similarly-named functions with different\n> argument lists.\n>\n> Another idea is to make the Tcl names include the SQL schema name,\n> that is __PLTcl_proc_myschema_myfunction. That avoids needing to\n> append OIDs when the problem is functions in different schemas,\n> but it doesn't move the needle for overloaded functions. On the\n> whole I feel like that'd add verbosity without buying much.\n>\n\nI like the idea of using a schema name inside. 
It doesn't fix all, but the\ncost can be low, and some risk of duplicity is reduced.\n\nGetting unique name based on suffix _oid looks not too much nice (using\n_increment can be nicer), but it should to work\n\nPLpgSQL uses more often function signature\n\n(2024-07-04 19:49:20) postgres=# select bx(0);\nERROR: division by zero\nCONTEXT: PL/pgSQL function fx(integer) line 1 at RETURN\nPL/pgSQL function bx(integer) line 1 at RETURN\n\nWhat can be interesting information\n\nHow much work can be using modified function signature for internal name\nlike\n\n__PLTcl_proc_myschema_myfunction_integer\n\n__PLTcl_trigger_myschema_myfunction_table_schema_table\n\nIs there some size limit for variable name? I didn't find it.\n\n\n\n\n\n> regards, tom lane\n>\n\nčt 4. 7. 2024 v 19:36 odesílatel Tom Lane <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n> čt 4. 7. 2024 v 17:27 odesílatel Heikki Linnakangas <[email protected]>\n> napsal:\n>> What happens if you rename a function? I guess the error context will\n>> still print the old name, but that's pretty harmless.\n\n> The rename should to generate different tid, so the function will be\n> recompiled\n\nRight.  With patch:\n\nregression=# create function bogus() returns int language pltcl as\n'return [expr 1 / 0]';\nCREATE FUNCTION\nregression=# select bogus();\nERROR:  divide by zero\nCONTEXT:  while executing\n\"expr 1 / 0\"\n    (procedure \"__PLTcl_proc_bogus\" line 2)\n    invoked from within\n\"__PLTcl_proc_bogus\"\nin PL/Tcl function \"bogus\"\nregression=# alter function bogus() rename to stillbogus;\nALTER FUNCTION\nregression=# select stillbogus();\nERROR:  divide by zero\nCONTEXT:  while executing\n\"expr 1 / 0\"\n    (procedure \"__PLTcl_proc_stillbogus\" line 2)\n    invoked from within\n\"__PLTcl_proc_stillbogus\"\nin PL/Tcl function \"stillbogus\"\n\n>> Hmm, could we do something with tcl namespaces to allow having two\n>> procedures with the same name? E.g. create a separate namespace, based\n>> on the OID, for each procedure. I wonder how the stack trace would look\n>> like then.\n\nIf the namespace depends on the OID then we still have nonreproducible\nstack traces, no?  We could maybe make Tcl namespaces that match the\nSQL schema names, but that doesn't get us out of the duplication\nproblem when there are similarly-named functions with different\nargument lists.\n\nAnother idea is to make the Tcl names include the SQL schema name,\nthat is __PLTcl_proc_myschema_myfunction.  That avoids needing to\nappend OIDs when the problem is functions in different schemas,\nbut it doesn't move the needle for overloaded functions.  On the\nwhole I feel like that'd add verbosity without buying much.I like the idea of using a schema name inside. It doesn't fix all, but the cost can be low, and some risk of duplicity is reduced.Getting unique name based on suffix _oid looks not too much nice (using _increment can be nicer), but it should to workPLpgSQL uses more often function signature(2024-07-04 19:49:20) postgres=# select bx(0);ERROR:  division by zeroCONTEXT:  PL/pgSQL function fx(integer) line 1 at RETURNPL/pgSQL function bx(integer) line 1 at RETURNWhat can be interesting information How much work can be using modified function signature for internal name like__PLTcl_proc_myschema_myfunction_integer__PLTcl_trigger_myschema_myfunction_table_schema_tableIs there some size limit for variable name? 
I didn't find it.\n\n                        regards, tom lane", "msg_date": "Thu, 4 Jul 2024 19:56:13 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving PL/Tcl's error context reports" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> Getting unique name based on suffix _oid looks not too much nice (using\n> _increment can be nicer), but it should to work\n\nHmm, yeah we could do an increment. It'd make the results in cases\nof conflict invocation-order-dependent though, which seems like it\nmight be worse than using OIDs.\n\n> PLpgSQL uses more often function signature\n\n> (2024-07-04 19:49:20) postgres=# select bx(0);\n> ERROR: division by zero\n> CONTEXT: PL/pgSQL function fx(integer) line 1 at RETURN\n> PL/pgSQL function bx(integer) line 1 at RETURN\n\nOh that's a good idea! So let's use format_procedure(), same as\nplpgsql does, to generate the final context line that currently\nreads like\n\nin PL/Tcl function \"bogus\"\n\nThen, we could apply the \"pull out just alphanumerics\" rule to\nthe result of format_procedure() to generate the internal Tcl name.\nThat should greatly reduce the number of cases where we have duplicate\ninternal names we have to unique-ify.\n\n> Is there some size limit for variable name? I didn't find it.\n\nI did a quick test with 10000-character names and Tcl didn't\ncomplain, so it seems like there's no hard limit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2024 14:16:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving PL/Tcl's error context reports" }, { "msg_contents": "I wrote:\n> Pavel Stehule <[email protected]> writes:\n>> PLpgSQL uses more often function signature\n>> (2024-07-04 19:49:20) postgres=# select bx(0);\n>> ERROR: division by zero\n>> CONTEXT: PL/pgSQL function fx(integer) line 1 at RETURN\n>> PL/pgSQL function bx(integer) line 1 at RETURN\n\n> Oh that's a good idea! So let's use format_procedure(), same as\n> plpgsql does, to generate the final context line that currently\n> reads like\n\n> in PL/Tcl function \"bogus\"\n\n> Then, we could apply the \"pull out just alphanumerics\" rule to\n> the result of format_procedure() to generate the internal Tcl name.\n> That should greatly reduce the number of cases where we have duplicate\n> internal names we have to unique-ify.\n\nHere's a v2 that does it like that.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 04 Jul 2024 15:42:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving PL/Tcl's error context reports" }, { "msg_contents": "Hi\n\nčt 4. 7. 2024 v 21:42 odesílatel Tom Lane <[email protected]> napsal:\n\n> I wrote:\n> > Pavel Stehule <[email protected]> writes:\n> >> PLpgSQL uses more often function signature\n> >> (2024-07-04 19:49:20) postgres=# select bx(0);\n> >> ERROR: division by zero\n> >> CONTEXT: PL/pgSQL function fx(integer) line 1 at RETURN\n> >> PL/pgSQL function bx(integer) line 1 at RETURN\n>\n> > Oh that's a good idea! 
So let's use format_procedure(), same as\n> > plpgsql does, to generate the final context line that currently\n> > reads like\n>\n> > in PL/Tcl function \"bogus\"\n>\n> > Then, we could apply the \"pull out just alphanumerics\" rule to\n> > the result of format_procedure() to generate the internal Tcl name.\n> > That should greatly reduce the number of cases where we have duplicate\n> > internal names we have to unique-ify.\n>\n> Here's a v2 that does it like that.\n>\n\nI like it.\n\n- patching and compilation without any issue\n- check world passed\n\nI'll mark this as ready for commit\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n>\n\nHičt 4. 7. 2024 v 21:42 odesílatel Tom Lane <[email protected]> napsal:I wrote:\n> Pavel Stehule <[email protected]> writes:\n>> PLpgSQL uses more often function signature\n>> (2024-07-04 19:49:20) postgres=# select bx(0);\n>> ERROR:  division by zero\n>> CONTEXT:  PL/pgSQL function fx(integer) line 1 at RETURN\n>> PL/pgSQL function bx(integer) line 1 at RETURN\n\n> Oh that's a good idea!  So let's use format_procedure(), same as\n> plpgsql does, to generate the final context line that currently\n> reads like\n\n> in PL/Tcl function \"bogus\"\n\n> Then, we could apply the \"pull out just alphanumerics\" rule to\n> the result of format_procedure() to generate the internal Tcl name.\n> That should greatly reduce the number of cases where we have duplicate\n> internal names we have to unique-ify.\n\nHere's a v2 that does it like that.I like it.- patching and compilation without any issue- check world passedI'll mark this as ready for commitRegardsPavel\n\n                        regards, tom lane", "msg_date": "Fri, 5 Jul 2024 15:43:21 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving PL/Tcl's error context reports" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> čt 4. 7. 2024 v 21:42 odesílatel Tom Lane <[email protected]> napsal:\n>> Here's a v2 that does it like that.\n\n> I like it.\n> - patching and compilation without any issue\n> - check world passed\n> I'll mark this as ready for commit\n\nPushed, thanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2024 14:16:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving PL/Tcl's error context reports" } ]
[ { "msg_contents": "I was having a discussion regarding out-of-support branches and effort \nto keep them building, but could not for the life of me find any actual \ndocumented policy (although I distinctly remember that we do something...).\n\nI did find fleeting references, for example:\n\n8<-----------------------\ncommit c705646b751e08d584f6eeb098f1ed002aa7b11c\nAuthor: Tom Lane <[email protected]>\nDate: 2022-09-21 13:52:38 -0400\n\n<snip>\n\n Per project policy, this is a candidate for back-patching into\n out-of-support branches: it suppresses annoying compiler warnings\n but changes no behavior. Hence, back-patch all the way to 9.2.\n8<-----------------------\n\nand on its related thread:\n\n8<-----------------------\nHowever, I think that that would *not* be fit material for\nback-patching into out-of-support branches, since our policy\nfor them is \"no behavioral changes\".\n8<-----------------------\n\nIs the policy written down somewhere, or is it only project lore? In \neither case, what is the actual policy?\n\nThanks,\n\n--\nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 5 Jun 2024 14:07:40 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "question regarding policy for patches to out-of-support branches" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> I was having a discussion regarding out-of-support branches and effort \n> to keep them building, but could not for the life of me find any actual \n> documented policy (although I distinctly remember that we do something...).\n> Is the policy written down somewhere, or is it only project lore? In \n> either case, what is the actual policy?\n\nI believe our policy was set in this thread:\n\nhttps://www.postgresql.org/message-id/flat/2923349.1634942313%40sss.pgh.pa.us\n\nand you're right that it hasn't really been memorialized anywhere\nelse. I'm not sure where would be appropriate. Anyway, what\nI think the policy is:\n\n* Out-of-support versions back to (currently) 9.2 are still to be\nkept buildable on modern toolchains.\n\n* Build failures, regression failures, and easily-fixable compiler\nwarnings are candidates for fixes.\n\n* We aren't too excited about code that requires external dependencies\n(e.g. libpython) though, because those can be moving targets.\n\n* Under no circumstances back-patch anything that changes external\nbehavior, as the point of the exercise is to be able to test against\nthe actual behavior of the last releases of these branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Jun 2024 14:29:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question regarding policy for patches to out-of-support branches" }, { "msg_contents": "On Wed, Jun 5, 2024 at 8:29 PM Tom Lane <[email protected]> wrote:\n>\n> Joe Conway <[email protected]> writes:\n> > I was having a discussion regarding out-of-support branches and effort\n> > to keep them building, but could not for the life of me find any actual\n> > documented policy (although I distinctly remember that we do something...).\n> > Is the policy written down somewhere, or is it only project lore? In\n> > either case, what is the actual policy?\n>\n> I believe our policy was set in this thread:\n>\n> https://www.postgresql.org/message-id/flat/2923349.1634942313%40sss.pgh.pa.us\n>\n> and you're right that it hasn't really been memorialized anywhere\n> else. 
I'm not sure where would be appropriate.\n\nNot absolutely sure, but would at least adding a page to PostgreSQL\nWiki about this make sense ?\n\n---\nHannu\n\n\n", "msg_date": "Thu, 6 Jun 2024 10:25:26 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question regarding policy for patches to out-of-support branches" }, { "msg_contents": "On Thu, Jun 6, 2024 at 4:25 AM Hannu Krosing <[email protected]> wrote:\n> Not absolutely sure, but would at least adding a page to PostgreSQL\n> Wiki about this make sense ?\n\nI feel like we need to do something. Tom says this is a policy, and\nhe's made that comment before about other things, but the fact that\nthey're not memorialized anywhere is a huge problem, IMHO. People\ndon't read or remember every mailing list discussion forever, and even\nif they did, how would we enumerate all the policies for the benefit\nof a newcomer? Maybe this belongs in the documentation, maybe in the\nwiki, maybe someplace else, but the real issue for me is that policies\nhave to be discoverable by the people who need to adhere to them, and\nold mailing list discussions aren't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 11:33:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question regarding policy for patches to out-of-support branches" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Jun 6, 2024 at 4:25 AM Hannu Krosing <[email protected]> wrote:\n>> Not absolutely sure, but would at least adding a page to PostgreSQL\n>> Wiki about this make sense ?\n\n> I feel like we need to do something. Tom says this is a policy, and\n> he's made that comment before about other things, but the fact that\n> they're not memorialized anywhere is a huge problem, IMHO.\n\nI didn't say it wasn't ;-)\n\nISTM we have two basic choices: wiki page, or new SGML docs section.\nIn the short term I'd lean to a wiki page. It'd be reasonable for\n\nhttps://wiki.postgresql.org/wiki/Committing_checklist\n\nto link to it (and maybe the existing section there about release\nfreezes would be more apropos on a \"Project policies\" page? Not\nsure.)\n\nTo get a sense of how much of a problem we have, I grepped the git\nhistory for comments mentioning project policies. Ignoring ones\nthat are really talking about very localized issues, what I found\nis attached. It seems like it's little enough that a single wiki\npage with subsections ought to be enough. I'm not super handy with\nediting the wiki, plus I'm only firing on one cylinder today (seem\nto have acquired a head cold at pgconf), so maybe somebody else\nwould like to draft something?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 06 Jun 2024 14:12:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question regarding policy for patches to out-of-support branches" }, { "msg_contents": "On 6/6/24 14:12, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> On Thu, Jun 6, 2024 at 4:25 AM Hannu Krosing <[email protected]> wrote:\n>>> Not absolutely sure, but would at least adding a page to PostgreSQL\n>>> Wiki about this make sense ?\n> \n>> I feel like we need to do something. 
Tom says this is a policy, and\n>> he's made that comment before about other things, but the fact that\n>> they're not memorialized anywhere is a huge problem, IMHO.\n> \n> I didn't say it wasn't ;-)\n> \n> ISTM we have two basic choices: wiki page, or new SGML docs section.\n> In the short term I'd lean to a wiki page. It'd be reasonable for\n> \n> https://wiki.postgresql.org/wiki/Committing_checklist\n> \n> to link to it (and maybe the existing section there about release\n> freezes would be more apropos on a \"Project policies\" page? Not\n> sure.)\n> \n> To get a sense of how much of a problem we have, I grepped the git\n> history for comments mentioning project policies. Ignoring ones\n> that are really talking about very localized issues, what I found\n> is attached. It seems like it's little enough that a single wiki\n> page with subsections ought to be enough. I'm not super handy with\n> editing the wiki, plus I'm only firing on one cylinder today (seem\n> to have acquired a head cold at pgconf), so maybe somebody else\n> would like to draft something?\n\nI added them here with minimal copy editing an no attempt to organize or \nsort into groups:\n\nhttps://wiki.postgresql.org/wiki/Committing_checklist#Policies\n\nIf someone has thoughts on how to improve I am happy to make more changes.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 6 Jun 2024 15:55:15 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: question regarding policy for patches to out-of-support branches" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> On 6/6/24 14:12, Tom Lane wrote:\n>> To get a sense of how much of a problem we have, I grepped the git\n>> history for comments mentioning project policies. Ignoring ones\n>> that are really talking about very localized issues, what I found\n>> is attached. It seems like it's little enough that a single wiki\n>> page with subsections ought to be enough. I'm not super handy with\n>> editing the wiki, plus I'm only firing on one cylinder today (seem\n>> to have acquired a head cold at pgconf), so maybe somebody else\n>> would like to draft something?\n\n> I added them here with minimal copy editing an no attempt to organize or \n> sort into groups:\n> https://wiki.postgresql.org/wiki/Committing_checklist#Policies\n> If someone has thoughts on how to improve I am happy to make more changes.\n\nThanks! I summoned the energy to make a few more improvements,\nparticularly updating stuff that seemed out-of-date. I'm sure\nthere's more that could be added here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Jun 2024 22:04:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question regarding policy for patches to out-of-support branches" }, { "msg_contents": "On Thu, Jun 6, 2024 at 10:04 PM Tom Lane <[email protected]> wrote:\n> > I added them here with minimal copy editing an no attempt to organize or\n> > sort into groups:\n> > https://wiki.postgresql.org/wiki/Committing_checklist#Policies\n> > If someone has thoughts on how to improve I am happy to make more changes.\n>\n> Thanks! I summoned the energy to make a few more improvements,\n> particularly updating stuff that seemed out-of-date. I'm sure\n> there's more that could be added here.\n\nThis is nice! 
I wonder if we could interest anyone in creating tooling\nthat could be used to check some of this stuff -- ideally run as part\nof the regular build process, so that you fail to notice that you did\nit wrong.\n\nNot all of these rules are subject to automatic verification e.g. it's\nhard to enforce that a change to an out-of-support branch makes no\nfunctional change. But an awful lot of them could be, and I would\npersonally be significantly happier and less stressed if I knew that\n'ninja && meson test' was going to tell me that I did it wrong before\nI pushed, instead of finding out afterward and then having to drop\neverything to go clean it up.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 08:43:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question regarding policy for patches to out-of-support branches" } ]
[ { "msg_contents": "Please find attached a quick patch to prevent this particularly bad error\nmessage for running \"postgres\", when making the common mistake of\nforgetting to put the \"--single\" option first because you added an earlier\narg (esp. datadir)\n\nCurrent behavior:\n\n$ ~/pg/bin/postgres -D ~/pg/data --single\n2024-06-05 18:30:40.296 GMT [22934] FATAL: --single requires a value\n\nImproved behavior:\n\n$ ~/pg/bin/postgres -D ~/pg/data --single\n--single must be first argument.\n\nI applied it for all the \"first arg only\" flags (boot, check,\ndescribe-config, and fork), as they suffer the same fate.\n\nCheers,\nGreg", "msg_date": "Wed, 5 Jun 2024 14:51:05 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Better error message when --single is not the first arg to postgres\n executable" }, { "msg_contents": "On Wed, Jun 05, 2024 at 02:51:05PM -0400, Greg Sabino Mullane wrote:\n> Please find attached a quick patch to prevent this particularly bad error\n> message for running \"postgres\", when making the common mistake of\n> forgetting to put the \"--single\" option first because you added an earlier\n> arg (esp. datadir)\n\nCould we remove the requirement that --single must be first? I'm not\nthrilled about adding a list of \"must be first\" options that needs to stay\nupdated, but given this list probably doesn't change too frequently, maybe\nthat's still better than a more invasive patch to allow specifying these\noptions in any order...\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 5 Jun 2024 14:18:47 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" }, { "msg_contents": "On Wed, Jun 5, 2024 at 3:18 PM Nathan Bossart <[email protected]>\nwrote:\n\n> Could we remove the requirement that --single must be first? I'm not\n> thrilled about adding a list of \"must be first\" options that needs to stay\n> updated, but given this list probably doesn't change too frequently, maybe\n> that's still better than a more invasive patch to allow specifying these\n> options in any order...\n>\n\nIt would be nice, and I briefly looked into removing the \"first\"\nrequirement, but src/backend/tcop/postgres.c for one assumes that --single\nis always argv[1], and it seemed not worth the extra effort to make it work\nfor argv[N] instead of argv[1]. I don't mind it being the first argument,\nbut that confusing error message needs to go.\n\nThanks,\nGreg\n\nOn Wed, Jun 5, 2024 at 3:18 PM Nathan Bossart <[email protected]> wrote:Could we remove the requirement that --single must be first?  I'm not thrilled about adding a list of \"must be first\" options that needs to stay updated, but given this list probably doesn't change too frequently, maybe that's still better than a more invasive patch to allow specifying these options in any order...It would be nice, and I briefly looked into removing the \"first\" requirement, but src/backend/tcop/postgres.c for one assumes that --single is always argv[1], and it seemed not worth the extra effort to make it work for argv[N] instead of argv[1]. 
I don't mind it being the first argument, but that confusing error message needs to go.Thanks,Greg", "msg_date": "Wed, 5 Jun 2024 23:38:48 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" }, { "msg_contents": "On Wed, Jun 05, 2024 at 11:38:48PM -0400, Greg Sabino Mullane wrote:\n> On Wed, Jun 5, 2024 at 3:18 PM Nathan Bossart <[email protected]>\n> wrote:\n>> Could we remove the requirement that --single must be first? I'm not\n>> thrilled about adding a list of \"must be first\" options that needs to stay\n>> updated, but given this list probably doesn't change too frequently, maybe\n>> that's still better than a more invasive patch to allow specifying these\n>> options in any order...\n> \n> It would be nice, and I briefly looked into removing the \"first\"\n> requirement, but src/backend/tcop/postgres.c for one assumes that --single\n> is always argv[1], and it seemed not worth the extra effort to make it work\n> for argv[N] instead of argv[1]. I don't mind it being the first argument,\n> but that confusing error message needs to go.\n\nI spent some time trying to remove the must-be-first requirement and came\nup with the attached draft-grade patch. However, there's a complication:\nthe \"database\" option for single-user mode must still be listed last, at\nleast on systems where getopt() doesn't move non-options to the end of the\narray. My previous research [0] indicated that this is pretty common, and\nI noticed it because getopt() on macOS doesn't seem to reorder non-options.\nI thought about changing these to getopt_long(), which we do rely on to\nreorder non-options, but that conflicts with our ParseLongOption() \"long\nargument simulation\" that we use to allow specifying arbitrary GUCs via the\ncommand-line.\n\nThis remaining discrepancy might be okay, but I was really hoping to reduce\nthe burden on users to figure out the correct ordering of options. The\nsituations in which I've had to use single-user mode are precisely the\nsituations in which I'd rather not have to spend time learning these kinds\nof details.\n\n[0] https://postgr.es/m/20230609232257.GA121461%40nathanxps13\n\n-- \nnathan", "msg_date": "Mon, 17 Jun 2024 21:49:54 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" }, { "msg_contents": "If I am reading your patch correctly, we have lost the behavior of least\nsurprise in which the first \"meta\" argument overrides all others:\n\n$ bin/postgres --version --boot --extrastuff\npostgres (PostgreSQL) 16.2\n\nWhat about just inlining --version and --help e.g.\n\nelse if (strcmp(argv[i], \"--version\") == 0 || strcmp(argv[i], \"-V\") == 0)\n{\n fputs(PG_BACKEND_VERSIONSTR, stdout);\n exit(0);\n}\n\nI'm fine with being more persnickety about the other options; they are much\nrarer and not unixy.\n\nHowever, there's a complication:\n> ...\n> This remaining discrepancy might be okay, but I was really hoping to reduce\n> the burden on users to figure out the correct ordering of options. The\n> situations in which I've had to use single-user mode are precisely the\n> situations in which I'd rather not have to spend time learning these kinds\n> of details.\n>\n\nYes, that's unfortunate. 
But I'd be okay with the db-last requirement as\nlong as the error message is sane and points one in the right direction.\n\nCheers,\nGreg\n\nIf I am reading your patch correctly, we have lost the behavior of least surprise in which the first \"meta\" argument overrides all others:$ bin/postgres --version --boot --extrastuffpostgres (PostgreSQL) 16.2What about just inlining --version and --help e.g. else if (strcmp(argv[i], \"--version\") == 0 || strcmp(argv[i], \"-V\") == 0){     fputs(PG_BACKEND_VERSIONSTR, stdout);     exit(0);}I'm fine with being more persnickety about the other options; they are much rarer and not unixy.However, there's a complication:...\nThis remaining discrepancy might be okay, but I was really hoping to reduce\nthe burden on users to figure out the correct ordering of options.  The\nsituations in which I've had to use single-user mode are precisely the\nsituations in which I'd rather not have to spend time learning these kinds\nof details.Yes, that's unfortunate. But I'd be okay with the db-last requirement as long as the error message is sane and points one in the right direction.Cheers,Greg", "msg_date": "Tue, 18 Jun 2024 21:42:32 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" }, { "msg_contents": "On Tue, Jun 18, 2024 at 09:42:32PM -0400, Greg Sabino Mullane wrote:\n> If I am reading your patch correctly, we have lost the behavior of least\n> surprise in which the first \"meta\" argument overrides all others:\n> \n> $ bin/postgres --version --boot --extrastuff\n> postgres (PostgreSQL) 16.2\n\nRight, with the patch we fail if there are multiple such options specified:\n\n\t$ postgres --version --help\n\tFATAL: multiple server modes set\n\tDETAIL: Only one of --check, --boot, --describe-config, --single, --help/-?, --version/-V, -C may be set.\n\n> What about just inlining --version and --help e.g.\n> \n> else if (strcmp(argv[i], \"--version\") == 0 || strcmp(argv[i], \"-V\") == 0)\n> {\n> fputs(PG_BACKEND_VERSIONSTR, stdout);\n> exit(0);\n> }\n> \n> I'm fine with being more persnickety about the other options; they are much\n> rarer and not unixy.\n\nThat seems like it should work. I'm not sure I agree that's the least\nsurprising behavior (e.g., what exactly is the user trying to tell us with\ncommands like \"postgres --version --help --describe-config\"?), but I also\ndon't feel too strongly about it.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 19 Jun 2024 09:04:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" }, { "msg_contents": "On 19.06.24 16:04, Nathan Bossart wrote:\n>> What about just inlining --version and --help e.g.\n>>\n>> else if (strcmp(argv[i], \"--version\") == 0 || strcmp(argv[i], \"-V\") == 0)\n>> {\n>> fputs(PG_BACKEND_VERSIONSTR, stdout);\n>> exit(0);\n>> }\n>>\n>> I'm fine with being more persnickety about the other options; they are much\n>> rarer and not unixy.\n> \n> That seems like it should work. 
I'm not sure I agree that's the least\n> surprising behavior (e.g., what exactly is the user trying to tell us with\n> commands like \"postgres --version --help --describe-config\"?), but I also\n> don't feel too strongly about it.\n\nThere is sort of an existing convention that --help and --version behave \nlike this, meaning they act immediately and exit without considering \nother arguments.\n\nI'm not really sure all this here is worth solving. I think requiring \nthings like --single or --boot to be first seems ok, and the \nalternatives just make things more complicated.\n\n\n", "msg_date": "Wed, 19 Jun 2024 17:34:52 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" }, { "msg_contents": "On Wed, Jun 19, 2024 at 05:34:52PM +0200, Peter Eisentraut wrote:\n> I'm not really sure all this here is worth solving. I think requiring\n> things like --single or --boot to be first seems ok, and the alternatives\n> just make things more complicated.\n\nYeah, I'm fine with doing something more like what Greg originally\nproposed at this point.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 19 Jun 2024 10:58:02 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" }, { "msg_contents": "On Wed, Jun 19, 2024 at 10:58:02AM -0500, Nathan Bossart wrote:\n> On Wed, Jun 19, 2024 at 05:34:52PM +0200, Peter Eisentraut wrote:\n>> I'm not really sure all this here is worth solving. I think requiring\n>> things like --single or --boot to be first seems ok, and the alternatives\n>> just make things more complicated.\n> \n> Yeah, I'm fine with doing something more like what Greg originally\n> proposed at this point.\n\nHere's an attempt at centralizing the set of subprogram options (and also\nadding better error messages). My intent was to make it difficult to miss\nupdating all the relevant places when adding a new subprogram, but I'll\nadmit the patch is a bit more complicated than I was hoping.\n\n-- \nnathan", "msg_date": "Wed, 21 Aug 2024 15:47:11 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" }, { "msg_contents": "I'm not opposed to this new method, as long as the error code improves. :)\n\n+typedef enum Subprogram\n+{\n+ SUBPROGRAM_CHECK,\n+ SUBPROGRAM_BOOT,\n+#ifdef EXEC_BACKEND\n+ SUBPROGRAM_FORKCHILD,\n+#endif\n\nI'm not happy about making this and the const char[] change their structure\nbased on the ifdefs - could we not just leave forkchild in? Their usage is\nalready protected by the ifdefs in the calling code.\n\nHeck, we could put SUBPROGRAM_FORKCHILD first in the list, keep the ifdef\nin parse_subprogram, and start regular checking with i = 1;\nThis would reduce to a single #ifdef\n\nCheers,\nGreg\n\nI'm not opposed to this new method, as long as the error code improves. :)+typedef enum Subprogram+{+\tSUBPROGRAM_CHECK,+\tSUBPROGRAM_BOOT,+#ifdef EXEC_BACKEND+\tSUBPROGRAM_FORKCHILD,+#endifI'm not happy about making this and the const char[] change their structure based on the ifdefs - could we not just leave forkchild in? 
Their usage is already protected by the ifdefs in the calling code.Heck, we could put SUBPROGRAM_FORKCHILD first in the list, keep the ifdef in parse_subprogram, and start regular checking with i = 1;This would reduce to a single #ifdefCheers,Greg", "msg_date": "Sun, 25 Aug 2024 13:14:36 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" }, { "msg_contents": "On Sun, Aug 25, 2024 at 01:14:36PM -0400, Greg Sabino Mullane wrote:\n> I'm not happy about making this and the const char[] change their structure\n> based on the ifdefs - could we not just leave forkchild in? Their usage is\n> already protected by the ifdefs in the calling code.\n\nHere's an attempt at this.\n\n-- \nnathan", "msg_date": "Mon, 26 Aug 2024 10:43:05 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" }, { "msg_contents": "On Mon, Aug 26, 2024 at 11:43 AM Nathan Bossart <[email protected]>\nwrote:\n\n> On Sun, Aug 25, 2024 at 01:14:36PM -0400, Greg Sabino Mullane wrote:\n> > I'm not happy about making this and the const char[] change their\n> structure\n> > based on the ifdefs - could we not just leave forkchild in? Their usage\n> is\n> > already protected by the ifdefs in the calling code.\n>\n> Here's an attempt at this.\n>\n\nLooks great, thank you.\n\nOn Mon, Aug 26, 2024 at 11:43 AM Nathan Bossart <[email protected]> wrote:On Sun, Aug 25, 2024 at 01:14:36PM -0400, Greg Sabino Mullane wrote:\n> I'm not happy about making this and the const char[] change their structure\n> based on the ifdefs - could we not just leave forkchild in? Their usage is\n> already protected by the ifdefs in the calling code.\n\nHere's an attempt at this.Looks great, thank you.", "msg_date": "Tue, 27 Aug 2024 09:45:44 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Better error message when --single is not the first arg to\n postgres executable" } ]
[ { "msg_contents": "Hi,\n\nAt 2024.pgconf.dev, Heikki did a session on multithreading PostgreSQL\nwhich I was unfortunately unable to attend due my involvement with\nanother session, and then we had an unconference discussion which I\nwas able to attend and at which I volunteered to have a look at a\ncouple of tasks, including \"Extension Marking System (marking\nextensions as thread-safe)\". So in this email I'd like to (1) say a\nfew things about multithreading for PostgreSQL in general, (2) spell\nout my understanding of the extension compatibility problem\nspecifically, and then (3) discuss possible solutions to that problem.\nSee also https://wiki.postgresql.org/wiki/Multithreading\n\n== Multithreading Generally ==\n\nI believe there is a consensus in the PostgreSQL developer community,\nor at least among committers, that a multi-threaded programming model\nwould be superior to a multi-process programming model as we have now.\nI won't be surprised if a few people disagree with that as a general\nstatement, and others may view it as better in theory but so difficult\nin practice as to be not worth doing, but I believe that the consensus\nis otherwise. I do understand that switching to threads introduces\nsome new stability risks, which are not to be taken lightly, but it\nalso opens the door to various performance improvements, and even\nfunctionality, that are not feasible today. I do not believe that it\nwould be necessary, as has been alleged previously, to halt all other\ndevelopment for a lengthy period of time while such a conversion is\nundertaken, nor do I believe that the community would or should accept\nsuch a solution were someone to propose it. I do believe that there\nare some difficult problems to be solved in order to make it work at\nall, and I believe even more strongly that a good deal of follow-up\nwork will be necessary to reap the potential benefits of such a\nchange. I also believe that it's absolutely necessary that both models\ncoexist side by side for a period of time. I think we will eventually\nwant to abandon the multi-process model, because I think over time the\nbenefits of using threads will accumulate until they are overwhelming\nand the process model will end up appearing to be an obstacle to\nprogress. However, I don't think we'll be able to do that particularly\nsoon, because I think it's going to take a while to fully stabilize\nthe thread model even as far as the core code is concerned, and\nextensions will take even longer to catch up. I realize Heikki in\nparticular is hoping for a quick transition; I don't see that as\nfeasible, but like everything else about this, opinions are going to\nvary.\n\nObligatory disclaimer: Everything above (and below) is just a\nstatement of what I believe, and everyone is free to dispute it. As\nalways, I cannot speak to objective truth, but I can tell you what I\nthink.\n\n== The Extension Compatibility Problem ==\n\nI don't know yet whether we're going to end up with a system where the\nsame build of PostgreSQL can produce processes or threads depending on\nconfiguration or whether it's going to be a build option, but I'm\nguessing the latter is more likely. Certainly, if an extension is\nassuming that its global variables are session-local and they suddenly\nbecome global to the cluster, chaos will ensue. 
The same is true for\nthe core code, and will need to be solved by annotating global\nvariables so that the appropriate ones can be made thread-local and\nthe others can be given whatever treatment is appropriate considering\nhow they are used. The details of how this annotation system will work\nare TBD, but the point for this email is that extension global\nvariables, including file-level globals, will need the same kinds of\nannotations that we use in the core code in order to work. Other\nadjustments may also be needed.\n\nI think there are two severable problems here. One is that, if an\nextension is built for use with a non-threaded PostgreSQL, we\nshouldn't permit it to be used with a threaded PostgreSQL, even if the\nmajor version and other details are compatible. Hence, threading or\nthe lack of it must become part of the data set up by PG_MODULE_MAGIC.\nMaybe this problem goes away if we decide that threads-vs-processes is\na configuration option rather than a build-time option, but even then,\nwe might still end up with a build-time option indicating whether\nthreads are even a possibility, so I think it's pretty likely we need\nthis in some form. If or when the process model eventually dies, then\nwe can take this out again.\n\nThe other problem is that we probably want a way for extensions to\nsignal that they are believed to work with threading. It's a little\nbit debatable whether this is a good idea, because (1) some people are\ngoing to blindly state that their extension works fine with threading\neven if they haven't actually made the necessary changes and (2) one\ncould simply declare that making an extension thread-ready is part of\nsupporting whatever PostgreSQL release adds threading as an option and\n(3) one could also declare that extension authors should just document\nwhat they do or don't support rather than doing anything in code.\nHowever, I think it makes sense to try to make extensions fail to\ncompile against a threaded PostgreSQL unless the extension declares\nthat it supports such builds of PostgreSQL. I think that by doing\nthis, we'll make it a LOT easier for packagers to find out what\nextensions still need updating. A packager could possibly do light\ntesting of an extension and fail to miss the fact that the extension\ndoesn't actually work properly against a threaded PostgreSQL, but you\ncan't fail to notice a compile failure. There's still going to be some\nchaos because of (1), but I think we can mitigate that with good\nmessaging: documentation, wiki pages, and blog posts explaining that\nthis is coming and how to adapt to it can help a lot, IMHO.\n\n== Extension Compatibility Solutions ==\n\nThe attached patch is a sketch of one possible approach: PostgreSQL\nsignals whether it is multithreaded by defining or not defining\nPG_MULTITHREADING in pg_config_manual.h, and an extension signals\nthread-readiness by defining PG_THREADSAFE_EXTENSION before including\nany PostgreSQL headers other than postgres.h. If PostgreSQL is built\nmultithreaded and the extension does not signal thread-safety, you get\nsomething like this:\n\n../pgsql/src/test/modules/dummy_seclabel/dummy_seclabel.c:20:1: error:\nstatic assertion failed due to requirement '1 == 0': must define\nPG_THREADSAFE_EXTENSION or use unthreaded PostgreSQL\nPG_MODULE_MAGIC;\n\nI'm not entirely happy with this solution because the results are\nconfusing if PG_THREADSAFE_EXTENSION is declared after including\nfmgr.h. 
Perhaps this can be adequately handled by documenting and\ndemonstrating the right pattern, or maybe somebody has a better idea.\n\nAnother idea I considered was to replace the PG_MODULE_MAGIC;\ndeclaration with something that allows for arguments, like\nPG_MODULE_MAGIC(.process_model = false, .thread_model = true). But on\nfurther reflection, that seems like the wrong thing. AFAICS, that's\ngoing to tell you at runtime about something that you really want to\nknow at compile time. But this kind of idea might need more thought if\nwe decide that the *same* build of PostgreSQL can either launch\nprocesses or threads per session, because then we'd to know which\nextensions were available in whichever mode applied to the current\nsession.\n\nThat's all I've got for today.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 5 Jun 2024 16:05:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "[multithreading] extension compatibility" }, { "msg_contents": "On Wed Jun 5, 2024 at 3:05 PM CDT, Robert Haas wrote:\n> ...\n>\n> == Extension Compatibility Solutions ==\n>\n> The attached patch is a sketch of one possible approach: PostgreSQL\n> signals whether it is multithreaded by defining or not defining\n> PG_MULTITHREADING in pg_config_manual.h, and an extension signals\n> thread-readiness by defining PG_THREADSAFE_EXTENSION before including\n> any PostgreSQL headers other than postgres.h. If PostgreSQL is built\n> multithreaded and the extension does not signal thread-safety, you get\n> something like this:\n>\n> ../pgsql/src/test/modules/dummy_seclabel/dummy_seclabel.c:20:1: error:\n> static assertion failed due to requirement '1 == 0': must define\n> PG_THREADSAFE_EXTENSION or use unthreaded PostgreSQL\n> PG_MODULE_MAGIC;\n>\n> I'm not entirely happy with this solution because the results are\n> confusing if PG_THREADSAFE_EXTENSION is declared after including\n> fmgr.h. Perhaps this can be adequately handled by documenting and\n> demonstrating the right pattern, or maybe somebody has a better idea.\n>\n> Another idea I considered was to replace the PG_MODULE_MAGIC;\n> declaration with something that allows for arguments, like\n> PG_MODULE_MAGIC(.process_model = false, .thread_model = true). But on\n> further reflection, that seems like the wrong thing. AFAICS, that's\n> going to tell you at runtime about something that you really want to\n> know at compile time. But this kind of idea might need more thought if\n> we decide that the *same* build of PostgreSQL can either launch\n> processes or threads per session, because then we'd to know which\n> extensions were available in whichever mode applied to the current\n> session.\n\nNot entirely sure how I feel about the approach you've taken, but here \nis a patch that Heikki and I put together for extension compatibility. \nIt's not a build time solution, but a runtime solution. Instead of \nPG_MODULE_MAGIC, extensions would use PG_MAGIC_MODULE_REENTRANT. There \nis a GUC called `multithreaded` which controls the variable \nIsMultithreaded. 
We operated under the assumption that being able to \ntoggle multithreading and multi-processing without recompiling has \nvalue.\n\n-- \nTristan Partin\nhttps://tristan.partin.io", "msg_date": "Wed, 05 Jun 2024 15:32:18 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On Wed, 5 Jun 2024 at 22:05, Robert Haas <[email protected]> wrote:\n> The attached patch is a sketch of one possible approach: PostgreSQL\n> signals whether it is multithreaded by defining or not defining\n> PG_MULTITHREADING in pg_config_manual.h, and an extension signals\n> thread-readiness by defining PG_THREADSAFE_EXTENSION before including\n> any PostgreSQL headers other than postgres.h.\n\nMy first gut-reaction: It seems kinda annoying to have to do this for\nevery c that you use to build your extension, e.g. citus or postgis\nhave a ton of those.\n\nPG_MODULE_MAGIC seems like a better fit imho.\n\nIf we really want a compile time failure, then I think I'd prefer to\nhave a new postgres.h file (e.g. postgres-thread.h) that you would\ninclude instead of plain postgres.h. Basically this file could then\ncontain the below two lines:\n\n#include \"postgres.h\"\n#define PG_THREADSAFE_EXTENSION 1\n\n\n", "msg_date": "Wed, 5 Jun 2024 22:45:23 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On Wed, Jun 5, 2024 at 4:32 PM Tristan Partin <[email protected]> wrote:\n> Not entirely sure how I feel about the approach you've taken, but here\n> is a patch that Heikki and I put together for extension compatibility.\n> It's not a build time solution, but a runtime solution. Instead of\n> PG_MODULE_MAGIC, extensions would use PG_MAGIC_MODULE_REENTRANT. There\n> is a GUC called `multithreaded` which controls the variable\n> IsMultithreaded. We operated under the assumption that being able to\n> toggle multithreading and multi-processing without recompiling has\n> value.\n\nThat's interesting, because I thought Heikki was against having a\nruntime toggle.\n\nI don't think PG_MODULE_MAGIC_REENTRANT is a good syntax. It all looks\ngreat as long as we only ever need the PG_MODULE_MAGIC line to signal\none bit of information, but as soon as you get to two bits it's pretty\nquestionable, and anything more than two bits is insane. If we want to\ndo something with the PG_MODULE_MAGIC line, I think it should involve\noptions-passing of some form rather than just having an alternate\nmacro name.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Jun 2024 16:55:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On Wed Jun 5, 2024 at 3:56 PM CDT, Robert Haas wrote:\n> On Wed, Jun 5, 2024 at 4:32 PM Tristan Partin <[email protected]> wrote:\n> > Not entirely sure how I feel about the approach you've taken, but here\n> > is a patch that Heikki and I put together for extension compatibility.\n> > It's not a build time solution, but a runtime solution. Instead of\n> > PG_MODULE_MAGIC, extensions would use PG_MAGIC_MODULE_REENTRANT. There\n> > is a GUC called `multithreaded` which controls the variable\n> > IsMultithreaded. 
We operated under the assumption that being able to\n> > toggle multithreading and multi-processing without recompiling has\n> > value.\n>\n> That's interesting, because I thought Heikki was against having a\n> runtime toggle.\n>\n> I don't think PG_MODULE_MAGIC_REENTRANT is a good syntax. It all looks\n> great as long as we only ever need the PG_MODULE_MAGIC line to signal\n> one bit of information, but as soon as you get to two bits it's pretty\n> questionable, and anything more than two bits is insane. If we want to\n> do something with the PG_MODULE_MAGIC line, I think it should involve\n> options-passing of some form rather than just having an alternate\n> macro name.\n\nI agree that this method doesn't lend itself to future extensibility.\n\n-- \nTristan Partin\nhttps://tristan.partin.io\n\n\n", "msg_date": "Wed, 05 Jun 2024 17:21:46 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On 05/06/2024 23:55, Robert Haas wrote:\n> On Wed, Jun 5, 2024 at 4:32 PM Tristan Partin <[email protected]> wrote:\n>> Not entirely sure how I feel about the approach you've taken, but here\n>> is a patch that Heikki and I put together for extension compatibility.\n>> It's not a build time solution, but a runtime solution. Instead of\n>> PG_MODULE_MAGIC, extensions would use PG_MAGIC_MODULE_REENTRANT. There\n>> is a GUC called `multithreaded` which controls the variable\n>> IsMultithreaded. We operated under the assumption that being able to\n>> toggle multithreading and multi-processing without recompiling has\n>> value.\n> \n> That's interesting, because I thought Heikki was against having a\n> runtime toggle.\n\nI'm very much in favor of a runtime toggle. To be precise, a \nPGC_POSTMASTER setting. We'll get a lot more testing if you can easily \nturn it on/off, and so far I haven't seen anything that would require it \nto be a compile time option.\n\n> I don't think PG_MODULE_MAGIC_REENTRANT is a good syntax. It all looks\n> great as long as we only ever need the PG_MODULE_MAGIC line to signal\n> one bit of information, but as soon as you get to two bits it's pretty\n> questionable, and anything more than two bits is insane. If we want to\n> do something with the PG_MODULE_MAGIC line, I think it should involve\n> options-passing of some form rather than just having an alternate\n> macro name.\n\n+1, that would be nicer.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 6 Jun 2024 03:01:12 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On Wed, Jun 5, 2024 at 8:01 PM Heikki Linnakangas <[email protected]> wrote:\n> I'm very much in favor of a runtime toggle. To be precise, a\n> PGC_POSTMASTER setting. We'll get a lot more testing if you can easily\n> turn it on/off, and so far I haven't seen anything that would require it\n> to be a compile time option.\n\nI was thinking about global variable annotations. If someone wants to\nbuild without multithreading, I think that they won't want to still\nend up with a ton of variables being changed to thread-local. So I\nthink there has to be a build-time option controlling whether this\nbuild supports threading. I suspect there will be other people who\nwant to just shut all of this experimental code off, which is probably\ngoing to be a second driver for a build-time toggle. 
But even with\nthat, we can still have a GUC controlling whether threading is\nactually used. Does that make sense to you?\n\nSupposing it does, then how does the extension-marking system need to\nwork? I suppose in this world we don't want any build failures: you're\nallowed to build a non-thread-aware extension against a\nthreading-capable PostgreSQL; you're just not allowed to load the\nresulting extension when the server is in threading mode.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Jun 2024 21:10:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "Hi,\n\nOn 2024-06-05 21:10:01 -0400, Robert Haas wrote:\n> On Wed, Jun 5, 2024 at 8:01 PM Heikki Linnakangas <[email protected]> wrote:\n> > I'm very much in favor of a runtime toggle. To be precise, a\n> > PGC_POSTMASTER setting. We'll get a lot more testing if you can easily\n> > turn it on/off, and so far I haven't seen anything that would require it\n> > to be a compile time option.\n> \n> I was thinking about global variable annotations. If someone wants to\n> build without multithreading, I think that they won't want to still\n> end up with a ton of variables being changed to thread-local.\n\nDepending on the architecture / ABI / compiler options it's often not\nmeaningfully more expensive to access a thread local variable than a \"normal\"\nvariable.\n\nI think these days it's e.g. more expensive on x86-64 windows, but not\nlinux. On arm the overhead of TLS is more noticeable, across platforms,\nafaict.\n\nExample compiler output for x86-64 and armv8:\nhttps://godbolt.org/z/K369eG5MM\n\nCycle analysis or linux x86-64 output:\nhttps://godbolt.org/z/KK57vM1of\n\nThis shows that for the linux x86-64 case there's no difference in efficiency\nbetween the tls/non-tls case.\n\nThe reason it's so fast on x86-64 linux is that they reused one of the \"old\"\nsegment registers to serve as the index register differing between each\nthread. For x86-64 code, most code is compiled position independent, and\n*also* uses an indexed mode (but relative to the instruction pointer).\n\n\nI think we might be able to gain some small performance benefits via the\nannotations, which actualy might make it viable to just apply the annotations\nregardless of using threads or not.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Jun 2024 18:50:32 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On Wed, Jun 5, 2024 at 9:50 PM Andres Freund <[email protected]> wrote:\n> Depending on the architecture / ABI / compiler options it's often not\n> meaningfully more expensive to access a thread local variable than a \"normal\"\n> variable.\n>\n> I think these days it's e.g. more expensive on x86-64 windows, but not\n> linux. 
On arm the overhead of TLS is more noticeable, across platforms,\n> afaict.\n\nI mean, to me, this still sounds like we want multithreading to be a\nbuild-time option.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Jun 2024 21:59:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "Hi,\n\nOn 2024-06-05 21:59:42 -0400, Robert Haas wrote:\n> On Wed, Jun 5, 2024 at 9:50 PM Andres Freund <[email protected]> wrote:\n> > Depending on the architecture / ABI / compiler options it's often not\n> > meaningfully more expensive to access a thread local variable than a \"normal\"\n> > variable.\n> >\n> > I think these days it's e.g. more expensive on x86-64 windows, but not\n> > linux. On arm the overhead of TLS is more noticeable, across platforms,\n> > afaict.\n> \n> I mean, to me, this still sounds like we want multithreading to be a\n> build-time option.\n\nMaybe. I think shipping a mode where users can fairly simply toggle between\nthreaded and process mode will allow us to get this stable a *lot* quicker\nthan if we distribute two builds. Most users don't build from source, distros\nwill have to pick the mode. If they don't choose threaded mode, we'll not find\nproblems. If they do choose threaded mode, we can't ask users to switch to a\nprocess based mode to check if the problem is related.\n\nWe have been talking in a bunch of threads about having a mode where the main\npostgres binary chooses a build optimized for the current architecture, to be\nable to use SIMD instructions without a runtime check/dispatch. I guess we\ncould add threadedness to such a matrix...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Jun 2024 19:09:36 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On Wed, Jun 5, 2024 at 10:09 PM Andres Freund <[email protected]> wrote:\n> Maybe. I think shipping a mode where users can fairly simply toggle between\n> threaded and process mode will allow us to get this stable a *lot* quicker\n> than if we distribute two builds. Most users don't build from source, distros\n> will have to pick the mode. If they don't choose threaded mode, we'll not find\n> problems. If they do choose threaded mode, we can't ask users to switch to a\n> process based mode to check if the problem is related.\n\nI don't believe that being coercive here is the right approach. I\nthink distros see the value in compiling with as many things turned on\nas possible; when they ship with something turned off, it's because\nit's unstable or introduces weird dependencies or has some other\ndisadvantage.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Jun 2024 22:47:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On 06/06/2024 05:47, Robert Haas wrote:\n> On Wed, Jun 5, 2024 at 10:09 PM Andres Freund <[email protected]> wrote:\n>> Maybe. I think shipping a mode where users can fairly simply toggle between\n>> threaded and process mode will allow us to get this stable a *lot* quicker\n>> than if we distribute two builds. Most users don't build from source, distros\n>> will have to pick the mode. If they don't choose threaded mode, we'll not find\n>> problems. 
If they do choose threaded mode, we can't ask users to switch to a\n>> process based mode to check if the problem is related.\n> \n> I don't believe that being coercive here is the right approach. I\n> think distros see the value in compiling with as many things turned on\n> as possible; when they ship with something turned off, it's because\n> it's unstable or introduces weird dependencies or has some other\n> disadvantage.\n\nI presume there's no harm in building with multithreading support. If \nyou don't want to use it, put \"multithreading=off\" in your config file \n(which will presumably be the default for a while).\n\nIf we're worried about the performance impact of thread-local variables \nin particular, we can try to measure that. I don't think it's material \nthough.\n\nIf there is some material harm from compiling with multithreading \nsupport even if you're not using it, we should try to fix that. I'm not \ndead set against having a compile-time option, but I don't see the need \nfor it at the moment.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 6 Jun 2024 12:00:38 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On Thu, Jun 6, 2024 at 5:00 AM Heikki Linnakangas <[email protected]> wrote:\n> If there is some material harm from compiling with multithreading\n> support even if you're not using it, we should try to fix that. I'm not\n> dead set against having a compile-time option, but I don't see the need\n> for it at the moment.\n\nWell, OK, so it sounds like I'm outvoted, at least at the moment.\nMaybe that will change as more people vote, but for now, that's where\nwe are. Given that, I suppose we want something more like Tristan's\npatch, but with a more extensible syntax. Does that sound right?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 10:23:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On 06/06/2024 17:23, Robert Haas wrote:\n> On Thu, Jun 6, 2024 at 5:00 AM Heikki Linnakangas <[email protected]> wrote:\n>> If there is some material harm from compiling with multithreading\n>> support even if you're not using it, we should try to fix that. I'm not\n>> dead set against having a compile-time option, but I don't see the need\n>> for it at the moment.\n> \n> Well, OK, so it sounds like I'm outvoted, at least at the moment.\n> Maybe that will change as more people vote, but for now, that's where\n> we are. Given that, I suppose we want something more like Tristan's\n> patch, but with a more extensible syntax. Does that sound right?\n\n+1\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 6 Jun 2024 17:32:12 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [multithreading] extension compatibility" }, { "msg_contents": "On Thu, Jun 6, 2024 at 10:32 AM Heikki Linnakangas <[email protected]> wrote:\n> > Well, OK, so it sounds like I'm outvoted, at least at the moment.\n> > Maybe that will change as more people vote, but for now, that's where\n> > we are. Given that, I suppose we want something more like Tristan's\n> > patch, but with a more extensible syntax. Does that sound right?\n>\n> +1\n\nSo, how shall we proceed here? 
Tristan, do you want to update your\npatch based on this feedback?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 15:23:25 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [multithreading] extension compatibility" } ]
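A concrete, standalone illustration of the thread-local-variable cost question discussed in the thread above (a sketch only, not PostgreSQL source; the variable and function names are invented for the example). It contrasts an ordinary global with a C11 _Thread_local variable; in a plain executable on x86-64 Linux the thread-local access typically compiles to a %fs-segment-relative load/store, which is why the two accesses cost about the same there, while some other platforms and ABIs pay more for TLS.

#include <stdio.h>

static int counter_global = 0;
static _Thread_local int counter_tls = 0;

static void
bump(void)
{
	counter_global++;	/* ordinary global: often a RIP-relative access */
	counter_tls++;		/* TLS: e.g. a %fs:offset access on x86-64 Linux */
}

int
main(void)
{
	for (int i = 0; i < 1000000; i++)
		bump();
	printf("global=%d tls=%d\n", counter_global, counter_tls);
	return 0;
}

Compiling this sketch and inspecting the generated assembly for bump() is an easy way to reproduce the comparison shown in the Compiler Explorer links earlier in the thread.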
[ { "msg_contents": "There was an unconference session at pgconf.dev related to threading\nsupport. One of the problems identified was setlocale().\n\nThe attached series of patches make collation not depend on\nsetlocale(), even if the database collation uses the libc provider.\n\nSince commit 8d9a9f034e, all supported platforms have locale_t, so we\ncan use strcoll_l(), etc., or uselocale() when no \"_l\" variant is\navailable.\n\nA brief test shows that there may be a performance regression for libc\ndefault collations. But if so, I'm not sure that's avoidable if the\ngoal is to take away setlocale. I'll see if removing the extra branches\nmitigates it.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Wed, 05 Jun 2024 17:23:25 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Wed, 2024-06-05 at 17:23 -0700, Jeff Davis wrote:\n> A brief test shows that there may be a performance regression for\n> libc\n> default collations. But if so, I'm not sure that's avoidable if the\n> goal is to take away setlocale. I'll see if removing the extra\n> branches\n> mitigates it.\n\nI redid the test and didn't see a difference, and then I ran a\nstandalone microbenchmark to compare strcoll() vs strcoll_l(), and\ndidn't see a difference there, either.\n\nAnother implementation may show a difference, but it doesn't seem to be\na problem for glibc.\n\nI think this patch series is a nice cleanup, as well, making libc more\nlike the other providers and not dependent on global state.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 06 Jun 2024 11:37:08 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Thu, 2024-06-06 at 11:37 -0700, Jeff Davis wrote:\n> \n> I think this patch series is a nice cleanup, as well, making libc\n> more\n> like the other providers and not dependent on global state.\n\nNew rebased series attached with additional cleanup. Now that\npg_locale_t is never NULL, we can simplify the way the collation cache\nworks, eliminating ~100 lines.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Fri, 14 Jun 2024 16:35:19 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 15.06.24 01:35, Jeff Davis wrote:\n> On Thu, 2024-06-06 at 11:37 -0700, Jeff Davis wrote:\n>>\n>> I think this patch series is a nice cleanup, as well, making libc\n>> more\n>> like the other providers and not dependent on global state.\n> \n> New rebased series attached with additional cleanup. Now that\n> pg_locale_t is never NULL, we can simplify the way the collation cache\n> works, eliminating ~100 lines.\n\nOverall, this is great. I have wished for a simplification like this\nfor a long time. It is the logical continuation of relying on\nlocale_t stuff rather than process-global settings.\n\n\n* v2-0001-Make-database-default-collation-internal-to-pg_lo.patch\n\n+void\n+pg_init_database_collation()\n\nThe function argument should be (void).\n\nThe prefix pg_ makes it sound like it's a user-facing function of some\nsort. 
Is there a reason for it?\n\nMaybe add a small comment before the function.\n\n\n* v2-0002-Make-database-collation-pg_locale_t-always-non-NU.patch\n\nThere is quite a bit of code duplication from\npg_newlocale_from_collation(). Can we do this better?\n\n\n* v2-0003-ts_locale.c-do-not-use-NULL-to-mean-the-database-.patch\n\nThe \"TODO\" markers should be left, because what they refer to is that\nthese functions just use the default collation rather than something\npassed in from the expression machinery. This is not addressed by\nthis change. (Obviously, the comments could have been clearer about\nthis.)\n\n\n* v2-0004-Remove-support-for-null-pg_locale_t.patch\n\nI found a few more places that should be adjusted in a similar way.\n\n- In src/backend/regex/regc_pg_locale.c, the whole business with\n pg_regex_locale, in particular in pg_set_regex_collation().\n\n- In src/backend/utils/adt/formatting.c, various places such as\n str_tolower().\n\n- In src/backend/utils/adt/pg_locale.c, wchar2char() and char2wchar().\n (Also, for wchar2char() there is one caller that explicitly passes\n NULL.)\n\nThe changes at the call sites of pg_locale_deterministic() are\nunfortunate, because they kind of go into the opposite direction: They\nadd checks for NULL locales instead of removing them. (For a minute I\nwas thinking I was reading your patch backwards.) We should think of\na way to write this clearer.\n\nLooking for example at hashtext() after 0001..0004 applied, it is\n\n if (!lc_collate_is_c(collid))\n mylocale = pg_newlocale_from_collation(collid);\n\n if (!mylocale || pg_locale_deterministic(mylocale))\n {\n\nBut then after your 0006 patch, lc_locale_is_c() internally also calls\npg_newlocale_from_collation(), so this code just becomes redundant.\nBetter might be if pg_locale_deterministic() itself checked if collate\nis C, and then hashtext() would just need to write:\n\n mylocale = pg_newlocale_from_collation(collid);\n\n if (pg_locale_deterministic(mylocale))\n {\n\nThe patch sequencing might be a bit tricky here. Maybe it's ok if\npatch 0004 stays as is in this respect if 0006 were to fix it back.\n\n\n* v2-0005-Avoid-setlocale-in-lc_collate_is_c-and-lc_ctype_i.patch\n\nNothing uses this, AFAICT, so why?\n\nAlso, this creates more duplication between\npg_init_database_collation() and pg_newlocale_from_collation(), as\nmentioned at patch 0002.\n\n\n* v2-0006-Simplify-collation-cache.patch\n\nok\n\n\n\n", "msg_date": "Wed, 19 Jun 2024 11:15:48 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Wed, 2024-06-19 at 11:15 +0200, Peter Eisentraut wrote:\n> > * v2-0001-Make-database-default-collation-internal-to-pg_lo.patch\n> > \n> > +void\n> > +pg_init_database_collation()\n> > \n> > The function argument should be (void).\n> > \n> > The prefix pg_ makes it sound like it's a user-facing function of >\n> > some\n> > sort.  Is there a reason for it?\n> > \n> > Maybe add a small comment before the function.\n\nAgreed, done.\n\n> > \n> > * v2-0002-Make-database-collation-pg_locale_t-always-non-NU.patch\n> > \n> > There is quite a bit of code duplication from\n> > pg_newlocale_from_collation().  
Can we do this better?\n\nI refactored it into make_libc_collator().\n\n> > * v2-0003-ts_locale.c-do-not-use-NULL-to-mean-the-database-.patch\n> > \n> > The \"TODO\" markers should be left, because what they refer to is\n> > that\n> > these functions just use the default collation rather than\n> > something\n> > passed in from the expression machinery.  This is not addressed by\n> > this change.  (Obviously, the comments could have been clearer\n> > about\n> > this.)\n\nDone.\n\n> > * v2-0004-Remove-support-for-null-pg_locale_t.patch\n> > \n> > I found a few more places that should be adjusted in a similar way.\n\nFixed, thank you.\n\n> > The changes at the call sites of pg_locale_deterministic() are\n> > unfortunate\n\nYeah, that part was a bit annoying.\n\n> > But then after your 0006 patch, lc_locale_is_c() internally also >\n> > calls\n> > pg_newlocale_from_collation()\n\nNot always. It still returns early for C_COLLATION_OID or\nPOSIX_COLLATION_OID, and that's actually required because\npg_regcomp(..., C_COLLATION_OID) is called when parsing pg_hba.conf,\nbefore catalog access is available. I don't think that detail is\nrelevant in the cases you brought up, but it got in the way of some\nother refactoring I was trying to do.\n\n> > , so this code just becomes redundant.\n> > Better might be if pg_locale_deterministic() itself checked if >\n> > collate\n> > is C, and then hashtext() would just need to write:\n> > \n> >      mylocale = pg_newlocale_from_collation(collid);\n> > \n> >      if (pg_locale_deterministic(mylocale))\n> >      {\n> > \n> > The patch sequencing might be a bit tricky here.  Maybe it's ok if\n> > patch 0004 stays as is in this respect if 0006 were to fix it back.\n\nAddressed in v3-0006.\n\n> > * v2-0005-Avoid-setlocale-in-lc_collate_is_c-and-lc_ctype_i.patch\n> > \n> > Nothing uses this, AFAICT, so why?\n\nYou're right. It was used to avoid setlocale() in lc_collate_is_c(),\nbut that's eliminated in the next patch anyway.\n\n\nAlso, in v3-0005, I had to also check for \"C\" or \"POSIX\" in\nmake_libc_collator(), so that it wouldn't try to actually create the\nlocale_t in that case.\n\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 25 Jul 2024 23:07:37 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "Nice refactoring!\n\nTwo small comments about CheckMyDatabase().\n\n- Shouldn't we look at the default_locale.ctype_is_c when setting \ndatabase_ctype_is_c instead of doing a strcmp()? or maybe we should even \nremove the global variable and always look at the default_locale?\n\n- I think that the lookup of Anum_pg_database_datlocale could be done \nlater in the code since it is not needed when we use a libc locale. E.g. 
\nas below.\n\n if (dbform->datlocprovider == COLLPROVIDER_LIBC)\n locale = collate;\n else\n {\n datum = SysCacheGetAttr(DATABASEOID, tup, \nAnum_pg_database_datlocale, &isnull);\n if (!isnull)\n locale = TextDatumGetCString(datum);\n }\n\nAlso is there any reaosn you do not squash th 4th and the 6th patch?\n\nAndreas\n\n\n", "msg_date": "Fri, 26 Jul 2024 19:38:23 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Fri, 2024-07-26 at 19:38 +0200, Andreas Karlsson wrote:\n> Nice refactoring!\n> \n> Two small comments about CheckMyDatabase().\n> \n> - Shouldn't we look at the default_locale.ctype_is_c when setting \n> database_ctype_is_c instead of doing a strcmp()? or maybe we should\n> even \n> remove the global variable and always look at the default_locale?\n\ndatabase_ctype_is_c refers to the LC_CTYPE environment of the database\n-- pg_database.datctype. default_locale.ctype_is_c is the ctype of the\ndatabase's default collation.\n\nConfusing, I know, but it matters for a few things that still depend on\nthe LC_CTYPE, such as tsearch and maybe a few extensions. See\nf413941f41.\n\n> - I think that the lookup of Anum_pg_database_datlocale could be done\n> later in the code since it is not needed when we use a libc locale.\n> E.g. \n> as below.\n\nDone, thank you.\n\n> Also is there any reaosn you do not squash th 4th and the 6th patch?\n\nDone. I had to rearrange the patch ordering a bit because prior to the\ncache refactoring patch, it's unsafe to call\npg_newlocale_from_collation() without checking lc_collate_is_c() or\nlc_ctype_is_c() first.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 26 Jul 2024 13:35:54 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 7/26/24 10:35 PM, Jeff Davis wrote:\n> database_ctype_is_c refers to the LC_CTYPE environment of the database\n> -- pg_database.datctype. default_locale.ctype_is_c is the ctype of the\n> database's default collation.\n> \n> Confusing, I know, but it matters for a few things that still depend on\n> the LC_CTYPE, such as tsearch and maybe a few extensions. See\n> f413941f41.\n\nAh, right! That was thinko on my behalf.\n\nThe set of patches looks good to me now. There is further refactoring \nthat can be done in this area (and should be done given all calls e.g to \nisalpha()) but I think this set of patches improves code readability \nwhile moving us away from setlocale().\n\nAnd even if we take a tiny performance hit here, which your tests did \nnot measure, I would say it is worth it both due to code clarity and due \nto not relying on thread unsafe state.\n\nI do not see these patches in the commitfest app but if they were I \nwould have marked them as ready for committer.\n\nAndreas\n\n\n\n", "msg_date": "Sat, 27 Jul 2024 21:03:20 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 27.07.24 21:03, Andreas Karlsson wrote:\n> On 7/26/24 10:35 PM, Jeff Davis wrote:\n>> database_ctype_is_c refers to the LC_CTYPE environment of the database\n>> -- pg_database.datctype. 
default_locale.ctype_is_c is the ctype of the\n>> database's default collation.\n>>\n>> Confusing, I know, but it matters for a few things that still depend on\n>> the LC_CTYPE, such as tsearch and maybe a few extensions. See\n>> f413941f41.\n> \n> Ah, right! That was thinko on my behalf.\n> \n> The set of patches looks good to me now. There is further refactoring \n> that can be done in this area (and should be done given all calls e.g to \n> isalpha()) but I think this set of patches improves code readability \n> while moving us away from setlocale().\n> \n> And even if we take a tiny performance hit here, which your tests did \n> not measure, I would say it is worth it both due to code clarity and due \n> to not relying on thread unsafe state.\n> \n> I do not see these patches in the commitfest app but if they were I \n> would have marked them as ready for committer.\n\nHere: https://commitfest.postgresql.org/48/5023/\n\nI have also re-reviewed the patches and I agree they are good to go.\n\n\n\n", "msg_date": "Mon, 29 Jul 2024 21:45:31 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Mon, 2024-07-29 at 21:45 +0200, Peter Eisentraut wrote:\n> I have also re-reviewed the patches and I agree they are good to go.\n\nI found a couple issues with the later patches:\n\n* There was still some confusion about the default collation vs.\ndatcollate/datctype for callers of wchar2char() and char2wchar() (those\nfunctions only work for libc). I introduced a new pg_locale_t structure\nto represent datcollate/datctype regardless of datlocprovider to solve\nthis.\n\n* Another loose end relying on setlocale(): in selfuncs.c, there's\nstill a call directly to strxfrm(), which depends on setlocale(). I\nchanged this to lookup the collation and then use pg_strxfrm(). That\nshould improve histogram selectivity estimates because it uses the\ncorrect provider, rather than relying on setlocale(), right?\n\nNew series attached.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 30 Jul 2024 12:13:12 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Tue, 2024-07-30 at 12:13 -0700, Jeff Davis wrote:\n> I found a couple issues with the later patches:\n> \n> * There was still some confusion about the default collation vs.\n> datcollate/datctype for callers of wchar2char() and char2wchar()\n> (those\n> functions only work for libc). I introduced a new pg_locale_t\n> structure\n> to represent datcollate/datctype regardless of datlocprovider to\n> solve\n> this.\n\nI didn't quite like the API I introduced in 0001, so I skipped 0001.\n\nFor 0002 I left char2wchar() and wchar2char() as-is, where they expect\nlibc and accept a NULL pg_locale_t. I committed the rest of 0002.\n\n> * Another loose end relying on setlocale(): in selfuncs.c, there's\n> still a call directly to strxfrm(), which depends on setlocale(). I\n> changed this to lookup the collation and then use pg_strxfrm(). 
That\n> should improve histogram selectivity estimates because it uses the\n> correct provider, rather than relying on setlocale(), right?\n\nCommitted 0003.\n\nWith these changes, collations are no longer dependent on the\nenvironment locale (setlocale()) at all for either collation behavior\n(ORDER BY) or ctype behavior (LOWER(), etc.).\n\nAdditionally, unless I missed something, nothing in the server is\ndependent on LC_COLLATE at all.\n\nThere are still some things that depend on setlocale() in one way or\nanother:\n\n - char2wchar() & wchar2char()\n - ts_locale.c\n - various places that depend on LC_CTYPE unrelated to the collation\ninfrastructure\n - things that depend on other locale settings, like LC_NUMERIC\n\nWe can address those as part of a separate thread. I'll count this as\ncommitted.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 06 Aug 2024 14:40:17 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 06.08.24 23:40, Jeff Davis wrote:\n> With these changes, collations are no longer dependent on the\n> environment locale (setlocale()) at all for either collation behavior\n> (ORDER BY) or ctype behavior (LOWER(), etc.).\n> \n> Additionally, unless I missed something, nothing in the server is\n> dependent on LC_COLLATE at all.\n> \n> There are still some things that depend on setlocale() in one way or\n> another:\n> \n> - char2wchar() & wchar2char()\n> - ts_locale.c\n> - various places that depend on LC_CTYPE unrelated to the collation\n> infrastructure\n> - things that depend on other locale settings, like LC_NUMERIC\n> \n> We can address those as part of a separate thread. I'll count this as\n> committed.\n\nI have a couple of small follow-up patches for this.\n\nFirst, in like.c, SB_lower_char() now reads:\n\nstatic char\nSB_lower_char(unsigned char c, pg_locale_t locale, bool locale_is_c)\n{\n if (locale_is_c)\n return pg_ascii_tolower(c);\n else if (locale)\n return tolower_l(c, locale->info.lt);\n else\n return pg_tolower(c);\n}\n\nBut after this patch set, locale cannot be NULL anymore, so the third \nbranch is obsolete.\n\n(Now that I look at it, pg_tolower() has some short-circuiting for ASCII \nletters, so it would not handle Turkish-i correctly if that had been the \nglobal locale. By removing the use of pg_tolower(), we fix that issue \nin passing.)\n\nSecond, there are a number of functions in like.c like the above that \ntake separate arguments like pg_locale_t locale, bool locale_is_c. \nBecause pg_locale_t now contains the locale_is_c information, these can \nbe combined.", "msg_date": "Wed, 7 Aug 2024 22:44:25 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Wed, 2024-08-07 at 22:44 +0200, Peter Eisentraut wrote:\n> But after this patch set, locale cannot be NULL anymore, so the third\n> branch is obsolete.\n\n...\n\n> Second, there are a number of functions in like.c like the above that\n> take separate arguments like pg_locale_t locale, bool locale_is_c. \n> Because pg_locale_t now contains the locale_is_c information, these\n> can \n> be combined.\n\nI believe these patches are correct, but the reasoning is fairly\ncomplex:\n\n1. Some MatchText variants are called with 0 for locale. But that's OK\nbecause ...\n\n2. 
A MatchText variant only cares about the locale if MATCH_LOWER(t) is\ndefined, and ...\n\n3. Only one variant, SB_IMatchText() defines MATCH_LOWER(), and ...\n\n4. SB_IMatchText() is called with a non-zero locale.\n\nAll of these are a bit confusing to follow because it's generated code.\n#2 is particularly non-obvious, because \"locale\" is not even an\nargument of the MATCH_LOWER(t) or GETCHAR(t) macros, it's taken\nimplicitly from the outer scope.\n\nI don't think your patches cause this confusion, but is there a way you\ncan clarify some of this along the way?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 08 Aug 2024 13:00:41 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 07.08.24 22:44, Peter Eisentraut wrote:\n> (Now that I look at it, pg_tolower() has some short-circuiting for ASCII \n> letters, so it would not handle Turkish-i correctly if that had been the \n> global locale.  By removing the use of pg_tolower(), we fix that issue \n> in passing.)\n\nIt occurred to me that this issue also surfaces in a more prominent \nplace. These arguably-wrong pg_tolower() and pg_toupper() calls were \nalso used by the normal SQL lower() and upper() functions before commit \ne9931bfb751 if you used a single byte encoding.\n\nFor example, in PG17, multi-byte encoding:\n\ninitdb --locale=tr_TR.utf8\n\nselect upper('hij'); --> HİJ\n\nPG17, single-byte encoding:\n\ninitdb --locale=tr_TR # uses LATIN5\n\nselect upper('hij'); --> HIJ\n\nWith current master, after commit e9931bfb751, you get the first result \nin both cases.\n\nSo this could break indexes across pg_upgrade in such configurations.\n\n\n\n", "msg_date": "Thu, 8 Aug 2024 23:09:53 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 08.08.24 22:00, Jeff Davis wrote:\n> On Wed, 2024-08-07 at 22:44 +0200, Peter Eisentraut wrote:\n>> But after this patch set, locale cannot be NULL anymore, so the third\n>> branch is obsolete.\n> \n> ...\n> \n>> Second, there are a number of functions in like.c like the above that\n>> take separate arguments like pg_locale_t locale, bool locale_is_c.\n>> Because pg_locale_t now contains the locale_is_c information, these\n>> can\n>> be combined.\n> \n> I believe these patches are correct, but the reasoning is fairly\n> complex:\n> \n> 1. Some MatchText variants are called with 0 for locale. But that's OK\n> because ...\n> \n> 2. A MatchText variant only cares about the locale if MATCH_LOWER(t) is\n> defined, and ...\n> \n> 3. Only one variant, SB_IMatchText() defines MATCH_LOWER(), and ...\n> \n> 4. SB_IMatchText() is called with a non-zero locale.\n> \n> All of these are a bit confusing to follow because it's generated code.\n> #2 is particularly non-obvious, because \"locale\" is not even an\n> argument of the MATCH_LOWER(t) or GETCHAR(t) macros, it's taken\n> implicitly from the outer scope.\n> \n> I don't think your patches cause this confusion, but is there a way you\n> can clarify some of this along the way?\n\nYes, this is also my analysis. The patch in \n<https://www.postgresql.org/message-id/flat/[email protected]> \nwould replace passing 0 with an actual locale object. 
The changes in \nGenericMatchText() could also be applied independently so that we'd \nalways pass in a non-zero locale value, even if it would not be used in \nsome cases. I need to update that patch to cover your latest changes. \nI'll see if I can propose something here that looks a bit nicer.\n\n\n\n", "msg_date": "Thu, 8 Aug 2024 23:18:33 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Thu, 2024-08-08 at 23:09 +0200, Peter Eisentraut wrote:\n> With current master, after commit e9931bfb751, you get the first\n> result \n> in both cases.\n> \n> So this could break indexes across pg_upgrade in such configurations.\n\nGood observation. Is there any action we should take here, or just add\nthat to the release notes for 18?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 08 Aug 2024 15:38:09 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> We can address those as part of a separate thread. I'll count this as\n> committed.\n\nCoverity has a nit about this:\n\n*** CID 1616189: Null pointer dereferences (REVERSE_INULL)\n/srv/coverity/git/pgsql-git/postgresql/src/backend/utils/adt/like.c: 206 in Generic_Text_IC_like()\n200 \t * on the pattern and text, but instead call SB_lower_char on each\n201 \t * character. In the multi-byte case we don't have much choice :-(. Also,\n202 \t * ICU does not support single-character case folding, so we go the long\n203 \t * way.\n204 \t */\n205 \n>>> CID 1616189: Null pointer dereferences (REVERSE_INULL)\n>>> Null-checking \"locale\" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.\n206 \tif (pg_database_encoding_max_length() > 1 || (locale && locale->provider == COLLPROVIDER_ICU))\n207 \t{\n208 \t\tpat = DatumGetTextPP(DirectFunctionCall1Coll(lower, collation,\n209 \t\t\t\t\t\t\t\t\t\t\t\t\t PointerGetDatum(pat)));\n210 \t\tp = VARDATA_ANY(pat);\n211 \t\tplen = VARSIZE_ANY_EXHDR(pat);\n\nI assume it would now be okay to take out \"locale &&\" here?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 11 Aug 2024 12:33:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 11.08.24 18:33, Tom Lane wrote:\n> Jeff Davis <[email protected]> writes:\n>> We can address those as part of a separate thread. I'll count this as\n>> committed.\n> \n> Coverity has a nit about this:\n> \n> *** CID 1616189: Null pointer dereferences (REVERSE_INULL)\n> /srv/coverity/git/pgsql-git/postgresql/src/backend/utils/adt/like.c: 206 in Generic_Text_IC_like()\n> 200 \t * on the pattern and text, but instead call SB_lower_char on each\n> 201 \t * character. In the multi-byte case we don't have much choice :-(. 
Also,\n> 202 \t * ICU does not support single-character case folding, so we go the long\n> 203 \t * way.\n> 204 \t */\n> 205\n>>>> CID 1616189: Null pointer dereferences (REVERSE_INULL)\n>>>> Null-checking \"locale\" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.\n> 206 \tif (pg_database_encoding_max_length() > 1 || (locale && locale->provider == COLLPROVIDER_ICU))\n> 207 \t{\n> 208 \t\tpat = DatumGetTextPP(DirectFunctionCall1Coll(lower, collation,\n> 209 \t\t\t\t\t\t\t\t\t\t\t\t\t PointerGetDatum(pat)));\n> 210 \t\tp = VARDATA_ANY(pat);\n> 211 \t\tplen = VARSIZE_ANY_EXHDR(pat);\n> \n> I assume it would now be okay to take out \"locale &&\" here?\n\nCorrect, that check is no longer necessary.\n\n\n\n", "msg_date": "Mon, 12 Aug 2024 09:00:11 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Mon, 2024-08-12 at 09:00 +0200, Peter Eisentraut wrote:\n> > I assume it would now be okay to take out \"locale &&\" here?\n> \n> Correct, that check is no longer necessary.\n\nThank you, fixed.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 12 Aug 2024 12:47:58 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "I'm wondering if after this patch series, lc_collate_is_c() and \nlc_ctype_is_c() are still useful.\n\nThey used to be completely separate from pg_newlocale_from_collation(), \nbut now they are just mostly a thin wrapper around it. Except there is \nsome hardcoded handling of C_COLLATION_OID and POSIX_COLLATION_OID. Do \nwe care about that?\n\nIn many places, the notational and structural complexity would be \nsignificantly improved if we changed code like\n\n if (pg_collate_is_c(colloid))\n {\n ...\n }\n else\n {\n pg_locale_t locale = pg_newlocale_from_collation(colloid);\n\n if (locale->provider == ...)\n {\n ...\n }\n\nto more like\n\n pg_locale_t locale = pg_newlocale_from_collation(colloid);\n\n if (locale->collate_is_c)\n {\n ...\n }\n else if (locale->provider == ...)\n ...\n }\n ...\n\nHowever, it's not clear whether the hardcoded handling of some \ncollations is needed for performance parity or perhaps some \nbootstrapping reasons. It would be useful to get that cleared up. \nThoughts?\n\n\n\n", "msg_date": "Tue, 13 Aug 2024 17:11:50 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Tue, 2024-08-13 at 17:11 +0200, Peter Eisentraut wrote:\n\n> They used to be completely separate from\n> pg_newlocale_from_collation(), \n> but now they are just mostly a thin wrapper around it.  Except there\n> is \n> some hardcoded handling of C_COLLATION_OID and POSIX_COLLATION_OID. \n> Do \n> we care about that?\n\n...\n\n> However, it's not clear whether the hardcoded handling of some \n> collations is needed for performance parity or perhaps some \n> bootstrapping reasons.  It would be useful to get that cleared up. 
\n> Thoughts?\n\nThere's at least one place where we expect lc_collate_is_c() to work\nwithout catalog access at all: libpq/hba.c uses regexes with\nC_COLLATION_OID.\n\nBut I don't think that's a major problem -- we can just move the\nhardcoded test into pg_newlocale_from_collation() and return a\npredefined struct with collate_is_c/ctype_is_c already set.\n\n+1.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 13 Aug 2024 10:56:07 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 8/13/24 7:56 PM, Jeff Davis wrote:\n> But I don't think that's a major problem -- we can just move the\n> hardcoded test into pg_newlocale_from_collation() and return a\n> predefined struct with collate_is_c/ctype_is_c already set.\n\nI tried that out but thought it felt cleaner to do the hardcoding in \npg_set_regex_collation(). What do you think?\n\nI have attached patches removing lc_collate_is_c() and lc_ctype_is_c(). \nI have not checked if there are any performance regressions when using \nthe C and POSIX locales but remove these special cases makes the code a \nlot cleaner in my book.\n\nI also attach some other clean up patches I did while touching this code.\n\n0001-Remove-lc_collate_is_c.patch\n\nRemoves lc_collate_is_c().\n\n0002-Remove-lc_ctype_is_c.patch\n\nRemoves lc_ctype_is_c() and POSIX_COLLATION_OID which is no longer \nnecessary.\n\n0003-Remove-dubious-check-against-default-locale.patch\n\nThis patch removes a check against DEFAULT_COLLATION_OID which I thought \nlooked really dubious. Shouldn't this just be a simple check for if the \nlocale is deterministic? Since we know we have a valid locale that \nshould be enough, right?\n\n0004-Do-not-check-both-for-collate_is_c-and-deterministic.patch\n\nIt is redundant to check both for \"collation_is_c && deterministic\", right?\n\n0005-Remove-pg_collate_deterministic-and-check-field-dire.patch\n\nSince after my patches we look a lot directly at the collation_is_c and \nctype_is_c fields I think the thin wrapper around the deterministic \nfield makes it seem like there is more to it so I suggest that we should \njust remove it.\n\n0006-Slightly-refactor-varstr_sortsupport-to-improve-read.patch\n\nSmall refactor to make a hard to read function a bit easier to read.\n\nAndreas", "msg_date": "Wed, 14 Aug 2024 01:31:11 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Wed, 2024-08-14 at 01:31 +0200, Andreas Karlsson wrote:\n> 0001-Remove-lc_collate_is_c.patch\n> \n> Removes lc_collate_is_c().\n\nThis could use some performance testing, as the commit message says,\notherwise it looks good.\n\n> 0002-Remove-lc_ctype_is_c.patch\n> \n> Removes lc_ctype_is_c() and POSIX_COLLATION_OID which is no longer \n> necessary.\n\nThis overlaps a bit with what Peter already proposed here:\n\nhttps://www.postgresql.org/message-id/4f562d84-87f4-44dc-8946-01d6c437936f%40eisentraut.org\n\nright?\n\n> 0003-Remove-dubious-check-against-default-locale.patch\n> \n> This patch removes a check against DEFAULT_COLLATION_OID which I\n> thought \n> looked really dubious. Shouldn't this just be a simple check for if\n> the \n> locale is deterministic? Since we know we have a valid locale that \n> should be enough, right?\n\nLooks good to me. 
The code was correct in the sense that the default\ncollation is always deterministic, but (a) Peter is working on non-\ndeterministic default collations; and (b) it was a redundant check.\n\n> 0004-Do-not-check-both-for-collate_is_c-and-deterministic.patch\n> \n> It is redundant to check both for \"collation_is_c && deterministic\",\n> right?\n\n+1.\n\n> 0005-Remove-pg_collate_deterministic-and-check-field-dire.patch\n> \n> Since after my patches we look a lot directly at the collation_is_c\n> and \n> ctype_is_c fields I think the thin wrapper around the deterministic \n> field makes it seem like there is more to it so I suggest that we\n> should \n> just remove it.\n\n+1. When I added that, there was also a NULL check, but that was\nremoved so we might as well just read the field.\n\n> 0006-Slightly-refactor-varstr_sortsupport-to-improve-read.patch\n> \n> Small refactor to make a hard to read function a bit easier to read.\n\nLooks good to me.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 15:55:03 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 8/15/24 12:55 AM, Jeff Davis wrote:\n> This overlaps a bit with what Peter already proposed here:\n> \n> https://www.postgresql.org/message-id/4f562d84-87f4-44dc-8946-01d6c437936f%40eisentraut.org\n> \n> right?\n\nMaybe I am missing something but not as far as I can see. It is related \nbut I do not see any overlap.\n\n>> 0003-Remove-dubious-check-against-default-locale.patch\n>>\n>> This patch removes a check against DEFAULT_COLLATION_OID which I\n>> thought\n>> looked really dubious. Shouldn't this just be a simple check for if\n>> the\n>> locale is deterministic? Since we know we have a valid locale that\n>> should be enough, right?\n> \n> Looks good to me. The code was correct in the sense that the default\n> collation is always deterministic, but (a) Peter is working on non-\n> deterministic default collations; and (b) it was a redundant check.\n\nSuspected so.\n\nI have attached a set of rebased patches with an added assert. Maybe I \nshould start a new thread though.\n\nAndreas", "msg_date": "Wed, 28 Aug 2024 18:43:49 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Wed, 2024-08-28 at 18:43 +0200, Andreas Karlsson wrote:\n> On 8/15/24 12:55 AM, Jeff Davis wrote:\n> > This overlaps a bit with what Peter already proposed here:\n> > \n> > https://www.postgresql.org/message-id/4f562d84-87f4-44dc-8946-01d6c437936f%40eisentraut.org\n> > \n> > right?\n> \n> Maybe I am missing something but not as far as I can see. It is\n> related \n> but I do not see any overlap.\n\nv2-0002:\n\nOh, I see, pattern_char_isalpha() only has one caller, and it never\npasses a NULL for pg_locale_t, so my complaint doesn't affect your\npatch. This is about ready, then.\n\n> > > 0003-Remove-dubious-check-against-default-locale.patch\n> > > \n> > > This patch removes a check against DEFAULT_COLLATION_OID which I\n> > > thought\n> > > looked really dubious. Shouldn't this just be a simple check for\n> > > if\n> > > the\n> > > locale is deterministic? Since we know we have a valid locale\n> > > that\n> > > should be enough, right?\n> > \n> > Looks good to me. 
The code was correct in the sense that the\n> > default\n> > collation is always deterministic, but (a) Peter is working on non-\n> > deterministic default collations; and (b) it was a redundant check.\n> \n> Suspected so.\n> \n> I have attached a set of rebased patches with an added assert. Maybe\n> I \n> should start a new thread though.\n\nv2-0001:\n\n* There was a performance regression for repeated lookups, such as a \"t\n< 'xyz'\" predicate. I committed 12d3345c0d (to remember the last\ncollation lookup), which will prevent that regression.\n\n* This patch may change the handling of collation oid 0, and I'm not\nsure whether that was intentional or not. lc_collate_is_c(0) returned\nfalse, whereas pg_newlocale_from_collation(0)->collate_is_c raised\nAssert or threw an cache lookup error. I'm not sure if this is an\nactual problem, but looking at the callers, they should be more\ndefensive if they expect a collation oid of 0 to work at all.\n\n* The comment in lc_collate_is_c() said that it returned false with the\nidea that the caller would throw a useful error, and I think that\ncomment has been wrong for a while. If it returns false, the caller is\nexpected to call pg_newlocale_from_collation(), which would Assert on a\ndebug build. Should we remove that Assert, too, so that it will\nconsistently throw a cache lookup failure?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 03 Sep 2024 22:04:23 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "Committed v2-0001.\n\nOn Tue, 2024-09-03 at 22:04 -0700, Jeff Davis wrote:\n\n> * This patch may change the handling of collation oid 0, and I'm not\n> sure whether that was intentional or not. lc_collate_is_c(0) returned\n> false, whereas pg_newlocale_from_collation(0)->collate_is_c raised\n> Assert or threw an cache lookup error. I'm not sure if this is an\n> actual problem, but looking at the callers, they should be more\n> defensive if they expect a collation oid of 0 to work at all.\n\nFor functions that do call pg_newlocale_from_collation() when\nlc_collation_is_c() returns false, the behavior is unchanged.\n\nThere are only 3 callers which don't follow that pattern:\n\n * spg_text_inner_consistent: gets collation ID from the index, and\ntext is a collatable type so it will be valid\n\n * match_pattern_prefix: same\n\n * make_greater_string: I changed the order of the tests in\nmake_greater_string so that if len=0 and collation=0, it won't throw an\nerror. If len !=0, it goes into varstr_cmp(), which will call\npg_newlocale_from_collation().\n\n> * The comment in lc_collate_is_c() said that it returned false with\n> the\n> idea that the caller would throw a useful error, and I think that\n> comment has been wrong for a while. If it returns false, the caller\n> is\n> expected to call pg_newlocale_from_collation(), which would Assert on\n> a\n> debug build. Should we remove that Assert, too, so that it will\n> consistently throw a cache lookup failure?\n\nI fixed this by replacing the assert with an elog(ERROR, ...), so that\nit will consistently show a \"cache lookup failed for collation 0\"\nregardless of whether it's a debug build or not. 
It's not expected that\nthe error will be encountered.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 04 Sep 2024 14:45:24 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 08.08.24 22:00, Jeff Davis wrote:\n> On Wed, 2024-08-07 at 22:44 +0200, Peter Eisentraut wrote:\n>> But after this patch set, locale cannot be NULL anymore, so the third\n>> branch is obsolete.\n> \n> ...\n> \n>> Second, there are a number of functions in like.c like the above that\n>> take separate arguments like pg_locale_t locale, bool locale_is_c.\n>> Because pg_locale_t now contains the locale_is_c information, these\n>> can\n>> be combined.\n> \n> I believe these patches are correct, but the reasoning is fairly\n> complex:\n> \n> 1. Some MatchText variants are called with 0 for locale. But that's OK\n> because ...\n> \n> 2. A MatchText variant only cares about the locale if MATCH_LOWER(t) is\n> defined, and ...\n> \n> 3. Only one variant, SB_IMatchText() defines MATCH_LOWER(), and ...\n> \n> 4. SB_IMatchText() is called with a non-zero locale.\n> \n> All of these are a bit confusing to follow because it's generated code.\n> #2 is particularly non-obvious, because \"locale\" is not even an\n> argument of the MATCH_LOWER(t) or GETCHAR(t) macros, it's taken\n> implicitly from the outer scope.\n> \n> I don't think your patches cause this confusion, but is there a way you\n> can clarify some of this along the way?\n\nIn the end, I figured the best thing to do here is to add an explicit \nlocale argument to MATCH_LOWER() and GETCHAR() so one can actually \nunderstand this code by reading it. New patch attached.", "msg_date": "Mon, 9 Sep 2024 15:37:23 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On Mon, 2024-09-09 at 15:37 +0200, Peter Eisentraut wrote:\n> In the end, I figured the best thing to do here is to add an explicit\n> locale argument to MATCH_LOWER() and GETCHAR() so one can actually \n> understand this code by reading it.  New patch attached.\n\nLooks good to me, thank you.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 12 Sep 2024 13:01:45 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 9/4/24 11:45 PM, Jeff Davis wrote:\n> Committed v2-0001.\n>\n > [...]\n>\n> I fixed this by replacing the assert with an elog(ERROR, ...), so that\n> it will consistently show a \"cache lookup failed for collation 0\"\n> regardless of whether it's a debug build or not. It's not expected that\n> the error will be encountered.\n\nThanks!\n\nAndreas\n\n\n\n", "msg_date": "Thu, 12 Sep 2024 23:09:10 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" }, { "msg_contents": "On 12.09.24 22:01, Jeff Davis wrote:\n> On Mon, 2024-09-09 at 15:37 +0200, Peter Eisentraut wrote:\n>> In the end, I figured the best thing to do here is to add an explicit\n>> locale argument to MATCH_LOWER() and GETCHAR() so one can actually\n>> understand this code by reading it.  
New patch attached.\n> \n> Looks good to me, thank you.\n\ncommitted, thanks\n\n\n\n", "msg_date": "Fri, 13 Sep 2024 16:39:56 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tiny step toward threading: reduce dependence on setlocale()" } ]
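To make the pattern that this thread converges on concrete, here is a minimal standalone sketch of the POSIX locale_t interface that the "_l" functions build on (an illustration only, not PostgreSQL source or its pg_locale.c wrappers; it assumes the "en_US.UTF-8" locale is installed on the system). The point is that the collation to use is passed explicitly as an object instead of being taken from the process-global state installed by setlocale().

#define _XOPEN_SOURCE 700

#include <locale.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	/* Build an explicit locale object instead of calling setlocale(). */
	locale_t	loc = newlocale(LC_COLLATE_MASK | LC_CTYPE_MASK,
								"en_US.UTF-8", (locale_t) 0);

	if (loc == (locale_t) 0)
	{
		perror("newlocale");
		return 1;
	}

	/* Raw byte comparison vs. comparison under the explicit locale. */
	printf("strcmp:    %d\n", strcmp("apple", "Banana"));
	printf("strcoll_l: %d\n", strcoll_l("apple", "Banana", loc));

	freelocale(loc);
	return 0;
}

With a typical glibc en_US.UTF-8 installation the two results differ in sign, since strcmp() compares byte values while strcoll_l() applies the named collation; that difference is what makes carrying the collation in a per-object locale_t (rather than in global state) attractive for thread safety.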
[ { "msg_contents": "Hackers,\r\n\r\n\r\nWe are the PostgreSQL team in Tencent. We have recently developed a job scheduler that runs inside the database to schedules and manages jobs similar to Oracle DBMS_JOB package, and we would like to contribute this feature to the community.\r\n\r\n\r\nSimilar to autovacuum, the job scheduler consists of 2 parts: the job launcher and the job worker. The job launcher periodically scans a metadata table and signals the postmaster to start new workers if needed.\r\n\r\n\r\nAs far as we know, there are currently two open-sourced job scheduling extensions for PostgreSQL: pg_cron (https://github.com/citusdata/pg_cron/) and pg_dbms_job (https://github.com/MigOpsRepos/pg_dbms_job/tree/main). However, the cron-based syntax is not easy to use and suffers some limitations like one-off commands. The pg_dbms_job extension is difficult to manage and operate because it runs as a standalone process .\r\n\r\n\r\nThat's why we have developed the job scheduler that runs as a process inside the database just like autovacuum.\r\n\r\n\r\nWe can start to send the patch if this idea makes sense to the you. Thanks for your time.\r\n\r\n\r\n\r\nRegards,\r\nCheng\r\n\r\n\r\n\r\n&nbsp;\nHackers,We are the PostgreSQL team in Tencent. We have recently developed a job scheduler that runs inside the database to schedules and manages jobs similar to Oracle DBMS_JOB package, and we would like to contribute this feature to the community.Similar to autovacuum, the job scheduler consists of 2 parts: the job launcher and the job worker. The job launcher periodically scans a metadata table and signals the postmaster to start new workers if needed.As far as we know, there are currently two open-sourced job scheduling extensions for PostgreSQL: pg_cron (https://github.com/citusdata/pg_cron/) and pg_dbms_job (https://github.com/MigOpsRepos/pg_dbms_job/tree/main). However, the cron-based syntax is not easy to use and suffers some limitations like one-off commands. The pg_dbms_job extension is difficult to manage and operate because it runs as a standalone process .That's why we have developed the job scheduler that runs as a process inside the database just like autovacuum.We can start to send the patch if this idea makes sense to the you. Thanks for your time.Regards,Cheng", "msg_date": "Thu, 6 Jun 2024 16:27:14 +0800", "msg_from": "\"=?ISO-8859-1?B?V2FuZyBDaGVuZw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal: Job Scheduler" }, { "msg_contents": "On Thu, 2024-06-06 at 16:27 +0800, Wang Cheng wrote:\n> We are the PostgreSQL team in Tencent. We have recently developed a job scheduler\n> that runs inside the database to schedules and manages jobs similar to Oracle\n> DBMS_JOB package, and we would like to contribute this feature to the community.\n> \n> As far as we know, there are currently two open-sourced job scheduling extensions\n> for PostgreSQL: pg_cron (https://github.com/citusdata/pg_cron/) and pg_dbms_job\n> (https://github.com/MigOpsRepos/pg_dbms_job/tree/main). 
However, the cron-based\n> syntax is not easy to use and suffers some limitations like one-off commands.\n> The pg_dbms_job extension is difficult to manage and operate because it runs as\n> a standalone process .\n\nThere is also pg_timetable:\nhttps://github.com/cybertec-postgresql/pg_timetable\n\n> That's why we have developed the job scheduler that runs as a process inside the\n> database just like autovacuum.\n> \n> We can start to send the patch if this idea makes sense to the you.\n\nPerhaps your job scheduler is much better than all the existing ones.\nBut what would be a compelling reason to keep it in the PostgreSQL source tree?\nWith PostgreSQL's extensibility features, it should be possible to write your\njob scheduler as an extension and maintain it outside the PostgreSQL source.\n\nI am sure that the PostgreSQL community will be happy to use the extension\nif it is any good.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 06 Jun 2024 10:47:35 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Job Scheduler" }, { "msg_contents": "On Thu, 6 Jun 2024 at 09:47, Laurenz Albe <[email protected]> wrote:\n\n> On Thu, 2024-06-06 at 16:27 +0800, Wang Cheng wrote:\n> > We are the PostgreSQL team in Tencent. We have recently developed a job\n> scheduler\n> > that runs inside the database to schedules and manages jobs similar to\n> Oracle\n> > DBMS_JOB package, and we would like to contribute this feature to the\n> community.\n> >\n> > As far as we know, there are currently two open-sourced job scheduling\n> extensions\n> > for PostgreSQL: pg_cron (https://github.com/citusdata/pg_cron/) and\n> pg_dbms_job\n> > (https://github.com/MigOpsRepos/pg_dbms_job/tree/main). However, the\n> cron-based\n> > syntax is not easy to use and suffers some limitations like one-off\n> commands.\n> > The pg_dbms_job extension is difficult to manage and operate because it\n> runs as\n> > a standalone process .\n>\n> There is also pg_timetable:\n> https://github.com/cybertec-postgresql/pg_timetable\n\n\nAnd probably the oldest of them all, pgAgent:\nhttps://www.pgadmin.org/docs/pgadmin4/8.7/pgagent.html\n\n\n>\n>\n> > That's why we have developed the job scheduler that runs as a process\n> inside the\n> > database just like autovacuum.\n> >\n> > We can start to send the patch if this idea makes sense to the you.\n>\n> Perhaps your job scheduler is much better than all the existing ones.\n> But what would be a compelling reason to keep it in the PostgreSQL source\n> tree?\n> With PostgreSQL's extensibility features, it should be possible to write\n> your\n> job scheduler as an extension and maintain it outside the PostgreSQL\n> source.\n>\n> I am sure that the PostgreSQL community will be happy to use the extension\n> if it is any good.\n>\n\nI agree. This is an area in which there are lots of options at the moment,\nwith compelling reasons to choose from various of them depending on your\nneeds.\n\nIt's this kind of choice that means it's unlikely we'd include any one\noption in PostgreSQL, much like various other tools such as failover\nmanagers or poolers.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Thu, 6 Jun 2024 at 09:47, Laurenz Albe <[email protected]> wrote:On Thu, 2024-06-06 at 16:27 +0800, Wang Cheng wrote:\n> We are the PostgreSQL team in Tencent. 
We have recently developed a job scheduler\n> that runs inside the database to schedules and manages jobs similar to Oracle\n> DBMS_JOB package, and we would like to contribute this feature to the community.\n> \n> As far as we know, there are currently two open-sourced job scheduling extensions\n> for PostgreSQL: pg_cron (https://github.com/citusdata/pg_cron/) and pg_dbms_job\n> (https://github.com/MigOpsRepos/pg_dbms_job/tree/main). However, the cron-based\n> syntax is not easy to use and suffers some limitations like one-off commands.\n> The pg_dbms_job extension is difficult to manage and operate because it runs as\n> a standalone process .\n\nThere is also pg_timetable:\nhttps://github.com/cybertec-postgresql/pg_timetableAnd probably the oldest of them all, pgAgent: https://www.pgadmin.org/docs/pgadmin4/8.7/pgagent.html \n\n> That's why we have developed the job scheduler that runs as a process inside the\n> database just like autovacuum.\n> \n> We can start to send the patch if this idea makes sense to the you.\n\nPerhaps your job scheduler is much better than all the existing ones.\nBut what would be a compelling reason to keep it in the PostgreSQL source tree?\nWith PostgreSQL's extensibility features, it should be possible to write your\njob scheduler as an extension and maintain it outside the PostgreSQL source.\n\nI am sure that the PostgreSQL community will be happy to use the extension\nif it is any good.I agree. This is an area in which there are lots of options at the moment, with compelling reasons to choose from various of them depending on your needs.It's this kind of choice that means it's unlikely we'd include any one option in PostgreSQL, much like various other tools such as failover managers or poolers. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Thu, 6 Jun 2024 09:59:30 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Job Scheduler" }, { "msg_contents": "Noted. Thanks for suggestions. We will open-source it as an extension.\r\n\r\n\r\n\r\nRegards,\r\nCheng\r\n\r\n\r\n\r\n&nbsp;\r\n\r\n\r\n\r\n\r\n------------------&nbsp;Original&nbsp;------------------\r\nFrom: \"Dave Page\" <[email protected]&gt;;\r\nDate:&nbsp;Thu, Jun 6, 2024 04:59 PM\r\nTo:&nbsp;\"Laurenz Albe\"<[email protected]&gt;;\r\nCc:&nbsp;\"Wang Cheng\"<[email protected]&gt;;\"pgsql-hackers\"<[email protected]&gt;;\r\nSubject:&nbsp;Re: Proposal: Job Scheduler\r\n\r\n\r\n\r\n\r\n\r\n\r\nOn Thu, 6 Jun 2024 at 09:47, Laurenz Albe <[email protected]&gt; wrote:\r\n\r\nOn Thu, 2024-06-06 at 16:27 +0800, Wang Cheng wrote:\r\n &gt; We are the PostgreSQL team in Tencent. We have recently developed a job scheduler\r\n &gt; that runs inside the database to schedules and manages jobs similar to Oracle\r\n &gt; DBMS_JOB package, and we would like to contribute this feature to the community.\r\n &gt; \r\n &gt; As far as we know, there are currently two open-sourced job scheduling extensions\r\n &gt; for PostgreSQL: pg_cron (https://github.com/citusdata/pg_cron/) and pg_dbms_job\r\n &gt; (https://github.com/MigOpsRepos/pg_dbms_job/tree/main). 
However, the cron-based\r\n &gt; syntax is not easy to use and suffers some limitations like one-off commands.\r\n &gt; The pg_dbms_job extension is difficult to manage and operate because it runs as\r\n &gt; a standalone process .\r\n \r\n There is also pg_timetable:\r\n https://github.com/cybertec-postgresql/pg_timetable\r\n\r\nAnd probably the oldest of them all, pgAgent:&nbsp;https://www.pgadmin.org/docs/pgadmin4/8.7/pgagent.html\r\n&nbsp;\r\n\r\n \r\n &gt; That's why we have developed the job scheduler that runs as a process inside the\r\n &gt; database just like autovacuum.\r\n &gt; \r\n &gt; We can start to send the patch if this idea makes sense to the you.\r\n \r\n Perhaps your job scheduler is much better than all the existing ones.\r\n But what would be a compelling reason to keep it in the PostgreSQL source tree?\r\n With PostgreSQL's extensibility features, it should be possible to write your\r\n job scheduler as an extension and maintain it outside the PostgreSQL source.\r\n \r\n I am sure that the PostgreSQL community will be happy to use the extension\r\n if it is any good.\r\n\r\n\r\nI agree. This is an area in which there are lots of options at the moment, with compelling reasons to choose from various of them depending on your needs.\r\n\r\n\r\nIt's this kind of choice that means it's unlikely we'd include any one option in PostgreSQL, much like various other tools such as failover managers or poolers.&nbsp;\r\n\r\n\r\n\r\n-- \r\nDave PagepgAdmin: https://www.pgadmin.org\r\nPostgreSQL: https://www.postgresql.org\r\nEDB:&nbsp;https://www.enterprisedb.com\nNoted. Thanks for suggestions. We will open-source it as an extension.Regards,Cheng ------------------ Original ------------------From: \"Dave Page\" <[email protected]>;Date: Thu, Jun 6, 2024 04:59 PMTo: \"Laurenz Albe\"<[email protected]>;Cc: \"Wang Cheng\"<[email protected]>;\"pgsql-hackers\"<[email protected]>;Subject: Re: Proposal: Job SchedulerOn Thu, 6 Jun 2024 at 09:47, Laurenz Albe <[email protected]> wrote:On Thu, 2024-06-06 at 16:27 +0800, Wang Cheng wrote:\n> We are the PostgreSQL team in Tencent. We have recently developed a job scheduler\n> that runs inside the database to schedules and manages jobs similar to Oracle\n> DBMS_JOB package, and we would like to contribute this feature to the community.\n> \n> As far as we know, there are currently two open-sourced job scheduling extensions\n> for PostgreSQL: pg_cron (https://github.com/citusdata/pg_cron/) and pg_dbms_job\n> (https://github.com/MigOpsRepos/pg_dbms_job/tree/main). 
However, the cron-based\n> syntax is not easy to use and suffers some limitations like one-off commands.\n> The pg_dbms_job extension is difficult to manage and operate because it runs as\n> a standalone process .\n\nThere is also pg_timetable:\nhttps://github.com/cybertec-postgresql/pg_timetableAnd probably the oldest of them all, pgAgent: https://www.pgadmin.org/docs/pgadmin4/8.7/pgagent.html \n\n> That's why we have developed the job scheduler that runs as a process inside the\n> database just like autovacuum.\n> \n> We can start to send the patch if this idea makes sense to the you.\n\nPerhaps your job scheduler is much better than all the existing ones.\nBut what would be a compelling reason to keep it in the PostgreSQL source tree?\nWith PostgreSQL's extensibility features, it should be possible to write your\njob scheduler as an extension and maintain it outside the PostgreSQL source.\n\nI am sure that the PostgreSQL community will be happy to use the extension\nif it is any good.I agree. This is an area in which there are lots of options at the moment, with compelling reasons to choose from various of them depending on your needs.It's this kind of choice that means it's unlikely we'd include any one option in PostgreSQL, much like various other tools such as failover managers or poolers. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Thu, 6 Jun 2024 17:04:42 +0800", "msg_from": "\"=?ISO-8859-1?B?V2FuZyBDaGVuZw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Job Scheduler" }, { "msg_contents": "On 6/6/2024 16:04, Wang Cheng wrote:\n> Noted. Thanks for suggestions. We will open-source it as an extension.\nIt would be nice! `For me doesn't matter where to contribute: to \nPostgreSQL core or to its extension if it is published under BSD license.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Thu, 6 Jun 2024 16:26:13 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Job Scheduler" }, { "msg_contents": "On 2024-Jun-06, Dave Page wrote:\n\n> It's this kind of choice that means it's unlikely we'd include any one\n> option in PostgreSQL, much like various other tools such as failover\n> managers or poolers.\n\nTBH I see that more as a bug than as a feature, and I see the fact that\nthere are so many schedulers as a process failure. If we could have\n_one_ scheduler in core that encompassed all the important features of\nall the independent ones we have, with hooks or whatever to allow the\nuser to add any fringe features they need, that would probably lead to\nless duplicative code and divergent UIs, and would be better for users\noverall.\n\nThat's, of course, just my personal opinion.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 6 Jun 2024 12:53:38 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Job Scheduler" }, { "msg_contents": "> On Thu, Jun 06, 2024 at 12:53:38PM GMT, Alvaro Herrera wrote:\n> On 2024-Jun-06, Dave Page wrote:\n>\n> > It's this kind of choice that means it's unlikely we'd include any one\n> > option in PostgreSQL, much like various other tools such as failover\n> > managers or poolers.\n>\n> TBH I see that more as a bug than as a feature, and I see the fact that\n> there are so many schedulers as a process failure. 
If we could have\n> _one_ scheduler in core that encompassed all the important features of\n> all the independent ones we have, with hooks or whatever to allow the\n> user to add any fringe features they need, that would probably lead to\n> less duplicative code and divergent UIs, and would be better for users\n> overall.\n>\n> That's, of course, just my personal opinion.\n\n+1. The PostgreSQL ecosystem is surprisingly fragmented, when it comes\nto quite essential components that happen to be outside of the core. But\nof course it doesn't mean that there should be _one_ component of every\nkind in core, more like it makes sense to have _one_ component available\nout of the box (where the box is whatever form of PostgreSQL that gets\ndelivered to users, e.g. a distro package, container, etc.).\n\n\n", "msg_date": "Thu, 6 Jun 2024 14:31:44 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Job Scheduler" } ]
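For anyone who wants to prototype the launcher-plus-worker design discussed in the thread above as an out-of-core extension, the standard background worker API already covers most of the plumbing. The skeleton below is only a hypothetical sketch, not the Tencent code or any patch from this thread: the library name job_scheduler, the 10-second poll interval, the hard-coded "postgres" database and the omitted metadata-table scan are all assumptions made for illustration.

#include "postgres.h"

#include "fmgr.h"
#include "miscadmin.h"
#include "postmaster/bgworker.h"
#include "storage/latch.h"
#include "utils/wait_event.h"	/* for PG_WAIT_EXTENSION; name varies a bit across versions */

PG_MODULE_MAGIC;

void		_PG_init(void);
PGDLLEXPORT void job_launcher_main(Datum main_arg);

void
_PG_init(void)
{
	BackgroundWorker worker;

	/* Must be loaded via shared_preload_libraries to register a static worker. */
	if (!process_shared_preload_libraries_in_progress)
		return;

	memset(&worker, 0, sizeof(worker));
	worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
		BGWORKER_BACKEND_DATABASE_CONNECTION;
	worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
	worker.bgw_restart_time = 10;	/* restart after 10s if it exits abnormally */
	snprintf(worker.bgw_library_name, BGW_MAXLEN, "job_scheduler");
	snprintf(worker.bgw_function_name, BGW_MAXLEN, "job_launcher_main");
	snprintf(worker.bgw_name, BGW_MAXLEN, "job scheduler launcher");
	snprintf(worker.bgw_type, BGW_MAXLEN, "job scheduler launcher");
	RegisterBackgroundWorker(&worker);
}

void
job_launcher_main(Datum main_arg)
{
	/* A real launcher would install SIGTERM/SIGHUP handlers before this. */
	BackgroundWorkerUnblockSignals();

	/* Connect so the launcher can scan its job metadata table (e.g. via SPI). */
	BackgroundWorkerInitializeConnection("postgres", NULL, 0);

	for (;;)
	{
		/*
		 * Scan the (hypothetical) job metadata table and launch per-job
		 * workers with RegisterDynamicBackgroundWorker() as needed; that
		 * part is the scheduler proper and is omitted here.
		 */

		(void) WaitLatch(MyLatch,
						 WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
						 10 * 1000L,	/* assumed 10s polling interval */
						 PG_WAIT_EXTENSION);
		ResetLatch(MyLatch);
	}
}

A launcher along these lines lives entirely outside the server source tree, which is the direction the thread converges on.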
[ { "msg_contents": "As part of commit 5cd72cc0c5017a9d4de8b5d465a75946da5abd1d, the\ndependency on global counters such as VacuumPage(Hit/Miss/Dirty) was\nremoved from the vacuum. However, do_analyze_rel() was still using\nthese counters, necessitating the tracking of global counters\nalongside BufferUsage counters.\n\nThe attached patch addresses the issue by eliminating the need to\ntrack VacuumPage(Hit/Miss/Dirty) counters in do_analyze_rel(), making\nthe global counters obsolete. This simplifies the code and improves\nconsistency.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 6 Jun 2024 14:40:05 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Remove dependency on VacuumPage(Hit/Miss/Dirty) counters in\n do_analyze_rel" }, { "msg_contents": "Hi,\n\nI sent a similar patch for this in\nhttps://www.postgresql.org/message-id/flat/CAO6_Xqr__kTTCLkftqS0qSCm-J7_xbRG3Ge2rWhucxQJMJhcRA@mail.gmail.com\n\nRegards,\nAnthonin\n\nOn Thu, Jun 6, 2024 at 11:10 AM Dilip Kumar <[email protected]> wrote:\n\n> As part of commit 5cd72cc0c5017a9d4de8b5d465a75946da5abd1d, the\n> dependency on global counters such as VacuumPage(Hit/Miss/Dirty) was\n> removed from the vacuum. However, do_analyze_rel() was still using\n> these counters, necessitating the tracking of global counters\n> alongside BufferUsage counters.\n>\n> The attached patch addresses the issue by eliminating the need to\n> track VacuumPage(Hit/Miss/Dirty) counters in do_analyze_rel(), making\n> the global counters obsolete. This simplifies the code and improves\n> consistency.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nHi, I sent a similar patch for this in https://www.postgresql.org/message-id/flat/CAO6_Xqr__kTTCLkftqS0qSCm-J7_xbRG3Ge2rWhucxQJMJhcRA@mail.gmail.comRegards,AnthoninOn Thu, Jun 6, 2024 at 11:10 AM Dilip Kumar <[email protected]> wrote:As part of commit 5cd72cc0c5017a9d4de8b5d465a75946da5abd1d, the\ndependency on global counters such as VacuumPage(Hit/Miss/Dirty) was\nremoved from the vacuum. However, do_analyze_rel() was still using\nthese counters, necessitating the tracking of global counters\nalongside BufferUsage counters.\n\nThe attached patch addresses the issue by eliminating the need to\ntrack VacuumPage(Hit/Miss/Dirty) counters in do_analyze_rel(), making\nthe global counters obsolete. This simplifies the code and improves\nconsistency.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 6 Jun 2024 11:52:59 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependency on VacuumPage(Hit/Miss/Dirty) counters in\n do_analyze_rel" }, { "msg_contents": "On Thu, Jun 6, 2024 at 3:23 PM Anthonin Bonnefoy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I sent a similar patch for this in https://www.postgresql.org/message-id/flat/CAO6_Xqr__kTTCLkftqS0qSCm-J7_xbRG3Ge2rWhucxQJMJhcRA@mail.gmail.com\n\nOkay, I see, In that case, we can just discard mine, thanks for notifying me.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 15:51:48 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependency on VacuumPage(Hit/Miss/Dirty) counters in\n do_analyze_rel" } ]
[ { "msg_contents": "Hi,\n\nAlvaro reported off-list that the following should really fail,\nbecause the jsonpath expression refers to a PASSING variable that\ndoesn't exist:\n\nselect json_query('\"1\"', jsonpath '$xy' passing 2 AS xyz);\n json_query\n------------\n 2\n(1 row)\n\nThis works because of a bug in GetJsonPathVar() whereby it allows a\njsonpath expression to reference any prefix of the PASSING variable\nnames.\n\nAttached is a patch to fix that.\n\nThanks Alvaro for the report.\n\n-- \nThanks, Amit Langote", "msg_date": "Thu, 6 Jun 2024 18:20:07 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": true, "msg_subject": "Incorrect matching of sql/json PASSING variable names" }, { "msg_contents": "On Thu, Jun 6, 2024 at 6:20 PM Amit Langote <[email protected]> wrote:\n>\n> Hi,\n>\n> Alvaro reported off-list that the following should really fail,\n> because the jsonpath expression refers to a PASSING variable that\n> doesn't exist:\n>\n> select json_query('\"1\"', jsonpath '$xy' passing 2 AS xyz);\n> json_query\n> ------------\n> 2\n> (1 row)\n>\n> This works because of a bug in GetJsonPathVar() whereby it allows a\n> jsonpath expression to reference any prefix of the PASSING variable\n> names.\n>\n> Attached is a patch to fix that.\n\nHere's an updated version that I'll push tomorrow.\n\n-- \nThanks, Amit Langote", "msg_date": "Thu, 13 Jun 2024 17:04:29 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect matching of sql/json PASSING variable names" }, { "msg_contents": "On Thu, Jun 13, 2024 at 5:04 PM Amit Langote <[email protected]> wrote:\n> On Thu, Jun 6, 2024 at 6:20 PM Amit Langote <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Alvaro reported off-list that the following should really fail,\n> > because the jsonpath expression refers to a PASSING variable that\n> > doesn't exist:\n> >\n> > select json_query('\"1\"', jsonpath '$xy' passing 2 AS xyz);\n> > json_query\n> > ------------\n> > 2\n> > (1 row)\n> >\n> > This works because of a bug in GetJsonPathVar() whereby it allows a\n> > jsonpath expression to reference any prefix of the PASSING variable\n> > names.\n> >\n> > Attached is a patch to fix that.\n>\n> Here's an updated version that I'll push tomorrow.\n\nPushed.\n\n(Seems like pgsql-committers notification has stalled.)\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Wed, 19 Jun 2024 15:52:12 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect matching of sql/json PASSING variable names" } ]
[ { "msg_contents": "Hello hackers,\n\nI tried to investigate a recent buildfarm test failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-06-04%2003%3A27%3A47\n\n  29/295 postgresql:recovery / recovery/026_overwrite_contrecord ERROR            39.55s   exit status 32\n\nlog/026_overwrite_contrecord_standby.log\nTRAP: failed Assert(\"ItemIdIsNormal(lpp)\"), File: \"../pgsql/src/backend/access/heap/heapam.c\", Line: 1002, PID: 3740958\npostgres: standby: bf postgres [local] startup(ExceptionalCondition+0x81)[0x56c60bf9]\npostgres: standby: bf postgres [local] startup(+0xf776e)[0x5667276e]\npostgres: standby: bf postgres [local] startup(heap_getnextslot+0x40)[0x56672ee1]\npostgres: standby: bf postgres [local] startup(+0x11c218)[0x56697218]\npostgres: standby: bf postgres [local] startup(systable_getnext+0xfa)[0x56697c1a]\npostgres: standby: bf postgres [local] startup(+0x6d29c7)[0x56c4d9c7]\npostgres: standby: bf postgres [local] startup(+0x6d372c)[0x56c4e72c]\npostgres: standby: bf postgres [local] startup(+0x6d8288)[0x56c53288]\npostgres: standby: bf postgres [local] startup(RelationCacheInitializePhase3+0x149)[0x56c52d71]\n\n(It's not the only failure of that ilk in the buildfarm.)\n\nand managed to reproduce the failure by running many\n026_overwrite_contrecord tests in parallel (with fsync=on).\n\nAnalyzing the core dump added some info:\n...\n#3  0x0000000000bb43cc in ExceptionalCondition (conditionName=0xc45c77 \"ItemIdIsNormal(lpp)\",\n     fileName=0xc45aa8 \"heapam.c\", lineNumber=1002) at assert.c:66\n#4  0x00000000004f7f13 in heapgettup_pagemode (scan=0x19f5660, dir=ForwardScanDirection, nkeys=2, key=0x19f61d0)\n     at heapam.c:1002\n#5  0x00000000004f86d1 in heap_getnextslot (sscan=0x19f5660, direction=ForwardScanDirection, slot=0x19f5da0)\n     at heapam.c:1307\n#6  0x000000000051d028 in table_scan_getnextslot (sscan=0x19f5660, direction=ForwardScanDirection, slot=0x19f5da0)\n     at ../../../../src/include/access/tableam.h:1081\n#7  0x000000000051da80 in systable_getnext (sysscan=0x19f5470) at genam.c:530\n#8  0x0000000000ba0937 in RelationBuildTupleDesc (relation=0x7fa004feea88) at relcache.c:572\n#9  0x0000000000ba17b9 in RelationBuildDesc (targetRelId=2679, insertIt=true) at relcache.c:1184\n#10 0x0000000000ba6520 in load_critical_index (indexoid=2679, heapoid=2610) at relcache.c:4353\n#11 0x0000000000ba607d in RelationCacheInitializePhase3 () at relcache.c:4132\n#12 0x0000000000bcb704 in InitPostgres (in_dbname=0x196ca30 \"postgres\", dboid=5, username=0x19a91b8 \"law\", useroid=0,\n     flags=1, out_dbname=0x0) at postinit.c:1193\n...\n(gdb) frame 4\n(gdb) p lpp->lp_flags\n$2 = 1\n(gdb) p ItemIdIsNormal(lpp)\n$12 = 1\n\nSo it looks like the Assert had failed when lpp->lp_flags had some other\ncontents...\n\nI added the following debugging code:\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -995,10 +995,14 @@ continue_page:\n                 for (; linesleft > 0; linesleft--, lineindex += dir)\n                 {\n                         ItemId          lpp;\n+                       ItemIdData      iid;\n                         OffsetNumber lineoff;\n\n                         lineoff = scan->rs_vistuples[lineindex];\n                         lpp = PageGetItemId(page, lineoff);\n+                       iid = *((ItemIdData *)lpp);\n+\n+                       Assert(ItemIdIsNormal(&iid));\n                         Assert(ItemIdIsNormal(lpp));\n\nand got:\n...\n#2  0x000055b68dc6998c in ExceptionalCondition 
(conditionName=0x55b68dcfe5f7 \"ItemIdIsNormal(&iid)\",\n     fileName=0x55b68dcfe428 \"heapam.c\", lineNumber=1010) at assert.c:66\n#3  0x000055b68d588a78 in heapgettup_pagemode (scan=0x55b68f0905e0, dir=ForwardScanDirection, nkeys=2,\n     key=0x55b68f091150) at heapam.c:1010\n#4  0x000055b68d58930e in heap_getnextslot (sscan=0x55b68f0905e0, direction=ForwardScanDirection, slot=0x55b68f090d20)\n     at heapam.c:1322\n...\n(gdb) frame 3\n#3  0x000055b68d588a78 in heapgettup_pagemode (...) at heapam.c:1010\n1010                            Assert(ItemIdIsNormal(&iid));\n(gdb) info locals\nlpp = 0x7f615c34b0ec\niid = {lp_off = 0, lp_flags = 0, lp_len = 0}\nlineoff = 54\ntuple = 0x55b68f090638\npage = 0x7f615c34b000 \"\"\n\n(gdb) p *lpp\n$1 = {lp_off = 3160, lp_flags = 1, lp_len = 136}\n\nIt seemingly confirms that the underlying memory was changed while being\nprocessed in heapgettup_pagemode().\n\nI've tried to add checks for the page buffer content as below:\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -953,11 +953,15 @@ heapgettup_pagemode(HeapScanDesc scan,\n      Page        page;\n      int         lineindex;\n      int         linesleft;\n+char    page_copy[BLCKSZ];\n\n      if (likely(scan->rs_inited))\n      {\n          /* continue from previously returned page/tuple */\n          page = BufferGetPage(scan->rs_cbuf);\n+memcpy(page_copy, page, BLCKSZ);\n+for (int i = 0; i < 100; i++)\n+Assert(memcmp(page_copy, page, BLCKSZ) == 0);\n\n          lineindex = scan->rs_cindex + dir;\n          if (ScanDirectionIsForward(dir))\n@@ -986,6 +990,10 @@ heapgettup_pagemode(HeapScanDesc scan,\n          /* prune the page and determine visible tuple offsets */\n          heap_prepare_pagescan((TableScanDesc) scan);\n          page = BufferGetPage(scan->rs_cbuf);\n+memcpy(page_copy, page, BLCKSZ);\n+for (int i = 0; i < 100; i++)\n+Assert(memcmp(page_copy, page, BLCKSZ) == 0);\n+\n          linesleft = scan->rs_ntuples;\n          lineindex = ScanDirectionIsForward(dir) ? 0 : linesleft - 1;\n\nand got the assertion failures even during `make check`:\n...\n#5  0x00005577f29e0bc4 in ExceptionalCondition (\n     conditionName=conditionName@entry=0x5577f2a4a5d0 \"memcmp(page_copy, page, BLCKSZ) == 0\",\n     fileName=fileName@entry=0x5577f2a4aa38 \"heapam.c\", lineNumber=lineNumber@entry=966) at assert.c:66\n#6  0x00005577f24faa68 in heapgettup_pagemode (scan=scan@entry=0x5577f46574e8, dir=ForwardScanDirection, nkeys=0,\n     key=0x0) at heapam.c:966\n...\n(gdb) frame 6\n#6  0x00005577f24faa68 in heapgettup_pagemode (...) at heapam.c:966\n966     Assert(memcmp(page_copy, page, BLCKSZ) == 0);\n(gdb) p i\n$1 = 25\n\nAm I missing something or the the page buffer indeed lacks locking there?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 6 Jun 2024 13:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Assert in heapgettup_pagemode() fails due to underlying buffer change" }, { "msg_contents": "On Thu, Jun 6, 2024 at 6:00 AM Alexander Lakhin <[email protected]> wrote:\n> Am I missing something or the the page buffer indeed lacks locking there?\n\nI don't know, but if the locks are really missing now, I feel like the\nfirst question is \"which commit got rid of them?\". It's a little hard\nto believe that they've never been there and somehow nobody has\nnoticed.\n\nThen again, maybe we have; see Noah's thread about in-place updates\nbreaking stuff and some of the surprising discoveries there. 
But it\nseems worth investigating.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 12:36:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "Hello Robert,\n\n06.06.2024 19:36, Robert Haas wrote:\n> On Thu, Jun 6, 2024 at 6:00 AM Alexander Lakhin <[email protected]> wrote:\n>> Am I missing something or the the page buffer indeed lacks locking there?\n> I don't know, but if the locks are really missing now, I feel like the\n> first question is \"which commit got rid of them?\". It's a little hard\n> to believe that they've never been there and somehow nobody has\n> noticed.\n>\n> Then again, maybe we have; see Noah's thread about in-place updates\n> breaking stuff and some of the surprising discoveries there. But it\n> seems worth investigating.\n\nYes, my last experiment with memcmp for the whole buffer was wrong,\ngiven the comment above heapgettup_pagemode(). I think the correct\ncheck would be:\n              ItemId      lpp;\n              OffsetNumber lineoff;\n+ItemIdData      iid;\n\n              lineoff = scan->rs_vistuples[lineindex];\n              lpp = PageGetItemId(page, lineoff);\n+iid = *((ItemIdData *)lpp);\n+for (int i = 0; i < 1000; i++)\n+Assert(memcmp(&iid, lpp, sizeof(iid)) == 0);\n\nIt significantly alleviates reproducing of the test failure for me.\nWill try to bisect this anomaly tomorrow.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 6 Jun 2024 22:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "On Thu, Jun 06, 2024 at 12:36:32PM -0400, Robert Haas wrote:\n> On Thu, Jun 6, 2024 at 6:00 AM Alexander Lakhin <[email protected]> wrote:\n> > Am I missing something or the the page buffer indeed lacks locking there?\n> \n> I don't know, but if the locks are really missing now, I feel like the\n> first question is \"which commit got rid of them?\". It's a little hard\n> to believe that they've never been there and somehow nobody has\n> noticed.\n> \n> Then again, maybe we have; see Noah's thread about in-place updates\n> breaking stuff and some of the surprising discoveries there. But it\n> seems worth investigating.\n\n$SUBJECT looks more like a duplicate of\npostgr.es/m/flat/[email protected] (Hot standby queries see\ntransient all-zeros pages).\n\n\n", "msg_date": "Thu, 6 Jun 2024 12:07:02 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "Hello Noah,\n\n06.06.2024 22:07, Noah Misch wrote:\n>\n>> I don't know, but if the locks are really missing now, I feel like the\n>> first question is \"which commit got rid of them?\". It's a little hard\n>> to believe that they've never been there and somehow nobody has\n>> noticed.\n>>\n>> Then again, maybe we have; see Noah's thread about in-place updates\n>> breaking stuff and some of the surprising discoveries there. But it\n>> seems worth investigating.\n> $SUBJECT looks more like a duplicate of\n> postgr.es/m/flat/[email protected] (Hot standby queries see\n> transient all-zeros pages).\n\nThank you for the reference! Yes, it looks very similar. 
Though I can't\nsay the sleep you proposed helps the failure reproduction (I've tried\n026_overwrite_contrecord.pl and saw no more frequent failures or so).\n\nMy bisect run ended with:\n210622c60e1a9db2e2730140b8106ab57d259d15 is the first bad commit\n\nAuthor: Thomas Munro <[email protected]>\nDate:   Wed Apr 3 00:03:08 2024 +1300\n\n     Provide vectored variant of ReadBuffer().\n\nOther buildfarm failures with this Assert I could find kind of confirm this:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-04-03%2003%3A32%3A18\n(presumably a first failure of this sort)\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-04-04%2015%3A38%3A16\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2024-05-07%2004%3A00%3A08\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 7 Jun 2024 06:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "On Fri, Jun 7, 2024 at 3:00 PM Alexander Lakhin <[email protected]> wrote:\n> My bisect run ended with:\n> 210622c60e1a9db2e2730140b8106ab57d259d15 is the first bad commit\n>\n> Author: Thomas Munro <[email protected]>\n> Date: Wed Apr 3 00:03:08 2024 +1300\n>\n> Provide vectored variant of ReadBuffer().\n>\n> Other buildfarm failures with this Assert I could find kind of confirm this:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-04-03%2003%3A32%3A18\n> (presumably a first failure of this sort)\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-04-04%2015%3A38%3A16\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2024-05-07%2004%3A00%3A08\n\nLooking...\n\n\n", "msg_date": "Fri, 7 Jun 2024 15:06:20 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "On Fri, Jun 7, 2024 at 3:06 PM Thomas Munro <[email protected]> wrote:\n> On Fri, Jun 7, 2024 at 3:00 PM Alexander Lakhin <[email protected]> wrote:\n> > My bisect run ended with:\n> > 210622c60e1a9db2e2730140b8106ab57d259d15 is the first bad commit\n> >\n> > Author: Thomas Munro <[email protected]>\n> > Date: Wed Apr 3 00:03:08 2024 +1300\n> >\n> > Provide vectored variant of ReadBuffer().\n> >\n> > Other buildfarm failures with this Assert I could find kind of confirm this:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-04-03%2003%3A32%3A18\n> > (presumably a first failure of this sort)\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-04-04%2015%3A38%3A16\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2024-05-07%2004%3A00%3A08\n>\n> Looking...\n\nWhat Noah described[1] is what should be happening already, I think,\nbut 210622c6 unconditionally zeroed the page. Oops. The attached\nseems to cure his repro for me. Does it also cure your test? I\ncouldn't see that variant myself for some reason, but it seems to make\nsense as the explanation. 
I would probably adjust the function name\nor perhaps consider refactoring slightly, but first let's confirm that\nthis is the same issue and fix.\n\n[1] https://www.postgresql.org/message-id/flat/[email protected]", "msg_date": "Fri, 7 Jun 2024 18:06:14 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "On 2024-Jun-07, Thomas Munro wrote:\n\n> static void\n> -ZeroBuffer(Buffer buffer, ReadBufferMode mode)\n> +ZeroBuffer(Buffer buffer, ReadBufferMode mode, bool zero)\n\nThis change makes the API very strange. Should the function be called\nZeroAndLockBuffer() instead? Then the addition of a \"bool zero\"\nargument makes a lot more sense.\n\nIn passing, I noticed that WaitReadBuffers has zero comments, which\nseems an insufficient number of them.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 7 Jun 2024 10:05:43 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "Hello Thomas,\n\n07.06.2024 09:06, Thomas Munro wrote:\n> On Fri, Jun 7, 2024 at 3:06 PM Thomas Munro <[email protected]> wrote:\n>> On Fri, Jun 7, 2024 at 3:00 PM Alexander Lakhin <[email protected]> wrote:\n>>> My bisect run ended with:\n>>> 210622c60e1a9db2e2730140b8106ab57d259d15 is the first bad commit\n>>>\n>>> Author: Thomas Munro <[email protected]>\n>>> Date: Wed Apr 3 00:03:08 2024 +1300\n>>>\n>>> Provide vectored variant of ReadBuffer().\n>>>\n>>> Other buildfarm failures with this Assert I could find kind of confirm this:\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-04-03%2003%3A32%3A18\n>>> (presumably a first failure of this sort)\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-04-04%2015%3A38%3A16\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2024-05-07%2004%3A00%3A08\n>> Looking...\n> What Noah described[1] is what should be happening already, I think,\n> but 210622c6 unconditionally zeroed the page. Oops. The attached\n> seems to cure his repro for me. Does it also cure your test? I\n> couldn't see that variant myself for some reason, but it seems to make\n> sense as the explanation. I would probably adjust the function name\n> or perhaps consider refactoring slightly, but first let's confirm that\n> this is the same issue and fix.\n\nThank you for looking and for the fix!\n\nUsing the same testing procedure (applying patch for checking lpp,\nmultiplying 026_overwrite_contrecord.pl tests and running 30 tests in\nparallel, with fsync=on) which I used for bisecting, I got failures on\niterations 8, 19, 4 without the fix, but with the fix applied, 125\niterations passed. I think The Cure is sound.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 7 Jun 2024 12:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "On Fri, Jun 7, 2024 at 4:05 AM Alvaro Herrera <[email protected]> wrote:\n> > static void\n> > -ZeroBuffer(Buffer buffer, ReadBufferMode mode)\n> > +ZeroBuffer(Buffer buffer, ReadBufferMode mode, bool zero)\n>\n> This change makes the API very strange. Should the function be called\n> ZeroAndLockBuffer() instead? 
Then the addition of a \"bool zero\"\n> argument makes a lot more sense.\n\nI agree that's better, but it still looks a bit weird. You have to\nrealize that 'bool zero' means 'is already zeroed' here -- or at\nleast, I guess that's the intention. But then I wonder why you'd call\na function called ZeroAndLockBuffer if all you need to do is\nLockBuffer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 08:46:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "On Sat, Jun 8, 2024 at 12:47 AM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Jun 7, 2024 at 4:05 AM Alvaro Herrera <[email protected]> wrote:\n> > > static void\n> > > -ZeroBuffer(Buffer buffer, ReadBufferMode mode)\n> > > +ZeroBuffer(Buffer buffer, ReadBufferMode mode, bool zero)\n> >\n> > This change makes the API very strange. Should the function be called\n> > ZeroAndLockBuffer() instead? Then the addition of a \"bool zero\"\n> > argument makes a lot more sense.\n>\n> I agree that's better, but it still looks a bit weird. You have to\n> realize that 'bool zero' means 'is already zeroed' here -- or at\n> least, I guess that's the intention. But then I wonder why you'd call\n> a function called ZeroAndLockBuffer if all you need to do is\n> LockBuffer.\n\nThe name weirdness comes directly from RBM_ZERO_AND_LOCK (the fact\nthat it doesn't always zero despite shouting ZERO is probably what\ntemporarily confused me). But coming up with a better name is hard\nand I certainly don't propose to change it now. I think it's\nreasonable for this internal helper function to have that matching\nname as Alvaro suggested, with a good comment about that.\n\nEven though that quick-demonstration change fixed the two reported\nrepros, I think it is still probably racy (or if it isn't, it relies\non higher level interlocking that I don't want to rely on). This case\nreally should be using the standard StartBufferIO/TerminateBufferIO\ninfrastructure as it was before. I had moved that around to deal with\nmulti-block I/O, but dropped the ball on the zero case... sorry.\n\nHere's a version like that. The \"zero\" argument (yeah that was not a\ngood name) is now inverted and called \"already_valid\", but it's only a\nsort of optimisation for the case where we already know for sure that\nit's valid. If it isn't, we do the standard\nBM_IO_IN_PROGRESS/BM_VALID/CV dance, for correct interaction with any\nconcurrent read or zero operation.", "msg_date": "Sat, 8 Jun 2024 10:36:45 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "New version. Same code as v2, but comments improved to explain the\nreasoning, with reference to README's buffer access rules. I'm\nplanning to push this soon if there are no objections.", "msg_date": "Sun, 9 Jun 2024 09:42:44 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "On Fri, Jun 7, 2024 at 8:05 PM Alvaro Herrera <[email protected]> wrote:\n> In passing, I noticed that WaitReadBuffers has zero comments, which\n> seems an insufficient number of them.\n\nAck. Here is a patch for that. 
I guess I hadn't put a comment there\nbecause it's hard to write anything without sort of duplicating what\nis already said by the StartReadBuffers() comment and doubling down on\ndescriptions of future plans... but, in for a penny, in for a pound as\nthey say.", "msg_date": "Sun, 9 Jun 2024 10:01:08 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" }, { "msg_contents": "I adjusted the code a bit more to look like the 16 coding including\nrestoring some very useful comments that had been lost, and pushed.\n\nThanks very much to Alexander and Noah for (independently) chasing\nthis down and reporting, testing etc, and Alvaro and Robert for review\ncomments.\n\n(Passing thought: it's a bit weird that we need to zero pages at all\nbefore restoring FPIs or initialising them. Perhaps there is some way\nto defer marking the buffer valid until after the caller gets a chance\nto initialise? Or something like that...)\n\n\n", "msg_date": "Mon, 10 Jun 2024 14:30:16 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert in heapgettup_pagemode() fails due to underlying buffer\n change" } ]
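From a caller's point of view, the contract that the fix restores is: with RBM_ZERO_AND_LOCK, a block that is not yet in shared buffers comes back zero-filled and exclusively locked, while a block that is already valid is returned locked but otherwise untouched, so readers holding only a pin (as heapgettup_pagemode() does) never see its line pointers rewritten underneath them. The snippet below is a generic, hypothetical usage sketch of that contract, not code from the commit; rel and blkno are assumed to be set up by the caller.

	Buffer		buf;
	Page		page;

	/* Get the block zeroed-if-new and exclusively locked in one call. */
	buf = ReadBufferExtended(rel, MAIN_FORKNUM, blkno,
							 RBM_ZERO_AND_LOCK, NULL);
	page = BufferGetPage(buf);

	if (PageIsNew(page))
	{
		/*
		 * We are the first to touch this block: initialize it before
		 * releasing the lock, so nobody ever sees a zeroed-but-valid page.
		 */
		PageInit(page, BufferGetPageSize(buf), 0);
		MarkBufferDirty(buf);
	}

	UnlockReleaseBuffer(buf);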
[ { "msg_contents": "Hi,\n\nWhen the content of a large transaction (size exceeding\nlogical_decoding_work_mem) and its sub-transactions has to be\nreordered during logical decoding, then, all the changes are written\non disk in temporary files located in pg_replslot/<slot_name>.\nDecoding very large transactions by multiple replication slots can\nlead to disk space saturation and high I/O utilization.\n\nWhen compiled with LZ4 support (--with-lz4), this patch enables data\ncompression/decompression of these temporary files. Each transaction\nchange that must be written on disk (ReorderBufferDiskChange) is now\ncompressed and encapsulated in a new structure.\n\n3 different compression strategies are implemented:\n\n1. LZ4 streaming compression is the preferred one and works\n efficiently for small individual changes.\n2. LZ4 regular compression when the changes are too large for using\n the streaming API.\n3. No compression when compression fails, the change is then stored\n not compressed.\n\nWhen not using compression, the following case generates 1590MB of\nspill files:\n\n CREATE TABLE t (i INTEGER PRIMARY KEY, t TEXT);\n INSERT INTO t\n SELECT i, 'Hello number n°'||i::TEXT\n FROM generate_series(1, 10000000) as i;\n\nWith LZ4 compression, it creates 653MB of spill files: 58.9% less\ndisk space usage.\n\nOpen items:\n\n1. The spill_bytes column from pg_stat_get_replication_slot() still returns\nplain data size, not the compressed data size. Should we expose the\ncompressed data size when compression occurs?\n\n2. Do we want a GUC to switch compression on/off?\n\nRegards,\n\nJT", "msg_date": "Thu, 6 Jun 2024 03:58:12 -0700", "msg_from": "Julien Tachoires <[email protected]>", "msg_from_op": true, "msg_subject": "Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires <[email protected]> wrote:\n>\n> When the content of a large transaction (size exceeding\n> logical_decoding_work_mem) and its sub-transactions has to be\n> reordered during logical decoding, then, all the changes are written\n> on disk in temporary files located in pg_replslot/<slot_name>.\n> Decoding very large transactions by multiple replication slots can\n> lead to disk space saturation and high I/O utilization.\n>\n\nWhy can't one use 'streaming' option to send changes to the client\nonce it reaches the configured limit of 'logical_decoding_work_mem'?\n\n>\n> 2. Do we want a GUC to switch compression on/off?\n>\n\nIt depends on the overhead of decoding. 
Did you try to measure the\ndecoding overhead of decompression when reading compressed files?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Jun 2024 16:43:24 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On Thu, Jun 6, 2024 at 4:43 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires <[email protected]> wrote:\n> >\n> > When the content of a large transaction (size exceeding\n> > logical_decoding_work_mem) and its sub-transactions has to be\n> > reordered during logical decoding, then, all the changes are written\n> > on disk in temporary files located in pg_replslot/<slot_name>.\n> > Decoding very large transactions by multiple replication slots can\n> > lead to disk space saturation and high I/O utilization.\n> >\n>\n> Why can't one use 'streaming' option to send changes to the client\n> once it reaches the configured limit of 'logical_decoding_work_mem'?\n>\n> >\n> > 2. Do we want a GUC to switch compression on/off?\n> >\n>\n> It depends on the overhead of decoding. Did you try to measure the\n> decoding overhead of decompression when reading compressed files?\n\nI think it depends on the trade-off between the I/O savings from\nreducing the data size and the performance cost of compressing and\ndecompressing the data. This balance is highly dependent on the\nhardware. For example, if you have a very slow disk and a powerful\nprocessor, compression could be advantageous. Conversely, if the disk\nis very fast, the I/O savings might be minimal, and the compression\noverhead could outweigh the benefits. Additionally, the effectiveness\nof compression also depends on the compression ratio, which varies\nwith the type of data being compressed.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 17:28:52 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "Le jeu. 6 juin 2024 à 04:13, Amit Kapila <[email protected]> a écrit :\n>\n> On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires <[email protected]> wrote:\n> >\n> > When the content of a large transaction (size exceeding\n> > logical_decoding_work_mem) and its sub-transactions has to be\n> > reordered during logical decoding, then, all the changes are written\n> > on disk in temporary files located in pg_replslot/<slot_name>.\n> > Decoding very large transactions by multiple replication slots can\n> > lead to disk space saturation and high I/O utilization.\n> >\n>\n> Why can't one use 'streaming' option to send changes to the client\n> once it reaches the configured limit of 'logical_decoding_work_mem'?\n\nThat's right, setting subscription's option 'streaming' to 'on' moves\nthe problem away from the publisher to the subscribers. This patch\ntries to improve the default situation when 'streaming' is set to\n'off'.\n\n> > 2. Do we want a GUC to switch compression on/off?\n> >\n>\n> It depends on the overhead of decoding. 
Did you try to measure the\n> decoding overhead of decompression when reading compressed files?\n\nQuick benchmarking executed on my laptop shows 1% overhead.\n\nTable DDL:\nCREATE TABLE t (i INTEGER PRIMARY KEY, t TEXT);\n\nData generated with:\nINSERT INTO t SELECT i, 'Text number n°'||i::TEXT FROM\ngenerate_series(1, 10000000) as i;\n\nRestoration duration measured using timestamps of log messages:\n\"DEBUG: restored XXXX/YYYY changes from disk\"\n\nHEAD: 25.54s, 25.94s, 25.516s, 26.267s, 26.11s / avg=25.874s\nPatch: 26.872s, 26.311s, 25.753s, 26.003, 25.843s / avg=26.156s\n\nRegards,\n\nJT\n\n\n", "msg_date": "Thu, 6 Jun 2024 05:51:56 -0700", "msg_from": "Julien Tachoires <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On Thu, Jun 6, 2024 at 6:22 PM Julien Tachoires <[email protected]> wrote:\n>\n> Le jeu. 6 juin 2024 à 04:13, Amit Kapila <[email protected]> a écrit :\n> >\n> > On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires <[email protected]> wrote:\n> > >\n> > > When the content of a large transaction (size exceeding\n> > > logical_decoding_work_mem) and its sub-transactions has to be\n> > > reordered during logical decoding, then, all the changes are written\n> > > on disk in temporary files located in pg_replslot/<slot_name>.\n> > > Decoding very large transactions by multiple replication slots can\n> > > lead to disk space saturation and high I/O utilization.\n> > >\n> >\n> > Why can't one use 'streaming' option to send changes to the client\n> > once it reaches the configured limit of 'logical_decoding_work_mem'?\n>\n> That's right, setting subscription's option 'streaming' to 'on' moves\n> the problem away from the publisher to the subscribers. This patch\n> tries to improve the default situation when 'streaming' is set to\n> 'off'.\n>\n\nCan we think of changing the default to 'parallel'? BTW, it would be\nbetter to use 'parallel' for the 'streaming' option, if the workload\nhas large transactions. Is there a reason to use a default value in\nthis case?\n\n> > > 2. Do we want a GUC to switch compression on/off?\n> > >\n> >\n> > It depends on the overhead of decoding. Did you try to measure the\n> > decoding overhead of decompression when reading compressed files?\n>\n> Quick benchmarking executed on my laptop shows 1% overhead.\n>\n\nThanks. We probably need different types of data (say random data in\nbytea column, etc.) 
for this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Jun 2024 19:10:46 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On 2024-Jun-06, Amit Kapila wrote:\n\n> On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires <[email protected]> wrote:\n> >\n> > When the content of a large transaction (size exceeding\n> > logical_decoding_work_mem) and its sub-transactions has to be\n> > reordered during logical decoding, then, all the changes are written\n> > on disk in temporary files located in pg_replslot/<slot_name>.\n> > Decoding very large transactions by multiple replication slots can\n> > lead to disk space saturation and high I/O utilization.\n\nI like the general idea of compressing the output of logical decoding.\nIt's not so clear to me that we only want to do so for spilling to disk;\nfor instance, if the two nodes communicate over a slow network, it may\neven be beneficial to compress when streaming, so to this question:\n\n> Why can't one use 'streaming' option to send changes to the client\n> once it reaches the configured limit of 'logical_decoding_work_mem'?\n\nI would say that streaming doesn't necessarily have to mean we don't\nwant compression, because for some users it might be beneficial.\n\nI think a GUC would be a good idea. Also, what if for whatever reason\nyou want a different compression algorithm or different compression\nparameters? Looking at the existing compression UI we offer in\npg_basebackup, perhaps you could add something like this:\n\ncompress_logical_decoding = none\ncompress_logical_decoding = lz4:42\ncompress_logical_decoding = spill-zstd:99\n\n\"none\" says to never use compression (perhaps should be the default),\n\"lz4:42\" says to use lz4 with parameters 42 on both spilling and\nstreaming, and \"spill-zstd:99\" says to use Zstd with parameter 99 but\nonly for spilling to disk.\n\n(I don't mean to say that you should implement Zstd compression with\nthis patch, only that you should choose the implementation so that\nadding Zstd support (or whatever) later is just a matter of adding some\nbranches here and there. With the current #ifdef you propose, it's hard\nto do that. Maybe separate the parts that depend on the specific\nalgorithm to algorithm-agnostic functions.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 6 Jun 2024 16:24:07 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "Le jeu. 6 juin 2024 à 06:40, Amit Kapila <[email protected]> a écrit :\n>\n> On Thu, Jun 6, 2024 at 6:22 PM Julien Tachoires <[email protected]> wrote:\n> >\n> > Le jeu. 
6 juin 2024 à 04:13, Amit Kapila <[email protected]> a écrit :\n> > >\n> > > On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires <[email protected]> wrote:\n> > > >\n> > > > When the content of a large transaction (size exceeding\n> > > > logical_decoding_work_mem) and its sub-transactions has to be\n> > > > reordered during logical decoding, then, all the changes are written\n> > > > on disk in temporary files located in pg_replslot/<slot_name>.\n> > > > Decoding very large transactions by multiple replication slots can\n> > > > lead to disk space saturation and high I/O utilization.\n> > > >\n> > >\n> > > Why can't one use 'streaming' option to send changes to the client\n> > > once it reaches the configured limit of 'logical_decoding_work_mem'?\n> >\n> > That's right, setting subscription's option 'streaming' to 'on' moves\n> > the problem away from the publisher to the subscribers. This patch\n> > tries to improve the default situation when 'streaming' is set to\n> > 'off'.\n> >\n>\n> Can we think of changing the default to 'parallel'? BTW, it would be\n> better to use 'parallel' for the 'streaming' option, if the workload\n> has large transactions. Is there a reason to use a default value in\n> this case?\n\nYou're certainly right, if using the streaming API helps to avoid bad\nsituations and there is no downside, it could be used by default.\n\n> > > > 2. Do we want a GUC to switch compression on/off?\n> > > >\n> > >\n> > > It depends on the overhead of decoding. Did you try to measure the\n> > > decoding overhead of decompression when reading compressed files?\n> >\n> > Quick benchmarking executed on my laptop shows 1% overhead.\n> >\n>\n> Thanks. We probably need different types of data (say random data in\n> bytea column, etc.) for this.\n\nYes, good idea, will run new tests in that sense.\n\nThank you!\n\nRegards,\n\nJT\n\n\n", "msg_date": "Thu, 6 Jun 2024 12:31:25 -0700", "msg_from": "Julien Tachoires <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "Le jeu. 6 juin 2024 à 07:24, Alvaro Herrera <[email protected]> a écrit :\n>\n> On 2024-Jun-06, Amit Kapila wrote:\n>\n> > On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires <[email protected]> wrote:\n> > >\n> > > When the content of a large transaction (size exceeding\n> > > logical_decoding_work_mem) and its sub-transactions has to be\n> > > reordered during logical decoding, then, all the changes are written\n> > > on disk in temporary files located in pg_replslot/<slot_name>.\n> > > Decoding very large transactions by multiple replication slots can\n> > > lead to disk space saturation and high I/O utilization.\n>\n> I like the general idea of compressing the output of logical decoding.\n> It's not so clear to me that we only want to do so for spilling to disk;\n> for instance, if the two nodes communicate over a slow network, it may\n> even be beneficial to compress when streaming, so to this question:\n>\n> > Why can't one use 'streaming' option to send changes to the client\n> > once it reaches the configured limit of 'logical_decoding_work_mem'?\n>\n> I would say that streaming doesn't necessarily have to mean we don't\n> want compression, because for some users it might be beneficial.\n\nInteresting idea, will try to evaluate how to compress/decompress data\ntransiting via streaming and how good the compression ratio would be.\n\n> I think a GUC would be a good idea. 
Also, what if for whatever reason\n> you want a different compression algorithm or different compression\n> parameters? Looking at the existing compression UI we offer in\n> pg_basebackup, perhaps you could add something like this:\n>\n> compress_logical_decoding = none\n> compress_logical_decoding = lz4:42\n> compress_logical_decoding = spill-zstd:99\n>\n> \"none\" says to never use compression (perhaps should be the default),\n> \"lz4:42\" says to use lz4 with parameters 42 on both spilling and\n> streaming, and \"spill-zstd:99\" says to use Zstd with parameter 99 but\n> only for spilling to disk.\n\nI agree, if the server was compiled with support of multiple\ncompression libraries, users should be able to choose which one they\nwant to use.\n\n> (I don't mean to say that you should implement Zstd compression with\n> this patch, only that you should choose the implementation so that\n> adding Zstd support (or whatever) later is just a matter of adding some\n> branches here and there. With the current #ifdef you propose, it's hard\n> to do that. Maybe separate the parts that depend on the specific\n> algorithm to algorithm-agnostic functions.)\n\nMakes sense, will rework this patch in that way.\n\nThank you!\n\nRegards,\n\nJT\n\n\n", "msg_date": "Thu, 6 Jun 2024 12:41:16 -0700", "msg_from": "Julien Tachoires <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On Thu, Jun 6, 2024 at 7:54 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jun-06, Amit Kapila wrote:\n>\n> > On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires <[email protected]> wrote:\n> > >\n> > > When the content of a large transaction (size exceeding\n> > > logical_decoding_work_mem) and its sub-transactions has to be\n> > > reordered during logical decoding, then, all the changes are written\n> > > on disk in temporary files located in pg_replslot/<slot_name>.\n> > > Decoding very large transactions by multiple replication slots can\n> > > lead to disk space saturation and high I/O utilization.\n>\n> I like the general idea of compressing the output of logical decoding.\n> It's not so clear to me that we only want to do so for spilling to disk;\n> for instance, if the two nodes communicate over a slow network, it may\n> even be beneficial to compress when streaming, so to this question:\n>\n> > Why can't one use 'streaming' option to send changes to the client\n> > once it reaches the configured limit of 'logical_decoding_work_mem'?\n>\n> I would say that streaming doesn't necessarily have to mean we don't\n> want compression, because for some users it might be beneficial.\n\n+1\n\n> I think a GUC would be a good idea. Also, what if for whatever reason\n> you want a different compression algorithm or different compression\n> parameters? Looking at the existing compression UI we offer in\n> pg_basebackup, perhaps you could add something like this:\n>\n> compress_logical_decoding = none\n> compress_logical_decoding = lz4:42\n> compress_logical_decoding = spill-zstd:99\n>\n> \"none\" says to never use compression (perhaps should be the default),\n> \"lz4:42\" says to use lz4 with parameters 42 on both spilling and\n> streaming, and \"spill-zstd:99\" says to use Zstd with parameter 99 but\n> only for spilling to disk.\n>\n\nI think the compression option should be supported at the CREATE\nSUBSCRIPTION level instead of being controlled by a GUC. 
This way, we\ncan decide on compression for each subscription individually rather\nthan applying it to all subscribers. It makes more sense for the\nsubscriber to control this, especially when we are planning to\ncompress the data sent downstream.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:08:23 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On 2024-Jun-07, Dilip Kumar wrote:\n\n> I think the compression option should be supported at the CREATE\n> SUBSCRIPTION level instead of being controlled by a GUC. This way, we\n> can decide on compression for each subscription individually rather\n> than applying it to all subscribers. It makes more sense for the\n> subscriber to control this, especially when we are planning to\n> compress the data sent downstream.\n\nTrue. (I think we have some options that are in GUCs for the general\nbehavior and can be overridden by per-subscription options for specific\ntailoring; would that make sense here? I think it does, considering\nthat what we mostly want is to save disk space in the publisher when\nspilling to disk.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu. Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)\n\n\n", "msg_date": "Fri, 7 Jun 2024 11:09:28 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On Fri, Jun 7, 2024 at 2:39 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jun-07, Dilip Kumar wrote:\n>\n> > I think the compression option should be supported at the CREATE\n> > SUBSCRIPTION level instead of being controlled by a GUC. This way, we\n> > can decide on compression for each subscription individually rather\n> > than applying it to all subscribers. It makes more sense for the\n> > subscriber to control this, especially when we are planning to\n> > compress the data sent downstream.\n>\n> True. (I think we have some options that are in GUCs for the general\n> behavior and can be overridden by per-subscription options for specific\n> tailoring; would that make sense here? 
I think it does, considering\n> that what we mostly want is to save disk space in the publisher when\n> spilling to disk.)\n\nYeah, that makes sense.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:44:35 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On Thu, Jun 6, 2024 at 7:54 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jun-06, Amit Kapila wrote:\n>\n> > On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires <[email protected]> wrote:\n> > >\n> > > When the content of a large transaction (size exceeding\n> > > logical_decoding_work_mem) and its sub-transactions has to be\n> > > reordered during logical decoding, then, all the changes are written\n> > > on disk in temporary files located in pg_replslot/<slot_name>.\n> > > Decoding very large transactions by multiple replication slots can\n> > > lead to disk space saturation and high I/O utilization.\n>\n> I like the general idea of compressing the output of logical decoding.\n> It's not so clear to me that we only want to do so for spilling to disk;\n> for instance, if the two nodes communicate over a slow network, it may\n> even be beneficial to compress when streaming, so to this question:\n>\n> > Why can't one use 'streaming' option to send changes to the client\n> > once it reaches the configured limit of 'logical_decoding_work_mem'?\n>\n> I would say that streaming doesn't necessarily have to mean we don't\n> want compression, because for some users it might be beneficial.\n>\n\nFair enough. it would be an interesting feature if we see the wider\nusefulness of compression/decompression of logical changes. For\nexample, if this can improve the performance of applying large\ntransactions (aka reduce the apply lag for them) even when the\n'streaming' option is 'parallel' then it would have a much wider\nimpact.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:54:53 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On Fri, Jun 7, 2024 at 2:08 PM Dilip Kumar <[email protected]> wrote:\n>\n> I think the compression option should be supported at the CREATE\n> SUBSCRIPTION level instead of being controlled by a GUC. This way, we\n> can decide on compression for each subscription individually rather\n> than applying it to all subscribers. It makes more sense for the\n> subscriber to control this, especially when we are planning to\n> compress the data sent downstream.\n>\n\nYes, that makes sense. 
However, we then need to provide this option\nvia SQL APIs as well for other plugins.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:57:41 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On 6/6/24 16:24, Alvaro Herrera wrote:\n> On 2024-Jun-06, Amit Kapila wrote:\n> \n>> On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires <[email protected]> wrote:\n>>>\n>>> When the content of a large transaction (size exceeding\n>>> logical_decoding_work_mem) and its sub-transactions has to be\n>>> reordered during logical decoding, then, all the changes are written\n>>> on disk in temporary files located in pg_replslot/<slot_name>.\n>>> Decoding very large transactions by multiple replication slots can\n>>> lead to disk space saturation and high I/O utilization.\n> \n> I like the general idea of compressing the output of logical decoding.\n> It's not so clear to me that we only want to do so for spilling to disk;\n> for instance, if the two nodes communicate over a slow network, it may\n> even be beneficial to compress when streaming, so to this question:\n> \n>> Why can't one use 'streaming' option to send changes to the client\n>> once it reaches the configured limit of 'logical_decoding_work_mem'?\n> \n> I would say that streaming doesn't necessarily have to mean we don't\n> want compression, because for some users it might be beneficial.\n> \n> I think a GUC would be a good idea. Also, what if for whatever reason\n> you want a different compression algorithm or different compression\n> parameters? Looking at the existing compression UI we offer in\n> pg_basebackup, perhaps you could add something like this:\n> \n> compress_logical_decoding = none\n> compress_logical_decoding = lz4:42\n> compress_logical_decoding = spill-zstd:99\n> \n> \"none\" says to never use compression (perhaps should be the default),\n> \"lz4:42\" says to use lz4 with parameters 42 on both spilling and\n> streaming, and \"spill-zstd:99\" says to use Zstd with parameter 99 but\n> only for spilling to disk.\n> \n> (I don't mean to say that you should implement Zstd compression with\n> this patch, only that you should choose the implementation so that\n> adding Zstd support (or whatever) later is just a matter of adding some\n> branches here and there. With the current #ifdef you propose, it's hard\n> to do that. Maybe separate the parts that depend on the specific\n> algorithm to algorithm-agnostic functions.)\n> \n\nI haven't been following the \"libpq compression\" thread, but wouldn't\nthat also do compression for the streaming case? That was my assumption,\nat least, and it seems like the right way - we probably don't want to\npatch every place that sends data over network independently, right?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:57:13 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On 6/6/24 12:58, Julien Tachoires wrote:\n> ...\n>\n> When compiled with LZ4 support (--with-lz4), this patch enables data\n> compression/decompression of these temporary files. Each transaction\n> change that must be written on disk (ReorderBufferDiskChange) is now\n> compressed and encapsulated in a new structure.\n> \n\nI'm a bit confused, but why tie this to having lz4? 
Why shouldn't this\nbe supported even for pglz, or whatever algorithms we add in the future?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:59:43 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "Le ven. 7 juin 2024 à 05:59, Tomas Vondra\n<[email protected]> a écrit :\n>\n> On 6/6/24 12:58, Julien Tachoires wrote:\n> > ...\n> >\n> > When compiled with LZ4 support (--with-lz4), this patch enables data\n> > compression/decompression of these temporary files. Each transaction\n> > change that must be written on disk (ReorderBufferDiskChange) is now\n> > compressed and encapsulated in a new structure.\n> >\n>\n> I'm a bit confused, but why tie this to having lz4? Why shouldn't this\n> be supported even for pglz, or whatever algorithms we add in the future?\n\nThat's right, reworking this patch in that sense.\n\nRegards,\n\nJT\n\n\n", "msg_date": "Fri, 7 Jun 2024 06:18:23 -0700", "msg_from": "Julien Tachoires <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "Hi,\n\nLe ven. 7 juin 2024 à 06:18, Julien Tachoires <[email protected]> a écrit :\n>\n> Le ven. 7 juin 2024 à 05:59, Tomas Vondra\n> <[email protected]> a écrit :\n> >\n> > On 6/6/24 12:58, Julien Tachoires wrote:\n> > > ...\n> > >\n> > > When compiled with LZ4 support (--with-lz4), this patch enables data\n> > > compression/decompression of these temporary files. Each transaction\n> > > change that must be written on disk (ReorderBufferDiskChange) is now\n> > > compressed and encapsulated in a new structure.\n> > >\n> >\n> > I'm a bit confused, but why tie this to having lz4? Why shouldn't this\n> > be supported even for pglz, or whatever algorithms we add in the future?\n>\n> That's right, reworking this patch in that sense.\n\nPlease find a new version of this patch adding support for LZ4, pglz\nand ZSTD. It introduces the new GUC logical_decoding_spill_compression\nwhich is used to set the compression method. In order to stay aligned\nwith the other server side GUCs related to compression methods\n(wal_compression, default_toast_compression), the compression level is\nnot exposed to users.\n\nThe last patch of this set is still in WIP, it adds the machinery\nrequired for setting the compression methods as a subscription option:\nCREATE SUBSCRIPTION ... WITH (spill_compression = ...);\nI think there is a major problem with this approach: the logical\ndecoding context is tied to one replication slot, but multiple\nsubscriptions can use the same replication slot. How should this work\nif 2 subscriptions want to use the same replication slot but different\ncompression methods?\n\nAt this point, compression is only available for the changes spilled\non disk. It is still not clear to me if the compression of data\ntransiting through the streaming protocol should be addressed by this\npatch set or by another one. Thought ?\n\nRegards,\n\nJT", "msg_date": "Mon, 15 Jul 2024 11:50:56 -0700", "msg_from": "Julien Tachoires <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On 7/15/24 20:50, Julien Tachoires wrote:\n> Hi,\n> \n> Le ven. 7 juin 2024 à 06:18, Julien Tachoires <[email protected]> a écrit :\n>>\n>> Le ven. 
7 juin 2024 à 05:59, Tomas Vondra\n>> <[email protected]> a écrit :\n>>>\n>>> On 6/6/24 12:58, Julien Tachoires wrote:\n>>>> ...\n>>>>\n>>>> When compiled with LZ4 support (--with-lz4), this patch enables data\n>>>> compression/decompression of these temporary files. Each transaction\n>>>> change that must be written on disk (ReorderBufferDiskChange) is now\n>>>> compressed and encapsulated in a new structure.\n>>>>\n>>>\n>>> I'm a bit confused, but why tie this to having lz4? Why shouldn't this\n>>> be supported even for pglz, or whatever algorithms we add in the future?\n>>\n>> That's right, reworking this patch in that sense.\n> \n> Please find a new version of this patch adding support for LZ4, pglz\n> and ZSTD. It introduces the new GUC logical_decoding_spill_compression\n> which is used to set the compression method. In order to stay aligned\n> with the other server side GUCs related to compression methods\n> (wal_compression, default_toast_compression), the compression level is\n> not exposed to users.\n> \n\nSounds reasonable. I wonder if it might be useful to allow specifying\nthe compression level in those places, but that's clearly not something\nthis patch needs to do.\n\n> The last patch of this set is still in WIP, it adds the machinery\n> required for setting the compression methods as a subscription option:\n> CREATE SUBSCRIPTION ... WITH (spill_compression = ...);\n> I think there is a major problem with this approach: the logical\n> decoding context is tied to one replication slot, but multiple\n> subscriptions can use the same replication slot. How should this work\n> if 2 subscriptions want to use the same replication slot but different\n> compression methods?\n> \n\nDo we really support multiple subscriptions sharing the same slot? I\ndon't think we do, but maybe I'm missing something.\n\n> At this point, compression is only available for the changes spilled\n> on disk. It is still not clear to me if the compression of data\n> transiting through the streaming protocol should be addressed by this\n> patch set or by another one. Thought ?\n> \n\nI'd stick to only compressing the data spilled to disk. It might be\nuseful to compress the streamed data too, but why shouldn't we compress\nthe regular (non-streamed) transactions too? Yeah, it's more efficient\nto compress larger chunks, but we can fit quite large transactions into\nlogical_decoding_work_mem without spilling.\n\nFWIW I'd expect that to be handled at the libpq level - there's already\na patch for that, but I haven't checked if it would handle this. But\nmaybe more importantly, I think compressing streamed data might need to\nhandle some sort of negotiation of the compression algorithm, which\nseems fairly complex.\n\nTo conclude, I'd leave this out of scope for this patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 15 Jul 2024 21:28:29 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "Le lun. 15 juil. 2024 à 12:28, Tomas Vondra\n<[email protected]> a écrit :\n>\n> On 7/15/24 20:50, Julien Tachoires wrote:\n> > The last patch of this set is still in WIP, it adds the machinery\n> > required for setting the compression methods as a subscription option:\n> > CREATE SUBSCRIPTION ... 
WITH (spill_compression = ...);\n> > I think there is a major problem with this approach: the logical\n> > decoding context is tied to one replication slot, but multiple\n> > subscriptions can use the same replication slot. How should this work\n> > if 2 subscriptions want to use the same replication slot but different\n> > compression methods?\n> >\n>\n> Do we really support multiple subscriptions sharing the same slot? I\n> don't think we do, but maybe I'm missing something.\n\nYou are right, it's not supported, the following error is raised in this case:\nERROR: replication slot \"sub1\" is active for PID 51735\n\nI was distracted by the fact that nothing prevents the configuration\nof multiple subscriptions sharing the same replication slot.\n\nThanks,\n\nJT\n\n\n", "msg_date": "Tue, 16 Jul 2024 01:08:30 -0700", "msg_from": "Julien Tachoires <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On Tue, Jul 16, 2024 at 12:58 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 7/15/24 20:50, Julien Tachoires wrote:\n> > Hi,\n> >\n> > Le ven. 7 juin 2024 à 06:18, Julien Tachoires <[email protected]> a écrit :\n> >>\n> >> Le ven. 7 juin 2024 à 05:59, Tomas Vondra\n> >> <[email protected]> a écrit :\n> >>>\n> >>> On 6/6/24 12:58, Julien Tachoires wrote:\n> >>>> ...\n> >>>>\n> >>>> When compiled with LZ4 support (--with-lz4), this patch enables data\n> >>>> compression/decompression of these temporary files. Each transaction\n> >>>> change that must be written on disk (ReorderBufferDiskChange) is now\n> >>>> compressed and encapsulated in a new structure.\n> >>>>\n> >>>\n> >>> I'm a bit confused, but why tie this to having lz4? Why shouldn't this\n> >>> be supported even for pglz, or whatever algorithms we add in the future?\n> >>\n> >> That's right, reworking this patch in that sense.\n> >\n> > Please find a new version of this patch adding support for LZ4, pglz\n> > and ZSTD. It introduces the new GUC logical_decoding_spill_compression\n> > which is used to set the compression method. In order to stay aligned\n> > with the other server side GUCs related to compression methods\n> > (wal_compression, default_toast_compression), the compression level is\n> > not exposed to users.\n> >\n>\n> Sounds reasonable. I wonder if it might be useful to allow specifying\n> the compression level in those places, but that's clearly not something\n> this patch needs to do.\n>\n> > The last patch of this set is still in WIP, it adds the machinery\n> > required for setting the compression methods as a subscription option:\n> > CREATE SUBSCRIPTION ... WITH (spill_compression = ...);\n> > I think there is a major problem with this approach: the logical\n> > decoding context is tied to one replication slot, but multiple\n> > subscriptions can use the same replication slot. How should this work\n> > if 2 subscriptions want to use the same replication slot but different\n> > compression methods?\n> >\n>\n> Do we really support multiple subscriptions sharing the same slot? I\n> don't think we do, but maybe I'm missing something.\n>\n> > At this point, compression is only available for the changes spilled\n> > on disk. It is still not clear to me if the compression of data\n> > transiting through the streaming protocol should be addressed by this\n> > patch set or by another one. Thought ?\n> >\n>\n> I'd stick to only compressing the data spilled to disk. 
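For the spill files themselves the mechanics can stay quite simple: each
serialized change only needs a small header recording the original size and
whether the payload was compressed, so the restore path knows what to do. A
rough sketch (all names invented for illustration; try_compress() is a stub
standing in for whatever per-algorithm helper the patch provides):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct SpillRecordHeader
    {
        uint32_t    raw_size;       /* change size before compression */
        uint32_t    stored_size;    /* bytes following this header on disk */
        uint8_t     compression;    /* 0 = raw, 1 = pglz, 2 = lz4, ... */
    } SpillRecordHeader;

    /* stub for the sketch: pretend nothing is compressible */
    static int
    try_compress(const char *src, uint32_t srclen,
                 char *dst, uint32_t dstlen, uint8_t method)
    {
        (void) src; (void) srclen; (void) dst; (void) dstlen; (void) method;
        return -1;
    }

    bool
    spill_write_change(FILE *fp, const char *change, uint32_t len,
                       char *scratch, uint32_t scratchlen, uint8_t method)
    {
        SpillRecordHeader hdr;
        const char *payload = change;
        int         clen = -1;

        if (method != 0)
            clen = try_compress(change, len, scratch, scratchlen, method);

        hdr.raw_size = len;
        if (clen > 0 && (uint32_t) clen < len)
        {
            /* compression helped: store the compressed payload */
            hdr.stored_size = (uint32_t) clen;
            hdr.compression = method;
            payload = scratch;
        }
        else
        {
            /* disabled or not compressible: store the raw bytes */
            hdr.stored_size = len;
            hdr.compression = 0;
        }

        return fwrite(&hdr, sizeof(hdr), 1, fp) == 1 &&
               fwrite(payload, hdr.stored_size, 1, fp) == 1;
    }

The restore path then reads the header first and either copies or decompresses
into a buffer of raw_size bytes.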
It might be\n> useful to compress the streamed data too, but why shouldn't we compress\n> the regular (non-streamed) transactions too? Yeah, it's more efficient\n> to compress larger chunks, but we can fit quite large transactions into\n> logical_decoding_work_mem without spilling.\n>\n> FWIW I'd expect that to be handled at the libpq level - there's already\n> a patch for that, but I haven't checked if it would handle this. But\n> maybe more importantly, I think compressing streamed data might need to\n> handle some sort of negotiation of the compression algorithm, which\n> seems fairly complex.\n>\n> To conclude, I'd leave this out of scope for this patch.\n>\n\nYour point sounds reasonable to me. OTOH, if we want to support\ncompression for spill case then shouldn't there be a question how\nfrequent such an option would be required? Users currently have an\noption to stream large transactions for parallel apply or otherwise in\nwhich case no spilling is required. I feel sooner or later we will\nmake such behavior (streaming=parallel) as default, and then spilling\nshould happen in very few cases. Is it worth adding this new option\nand GUC if that is true?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 16 Jul 2024 18:22:51 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "\n\nOn 7/16/24 14:52, Amit Kapila wrote:\n> On Tue, Jul 16, 2024 at 12:58 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 7/15/24 20:50, Julien Tachoires wrote:\n>>> Hi,\n>>>\n>>> Le ven. 7 juin 2024 à 06:18, Julien Tachoires <[email protected]> a écrit :\n>>>>\n>>>> Le ven. 7 juin 2024 à 05:59, Tomas Vondra\n>>>> <[email protected]> a écrit :\n>>>>>\n>>>>> On 6/6/24 12:58, Julien Tachoires wrote:\n>>>>>> ...\n>>>>>>\n>>>>>> When compiled with LZ4 support (--with-lz4), this patch enables data\n>>>>>> compression/decompression of these temporary files. Each transaction\n>>>>>> change that must be written on disk (ReorderBufferDiskChange) is now\n>>>>>> compressed and encapsulated in a new structure.\n>>>>>>\n>>>>>\n>>>>> I'm a bit confused, but why tie this to having lz4? Why shouldn't this\n>>>>> be supported even for pglz, or whatever algorithms we add in the future?\n>>>>\n>>>> That's right, reworking this patch in that sense.\n>>>\n>>> Please find a new version of this patch adding support for LZ4, pglz\n>>> and ZSTD. It introduces the new GUC logical_decoding_spill_compression\n>>> which is used to set the compression method. In order to stay aligned\n>>> with the other server side GUCs related to compression methods\n>>> (wal_compression, default_toast_compression), the compression level is\n>>> not exposed to users.\n>>>\n>>\n>> Sounds reasonable. I wonder if it might be useful to allow specifying\n>> the compression level in those places, but that's clearly not something\n>> this patch needs to do.\n>>\n>>> The last patch of this set is still in WIP, it adds the machinery\n>>> required for setting the compression methods as a subscription option:\n>>> CREATE SUBSCRIPTION ... WITH (spill_compression = ...);\n>>> I think there is a major problem with this approach: the logical\n>>> decoding context is tied to one replication slot, but multiple\n>>> subscriptions can use the same replication slot. 
How should this work\n>>> if 2 subscriptions want to use the same replication slot but different\n>>> compression methods?\n>>>\n>>\n>> Do we really support multiple subscriptions sharing the same slot? I\n>> don't think we do, but maybe I'm missing something.\n>>\n>>> At this point, compression is only available for the changes spilled\n>>> on disk. It is still not clear to me if the compression of data\n>>> transiting through the streaming protocol should be addressed by this\n>>> patch set or by another one. Thought ?\n>>>\n>>\n>> I'd stick to only compressing the data spilled to disk. It might be\n>> useful to compress the streamed data too, but why shouldn't we compress\n>> the regular (non-streamed) transactions too? Yeah, it's more efficient\n>> to compress larger chunks, but we can fit quite large transactions into\n>> logical_decoding_work_mem without spilling.\n>>\n>> FWIW I'd expect that to be handled at the libpq level - there's already\n>> a patch for that, but I haven't checked if it would handle this. But\n>> maybe more importantly, I think compressing streamed data might need to\n>> handle some sort of negotiation of the compression algorithm, which\n>> seems fairly complex.\n>>\n>> To conclude, I'd leave this out of scope for this patch.\n>>\n> \n> Your point sounds reasonable to me. OTOH, if we want to support\n> compression for spill case then shouldn't there be a question how\n> frequent such an option would be required? Users currently have an\n> option to stream large transactions for parallel apply or otherwise in\n> which case no spilling is required. I feel sooner or later we will\n> make such behavior (streaming=parallel) as default, and then spilling\n> should happen in very few cases. Is it worth adding this new option\n> and GUC if that is true?\n> \n\nI don't know, but streaming is 'off' by default, and I'm not aware of\nany proposals to change this, so when you suggest \"sooner or later\"\nwe'll change this, I'd probably bet on \"later or never\".\n\nI haven't been following the discussions about parallel apply very\nclosely, but my impression from dealing with similar stuff in other\ntools is that it's rather easy to run into issues with some workloads,\nwhich just makes me more skeptical about \"streamin=parallel\" by default.\nBut as I said, I'm out of the loop so I may be wrong ...\n\nAs for whether the GUC is needed, I don't know. I guess we might do the\nsame thing we do for streaming - we don't have a GUC to enable this, but\nwe default to 'off' and the client has to request that when opening the\nreplication connection. So it'd be specified at the subscription level,\nmore or less.\n\nBut then how would we specify compression for cases that invoke decoding\ndirectly by pg_logical_slot_get_changes()? 
Through options?\n\nBTW if we specify this at subscription level, will it be possible to\nchange the compression method?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 16 Jul 2024 16:01:46 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "On Tue, Jul 16, 2024 at 7:31 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 7/16/24 14:52, Amit Kapila wrote:\n> > On Tue, Jul 16, 2024 at 12:58 AM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> FWIW I'd expect that to be handled at the libpq level - there's already\n> >> a patch for that, but I haven't checked if it would handle this. But\n> >> maybe more importantly, I think compressing streamed data might need to\n> >> handle some sort of negotiation of the compression algorithm, which\n> >> seems fairly complex.\n> >>\n> >> To conclude, I'd leave this out of scope for this patch.\n> >>\n> >\n> > Your point sounds reasonable to me. OTOH, if we want to support\n> > compression for spill case then shouldn't there be a question how\n> > frequent such an option would be required? Users currently have an\n> > option to stream large transactions for parallel apply or otherwise in\n> > which case no spilling is required. I feel sooner or later we will\n> > make such behavior (streaming=parallel) as default, and then spilling\n> > should happen in very few cases. Is it worth adding this new option\n> > and GUC if that is true?\n> >\n>\n> I don't know, but streaming is 'off' by default, and I'm not aware of\n> any proposals to change this, so when you suggest \"sooner or later\"\n> we'll change this, I'd probably bet on \"later or never\".\n>\n> I haven't been following the discussions about parallel apply very\n> closely, but my impression from dealing with similar stuff in other\n> tools is that it's rather easy to run into issues with some workloads,\n> which just makes me more skeptical about \"streamin=parallel\" by default.\n> But as I said, I'm out of the loop so I may be wrong ...\n>\n\nIt is difficult to say whether enabling it by default will have issues\nor not but till now we haven't seen many reports for the streaming =\n'parallel' option. It could be due to the reason that not many people\nenable it in their workloads. We can probably find out by enabling it\nby default.\n\n> As for whether the GUC is needed, I don't know. I guess we might do the\n> same thing we do for streaming - we don't have a GUC to enable this, but\n> we default to 'off' and the client has to request that when opening the\n> replication connection. So it'd be specified at the subscription level,\n> more or less.\n>\n> But then how would we specify compression for cases that invoke decoding\n> directly by pg_logical_slot_get_changes()? Through options?\n>\n\nIf we decide to go with this then yeah that is one way, another\npossibility is to make it a slot's property, so we can allow to take a\nnew parameter in pg_create_logical_replication_slot(). 
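For the first way (plain output-plugin options), the SQL call would look
something like pg_logical_slot_get_changes('slot', NULL, NULL,
'spill_compression', 'lz4'), and the plugin's startup callback could pick the
value up along these lines (a sketch only, with an invented option name; the
includes are approximate):

    #include "postgres.h"

    #include "commands/defrem.h"
    #include "nodes/parsenodes.h"
    #include "nodes/pg_list.h"
    #include "replication/logical.h"

    static void
    my_decoding_startup(LogicalDecodingContext *ctx,
                        OutputPluginOptions *opt, bool is_init)
    {
        ListCell   *lc;

        (void) opt;
        (void) is_init;

        foreach(lc, ctx->output_plugin_options)
        {
            DefElem    *elem = (DefElem *) lfirst(lc);

            if (strcmp(elem->defname, "spill_compression") == 0)
            {
                char       *method = defGetString(elem);

                /* stash the choice where the serialize/spill code can
                 * see it, e.g. in the plugin's private output state */
                (void) method;
            }
        }
    }

The slot-property route would instead attach the choice to the slot itself, so
every consumer of the slot gets the same behavior.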
We can even\nthink of inventing a new API to alter the slot's properties if we\ndecide to go this route.\n\n> BTW if we specify this at subscription level, will it be possible to\n> change the compression method?\n>\n\nThis needs analysis but offhand I can't see the problems with it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 17 Jul 2024 14:42:36 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "Le mer. 17 juil. 2024 à 02:12, Amit Kapila <[email protected]> a écrit :\n>\n> On Tue, Jul 16, 2024 at 7:31 PM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > On 7/16/24 14:52, Amit Kapila wrote:\n> > > On Tue, Jul 16, 2024 at 12:58 AM Tomas Vondra\n> > > <[email protected]> wrote:\n> > >>\n> > >> FWIW I'd expect that to be handled at the libpq level - there's already\n> > >> a patch for that, but I haven't checked if it would handle this. But\n> > >> maybe more importantly, I think compressing streamed data might need to\n> > >> handle some sort of negotiation of the compression algorithm, which\n> > >> seems fairly complex.\n> > >>\n> > >> To conclude, I'd leave this out of scope for this patch.\n> > >>\n> > >\n> > > Your point sounds reasonable to me. OTOH, if we want to support\n> > > compression for spill case then shouldn't there be a question how\n> > > frequent such an option would be required? Users currently have an\n> > > option to stream large transactions for parallel apply or otherwise in\n> > > which case no spilling is required. I feel sooner or later we will\n> > > make such behavior (streaming=parallel) as default, and then spilling\n> > > should happen in very few cases. Is it worth adding this new option\n> > > and GUC if that is true?\n> > >\n> >\n> > I don't know, but streaming is 'off' by default, and I'm not aware of\n> > any proposals to change this, so when you suggest \"sooner or later\"\n> > we'll change this, I'd probably bet on \"later or never\".\n> >\n> > I haven't been following the discussions about parallel apply very\n> > closely, but my impression from dealing with similar stuff in other\n> > tools is that it's rather easy to run into issues with some workloads,\n> > which just makes me more skeptical about \"streamin=parallel\" by default.\n> > But as I said, I'm out of the loop so I may be wrong ...\n> >\n>\n> It is difficult to say whether enabling it by default will have issues\n> or not but till now we haven't seen many reports for the streaming =\n> 'parallel' option. It could be due to the reason that not many people\n> enable it in their workloads. We can probably find out by enabling it\n> by default.\n>\n> > As for whether the GUC is needed, I don't know. I guess we might do the\n> > same thing we do for streaming - we don't have a GUC to enable this, but\n> > we default to 'off' and the client has to request that when opening the\n> > replication connection. So it'd be specified at the subscription level,\n> > more or less.\n> >\n> > But then how would we specify compression for cases that invoke decoding\n> > directly by pg_logical_slot_get_changes()? Through options?\n> >\n>\n> If we decide to go with this then yeah that is one way, another\n> possibility is to make it a slot's property, so we can allow to take a\n> new parameter in pg_create_logical_replication_slot(). 
We can even\n> think of inventing a new API to alter the slot's properties if we\n> decide to go this route.\n\nPlease find a new version of this patch set. The compression method is\nnow set on subscriber level via CREATE SUBSCRIPTION or ALTER\nSUBSCRIPTION and can be passed to\npg_logical_slot_get_changes()/pg_logical_slot_get_binary_changes()\nthrough the option spill_compression.\n\n> > BTW if we specify this at subscription level, will it be possible to\n> > change the compression method?\n> >\n>\n> This needs analysis but offhand I can't see the problems with it.\n\nI didn't notice any issue, the compression method can be changed even\nwhen a decoding is in progress, in this case, the replication worker\nrestart due to parameter change.\n\nJT", "msg_date": "Fri, 19 Jul 2024 15:05:07 -0700", "msg_from": "Julien Tachoires <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "Hi Julien,\n\nThanks for the last patch version and sorry for the delay. Here's a\nquick review, I plan to do a bit of additional testing, hopefully\nsometime this week.\n\nAttached is v4, which is your v3 rebased to current master, with a\ncouple \"review\" commits, adding comments to relevant places, which I\nfind easier to follow. There's also a \"pgindent\" commit at the end,\nshowing some formatting issues, adding structs to typedefs, etc. This\nneeds to be applied to the earlier patches, I didn't want to have too\nmany pgindent commits.\n\n\nv4-0002-review.patch\n--------------------\n\n1) I don't think we need to rename ReorderBufferDiskChange to\nReorderBufferDiskHeader. It seems we could easily just add a field to\nReorderBufferDiskChange, and keep using that, no? It's still just a\nwrapper for ReorderBufferChange, even if it's compressed.\n\n2) Likewise, I don't see the point in moving ReorderBufferDiskChange to\na different file. It seems like something that should remain private to\nreorderbuffer.c. IMHO the correct thing would be to move the various\nReorderBuffer* funct from reorderbuffer_compress.c to reorderbuffer.c.\nThe compression is something the ReorderBuffer is responsible for, so\nreorderbuffer.c is the right place for that.\n\n3) That means the reorderbuffer_compress.c would have only the actual\ncompression code. But wouldn't it be better to have one file for each\ncompression algorithm, similar to src/fe_utils/astreamer_{pglz,lz4,...}?\nJust a suggestion, though. Not sure.\n\n4) logical_decoding_spill_compression moved a bit, next to the variables\nfor other logical_decoding GUCs. It's also missing the definition in a\nheader file, so I get a compiler warning, so add it to reorderbuffer.h.\n\n5) Won't the code in ReorderBufferCompress() do palloc/pfree for each\npiece of data we compress? That seems it might be pretty expensive, at\nleast for records that happen to be large (>8kB), because oversized\nchunks are not cached in the context. IMHO this should use a single\nbuffer, similarly to what astreamer_lz4_compressor_content does.\n\n6) I'm really confused by having \"streaming\" and \"regular\" compression.\nI know lz4 supports different modes, but I'd expect only one of them\nbeing really suitable for this, so why support both? But even more\nimportantly, ReorderBufferCompress() sets the strategy using\n\n #define lz4_CanDoStreamingCompression(s) \\\n (s < (LZ4_RING_BUFFER_SIZE / 2))\n\nBut it does that with \"s\" being the \"change size\", and that can vary\nwildly. 
Doesn't that means the strategy will change withing a single\nfile? Does that even make sense?\n\n\nv4-0004-review.patch\n--------------------\n\n7) I'm not sure the \"Fix spill_bytes counter\" is actually a fix. Even\nnow (i.e. without compression) it tracks the size of transactions, not\nthe bytes written to disk exactly, because it doesn't include the bytes\nfor the size field, so maybe we should not change that ...\n\n\nv4-0006-review.patch\n--------------------\n\n8) It seems a bit strange that we add lz4 first, and only later pglz.\nIMHO it should be the other way around, as pglz is the default algorithm\nwe can rely to have, while lz4 etc. are optional.\n\n\nOne question I have is whether it might be better to compress stuff at a\nlower layer - not in reorderbuffer.c, but for the whole file. But maybe\nthere are reasons why that would be difficult, I haven't tried that.\n\n\n\nregards\n\n-- \nTomas Vondra", "msg_date": "Tue, 17 Sep 2024 20:48:10 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "Hi,\n\nI've spent a bit more time on this, mostly running tests to get a better\nidea of the practical benefits.\n\nFirstly, I think there's a bug in ReorderBufferCompress() - it's legal\nfor pglz_compress() to return -1. This can happen if the data is not\ncompressible, and would not fit into the output buffer. The code can't\njust do elog(ERROR) in this case, it needs to handle that by storing the\nraw data. The attached fixup patch makes this work for me - I'm not\nclaiming this is the best way to handle this, but it works.\n\nFWIW I find it strange the tests included in the patch did not trigger\nthis. That probably means the tests are not quite sufficient.\n\n\nNow, to the testing. Attached are two scripts, testing different cases:\n\ntest-columns.sh - Table with a variable number of 'float8' columns.\n\ntest-toast.sh - Table with a single text column.\n\nThe script always sets up a publication/subscription on two instances,\ngenerates certain amount of data (~1GB for columns, ~3.2GB for TOAST),\nwaits for it to be replicated to the replica, and measures how much data\nwas spilled to disk with the different compression methods (off, pglz\nand lz4). There's a couple more metrics, but that's irrelevant here.\n\nFor the \"column\" test, it looks like this (this is in MB):\n\n rows columns distribution off pglz lz4\n ========================================================\n 100000 1000 compressible 778 20 9\n random 778 778 16\n --------------------------------------------------------\n 1000000 100 compressible 916 116 62\n random 916 916 67\n\nIt's very clear that for the \"compressible\" data (which just copies the\nsame value into all columns), both pglz and lz4 can significantly reduce\nthe amount of data. For 1000 columns it's 780MB -> 20MB/9MB, for 100\ncolumns it's a bit less efficient, but still good.\n\nFor the \"random\" data (where every column gets a random value, but rows\nare copied), it's a very different story - pglz does not help at all,\nwhile lz4 still massively reduces the amount of spilled data.\n\nI think the explanation is very simple - for pglz, we compress each row\non it's own, there's no concept of streaming/context. If a row is\ncompressible, it works fine, but when the row gets random, pglz can't\ncompress it at all. 
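In code terms, each spilled change goes through something like the following
for pglz (a sketch, not the actual patch code; it also shows the fallback for
the -1 "not compressible" case mentioned above):

    #include "postgres.h"

    #include "common/pg_lzcompress.h"

    /*
     * Compress one serialized change with pglz.  Each change is compressed
     * independently, with no shared dictionary, which is why rows full of
     * random values gain nothing here.  Returns the compressed size, or -1
     * when the caller must store the raw bytes instead.
     */
    static int32
    compress_one_change(const char *data, int32 len, char *dst, int32 dstlen)
    {
        int32       clen;

        /* pglz may need a slightly larger output buffer than the input */
        Assert(dstlen >= PGLZ_MAX_OUTPUT(len));

        clen = pglz_compress(data, len, dst, PGLZ_strategy_default);
        if (clen < 0)
            return -1;          /* not compressible, store raw data */

        return clen;
    }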
For lz4, this does not matter, because with the\nstreaming mode it still sees that rows are just repeated, and so can\ncompress them efficiently.\n\nFor TOAST test, the results look like this:\n\n distribution repeats toast off pglz lz4\n ===============================================================\n compressible 10000 lz4 14 2 1\n pglz 40 4 3\n 1000 lz4 32 16 9\n pglz 54 17 10\n ---------------------------------------------------------\n random 10000 lz4 3305 3305 3157\n pglz 3305 3305 3157\n 1000 lz4 3166 3162 1580\n pglz 3334 3326 1745\n ----------------------------------------------------------\n random2 10000 lz4 3305 3305 3157\n pglz 3305 3305 3158\n 1000 lz4 3160 3156 3010\n pglz 3334 3326 3172\n\nThe \"repeats\" value means how long the string is - it's the number of\n\"md5\" hashes added to the string. The number of rows is calculated to\nkeep the total amount of data the same. The \"toast\" column tracks what\ncompression was used for TOAST, I was wondering if it matters.\n\nThis time there are three data distributions - compressible means that\neach TOAST value is nicely compressible, \"random\" means each value is\nrandom (not compressible), but the rows are just copy of the same value\n(so on the whole there's a lot of redundancy). And \"random2\" means each\nrow is random and unique (so not compressible at all).\n\nThe table shows that with compressible TOAST values, compressing the\nspill file is rather useless. The reason is that ReorderBufferCompress\nis handling raw TOAST data, which is already compressed. Yes, it may\nfurther reduce the amount of data, but it's negligible when compared to\nthe original amount of data.\n\nFor the random cases, the spill compression is rather pointless. Yes,\nlz4 can reduce it to 1/2 for the shorter strings, but other than that\nit's not very useful.\n\nFor a while I was thinking this approach is flawed, because it only sees\nand compressed changes one by one, and that seeing a batch of changes\nwould improve this (e.g. we'd see the copied rows). But I realized lz4\nalready does that (in the streaming mode at least), and yet it does not\nhelp very much. Presumably that depends on how large the context is. If\nthe random string is long enough, it won't help.\n\nSo maybe this approach is fine, and doing the compression at a lower\nlayer (for the whole file), would not really improve this. Even then\nwe'd only see a limited amount of data.\n\nMaybe the right answer to this is that compression does not help cases\nwhere most of the replicated data is TOAST, and that it can help cases\nwith wide (and redundant) rows, or repeated rows. And that lz4 is a\nclearly superior choice. (This also raises the question if we want to\nsupport REORDER_BUFFER_STRAT_LZ4_REGULAR. I haven't looked into this,\nbut doesn't that behave more like pglz, i.e. no context?)\n\nFWIW when doing these tests, it made me realize how useful would it be\nto track both the \"raw\" and \"spilled\" amounts. That is before/after\ncompression. It'd make calculating compression ratio much easier.\n\n\nregards\n\n-- \nTomas Vondra", "msg_date": "Mon, 23 Sep 2024 18:13:23 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "Hi Tomas,\n\nLe lun. 23 sept. 
2024 à 18:13, Tomas Vondra <[email protected]> a écrit :\n>\n> Hi,\n>\n> I've spent a bit more time on this, mostly running tests to get a better\n> idea of the practical benefits.\n\nThank you for your code review and testing!\n\n> Firstly, I think there's a bug in ReorderBufferCompress() - it's legal\n> for pglz_compress() to return -1. This can happen if the data is not\n> compressible, and would not fit into the output buffer. The code can't\n> just do elog(ERROR) in this case, it needs to handle that by storing the\n> raw data. The attached fixup patch makes this work for me - I'm not\n> claiming this is the best way to handle this, but it works.\n>\n> FWIW I find it strange the tests included in the patch did not trigger\n> this. That probably means the tests are not quite sufficient.\n>\n>\n> Now, to the testing. Attached are two scripts, testing different cases:\n>\n> test-columns.sh - Table with a variable number of 'float8' columns.\n>\n> test-toast.sh - Table with a single text column.\n>\n> The script always sets up a publication/subscription on two instances,\n> generates certain amount of data (~1GB for columns, ~3.2GB for TOAST),\n> waits for it to be replicated to the replica, and measures how much data\n> was spilled to disk with the different compression methods (off, pglz\n> and lz4). There's a couple more metrics, but that's irrelevant here.\n\nIt would be interesting to run the same tests with zstd: in my early\ntesting I found that zstd was able to provide a better compression\nratio than lz4, but seemed to use more CPU resources/is slower.\n\n> For the \"column\" test, it looks like this (this is in MB):\n>\n> rows columns distribution off pglz lz4\n> ========================================================\n> 100000 1000 compressible 778 20 9\n> random 778 778 16\n> --------------------------------------------------------\n> 1000000 100 compressible 916 116 62\n> random 916 916 67\n>\n> It's very clear that for the \"compressible\" data (which just copies the\n> same value into all columns), both pglz and lz4 can significantly reduce\n> the amount of data. For 1000 columns it's 780MB -> 20MB/9MB, for 100\n> columns it's a bit less efficient, but still good.\n>\n> For the \"random\" data (where every column gets a random value, but rows\n> are copied), it's a very different story - pglz does not help at all,\n> while lz4 still massively reduces the amount of spilled data.\n>\n> I think the explanation is very simple - for pglz, we compress each row\n> on it's own, there's no concept of streaming/context. If a row is\n> compressible, it works fine, but when the row gets random, pglz can't\n> compress it at all. For lz4, this does not matter, because with the\n> streaming mode it still sees that rows are just repeated, and so can\n> compress them efficiently.\n\nThat's correct.\n\n> For TOAST test, the results look like this:\n>\n> distribution repeats toast off pglz lz4\n> ===============================================================\n> compressible 10000 lz4 14 2 1\n> pglz 40 4 3\n> 1000 lz4 32 16 9\n> pglz 54 17 10\n> ---------------------------------------------------------\n> random 10000 lz4 3305 3305 3157\n> pglz 3305 3305 3157\n> 1000 lz4 3166 3162 1580\n> pglz 3334 3326 1745\n> ----------------------------------------------------------\n> random2 10000 lz4 3305 3305 3157\n> pglz 3305 3305 3158\n> 1000 lz4 3160 3156 3010\n> pglz 3334 3326 3172\n>\n> The \"repeats\" value means how long the string is - it's the number of\n> \"md5\" hashes added to the string. 
The number of rows is calculated to\n> keep the total amount of data the same. The \"toast\" column tracks what\n> compression was used for TOAST, I was wondering if it matters.\n>\n> This time there are three data distributions - compressible means that\n> each TOAST value is nicely compressible, \"random\" means each value is\n> random (not compressible), but the rows are just copy of the same value\n> (so on the whole there's a lot of redundancy). And \"random2\" means each\n> row is random and unique (so not compressible at all).\n>\n> The table shows that with compressible TOAST values, compressing the\n> spill file is rather useless. The reason is that ReorderBufferCompress\n> is handling raw TOAST data, which is already compressed. Yes, it may\n> further reduce the amount of data, but it's negligible when compared to\n> the original amount of data.\n>\n> For the random cases, the spill compression is rather pointless. Yes,\n> lz4 can reduce it to 1/2 for the shorter strings, but other than that\n> it's not very useful.\n\nIt's still interesting to confirm that data already compressed or\nrandom data cannot be significantly compressed.\n\n> For a while I was thinking this approach is flawed, because it only sees\n> and compressed changes one by one, and that seeing a batch of changes\n> would improve this (e.g. we'd see the copied rows). But I realized lz4\n> already does that (in the streaming mode at least), and yet it does not\n> help very much. Presumably that depends on how large the context is. If\n> the random string is long enough, it won't help.\n>\n> So maybe this approach is fine, and doing the compression at a lower\n> layer (for the whole file), would not really improve this. Even then\n> we'd only see a limited amount of data.\n>\n> Maybe the right answer to this is that compression does not help cases\n> where most of the replicated data is TOAST, and that it can help cases\n> with wide (and redundant) rows, or repeated rows. And that lz4 is a\n> clearly superior choice. (This also raises the question if we want to\n> support REORDER_BUFFER_STRAT_LZ4_REGULAR. I haven't looked into this,\n> but doesn't that behave more like pglz, i.e. no context?)\n\nI'm working on a new version of this patch set that will include the\nchanges you suggested in your review. About using LZ4 regular API, the\ngoal was to use it when we cannot use the streaming API due to raw\ndata larger than LZ4 ring buffer. But this is something I'm going to\ndelete in the new version because I'm planning to use a similar\napproach as we do in astreamer_lz4.c: using frames, not blocks. LZ4\nframe API looks very similar to ZSTD's streaming API.\n\n> FWIW when doing these tests, it made me realize how useful would it be\n> to track both the \"raw\" and \"spilled\" amounts. That is before/after\n> compression. It'd make calculating compression ratio much easier.\n\nYes, that's why I tried to \"fix\" the spill_bytes counter.\n\nRegards,\n\nJT\n\n\n", "msg_date": "Mon, 23 Sep 2024 21:58:14 +0200", "msg_from": "Julien Tachoires <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" }, { "msg_contents": "\n\nOn 9/23/24 21:58, Julien Tachoires wrote:\n> Hi Tomas,\n> \n> Le lun. 23 sept. 
2024 à 18:13, Tomas Vondra <[email protected]> a écrit :\n>>\n>> Hi,\n>>\n>> I've spent a bit more time on this, mostly running tests to get a better\n>> idea of the practical benefits.\n> \n> Thank you for your code review and testing!\n> \n>> Firstly, I think there's a bug in ReorderBufferCompress() - it's legal\n>> for pglz_compress() to return -1. This can happen if the data is not\n>> compressible, and would not fit into the output buffer. The code can't\n>> just do elog(ERROR) in this case, it needs to handle that by storing the\n>> raw data. The attached fixup patch makes this work for me - I'm not\n>> claiming this is the best way to handle this, but it works.\n>>\n>> FWIW I find it strange the tests included in the patch did not trigger\n>> this. That probably means the tests are not quite sufficient.\n>>\n>>\n>> Now, to the testing. Attached are two scripts, testing different cases:\n>>\n>> test-columns.sh - Table with a variable number of 'float8' columns.\n>>\n>> test-toast.sh - Table with a single text column.\n>>\n>> The script always sets up a publication/subscription on two instances,\n>> generates certain amount of data (~1GB for columns, ~3.2GB for TOAST),\n>> waits for it to be replicated to the replica, and measures how much data\n>> was spilled to disk with the different compression methods (off, pglz\n>> and lz4). There's a couple more metrics, but that's irrelevant here.\n> \n> It would be interesting to run the same tests with zstd: in my early\n> testing I found that zstd was able to provide a better compression\n> ratio than lz4, but seemed to use more CPU resources/is slower.\n> \n\nOh, I completely forgot about zstd. I don't think it'd substantially\nchange the conclusions, though. It might compress better/worse for some\ncases, but the overall behavior would remain the same.\n\nI can't test this right now, the testmachine is busy with some other\nstuff. But it should not be difficult to update the test scripts I\nattached and get results yourself. There's a couple hard-coded paths\nthat need to be updated, ofc.\n\n>> For the \"column\" test, it looks like this (this is in MB):\n>>\n>> rows columns distribution off pglz lz4\n>> ========================================================\n>> 100000 1000 compressible 778 20 9\n>> random 778 778 16\n>> --------------------------------------------------------\n>> 1000000 100 compressible 916 116 62\n>> random 916 916 67\n>>\n>> It's very clear that for the \"compressible\" data (which just copies the\n>> same value into all columns), both pglz and lz4 can significantly reduce\n>> the amount of data. For 1000 columns it's 780MB -> 20MB/9MB, for 100\n>> columns it's a bit less efficient, but still good.\n>>\n>> For the \"random\" data (where every column gets a random value, but rows\n>> are copied), it's a very different story - pglz does not help at all,\n>> while lz4 still massively reduces the amount of spilled data.\n>>\n>> I think the explanation is very simple - for pglz, we compress each row\n>> on it's own, there's no concept of streaming/context. If a row is\n>> compressible, it works fine, but when the row gets random, pglz can't\n>> compress it at all. 
For lz4, this does not matter, because with the\n>> streaming mode it still sees that rows are just repeated, and so can\n>> compress them efficiently.\n> \n> That's correct.\n> \n>> For TOAST test, the results look like this:\n>>\n>> distribution repeats toast off pglz lz4\n>> ===============================================================\n>> compressible 10000 lz4 14 2 1\n>> pglz 40 4 3\n>> 1000 lz4 32 16 9\n>> pglz 54 17 10\n>> ---------------------------------------------------------\n>> random 10000 lz4 3305 3305 3157\n>> pglz 3305 3305 3157\n>> 1000 lz4 3166 3162 1580\n>> pglz 3334 3326 1745\n>> ----------------------------------------------------------\n>> random2 10000 lz4 3305 3305 3157\n>> pglz 3305 3305 3158\n>> 1000 lz4 3160 3156 3010\n>> pglz 3334 3326 3172\n>>\n>> The \"repeats\" value means how long the string is - it's the number of\n>> \"md5\" hashes added to the string. The number of rows is calculated to\n>> keep the total amount of data the same. The \"toast\" column tracks what\n>> compression was used for TOAST, I was wondering if it matters.\n>>\n>> This time there are three data distributions - compressible means that\n>> each TOAST value is nicely compressible, \"random\" means each value is\n>> random (not compressible), but the rows are just copy of the same value\n>> (so on the whole there's a lot of redundancy). And \"random2\" means each\n>> row is random and unique (so not compressible at all).\n>>\n>> The table shows that with compressible TOAST values, compressing the\n>> spill file is rather useless. The reason is that ReorderBufferCompress\n>> is handling raw TOAST data, which is already compressed. Yes, it may\n>> further reduce the amount of data, but it's negligible when compared to\n>> the original amount of data.\n>>\n>> For the random cases, the spill compression is rather pointless. Yes,\n>> lz4 can reduce it to 1/2 for the shorter strings, but other than that\n>> it's not very useful.\n> \n> It's still interesting to confirm that data already compressed or\n> random data cannot be significantly compressed.\n> \n>> For a while I was thinking this approach is flawed, because it only sees\n>> and compressed changes one by one, and that seeing a batch of changes\n>> would improve this (e.g. we'd see the copied rows). But I realized lz4\n>> already does that (in the streaming mode at least), and yet it does not\n>> help very much. Presumably that depends on how large the context is. If\n>> the random string is long enough, it won't help.\n>>\n>> So maybe this approach is fine, and doing the compression at a lower\n>> layer (for the whole file), would not really improve this. Even then\n>> we'd only see a limited amount of data.\n>>\n>> Maybe the right answer to this is that compression does not help cases\n>> where most of the replicated data is TOAST, and that it can help cases\n>> with wide (and redundant) rows, or repeated rows. And that lz4 is a\n>> clearly superior choice. (This also raises the question if we want to\n>> support REORDER_BUFFER_STRAT_LZ4_REGULAR. I haven't looked into this,\n>> but doesn't that behave more like pglz, i.e. no context?)\n> \n> I'm working on a new version of this patch set that will include the\n> changes you suggested in your review. About using LZ4 regular API, the\n> goal was to use it when we cannot use the streaming API due to raw\n> data larger than LZ4 ring buffer. 
But this is something I'm going to\n> delete in the new version because I'm planning to use a similar\n> approach as we do in astreamer_lz4.c: using frames, not blocks. LZ4\n> frame API looks very similar to ZSTD's streaming API.\n> \n>> FWIW when doing these tests, it made me realize how useful would it be\n>> to track both the \"raw\" and \"spilled\" amounts. That is before/after\n>> compression. It'd make calculating compression ratio much easier.\n> \n> Yes, that's why I tried to \"fix\" the spill_bytes counter.\n> \n\nBut I think the 'fixed' counter only tracks the data after the new\ncompression, right? I'm suggesting to have two counters - one for \"raw\"\ndata (before compression) and \"compressed\" (after compression).\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Tue, 24 Sep 2024 11:57:28 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compress ReorderBuffer spill files using LZ4" } ]
[ { "msg_contents": "Hello all,\n\nI have a query that forces an out of memory error, where the OS will kill\nthe postgresql process.\nThe query plan (run immediately after a vacuum analyze) is at\nhttps://explain.depesz.com/s/ITQI#html .\n\n\nPostgreSQL version 16.3, running on RHEL 8.9, 16 vCPU, 64 GB RAM, 32 GB swap\n\nshared_buffers=8G\neffective_cache_size=24G\nmaintenance_work_mem=2G\nwork_mem=104857kB\ndefault_statistics_target = 100\nmax_worker_processes = 16\nmax_parallel_workers_per_gather = 4\nmax_parallel_workers = 16\nmax_parallel_maintenance_workers = 4\njit=off\n\n\nIt looks like the excessive memory allocation is reported in\nHashSpillContext. I've attached the dump of the memory context for the 5\nprocesses (query + 4 parallel workers) some time after query start. I also\nsee a huge number of temporary files being created. For the time being I've\nset enable_parallel_hash = 'off' and the problem went away.\n\nI've seen a potentially similar problem reported in\nhttps://www.postgresql.org/message-id/flat/20230516200052.sbg6z4ghcmsas3wv%40liskov#f6059259c7c9251fb8c17f5793a2d427\n.\n\n\nAny idea on how to identify the problem? I can reproduce it on demand.\nShould I report it pgsql-bugs?\n\nBest regards,\nRadu", "msg_date": "Thu, 6 Jun 2024 15:25:25 +0300", "msg_from": "Radu Radutiu <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql OOM" }, { "msg_contents": "On Thu, Jun 6, 2024 at 1:25 PM Radu Radutiu <[email protected]> wrote:\n\n> Hello all,\n>\n> I have a query that forces an out of memory error, where the OS will kill\n> the postgresql process.\n> The query plan (run immediately after a vacuum analyze) is at\n> https://explain.depesz.com/s/ITQI#html .\n>\n> ...\n\n>\n> Any idea on how to identify the problem? I can reproduce it on demand.\n> Should I report it pgsql-bugs?\n>\n> Best regards,\n> Radu\n>\n\nI am not qualified to answer on the OOM issue but why are you joining the\nsame table (outputrequest) 4 times (using an identical join condition)?\nThis essentially does a cross join, if an input_sequence value has say,\n1000 related rows in outputrequest, you will be getting 1000^4 rows in the\nresult set.\n\n FROM inputrequest t\n LEFT JOIN outputrequest rec_tro\n ON rec_tro.input_sequence = t.input_sequence\n LEFT JOIN inputrequest r\n ON r.originalRequest_id = t.input_sequence\n LEFT JOIN outputrequest rpl_rec_tro\n ON rpl_rec_tro.input_sequence = r.input_sequence\n LEFT JOIN outputrequest rpl_snd_tro\n ON rpl_snd_tro.reply_input_sequence = r.input_sequence\n LEFT JOIN outputrequest snd_tro\n ON snd_tro.reply_input_sequence = t.input_sequence\n\nOn Thu, Jun 6, 2024 at 1:25 PM Radu Radutiu <[email protected]> wrote:Hello all,I have a query that forces an out of memory error, where the OS will kill the postgresql process.The query plan (run immediately after a vacuum analyze) is at https://explain.depesz.com/s/ITQI#html .... Any idea on how to identify the problem? I can reproduce it on demand. Should I report it pgsql-bugs? Best regards,RaduI am not qualified to answer on the OOM issue but why are you joining the same table (outputrequest) 4 times (using an identical join condition)?This essentially does a cross join, if an input_sequence value has say, 1000 related rows in outputrequest, you will be getting 1000^4 rows in the result set. 
FROM inputrequest t\n LEFT JOIN outputrequest rec_tro\n ON rec_tro.input_sequence = t.input_sequence\n LEFT JOIN inputrequest r\n ON r.originalRequest_id = t.input_sequence\n LEFT JOIN outputrequest rpl_rec_tro\n ON rpl_rec_tro.input_sequence = r.input_sequence\n LEFT JOIN outputrequest rpl_snd_tro\n ON rpl_snd_tro.reply_input_sequence = r.input_sequence\n LEFT JOIN outputrequest snd_tro\n ON snd_tro.reply_input_sequence = t.input_sequence", "msg_date": "Thu, 6 Jun 2024 13:58:24 +0100", "msg_from": "Pantelis Theodosiou <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql OOM" }, { "msg_contents": ">\n>\n>> I am not qualified to answer on the OOM issue but why are you joining the\n> same table (outputrequest) 4 times (using an identical join condition)?\n> This essentially does a cross join, if an input_sequence value has say,\n> 1000 related rows in outputrequest, you will be getting 1000^4 rows in the\n> result set.\n>\n\nThe query itself runs fine in a reasonable time with enable_parallel_hash =\n'off'. I see two problems - one is the wrong execution plan (right after\nrunning analyze), the second and the most important is the huge memory\nusage (far exceeding work_mem and shared buffers) leading to OOM.\nSee https://explain.depesz.com/s/yAqS for the explain plan\nwith enable_parallel_hash = 'off.\n\nI am not qualified to answer on the OOM issue but why are you joining the same table (outputrequest) 4 times (using an identical join condition)?This essentially does a cross join, if an input_sequence value has say, 1000 related rows in outputrequest, you will be getting 1000^4 rows in the result set.The query itself runs fine in a reasonable time with enable_parallel_hash = 'off'. I see two problems - one is the wrong execution plan (right after running analyze), the second and the most important is the huge memory usage (far exceeding work_mem and shared buffers) leading to OOM. See https://explain.depesz.com/s/yAqS for the explain plan with enable_parallel_hash = 'off.", "msg_date": "Thu, 6 Jun 2024 17:19:34 +0300", "msg_from": "Radu Radutiu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql OOM" }, { "msg_contents": "Radu Radutiu <[email protected]> writes:\n> The query itself runs fine in a reasonable time with enable_parallel_hash =\n> 'off'. I see two problems - one is the wrong execution plan (right after\n> running analyze), the second and the most important is the huge memory\n> usage (far exceeding work_mem and shared buffers) leading to OOM.\n> See https://explain.depesz.com/s/yAqS for the explain plan\n> with enable_parallel_hash = 'off.\n\nWhat it looks like to me is that the join key column has very skewed\nstatistics, such that a large majority of the tuples end up in the\nsame hash bucket (probably they even all have identical keys). I think\nthe memory growth is coming from the executor repeatedly splitting\nthe buckets in a vain attempt to separate those tuples into multiple\nbuckets.\n\nThe planner should recognize this situation and avoid use of hash\njoin in such cases, but maybe the statistics aren't reflecting the\nproblem, or maybe there's something wrong with the logic specific\nto parallel hash join. 
You've not really provided enough information\nto diagnose why the poor choice of plan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Jun 2024 16:55:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql OOM" }, { "msg_contents": ">\n>\n>\n>> The planner should recognize this situation and avoid use of hash\n>> join in such cases, but maybe the statistics aren't reflecting the\n>> problem, or maybe there's something wrong with the logic specific\n>> to parallel hash join. You've not really provided enough information\n>> to diagnose why the poor choice of plan.\n>>\n>> regards, tom lane\n>>\n>\n> Thanks for looking into this. I'm not sure what information would be\n> needed to look at the choice of plan.\n> The statistics for the join conditions in the query would be:\n> join_condition | min_count | max_count | avg_count\n> ----------------+-----------+-----------+----------------------------\n> snd_tro | 0 | 0 | 0.000000000000000000000000\n> rpl_rec_tro | 0 | 2 | 0.99869222814474470477\n> rec_tro | 0 | 2 | 0.99869222814474470477\n> rpl_snd_tro | 0 | 0 | 0.000000000000000000000000\n> r | 0 | 1 | 0.49850916663490161653\n>\n>\n> The relevant columns for the tables are:\n> postgres=# \\d inputrequest\n> Table \"public.inputrequest\"\n> Column | Type | Collation |\n> Nullable | Default\n>\n> --------------------------+-----------------------------+-----------+----------+---------\n> input_sequence | bigint | | not\n> null |\n> msg_type | character varying(8) | | not\n> null |\n> msg_content | text | | not\n> null |\n> msg_reference | character varying(35) | |\n> |\n> originalrequest_id | bigint | |\n> |\n> receive_time | timestamp without time zone | | not\n> null |\n> related_output_sequence | bigint | |\n> |\n> msg_status | character varying(15) | |\n> |\n>\n> Indexes:\n> \"inputrequest_pkey\" PRIMARY KEY, btree (input_sequence)\n> \"inputrequest_originalrequest_id_idx\" btree (originalrequest_id)\n>\n> postgres=# \\d outputrequest\n> Table \"public.outputrequest\"\n> Column | Type | Collation |\n> Nullable | Default\n>\n> ------------------------+-----------------------------+-----------+----------+---------\n> output_sequence | bigint | | not\n> null |\n> input_sequence | bigint | |\n> |\n> msg_type | character varying(8) | |\n> |\n> msg_content | text | | not\n> null |\n> msg_reference | character varying(35) | |\n> |\n> reply_input_sequence | bigint | |\n> |\n> status | integer | | not\n> null |\n> related_input_sequence | bigint | |\n> |\n> Indexes:\n> \"outputrequest_pkey\" PRIMARY KEY, btree (output_sequence)\n> \"outputrequest_input_sequence_idx\" btree (input_sequence)\n> \"outputrequest_reply_input_sequence_idx\" btree (reply_input_sequence)\n>\n>\nI wonder if our choice of primary keys (input_sequence and output_sequence)\nhas something to do with the skew in the hash bucket distribution. We use\nthe following format: yyyymmdd????????xx , where ???????? is more or less a\nsequence and xx is the node generating the id, i.e. 01,02,etc (with only\none or two values in the dataset).\n\nI wonder if it would be difficult to have an upper limit on the private\nmemory that can be allocated by one process (or all processes similar to\nOracle's pga_aggregate_limit). 
I would rather have one query failing with\nan error message instead of postgres eating up all memory and swap on the\nserver.\n\nBest regards,\nRadu\n\n\r\nThe planner should recognize this situation and avoid use of hash\r\njoin in such cases, but maybe the statistics aren't reflecting the\r\nproblem, or maybe there's something wrong with the logic specific\r\nto parallel hash join.  You've not really provided enough information\r\nto diagnose why the poor choice of plan.\n\r\n                        regards, tom laneThanks for looking into this. I'm not sure what information would be needed to look at the choice of plan.The statistics for the join conditions in the query would be: join_condition | min_count | max_count |         avg_count          ----------------+-----------+-----------+---------------------------- snd_tro        |         0 |         0 | 0.000000000000000000000000 rpl_rec_tro    |         0 |         2 |     0.99869222814474470477 rec_tro        |         0 |         2 |     0.99869222814474470477 rpl_snd_tro    |         0 |         0 | 0.000000000000000000000000 r              |         0 |         1 |     0.49850916663490161653 The relevant columns for the tables are:postgres=# \\d inputrequest                               Table \"public.inputrequest\"          Column          |            Type             | Collation | Nullable | Default --------------------------+-----------------------------+-----------+----------+--------- input_sequence           | bigint                      |           | not null |  msg_type                 | character varying(8)        |           | not null |  msg_content              | text                        |           | not null |  msg_reference            | character varying(35)       |           |          |  originalrequest_id       | bigint                      |           |          |  receive_time             | timestamp without time zone |           | not null |  related_output_sequence  | bigint                      |           |          |  msg_status               | character varying(15)       |           |          |  Indexes:    \"inputrequest_pkey\" PRIMARY KEY, btree (input_sequence)    \"inputrequest_originalrequest_id_idx\" btree (originalrequest_id)postgres=# \\d outputrequest                             Table \"public.outputrequest\"         Column         |            Type             | Collation | Nullable | Default ------------------------+-----------------------------+-----------+----------+--------- output_sequence        | bigint                      |           | not null |  input_sequence         | bigint                      |           |          |  msg_type               | character varying(8)        |           |          |  msg_content            | text                        |           | not null |  msg_reference          | character varying(35)       |           |          |  reply_input_sequence   | bigint                      |           |          |  status                 | integer                     |           | not null |  related_input_sequence | bigint                      |           |          | Indexes:    \"outputrequest_pkey\" PRIMARY KEY, btree (output_sequence)    \"outputrequest_input_sequence_idx\" btree (input_sequence)    \"outputrequest_reply_input_sequence_idx\" btree (reply_input_sequence)I wonder if our choice of primary keys (input_sequence and output_sequence) has something to do with the skew in the hash bucket distribution. 
We use the following format: yyyymmdd????????xx , where ???????? is more or less a sequence and xx is the node generating the id, i.e. 01,02,etc (with only one or two values in the dataset).I wonder if it would be difficult to have an upper limit on the private memory that can be allocated by one process (or all processes similar to Oracle's pga_aggregate_limit). I would rather have one query failing with an error message instead of postgres eating up all memory and swap on the server.   Best regards,Radu", "msg_date": "Fri, 7 Jun 2024 17:28:18 +0300", "msg_from": "Radu Radutiu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql OOM" }, { "msg_contents": "Hi,\n\nOn 2024-06-06 15:25:25 +0300, Radu Radutiu wrote:\n> I have a query that forces an out of memory error, where the OS will kill\n> the postgresql process.\n\nFWIW, it can be useful to configure the OS with strict memory overcommit. That\ncauses postgres to fail more gracefully, because the OOM killer won't be\ninvoked.\n\n\n> The query plan (run immediately after a vacuum analyze) is at\n> https://explain.depesz.com/s/ITQI#html .\n\nCan you get EXPLAIN (ANALYZE, BUFFERS) to complete if you reduce the number of\nworkers? It'd be useful to get some of the information about the actual\nnumbers of tuples etc.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Jun 2024 09:59:07 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql OOM" }, { "msg_contents": "Hi,\n\nOn 2024-06-06 13:58:24 +0100, Pantelis Theodosiou wrote:\n> I am not qualified to answer on the OOM issue but why are you joining the\n> same table (outputrequest) 4 times (using an identical join condition)?\n\nThe conditions aren't actually the same\n rpl_rec_tro. input_sequence = r.input_sequence\n rpl_snd_tro.reply_input_sequence = r.input_sequence\n snd_tro. reply_input_sequence = t.input_sequence\n\nFirst two are r.input_sequence to different columns, the third one also uses\nreply_input_sequence but joins to t, not r.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Jun 2024 10:04:19 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql OOM" }, { "msg_contents": "On Fri, Jun 7, 2024 at 7:59 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-06-06 15:25:25 +0300, Radu Radutiu wrote:\n> > I have a query that forces an out of memory error, where the OS will kill\n> > the postgresql process.\n>\n> FWIW, it can be useful to configure the OS with strict memory overcommit.\n> That\n> causes postgres to fail more gracefully, because the OOM killer won't be\n> invoked.\n>\n>\n> > The query plan (run immediately after a vacuum analyze) is at\n> > https://explain.depesz.com/s/ITQI#html .\n>\n> Can you get EXPLAIN (ANALYZE, BUFFERS) to complete if you reduce the\n> number of\n> workers? It'd be useful to get some of the information about the actual\n> numbers of tuples etc.\n>\n>\n> Hi,\nI've tried first giving more memory to the OS and mounting a tmpfs\nin pgsql_tmp. 
It didn't work, I got\nERROR: invalid DSA memory alloc request size 1140850688\nCONTEXT: parallel worker\nI've seen around 2 million temporary files created before the crash.\nWith work_mem 100MB I was not able to get it to work with 2 parallel\nworkers.\nNext, I've increased work_mem to 200MB and now (with extra memory and\ntmpfs) it finished: https://explain.depesz.com/s/NnRC\n\nRadu\n\nOn Fri, Jun 7, 2024 at 7:59 PM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-06-06 15:25:25 +0300, Radu Radutiu wrote:\n> I have a query that forces an out of memory error, where the OS will kill\n> the postgresql process.\n\nFWIW, it can be useful to configure the OS with strict memory overcommit. That\ncauses postgres to fail more gracefully, because the OOM killer won't be\ninvoked.\n\n\n> The query plan (run immediately after a vacuum analyze) is at\n> https://explain.depesz.com/s/ITQI#html .\n\nCan you get EXPLAIN (ANALYZE, BUFFERS) to complete if you reduce the number of\nworkers? It'd be useful to get some of the information about the actual\nnumbers of tuples etc.\n\nHi, I've tried first giving more memory to the OS and mounting a tmpfs in  pgsql_tmp. It didn't  work, I got ERROR:  invalid DSA memory alloc request size 1140850688CONTEXT:  parallel workerI've seen around 2 million temporary files created before the crash.With work_mem 100MB I was not able to get it to work with 2 parallel workers. Next, I've increased work_mem to 200MB and now (with extra memory and tmpfs) it finished: https://explain.depesz.com/s/NnRC Radu", "msg_date": "Fri, 7 Jun 2024 21:42:58 +0300", "msg_from": "Radu Radutiu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql OOM" }, { "msg_contents": ">\n>\n> FWIW, it can be useful to configure the OS with strict memory overcommit.\n> That\n> causes postgres to fail more gracefully, because the OOM killer won't be\n> invoked.\n>\n\nIn the current setup the database is used as an embedded db, with the\napplication sharing the same host as the database. Setting the memory\novercommit affects the other application, but configuring LimitAS for the\npostgresql systemd service should work.\nDoes this mean that the fact that this query uses more than 100x the\nconfigured work_mem is not considered a bug? Should I create a bug report?\n\nFWIW, it can be useful to configure the OS with strict memory overcommit. That\ncauses postgres to fail more gracefully, because the OOM killer won't be\ninvoked.In the current setup the database is used as an embedded db, with the application sharing the same host as the database. Setting the memory overcommit affects the other application, but configuring LimitAS for the postgresql systemd service should work. Does this mean that the fact that this query uses more than 100x the configured work_mem is not considered a bug? 
Should I create a bug report?", "msg_date": "Mon, 10 Jun 2024 16:13:06 +0300", "msg_from": "Radu Radutiu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql OOM" }, { "msg_contents": "Hi,\n\nOn 2024-06-07 21:42:58 +0300, Radu Radutiu wrote:\n> On Fri, Jun 7, 2024 at 7:59 PM Andres Freund <[email protected]> wrote:\n> > On 2024-06-06 15:25:25 +0300, Radu Radutiu wrote:\n> > > I have a query that forces an out of memory error, where the OS will kill\n> > > the postgresql process.\n> >\n> > FWIW, it can be useful to configure the OS with strict memory overcommit.\n> > That\n> > causes postgres to fail more gracefully, because the OOM killer won't be\n> > invoked.\n> >\n> >\n> > > The query plan (run immediately after a vacuum analyze) is at\n> > > https://explain.depesz.com/s/ITQI#html .\n> >\n> > Can you get EXPLAIN (ANALYZE, BUFFERS) to complete if you reduce the\n> > number of\n> > workers? It'd be useful to get some of the information about the actual\n> > numbers of tuples etc.\n\n> I've tried first giving more memory to the OS and mounting a tmpfs\n> in pgsql_tmp. It didn't work, I got\n> ERROR: invalid DSA memory alloc request size 1_140_850_688\n> CONTEXT: parallel worker\n> I've seen around 2 million temporary files created before the crash.\n> With work_mem 100MB I was not able to get it to work with 2 parallel\n> workers.\n> Next, I've increased work_mem to 200MB and now (with extra memory and\n> tmpfs) it finished: https://explain.depesz.com/s/NnRC\n\nThat's helpful, thanks!\n\nOne thing to note is that postgres' work_mem is confusing - it applies to\nindividual query execution nodes, not the whole query. Additionally, when you\nuse parallel workers, each of the parallel workers can use the \"full\"\nwork_mem, rather than work_mem being evenly distributed across workers.\n\nWithin that, the memory usage in the EXPLAIN ANALYZE isn't entirely unexpected\n(I'd say it's unreasonable if you're not a postgres dev, but that's a\ndifferent issue).\n\nWe can see each of the Hash nodes use ~1GB, which is due to\n(1 leader + 4 workers) * work_mem = 5 * 200MB = 1GB.\n\nIn this specific query we probably could free the memory in the \"lower\" hash\njoin nodes once the node directly above has finished building, but we don't\nhave the logic for that today.\n\n\nOf course that doesn't explain why the memory usage / temp file creation is so\nextreme with a lower limit / fewer workers. There aren't any bad statistics\nafaics, nor can I really see a potential for a significant skew, it looks to\nme that the hashtables are all built on a single text field and that nearly\nall the input rows are distinct.\n\n\nCould you post the table definition (\\d+) and the database definition\n(\\l). One random guess I have is that you ended up with a \"non-deterministic\"\ncollation for that column and that we end up with a bad hash due to that.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 Jun 2024 17:49:20 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql OOM" }, { "msg_contents": "Hi,\n\n\n> That's helpful, thanks!\n>\n> One thing to note is that postgres' work_mem is confusing - it applies to\n> individual query execution nodes, not the whole query. 
Additionally, when\n> you\n> use parallel workers, each of the parallel workers can use the \"full\"\n> work_mem, rather than work_mem being evenly distributed across workers.\n>\n> Within that, the memory usage in the EXPLAIN ANALYZE isn't entirely\n> unexpected\n> (I'd say it's unreasonable if you're not a postgres dev, but that's a\n> different issue).\n>\n> We can see each of the Hash nodes use ~1GB, which is due to\n> (1 leader + 4 workers) * work_mem = 5 * 200MB = 1GB.\n>\n> In this specific query we probably could free the memory in the \"lower\"\n> hash\n> join nodes once the node directly above has finished building, but we don't\n> have the logic for that today.\n>\n\nI would understand 1 GB, even 2GB (considering hash_mem_multiplier) but the\nserver ran out of memory with more than 64 GB.\n\n>\n> Of course that doesn't explain why the memory usage / temp file creation\n> is so\n> extreme with a lower limit / fewer workers. There aren't any bad\n> statistics\n> afaics, nor can I really see a potential for a significant skew, it looks\n> to\n> me that the hashtables are all built on a single text field and that nearly\n> all the input rows are distinct.\n>\n> Could you post the table definition (\\d+) and the database definition\n> (\\l). One random guess I have is that you ended up with a\n> \"non-deterministic\"\n> collation for that column and that we end up with a bad hash due to that.\n>\n\nI was able to eliminate the columns not involved in the query while still\nretaining the problematic behavior (that's the reason for the new table\nnames):\npostgres=# \\d inreq\n Table \"public.inreq\"\n Column | Type | Collation |\nNullable | Default\n-------------------------+-----------------------------+-----------+----------+---------\n input_sequence | bigint | | not\nnull |\n msg_type | character varying(8) | |\n |\n originalrequest_id | bigint | |\n |\n receive_time | timestamp without time zone | |\n |\n related_output_sequence | bigint | |\n |\n msg_status | character varying(15) | |\n |\nIndexes:\n \"inreq_pkey\" PRIMARY KEY, btree (input_sequence)\n \"inreq_originalrequest_id_idx\" btree (originalrequest_id)\n\npostgres=# \\d outreq\n Table \"public.outreq\"\n Column | Type | Collation | Nullable | Default\n------------------------+--------+-----------+----------+---------\n output_sequence | bigint | | not null |\n input_sequence | bigint | | |\n reply_input_sequence | bigint | | |\n related_input_sequence | bigint | | |\nIndexes:\n \"outreq_pkey\" PRIMARY KEY, btree (output_sequence)\n \"outreq_input_sequence_idx\" btree (input_sequence)\n \"outreq_reply_input_sequence_idx\" btree (reply_input_sequence)\npostgres=# SELECT datname, datcollate FROM pg_database WHERE datname =\ncurrent_database();\n datname | datcollate\n----------+-------------\n postgres | en_US.UTF-8\n(1 row)\n\nA dump of the two tables above can be found at\nhttps://drive.google.com/file/d/1ep1MYjNzlFaICL3GlPZaMdpOxRG6WCGz/view?usp=sharing\n(compressed size 1GB; size of database after import 14 GB ).\n\n# prepare my_query (timestamp,bigint) as SELECT t.input_sequence,\nrec_tro.output_sequence, r.input_sequence, rpl_rec_tro.output_sequence,\nrpl_snd_tro.output_sequence, snd_tro.output_sequence, t.msg_type FROM\ninreq t LEFT JOIN outreq rec_tro ON rec_tro.input_sequence =\nt.input_sequence LEFT JOIN inreq r ON r.originalRequest_id =\nt.input_sequence LEFT JOIN outreq rpl_rec_tro ON\nrpl_rec_tro.input_sequence = r.input_sequence LEFT JOIN outreq rpl_snd_tro\nON rpl_snd_tro.reply_input_sequence = 
r.input_sequence LEFT JOIN outreq\nsnd_tro ON snd_tro.reply_input_sequence = t.input_sequence WHERE\nt.receive_time < $1 AND t.input_sequence < $2 AND t.msg_status IN\n('COMPLETED', 'REJECTED') ORDER BY t.msg_status DESC, t.input_sequence ;\n# EXPLAIN (ANALYZE,BUFFERS) EXECUTE my_query('2024-05-18 00:00:00',\n202406020168279904);\n\nBest Regards,\nRadu\n\nHi,\nThat's helpful, thanks!\n\nOne thing to note is that postgres' work_mem is confusing - it applies to\nindividual query execution nodes, not the whole query. Additionally, when you\nuse parallel workers, each of the parallel workers can use the \"full\"\nwork_mem, rather than work_mem being evenly distributed across workers.\n\nWithin that, the memory usage in the EXPLAIN ANALYZE isn't entirely unexpected\n(I'd say it's unreasonable if you're not a postgres dev, but that's a\ndifferent issue).\n\nWe can see each of the Hash nodes use ~1GB, which is due to\n(1 leader + 4 workers) * work_mem = 5 * 200MB = 1GB.\n\nIn this specific query we probably could free the memory in the \"lower\" hash\njoin nodes once the node directly above has finished building, but we don't\nhave the logic for that today. I would understand 1 GB, even 2GB (considering hash_mem_multiplier) but the server ran out of memory with more than 64 GB. \n\nOf course that doesn't explain why the memory usage / temp file creation is so\nextreme with a lower limit / fewer workers.  There aren't any bad statistics\nafaics, nor can I really see a potential for a significant skew, it looks to\nme that the hashtables are all built on a single text field and that nearly\nall the input rows are distinct.\nCould you post the table definition (\\d+) and the database definition\n(\\l). One random guess I have is that you ended up with a \"non-deterministic\"\ncollation for that column and that we end up with a bad hash due to that.I was  able to eliminate the columns not involved in the query while still retaining the problematic behavior (that's the reason for the new table names):postgres=# \\d inreq                                  Table \"public.inreq\"         Column          |            Type             | Collation | Nullable | Default -------------------------+-----------------------------+-----------+----------+--------- input_sequence          | bigint                      |           | not null |  msg_type                | character varying(8)        |           |          |  originalrequest_id      | bigint                      |           |          |  receive_time            | timestamp without time zone |           |          |  related_output_sequence | bigint                      |           |          |  msg_status              | character varying(15)       |           |          | Indexes:    \"inreq_pkey\" PRIMARY KEY, btree (input_sequence)    \"inreq_originalrequest_id_idx\" btree (originalrequest_id)postgres=# \\d outreq                      Table \"public.outreq\"         Column         |  Type  | Collation | Nullable | Default ------------------------+--------+-----------+----------+--------- output_sequence        | bigint |           | not null |  input_sequence         | bigint |           |          |  reply_input_sequence   | bigint |           |          |  related_input_sequence | bigint |           |          | Indexes:    \"outreq_pkey\" PRIMARY KEY, btree (output_sequence)    \"outreq_input_sequence_idx\" btree (input_sequence)    \"outreq_reply_input_sequence_idx\" btree (reply_input_sequence)postgres=# SELECT datname, datcollate FROM pg_database 
WHERE datname = current_database(); datname  | datcollate  ----------+------------- postgres | en_US.UTF-8(1 row)A dump of the two tables above can be found at  https://drive.google.com/file/d/1ep1MYjNzlFaICL3GlPZaMdpOxRG6WCGz/view?usp=sharing (compressed size 1GB; size of database after import 14 GB ).# prepare my_query (timestamp,bigint) as SELECT  t.input_sequence, rec_tro.output_sequence, r.input_sequence, rpl_rec_tro.output_sequence, rpl_snd_tro.output_sequence, snd_tro.output_sequence, t.msg_type  FROM inreq t  LEFT JOIN outreq rec_tro ON rec_tro.input_sequence = t.input_sequence LEFT JOIN inreq r ON r.originalRequest_id = t.input_sequence   LEFT JOIN outreq rpl_rec_tro ON rpl_rec_tro.input_sequence = r.input_sequence  LEFT JOIN outreq rpl_snd_tro ON rpl_snd_tro.reply_input_sequence = r.input_sequence  LEFT JOIN outreq snd_tro ON snd_tro.reply_input_sequence = t.input_sequence  WHERE t.receive_time < $1 AND t.input_sequence < $2  AND t.msg_status IN ('COMPLETED', 'REJECTED')  ORDER BY t.msg_status DESC, t.input_sequence ;# EXPLAIN (ANALYZE,BUFFERS)  EXECUTE my_query('2024-05-18 00:00:00', 202406020168279904);Best Regards, Radu", "msg_date": "Wed, 12 Jun 2024 14:28:56 +0300", "msg_from": "Radu Radutiu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql OOM" } ]
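On the idea of an upper limit for per-process private memory raised in this thread: work_mem is a per-node, per-process limit, so a leader plus four workers running a parallel hash can legitimately use several times work_mem, and PostgreSQL itself has no equivalent of pga_aggregate_limit. The operating system can impose one, though. The sketch below shows the mechanism behind systemd's LimitAS= directive, setrlimit(RLIMIT_AS); the 2 GiB cap is arbitrary and chosen purely for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int
main(void)
{
    /*
     * Cap this process's address space at 2 GiB.  Once the cap is hit,
     * malloc() returns NULL (a backend would report an ordinary
     * "out of memory" error) instead of the kernel's OOM killer
     * terminating the process.
     */
    struct rlimit rl;

    rl.rlim_cur = 2UL * 1024 * 1024 * 1024;
    rl.rlim_max = rl.rlim_cur;
    if (setrlimit(RLIMIT_AS, &rl) != 0)
    {
        perror("setrlimit");
        return 1;
    }

    /* This allocation exceeds the cap and is expected to fail. */
    void *p = malloc(3UL * 1024 * 1024 * 1024);

    printf("malloc(3 GiB): %s\n", p ? "succeeded" : "failed as expected");
    free(p);
    return 0;
}

With such a cap in place an oversized allocation simply fails, and combined with strict overcommit (vm.overcommit_memory=2), as suggested earlier in the thread, the server is no longer killed outright when one query misbehaves.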
[ { "msg_contents": "Hello everyone,\n\nAt present, we use MVCC snapshots to identify dependent objects. This\nimplies that if a new dependent object is inserted within a transaction\nthat is still ongoing, our search for dependent objects won't include this\nrecently added one. Consequently, if someone attempts to drop the\nreferenced object, it will be dropped, and when the ongoing transaction\ncompletes, we will end up having an entry for a referenced object that has\nalready been dropped. This situation can lead to an inconsistent state.\nBelow is an example illustrating this scenario:\n\nSession 1:\n- create table t1(a int);\n- insert into t1 select i from generate_series(1, 10000000) i;\n- create extension btree_gist;\n- create index i1 on t1 using gist( a );\n\nSession 2: (While the index creation in session 1 is in progress, drop the\nbtree_gist extension)\n- drop extension btree_gist;\n\nAbove command succeeds and so does the create index command running in\nsession 1, post this, if we try running anything on table t1, i1, it fails\nwith an error: \"cache lookup failed for opclass ...\"\n\nAttached is the patch that I have tried, which seems to be working for me.\nIt's not polished and thoroughly tested, but just sharing here to clarify\nwhat I am trying to suggest. Please have a look and let me know your\nthoughts.\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Thu, 6 Jun 2024 17:59:00 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "How about using dirty snapshots to locate dependent objects?" }, { "msg_contents": "On Thu, Jun 6, 2024 at 5:59 PM Ashutosh Sharma <[email protected]> wrote:\n>\n> Hello everyone,\n>\n> At present, we use MVCC snapshots to identify dependent objects. This implies that if a new dependent object is inserted within a transaction that is still ongoing, our search for dependent objects won't include this recently added one. Consequently, if someone attempts to drop the referenced object, it will be dropped, and when the ongoing transaction completes, we will end up having an entry for a referenced object that has already been dropped. This situation can lead to an inconsistent state. Below is an example illustrating this scenario:\n\nI don't think it's correct to allow the index to be dropped while a\ntransaction is creating it. Instead, the right solution should be for\nthe create index operation to protect the object it is using from\nbeing dropped. Specifically, the create index operation should acquire\na shared lock on the Access Method (AM) to ensure it doesn't get\ndropped concurrently while the transaction is still in progress.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 18:20:30 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How about using dirty snapshots to locate dependent objects?" }, { "msg_contents": "On Thu, Jun 06, 2024 at 05:59:00PM +0530, Ashutosh Sharma wrote:\n> Hello everyone,\n> \n> At present, we use MVCC snapshots to identify dependent objects. This\n> implies that if a new dependent object is inserted within a transaction\n> that is still ongoing, our search for dependent objects won't include this\n> recently added one. Consequently, if someone attempts to drop the\n> referenced object, it will be dropped, and when the ongoing transaction\n> completes, we will end up having an entry for a referenced object that has\n> already been dropped. 
This situation can lead to an inconsistent state.\n> Below is an example illustrating this scenario:\n> \n> Session 1:\n> - create table t1(a int);\n> - insert into t1 select i from generate_series(1, 10000000) i;\n> - create extension btree_gist;\n> - create index i1 on t1 using gist( a );\n> \n> Session 2: (While the index creation in session 1 is in progress, drop the\n> btree_gist extension)\n> - drop extension btree_gist;\n> \n> Above command succeeds and so does the create index command running in\n> session 1, post this, if we try running anything on table t1, i1, it fails\n> with an error: \"cache lookup failed for opclass ...\"\n> \n> Attached is the patch that I have tried, which seems to be working for me.\n> It's not polished and thoroughly tested, but just sharing here to clarify\n> what I am trying to suggest. Please have a look and let me know your\n> thoughts.\n\nThanks for the patch proposal!\n\nThe patch does not fix the other way around:\n\n- session 1: BEGIN; DROP extension btree_gist;\n- session 2: create index i1 on t1 using gist( a );\n- session 1: commits while session 2 is creating the index\n\nand does not address all the possible orphaned dependencies cases.\n\nThere is an ongoing thread (see [1]) to fix the orphaned dependencies issue.\n\nv9 attached in [1] fixes the case you describe here.\n\n[1]: https://www.postgresql.org/message-id/flat/ZiYjn0eVc7pxVY45%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 13:27:44 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How about using dirty snapshots to locate dependent objects?" }, { "msg_contents": "On Thu, Jun 6, 2024 at 6:20 PM Dilip Kumar <[email protected]> wrote:\n\n> On Thu, Jun 6, 2024 at 5:59 PM Ashutosh Sharma <[email protected]>\n> wrote:\n> >\n> > Hello everyone,\n> >\n> > At present, we use MVCC snapshots to identify dependent objects. This\n> implies that if a new dependent object is inserted within a transaction\n> that is still ongoing, our search for dependent objects won't include this\n> recently added one. Consequently, if someone attempts to drop the\n> referenced object, it will be dropped, and when the ongoing transaction\n> completes, we will end up having an entry for a referenced object that has\n> already been dropped. This situation can lead to an inconsistent state.\n> Below is an example illustrating this scenario:\n>\n> I don't think it's correct to allow the index to be dropped while a\n> transaction is creating it. Instead, the right solution should be for\n> the create index operation to protect the object it is using from\n> being dropped. 
Specifically, the create index operation should acquire\n> a shared lock on the Access Method (AM) to ensure it doesn't get\n> dropped concurrently while the transaction is still in progress.\n>\n\nIf I'm following you correctly, that's exactly what the patch is trying to\ndo; while the index creation is in progress, if someone tries to drop the\nobject referenced by the index under creation, the referenced object being\ndropped is able to know about the dependent object (in this case the index\nbeing created) using dirty snapshot and hence, it is unable to acquire the\nlock on the dependent object, and as a result of that, it is unable to drop\nit.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Jun 6, 2024 at 6:20 PM Dilip Kumar <[email protected]> wrote:On Thu, Jun 6, 2024 at 5:59 PM Ashutosh Sharma <[email protected]> wrote:\n>\n> Hello everyone,\n>\n> At present, we use MVCC snapshots to identify dependent objects. This implies that if a new dependent object is inserted within a transaction that is still ongoing, our search for dependent objects won't include this recently added one. Consequently, if someone attempts to drop the referenced object, it will be dropped, and when the ongoing transaction completes, we will end up having an entry for a referenced object that has already been dropped. This situation can lead to an inconsistent state. Below is an example illustrating this scenario:\n\nI don't think it's correct to allow the index to be dropped while a\ntransaction is creating it. Instead, the right solution should be for\nthe create index operation to protect the object it is using from\nbeing dropped. Specifically, the create index operation should acquire\na shared lock on the Access Method (AM) to ensure it doesn't get\ndropped concurrently while the transaction is still in progress.If I'm following you correctly, that's exactly what the patch is trying to do; while the index creation is in progress, if someone tries to drop the object referenced by the index under creation, the referenced object being dropped is able to know about the dependent object (in this case the index being created) using dirty snapshot and hence, it is unable to acquire the lock on the dependent object, and as a result of that, it is unable to drop it. --With Regards,Ashutosh Sharma.", "msg_date": "Thu, 6 Jun 2024 19:39:31 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How about using dirty snapshots to locate dependent objects?" }, { "msg_contents": "On Thu, Jun 6, 2024 at 6:57 PM Bertrand Drouvot <\[email protected]> wrote:\n\n> On Thu, Jun 06, 2024 at 05:59:00PM +0530, Ashutosh Sharma wrote:\n> > Hello everyone,\n> >\n> > At present, we use MVCC snapshots to identify dependent objects. This\n> > implies that if a new dependent object is inserted within a transaction\n> > that is still ongoing, our search for dependent objects won't include\n> this\n> > recently added one. Consequently, if someone attempts to drop the\n> > referenced object, it will be dropped, and when the ongoing transaction\n> > completes, we will end up having an entry for a referenced object that\n> has\n> > already been dropped. 
This situation can lead to an inconsistent state.\n> > Below is an example illustrating this scenario:\n> >\n> > Session 1:\n> > - create table t1(a int);\n> > - insert into t1 select i from generate_series(1, 10000000) i;\n> > - create extension btree_gist;\n> > - create index i1 on t1 using gist( a );\n> >\n> > Session 2: (While the index creation in session 1 is in progress, drop\n> the\n> > btree_gist extension)\n> > - drop extension btree_gist;\n> >\n> > Above command succeeds and so does the create index command running in\n> > session 1, post this, if we try running anything on table t1, i1, it\n> fails\n> > with an error: \"cache lookup failed for opclass ...\"\n> >\n> > Attached is the patch that I have tried, which seems to be working for\n> me.\n> > It's not polished and thoroughly tested, but just sharing here to clarify\n> > what I am trying to suggest. Please have a look and let me know your\n> > thoughts.\n>\n> Thanks for the patch proposal!\n>\n> The patch does not fix the other way around:\n>\n> - session 1: BEGIN; DROP extension btree_gist;\n> - session 2: create index i1 on t1 using gist( a );\n> - session 1: commits while session 2 is creating the index\n>\n> and does not address all the possible orphaned dependencies cases.\n>\n> There is an ongoing thread (see [1]) to fix the orphaned dependencies\n> issue.\n>\n> v9 attached in [1] fixes the case you describe here.\n>\n> [1]:\n> https://www.postgresql.org/message-id/flat/ZiYjn0eVc7pxVY45%40ip-10-97-1-34.eu-west-3.compute.internal\n\n\nI see. Thanks for sharing this. I can take a look at this and help in\nwhatever way I can.\n\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Jun 6, 2024 at 6:57 PM Bertrand Drouvot <[email protected]> wrote:On Thu, Jun 06, 2024 at 05:59:00PM +0530, Ashutosh Sharma wrote:\n> Hello everyone,\n> \n> At present, we use MVCC snapshots to identify dependent objects. This\n> implies that if a new dependent object is inserted within a transaction\n> that is still ongoing, our search for dependent objects won't include this\n> recently added one. Consequently, if someone attempts to drop the\n> referenced object, it will be dropped, and when the ongoing transaction\n> completes, we will end up having an entry for a referenced object that has\n> already been dropped. This situation can lead to an inconsistent state.\n> Below is an example illustrating this scenario:\n> \n> Session 1:\n> - create table t1(a int);\n> - insert into t1 select i from generate_series(1, 10000000) i;\n> - create extension btree_gist;\n> - create index i1 on t1 using gist( a );\n> \n> Session 2: (While the index creation in session 1 is in progress, drop the\n> btree_gist extension)\n> - drop extension btree_gist;\n> \n> Above command succeeds and so does the create index command running in\n> session 1, post this, if we try running anything on table t1, i1, it fails\n> with an error: \"cache lookup failed for opclass ...\"\n> \n> Attached is the patch that I have tried, which seems to be working for me.\n> It's not polished and thoroughly tested, but just sharing here to clarify\n> what I am trying to suggest. 
Please have a look and let me know your\n> thoughts.\n\nThanks for the patch proposal!\n\nThe patch does not fix the other way around:\n\n- session 1: BEGIN; DROP extension btree_gist;\n- session 2: create index i1 on t1 using gist( a );\n- session 1: commits while session 2 is creating the index\n\nand does not address all the possible orphaned dependencies cases.\n\nThere is an ongoing thread (see [1]) to fix the orphaned dependencies issue.\n\nv9 attached in [1] fixes the case you describe here.\n\n[1]: https://www.postgresql.org/message-id/flat/ZiYjn0eVc7pxVY45%40ip-10-97-1-34.eu-west-3.compute.internalI see. Thanks for sharing this. I can take a look at this and help in whatever way I can.With Regards,Ashutosh Sharma.", "msg_date": "Thu, 6 Jun 2024 19:51:22 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How about using dirty snapshots to locate dependent objects?" }, { "msg_contents": "On Thu, Jun 6, 2024 at 7:39 PM Ashutosh Sharma <[email protected]> wrote:\n>\n> On Thu, Jun 6, 2024 at 6:20 PM Dilip Kumar <[email protected]> wrote:\n>>\n>> On Thu, Jun 6, 2024 at 5:59 PM Ashutosh Sharma <[email protected]> wrote:\n>> >\n>> > Hello everyone,\n>> >\n>> > At present, we use MVCC snapshots to identify dependent objects. This implies that if a new dependent object is inserted within a transaction that is still ongoing, our search for dependent objects won't include this recently added one. Consequently, if someone attempts to drop the referenced object, it will be dropped, and when the ongoing transaction completes, we will end up having an entry for a referenced object that has already been dropped. This situation can lead to an inconsistent state. Below is an example illustrating this scenario:\n>>\n>> I don't think it's correct to allow the index to be dropped while a\n>> transaction is creating it. Instead, the right solution should be for\n>> the create index operation to protect the object it is using from\n>> being dropped. Specifically, the create index operation should acquire\n>> a shared lock on the Access Method (AM) to ensure it doesn't get\n>> dropped concurrently while the transaction is still in progress.\n>\n>\n> If I'm following you correctly, that's exactly what the patch is trying to do; while the index creation is in progress, if someone tries to drop the object referenced by the index under creation, the referenced object being dropped is able to know about the dependent object (in this case the index being created) using dirty snapshot and hence, it is unable to acquire the lock on the dependent object, and as a result of that, it is unable to drop it.\n\nYou are aiming for the same outcome, but not in the conventional way.\nIn my opinion, the correct approach is not to find objects being\ncreated using a dirty snapshot. Instead, when creating an object, you\nshould acquire a proper lock on any dependent objects to prevent them\nfrom being dropped during the creation process. For instance, when\ncreating an index that depends on the btree_gist access method, the\ncreate index operation should protect btree_gist from being dropped by\nacquiring the appropriate lock. 
It is not the responsibility of the\ndrop extension to identify in-progress index creations.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 10:06:27 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How about using dirty snapshots to locate dependent objects?" }, { "msg_contents": "On Fri, Jun 7, 2024 at 10:06 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Jun 6, 2024 at 7:39 PM Ashutosh Sharma <[email protected]> wrote:\n> >\n> > On Thu, Jun 6, 2024 at 6:20 PM Dilip Kumar <[email protected]> wrote:\n> >>\n> >> On Thu, Jun 6, 2024 at 5:59 PM Ashutosh Sharma <[email protected]> wrote:\n> >> >\n> >> > Hello everyone,\n> >> >\n> >> > At present, we use MVCC snapshots to identify dependent objects. This implies that if a new dependent object is inserted within a transaction that is still ongoing, our search for dependent objects won't include this recently added one. Consequently, if someone attempts to drop the referenced object, it will be dropped, and when the ongoing transaction completes, we will end up having an entry for a referenced object that has already been dropped. This situation can lead to an inconsistent state. Below is an example illustrating this scenario:\n> >>\n> >> I don't think it's correct to allow the index to be dropped while a\n> >> transaction is creating it. Instead, the right solution should be for\n> >> the create index operation to protect the object it is using from\n> >> being dropped. Specifically, the create index operation should acquire\n> >> a shared lock on the Access Method (AM) to ensure it doesn't get\n> >> dropped concurrently while the transaction is still in progress.\n> >\n> >\n> > If I'm following you correctly, that's exactly what the patch is trying to do; while the index creation is in progress, if someone tries to drop the object referenced by the index under creation, the referenced object being dropped is able to know about the dependent object (in this case the index being created) using dirty snapshot and hence, it is unable to acquire the lock on the dependent object, and as a result of that, it is unable to drop it.\n>\n> You are aiming for the same outcome, but not in the conventional way.\n> In my opinion, the correct approach is not to find objects being\n> created using a dirty snapshot. Instead, when creating an object, you\n> should acquire a proper lock on any dependent objects to prevent them\n> from being dropped during the creation process. For instance, when\n> creating an index that depends on the btree_gist access method, the\n> create index operation should protect btree_gist from being dropped by\n> acquiring the appropriate lock. It is not the responsibility of the\n> drop extension to identify in-progress index creations.\n\nThanks for sharing your thoughts, I appreciate your inputs and\ncompletely understand your perspective, but I wonder if that is\nfeasible? For example, if an object (index in this case) has\ndependency on lets say 'n' number of objects, and those 'n' number of\nobjects belong to say 'n' different catalog tables, so should we\nacquire locks on each of them until the create index command succeeds,\nor, should we just check for the presence of dependent objects and\nrecord their dependency inside the pg_depend table. 
Talking about this\nparticular case, we are trying to create gist index that has\ndependency on gist_int4 opclass, it is one of the tuple inside\npg_opclass catalog table, so should acquire lock in this tuple/table\nuntil the create index command succeeds and is that the thing to be\ndone for all the dependent objects?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Fri, 7 Jun 2024 11:53:14 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How about using dirty snapshots to locate dependent objects?" }, { "msg_contents": "On Fri, Jun 7, 2024 at 11:53 AM Ashutosh Sharma <[email protected]> wrote:\n>\n> On Fri, Jun 7, 2024 at 10:06 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Thu, Jun 6, 2024 at 7:39 PM Ashutosh Sharma <[email protected]> wrote:\n> > >\n> > > On Thu, Jun 6, 2024 at 6:20 PM Dilip Kumar <[email protected]> wrote:\n> > >>\n> > >> On Thu, Jun 6, 2024 at 5:59 PM Ashutosh Sharma <[email protected]> wrote:\n> > >> >\n> > >> > Hello everyone,\n> > >> >\n> > >> > At present, we use MVCC snapshots to identify dependent objects. This implies that if a new dependent object is inserted within a transaction that is still ongoing, our search for dependent objects won't include this recently added one. Consequently, if someone attempts to drop the referenced object, it will be dropped, and when the ongoing transaction completes, we will end up having an entry for a referenced object that has already been dropped. This situation can lead to an inconsistent state. Below is an example illustrating this scenario:\n> > >>\n> > >> I don't think it's correct to allow the index to be dropped while a\n> > >> transaction is creating it. Instead, the right solution should be for\n> > >> the create index operation to protect the object it is using from\n> > >> being dropped. Specifically, the create index operation should acquire\n> > >> a shared lock on the Access Method (AM) to ensure it doesn't get\n> > >> dropped concurrently while the transaction is still in progress.\n> > >\n> > >\n> > > If I'm following you correctly, that's exactly what the patch is trying to do; while the index creation is in progress, if someone tries to drop the object referenced by the index under creation, the referenced object being dropped is able to know about the dependent object (in this case the index being created) using dirty snapshot and hence, it is unable to acquire the lock on the dependent object, and as a result of that, it is unable to drop it.\n> >\n> > You are aiming for the same outcome, but not in the conventional way.\n> > In my opinion, the correct approach is not to find objects being\n> > created using a dirty snapshot. Instead, when creating an object, you\n> > should acquire a proper lock on any dependent objects to prevent them\n> > from being dropped during the creation process. For instance, when\n> > creating an index that depends on the btree_gist access method, the\n> > create index operation should protect btree_gist from being dropped by\n> > acquiring the appropriate lock. It is not the responsibility of the\n> > drop extension to identify in-progress index creations.\n>\n> Thanks for sharing your thoughts, I appreciate your inputs and\n> completely understand your perspective, but I wonder if that is\n> feasible? 
For example, if an object (index in this case) has\n> dependency on lets say 'n' number of objects, and those 'n' number of\n> objects belong to say 'n' different catalog tables, so should we\n> acquire locks on each of them until the create index command succeeds,\n> or, should we just check for the presence of dependent objects and\n> record their dependency inside the pg_depend table. Talking about this\n> particular case, we are trying to create gist index that has\n> dependency on gist_int4 opclass, it is one of the tuple inside\n> pg_opclass catalog table, so should acquire lock in this tuple/table\n> until the create index command succeeds and is that the thing to be\n> done for all the dependent objects?\n\nI am not sure what is the best way to do it, but if you are creating\nan object which is dependent on the other object then you need to\ncheck the existence of those objects, record dependency on those\nobjects, and also lock them so that those object doesn't get dropped\nwhile you are creating your object. I haven't looked into the patch\nbut something similar is being achieved in the thread Bertrand has\npointed out by locking the database object while recording the\ndependency on those.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 12:13:06 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How about using dirty snapshots to locate dependent objects?" } ]
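For readers less familiar with the internals discussed here, the sketch below shows in heavily simplified form what scanning pg_depend with a dirty snapshot means: dependency rows inserted by still-uncommitted transactions (such as an in-progress CREATE INDEX) become visible to the DROP path. It is written against backend headers as an illustration only; it is not the proposed patch, and it omits error handling and the locking/waiting that would follow a match.

#include "postgres.h"

#include "access/genam.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "catalog/pg_depend.h"
#include "utils/fmgroids.h"
#include "utils/snapmgr.h"

/*
 * Return true if any pg_depend entry references the given object,
 * including entries added by transactions that have not committed yet.
 */
static bool
object_has_dependents_dirty(Oid refclassid, Oid refobjid)
{
    Relation    depRel;
    ScanKeyData key[2];
    SysScanDesc scan;
    SnapshotData DirtySnapshot;
    bool        found = false;

    InitDirtySnapshot(DirtySnapshot);

    depRel = table_open(DependRelationId, AccessShareLock);

    ScanKeyInit(&key[0],
                Anum_pg_depend_refclassid,
                BTEqualStrategyNumber, F_OIDEQ,
                ObjectIdGetDatum(refclassid));
    ScanKeyInit(&key[1],
                Anum_pg_depend_refobjid,
                BTEqualStrategyNumber, F_OIDEQ,
                ObjectIdGetDatum(refobjid));

    /*
     * Pass the dirty snapshot explicitly; with NULL, systable_beginscan()
     * would use the catalog MVCC snapshot and miss uncommitted rows.
     */
    scan = systable_beginscan(depRel, DependReferenceIndexId, true,
                              &DirtySnapshot, 2, key);

    if (HeapTupleIsValid(systable_getnext(scan)))
        found = true;

    systable_endscan(scan);
    table_close(depRel, AccessShareLock);

    return found;
}

The alternative described above goes the other way around: the CREATE INDEX path would call LockDatabaseObject() on, for example, the pg_opclass row it depends on, so that a concurrent DROP blocks until the creating transaction finishes.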
[ { "msg_contents": "In the context of the multithreaded-server project, I looked into\npotentially not thread-safe functions.\n\n(See proposed next steps at the end of this message.)\n\nHere is a list of functions in POSIX that are possibly not thread-safe:\n\nhttps://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_09_01\n\nI checked those against the PostgreSQL server source code (backend +\ncommon + timezone), and found that the following from those are in\nuse:\n\n- dlerror()\n- getenv()\n- getgrnam()\n- getopt()\n- getpwuid()\n- localeconv()\n- localtime()\n- nl_langinfo()\n- readdir()\n- setenv()\n- setlocale()\n- strerror()\n- strsignal()\n- strtok()\n- system()\n- unsetenv()\n\nAdditionally, there are non-standard functions that are not\nthread-safe, such as getopt_long().\n\nAlso, there are replacement functions such as pg_gmtime() and\npg_localtime() that mirror standard thread-unsafe functions and that\nappear to be equally unsafe. (Note to those looking into annotating\nglobal variables: You also need to check static local variables.)\n\nConversely, some of the above might actually be thread-safe in\nsome/many/all implementations. For example, strerror() and system()\nare thread-safe in glibc. So you might actually get a multithreaded\nserver running in that environment with fewer source code changes but\nhave it fail in others. Just something to keep in mind.\n\nI also tried the concurrency-mt-unsafe check from clang-tidy\n(https://clang.llvm.org/extra/clang-tidy/checks/concurrency/mt-unsafe.html). \n Run it for example like this:\n\nclang-tidy -p build --quiet --checks='-*,concurrency-mt-unsafe' \nsrc/backend/main/*.c\n\n(needs a compilation database in the build directory)\n\n(You can't just run it like src/backend/**/*.c because some .c files\ndon't compile standalone, and then the whole thing aborts with too\nmany errors. Maybe with a judicious exclusion list, this can be\nachieved. However, it's also not good dealing with compilation\noptions like FRONTEND. So it can't easily serve as an automated\nchecker, but it's okay as a manual exploration tool.)\n\nIn addition to the POSIX list above, this also flagged:\n\n- exit()\n- sigprocmask()\n\nAllegedly, you can toggle it between posix and glibc modes, but I\nhaven't succeeded with that. So for example, it does not actually\nflag strerror() out of the box, presumably because that is not in its\nglibc list.\n\n\nNow some more detailed comments on these functions:\n\n- dlerror()\n\ndlerror() gets the error from the last dlopen() call, which is\nobviously not thread-safe. This might require some deeper\ninvestigation of the whole dfmgr.c mechanism. (Which might be\nappropriate in any case, since in multithreaded environments, you\ndon't need to load a library into each session separately.)\n\n- exit()\n\nMost of the exit() calls happen where there are not multiple threads\nactive. But some emergency exit calls like in elog.c might more\ncorrectly use _exit()?\n\n- getenv()\n- setenv()\n- unsetenv()\n\ngetenv() is unsafe if there are concurrent setenv() or unsetenv()\ncalls. We should try to move all those to early in the program\nstartup. This seems doable. Some calls are related to locale stuff,\nwhich is a separate subproject to clean up. There are some calls to\nsetenv(\"KRB5*\"), which would need to be fixed. The source code\ncomments nearby already contain ideas how to.\n\n- getgrnam()\n- getpwuid()\n- localtime()\n\nThese have _r replacements.\n\n- getopt()\n\nThis needs a custom replacement. 
(There is no getopt_r() because\nprograms usually don't call getopt() after startup.)\n\n(Note: This is also called during session startup, not only during\ninitial postmaster start. So we definitely need something here, if we\nwant to, like, start more than one session concurrently.)\n\n- localeconv()\n- nl_langinfo()\n- setlocale()\n\nThe locale business needs to be reworked to use locale_t and _l\nfunctions. This is already being discussed for other reasons.\n\n- readdir()\n\nThis is listed as possibly thread-unsafe, but I think it is\nthread-safe in practice. You just can't work on the same DIR handle\nfrom multiple threads. There is a readdir_r(), but that's already\ndeprecated. I think we can ignore this one.\n\n- sigprocmask()\n\nIt looks like this is safe in practice. Also, there is\npthread_sigmask() if needed.\n\n- strerror()\n\nUse strerror_r(). There are very few calls of this, actually, since\nmost potential users use printf %m.\n\n- strsignal()\n\nUse strsignal_r(). These calls are already wrapped in pg_strsignal()\nfor Windows portability, so it's easy to change.\n\nBut this also led me to think that it is potentially dangerous to have\ndifferent standards of thread-safety across the tree. pg_strsignal()\nis used by wait_result_to_str() which is used by pclose_check()\n... and at that point you have completely lost track of what you are\ndealing with underneath. So if someone were to put, say,\npclose_check() into pgbench, it could be broken.\n\n- strtok()\n\nUse strtok_r() or maybe even strsep() (but there are small semantic\ndifferences with the latter).\n\n- system()\n\nAs mentioned above, this actually safe on some systems. If there are\nsystems where it's not safe, then this could require some nontrivial\nwork.\n\n\nSuggested next steps:\n\n- The locale API business is already being worked on under separate\n cover.\n\n- Getting rid of the setenv(\"KRB5*\") calls is a small independently\n doable project.\n\n- Check if we can get rid of the getopt() calls at session startup.\n Else figure out a thread-safe replacement.\n\n- Replace remaining strtok() with strsep(). I think the semantics of\n strsep() are actually more correct for the uses I found. (strtok()\n skips over multiple adjacent separators, but strsep() would return\n empty fields.)\n\nAfter those, the remaining issues seem less complicated.\n\n\n", "msg_date": "Thu, 6 Jun 2024 16:34:07 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "report on not thread-safe functions" }, { "msg_contents": "On Thu, 2024-06-06 at 16:34 +0200, Peter Eisentraut wrote:\n> - setlocale()\n> \n> The locale business needs to be reworked to use locale_t and _l\n> functions.  This is already being discussed for other reasons.\n\nI posted a few patches to do this for collation:\n\nhttps://commitfest.postgresql.org/48/5023/\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 06 Jun 2024 16:48:12 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: report on not thread-safe functions" } ]
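A small standalone illustration of two of the replacements listed above: strsep() in place of strtok(), and localtime_r() in place of localtime(). It also shows the semantic difference mentioned for the strtok() case, namely that strsep() returns empty fields for adjacent separators where strtok() skips them. strsep() is a BSD/glibc extension rather than core POSIX, so a src/port fallback may be needed on platforms that lack it.

#define _DEFAULT_SOURCE         /* for strsep() on glibc */
#include <stdio.h>
#include <string.h>
#include <time.h>

int
main(void)
{
    /* strsep() keeps its state in the caller, not in a static buffer. */
    char    buf[] = "a::b";
    char   *cur = buf;
    char   *tok;

    while ((tok = strsep(&cur, ":")) != NULL)
        printf("[%s]\n", tok);  /* prints [a], [], [b]; strtok() would skip [] */

    /* localtime_r() writes into a caller-supplied struct tm. */
    time_t      now = time(NULL);
    struct tm   tmbuf;

    if (localtime_r(&now, &tmbuf) != NULL)
        printf("%04d-%02d-%02d\n",
               tmbuf.tm_year + 1900, tmbuf.tm_mon + 1, tmbuf.tm_mday);
    return 0;
}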
[ { "msg_contents": "Hi all,\n(Relevant folks in CC.)\n\nWhile hacking on the area of pgstat_*.c, I have noticed the existence\nof named_on_disk in PgStat_KindInfo, that is here to track the fact\nthat replication slots are a particular case in the PgStat_HashKey for\nthe dshash table of the stats because this kind of stats requires a\nmapping between the replication slot name and the hash key.\n\nAs far as I can see, this field is not required and is used nowhere,\nbecause the code relies on the existence of the to_serialized_name and\nfrom_serialized_name callbacks to do the mapping.\n\nWouldn't it make sense to remove it? This field is defined since\n5891c7a8ed8f that introduced the shmem stats, and has never been used\nsince.\n\nThis frees an extra bit in PgStat_KindInfo, which is going to help me\na bit with what I'm doing with this area of the code while keeping the\nstructure size the same.\n\nThoughts?\n--\nMichael", "msg_date": "Fri, 7 Jun 2024 14:07:33 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "PgStat_KindInfo.named_on_disk not required in shared stats" }, { "msg_contents": "Hi,\n\nOn 2024-06-07 14:07:33 +0900, Michael Paquier wrote:\n> While hacking on the area of pgstat_*.c, I have noticed the existence\n> of named_on_disk in PgStat_KindInfo, that is here to track the fact\n> that replication slots are a particular case in the PgStat_HashKey for\n> the dshash table of the stats because this kind of stats requires a\n> mapping between the replication slot name and the hash key.\n> \n> As far as I can see, this field is not required and is used nowhere,\n> because the code relies on the existence of the to_serialized_name and\n> from_serialized_name callbacks to do the mapping.\n> \n> Wouldn't it make sense to remove it? This field is defined since\n> 5891c7a8ed8f that introduced the shmem stats, and has never been used\n> since.\n\nYes, makes sense. Looks we changed direction during development a bunch of times...q\n\n\n> This frees an extra bit in PgStat_KindInfo, which is going to help me\n> a bit with what I'm doing with this area of the code while keeping the\n> structure size the same.\n\nNote it's just a single bit, not a full byte. So unless you need precisely 30\nbits, rather than 29, I don't really see why it'd help? And i don't see a\nreason to strictly keep the structure size the same.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Jun 2024 08:30:06 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgStat_KindInfo.named_on_disk not required in shared stats" }, { "msg_contents": "On Fri, Jun 07, 2024 at 08:30:06AM -0700, Andres Freund wrote:\n> Yes, makes sense. Looks we changed direction during development a bunch of times...q\n\nThanks for looking, Andres! I guess I'll just apply that once v18\nopens up.\n--\nMichael", "msg_date": "Sat, 8 Jun 2024 07:31:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PgStat_KindInfo.named_on_disk not required in shared stats" }, { "msg_contents": "On Sat, Jun 08, 2024 at 07:31:21AM +0900, Michael Paquier wrote:\n> Thanks for looking, Andres! I guess I'll just apply that once v18\n> opens up.\n\nApplied as of b19db55bd687.\n--\nMichael", "msg_date": "Mon, 1 Jul 2024 10:04:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PgStat_KindInfo.named_on_disk not required in shared stats" } ]
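For readers who do not have the header at hand, the structure under discussion looks roughly like the abridged sketch below. This is paraphrased from memory and heavily trimmed - the authoritative definition lives in src/include/utils/pgstat_internal.h - but it shows why the flag is redundant: the kinds that are addressed by name on disk are exactly the ones that provide the two serialization callbacks.

typedef struct PgStat_KindInfo
{
    const char *const name;       /* e.g. "replslot" */

    /* behavior flags packed as single bits */
    bool        fixed_amount:1;
    bool        named_on_disk:1;  /* the unused flag being removed */

    /* sizes/offsets of the shared and pending state omitted in this sketch */

    /*
     * Optional callbacks mapping hash key <-> object name for kinds that
     * are written to the stats file by name (replication slots).  Their
     * mere presence already carries what named_on_disk was meant to record.
     */
    /* to_serialized_name(...)   -- prototype omitted in this sketch */
    /* from_serialized_name(...) -- prototype omitted in this sketch */
} PgStat_KindInfo;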
[ { "msg_contents": "Hi All,\r\n\r\nI’m a Postgres user and I’m looking into restricting the set of allowed ciphers on Postgres and configure a concrete set of curves on our postgres instances.\r\n\r\nI see in current Postgres doc mentioned that only TLS1.2 and below cipher lists can be configured. And there is no setting that controls the cipher choices used by TLS1.3. \r\n\r\nAs for ECDH keys currently postgres opts to support setting only a single elliptic group instead of setting a lists.\r\nAs described in below doc link:\r\n\r\nhttps://www.postgresql.org/docs/devel/runtime-config-connection.html\r\n\r\n\r\nNow I have a patch to support settings for TLS1.3 ciphersuites and expanding the configuration option for EC settings. With my patch we can do:\r\n1. Added a new configuration option ssl_ciphers_suites to control the cipher choices used by TLS 1.3. 2. Extend the existing configuration option ssl_ecdh_curve to accept a list of curve names seperated by colon.\r\n\r\nCould you please help to review to see if you are interested in having this change in upcoming Postgres major release(It's should be PG17)? \r\n\r\nThanks in advance.", "msg_date": "Fri, 7 Jun 2024 14:10:57 +0800", "msg_from": "\"=?utf-8?B?RXJpY2EgWmhhbmc=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On 07.06.24 08:10, Erica Zhang wrote:\n> I’m a Postgres user and I’m looking into restricting the set of allowed \n> ciphers on Postgres and configure a concrete set of curves on our \n> postgres instances.\n\nOut of curiosity, why is this needed in practice?\n\n> Could you please help to review to see if you are interested in having \n> this change in upcoming Postgres major release(It's should be PG17)?\n\nIt would be targetting PG18 now.\n\n\n\n", "msg_date": "Fri, 7 Jun 2024 10:55:30 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" } ]
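As background for the proposal above, the OpenSSL side of it boils down to three calls, sketched below in standalone form (this is not the patch itself, and the literal cipher/group strings are examples only; SSL_CTX_set_ciphersuites() and SSL_CTX_set1_groups_list() require OpenSSL 1.1.1 or newer):

#include <stdio.h>
#include <openssl/ssl.h>

static int
configure_tls(SSL_CTX *ctx)
{
    /* TLS <= 1.2 cipher list, which is what ssl_ciphers controls today */
    if (SSL_CTX_set_cipher_list(ctx, "HIGH:MEDIUM:+3DES:!aNULL") != 1)
        return -1;

    /* TLS 1.3 ciphersuites use a separate API and a separate string syntax */
    if (SSL_CTX_set_ciphersuites(ctx,
                                 "TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256") != 1)
        return -1;

    /* a colon-separated list of groups instead of a single curve */
    if (SSL_CTX_set1_groups_list(ctx, "X25519:prime256v1:secp384r1") != 1)
        return -1;

    return 0;
}

int
main(void)
{
    SSL_CTX    *ctx = SSL_CTX_new(TLS_server_method());

    if (ctx == NULL || configure_tls(ctx) != 0)
    {
        fprintf(stderr, "TLS configuration failed\n");
        return 1;
    }
    SSL_CTX_free(ctx);
    return 0;
}

The fact that TLS 1.2 ciphers and TLS 1.3 ciphersuites are configured through two different functions with different syntaxes is what drives the later discussion in this thread about whether one or two GUCs are appropriate.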
[ { "msg_contents": "hi.\n\nwe have 450 appearance of\n`cache lookup failed .*`\n\nwe have 141 appearance of\n`could not open file .*`\n\nso when it actually happens, it cannot quickly locate which function\nwhere the error has happened.\nmaybe under certain conditions (e.g. certain build type or certain\nlog_min_messages),\nwe can also print out the function name by using gcc __func__.\n\nor we can just do like:\nif (!HeapTupleIsValid(tuple))\nelog(ERROR, \"cache lookup failed for relation %u %s\",\nRelationGetRelid(rel), __func__);\n\ngiven that these errors are very unlikely to happen, if it happens,\nprinting out the function name seems not that inversive?\n\n\n", "msg_date": "Fri, 7 Jun 2024 16:22:10 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "using __func__ to locate and distinguish some error messages" }, { "msg_contents": "On 2024-Jun-07, jian he wrote:\n\n> so when it actually happens, it cannot quickly locate which function\n> where the error has happened.\n> maybe under certain conditions (e.g. certain build type or certain\n> log_min_messages),\n> we can also print out the function name by using gcc __func__.\n\nThat information is already in the error data, so you don't need it in\nthe message text. You can change your log_error_verbosity if you want\nit to show up in the log; in psql you can use \\errverbose to have it\nshown to you after the error is thrown, or you can use\n \\pset VERBOSITY verbose\nto have it printed for every error message. Tools other than psql would\nneed their own specific ways to display those.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Hay que recordar que la existencia en el cosmos, y particularmente la\nelaboración de civilizaciones dentro de él no son, por desgracia,\nnada idílicas\" (Ijon Tichy)\n\n\n", "msg_date": "Fri, 7 Jun 2024 10:28:27 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: using __func__ to locate and distinguish some error messages" }, { "msg_contents": "On Fri, Jun 7, 2024 at 4:28 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jun-07, jian he wrote:\n>\n> > so when it actually happens, it cannot quickly locate which function\n> > where the error has happened.\n> > maybe under certain conditions (e.g. certain build type or certain\n> > log_min_messages),\n> > we can also print out the function name by using gcc __func__.\n>\n> That information is already in the error data, so you don't need it in\n> the message text. You can change your log_error_verbosity if you want\n> it to show up in the log; in psql you can use \\errverbose to have it\n> shown to you after the error is thrown, or you can use\n> \\pset VERBOSITY verbose\n> to have it printed for every error message. Tools other than psql would\n> need their own specific ways to display those.\n>\n\nThanks for pointing this out.\n\n\n", "msg_date": "Fri, 7 Jun 2024 16:38:24 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: using __func__ to locate and distinguish some error messages" } ]
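For reference, __func__ is a standard C99 predefined identifier, so the original idea would have worked mechanically; the tiny standalone example below (not PostgreSQL code) shows what it expands to. As the reply notes, though, ereport()/elog() already capture the function name into the error data, which is what log_error_verbosity = verbose and \errverbose display, so no change to the message texts is needed.

#include <stdio.h>

static void
lookup_widget(int id)
{
    /* __func__ expands to the enclosing function's name: "lookup_widget" */
    fprintf(stderr, "cache lookup failed for widget %d (in %s)\n", id, __func__);
}

int
main(void)
{
    lookup_widget(42);
    return 0;
}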
[ { "msg_contents": "Hi hackers,\n\nI found that in enum XactEvent, there is 'XACT_EVENT_PREPARE' for\n'prepare transaction', but there is no event for 'commit prepared' or\n'rollback prepared'.\n\nFor the following SQL:\n------------------------------------------------\nbegin;\ncreate table test(a int);\nPREPARE TRANSACTION 'foo';\nrollback prepared 'foo';\n-------------------------------------------------\nWhen executing ' rollback prepared 'foo'; ', I expected to get\n'XACT_EVENT_ABORT', but actually,\nthe event type is 'XACT_EVENT_COMMIT'.\n\nI think XACT_EVENT_COMMIT_PREPARED and XACT_EVENT_ROLLBACK_PREPARED can be\nadded in function 'FinishPreparedTransaction'\n\nI'm confused why there are no related events for them.\n\nHi hackers, I found that in enum XactEvent, there is  'XACT_EVENT_PREPARE'  for 'prepare transaction', but there is no event for 'commit prepared' or 'rollback prepared'.For the following SQL:------------------------------------------------begin;create table test(a int);PREPARE TRANSACTION 'foo';rollback prepared 'foo';-------------------------------------------------When executing ' rollback prepared 'foo'; ', I expected to get 'XACT_EVENT_ABORT', but actually, the event type is 'XACT_EVENT_COMMIT'. I think XACT_EVENT_COMMIT_PREPARED and XACT_EVENT_ROLLBACK_PREPARED can be added in function 'FinishPreparedTransaction'I'm confused why there are no related events for them.", "msg_date": "Fri, 7 Jun 2024 17:14:29 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "XACT_EVENT for 'commit prepared'" }, { "msg_contents": "Xiaoran Wang <[email protected]> writes:\n> I found that in enum XactEvent, there is 'XACT_EVENT_PREPARE' for\n> 'prepare transaction', but there is no event for 'commit prepared' or\n> 'rollback prepared'.\n\nOn the whole, it seems like a good idea to me that those commands\ndon't invoke event triggers. It is a core principle of 2PC that\nif 'prepare' succeeded, 'commit prepared' must not fail. Invoking a\ntrigger during the second step would add failure cases and I'm not\nsure what value it has.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Jun 2024 11:19:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XACT_EVENT for 'commit prepared'" }, { "msg_contents": "Hi,\n\nOn 2024-06-07 11:19:40 -0400, Tom Lane wrote:\n> Xiaoran Wang <[email protected]> writes:\n> > I found that in enum XactEvent, there is 'XACT_EVENT_PREPARE' for\n> > 'prepare transaction', but there is no event for 'commit prepared' or\n> > 'rollback prepared'.\n> \n> On the whole, it seems like a good idea to me that those commands\n> don't invoke event triggers. It is a core principle of 2PC that\n> if 'prepare' succeeded, 'commit prepared' must not fail. Invoking a\n> trigger during the second step would add failure cases and I'm not\n> sure what value it has.\n\nEvent triggers? Isn't this about RegisterXactCallback?\n\nXACT_EVENT_COMMIT is called after the commit record has been flushed and the\nprocarray has been modified. Thus a failure in the hook has somewhat limited\nconsequences. 
I'd assume XACT_EVENT_COMMIT_PREPARED would do something\nsimilar.\n\nI suspect the reason we don't callback for 2pc commit/rollback prepared is\nsimpl: The code for doing a 2pc commit prepared lives in twophase.c, not\nxact.c...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Jun 2024 09:52:41 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XACT_EVENT for 'commit prepared'" } ]
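For readers who have not used the hook being discussed: the callback is registered with RegisterXactCallback(), roughly as in the extension-style sketch below (the module name xact_probe and the log messages are made up for illustration). Consistent with the report that started the thread, COMMIT PREPARED and ROLLBACK PREPARED currently arrive as a plain XACT_EVENT_COMMIT, apparently because the transaction executing FinishPreparedTransaction() commits normally and twophase.c fires no dedicated events.

#include "postgres.h"

#include "access/xact.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

static void
xact_probe_callback(XactEvent event, void *arg)
{
    switch (event)
    {
        case XACT_EVENT_PREPARE:
            elog(LOG, "xact_probe: PREPARE TRANSACTION");
            break;
        case XACT_EVENT_COMMIT:
            /* per the report above, COMMIT/ROLLBACK PREPARED end up here */
            elog(LOG, "xact_probe: commit");
            break;
        case XACT_EVENT_ABORT:
            elog(LOG, "xact_probe: abort");
            break;
        default:
            break;
    }
}

void
_PG_init(void)
{
    RegisterXactCallback(xact_probe_callback, NULL);
}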
[ { "msg_contents": "Hi Peter,\r\nThanks a lot for the quick response. We are using Postgres instance in our product. For some security consideration, we prefer to use TLS1.3 cipher suites in our product with some customization values instead of default value \"HIGH:MEDIUM:+3DES:!aNULL\". Moreover we prefer to set a group of ecdh keys instead of a single value.\r\n\r\n\r\nI see the https://commitfest.postgresql.org/48/ is still open, could it be possible to target for PG17? As I know PG17 is going to be release this year so that we can upgrade our instances to this new version accodingly.\r\n \r\nOriginal Email\r\n \r\n \r\n\r\nSender:\"Peter Eisentraut\"< [email protected] &gt;;\r\n\r\nSent Time:2024/6/7 16:55\r\n\r\nTo:\"Erica Zhang\"< [email protected] &gt;;\"pgsql-hackers\"< [email protected] &gt;;\r\n\r\nSubject:Re: Add support to TLS 1.3 cipher suites and curves lists\r\n\r\n\r\nOn 07.06.24 08:10, Erica Zhang wrote:\r\n&gt; I’m a Postgres user and I’m looking into restricting the set of allowed \r\n&gt; ciphers on Postgres and configure a concrete set of curves on our \r\n&gt; postgres instances.\r\n\r\nOut of curiosity, why is this needed in practice?\r\n\r\n&gt; Could you please help to review to see if you are interested in having \r\n&gt; this change in upcoming Postgres major release(It's should be PG17)?\r\n\r\nIt would be targetting PG18 now.\nHi Peter,Thanks a lot for the quick response. We are using Postgres instance in our product. For some security consideration, we prefer to use TLS1.3 cipher suites in our product with some customization values instead of default value \"HIGH:MEDIUM:+3DES:!aNULL\". Moreover we prefer to set a group of ecdh keys instead of a single value.I see the https://commitfest.postgresql.org/48/ is still open, could it be possible to target for PG17? As I know PG17 is going to be release this year so that we can upgrade our instances to this new version accodingly.\nOriginal Email\n\nSender:\"Peter Eisentraut\"< [email protected] >;Sent Time:2024/6/7 16:55To:\"Erica Zhang\"< [email protected] >;\"pgsql-hackers\"< [email protected] >;Subject:Re: Add support to TLS 1.3 cipher suites and curves listsOn 07.06.24 08:10, Erica Zhang wrote:> I’m a Postgres user and I’m looking into restricting the set of allowed > ciphers on Postgres and configure a concrete set of curves on our > postgres instances.Out of curiosity, why is this needed in practice?> Could you please help to review to see if you are interested in having > this change in upcoming Postgres major release(It's should be PG17)?It would be targetting PG18 now.", "msg_date": "Fri, 7 Jun 2024 18:02:37 +0800", "msg_from": "\"=?utf-8?B?RXJpY2EgWmhhbmc=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On Fri, Jun 07, 2024 at 06:02:37PM +0800, Erica Zhang wrote:\n> I see the https://commitfest.postgresql.org/48/ is still open, could\n> it be possible to target for PG17? 
As I know PG17 is going to be\n> release this year so that we can upgrade our instances to this new\n> version accodingly.\n\nEchoing with Peter, https://commitfest.postgresql.org/48/ is planned\nto be the first commit fest of the development cycle for Postgres 18.\nv17 is in feature freeze state and beta, where only bug fixes are\naccepted, and not new features.\n--\nMichael", "msg_date": "Fri, 7 Jun 2024 19:46:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On Fri, Jun 7, 2024 at 3:02 AM Erica Zhang <[email protected]> wrote:\n>\n> For some security consideration, we prefer to use TLS1.3 cipher suites in our product with some customization values instead of default value \"HIGH:MEDIUM:+3DES:!aNULL\". Moreover we prefer to set a group of ecdh keys instead of a single value.\n\n+1 for the curve list feature, at least. No opinions on the 1.3\nciphersuites half, yet.\n\nI've added this patch to my planned review for the v18 cycle. Some\ninitial notes:\n\n- Could you separate the two features into two patches? That would\nmake it easier for reviewers. (They can still share the same thread\nand CF entry.)\n- The \"curve\" APIs have been renamed \"group\" in newer OpenSSLs for a\nwhile now, and we should probably use those if possible.\n- I think parsing apart the groups list to check NIDs manually could\nlead to false negatives. From a docs skim, 3.0 allows providers to add\ntheir own group names, and 3.3 now supports question marks in the\nstring to allow graceful fallbacks.\n- I originally thought it'd be better to just stop calling\nSSL_set_tmp_ecdh() entirely by default, so we could use OpenSSL's\nbuiltin list of groups. But that may have denial-of-service concerns\n[1]?\n- We should maybe look into SSL_CTX_config(), if we haven't discussed\nthat already on the list, but that's probably a bigger tangent and\ndoesn't need to be part of this patch.\n\nThanks,\n--Jacob\n\n[1] https://www.openssl.org/blog/blog/2022/10/21/tls-groups-configuration/index.html\n\n\n", "msg_date": "Fri, 7 Jun 2024 10:14:50 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "> On 7 Jun 2024, at 19:14, Jacob Champion <[email protected]> wrote:\n\n> - Could you separate the two features into two patches? That would\n> make it easier for reviewers. (They can still share the same thread\n> and CF entry.)\n\n+1, please do.\n\n> - The \"curve\" APIs have been renamed \"group\" in newer OpenSSLs for a\n> while now, and we should probably use those if possible.\n\nAgreed. While not deprecated per se the curve API is considered obsolete and\nis just aliased to the group API (OpenSSL using both the term obsolete and\ndeprecated to mean the same thing but with very different mechanics is quite\nconfusing).\n\n> - I think parsing apart the groups list to check NIDs manually could\n> lead to false negatives. From a docs skim, 3.0 allows providers to add\n> their own group names, and 3.3 now supports question marks in the\n> string to allow graceful fallbacks.\n\nParsing the list will likely risk false negatives as you say, but from skimming\nthe code there doesn't seem to be a good errormessage from SSL_set1_groups_list\nto indicate if listitems were invalid (unless all of them were). 
Maybe calling\nthe associated _get function to check the number of set groups can be used to\nverify what happenend?\n\nRegarding the ciphersuites portion of the patch. I'm not particularly thrilled\nabout having a GUC for TLSv1.2 ciphers and one for TLSv1.3 ciphersuites, users\nnot all that familiar with TLS will likely find it confusing to figure out what\nto do.\n\nIn which way is this feature needed since this can be achieved with the config\ndirective \"Ciphersuites\" in openssl.conf IIUC?\n\nIf we add this I think we should keep it blank and if so skip setting it at all\nfalling back on OpenSSL defaults. The below default for the GUC does not match\nthe OpenSSL default and I think we are better off trusting them on this.\n\n+\t\"TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256\",\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 10 Jun 2024 12:30:36 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On Mon, 10 Jun 2024 at 12:31, Daniel Gustafsson <[email protected]> wrote:\n> Regarding the ciphersuites portion of the patch. I'm not particularly thrilled\n> about having a GUC for TLSv1.2 ciphers and one for TLSv1.3 ciphersuites, users\n> not all that familiar with TLS will likely find it confusing to figure out what\n> to do.\n\nI don't think it's easy to create a single GUC because OpenSSL has\ndifferent APIs for both. So we'd have to add some custom parsing for\nthe combined string, which is likely to cause some problems imho. I\nthink separating them is the best option from the options we have and\nI don't think it matters much practice for users. Users not familiar\nwith TLS might indeed be confused, but those users shouldn't touch\nthese settings anyway, and just use the defaults. The users that care\nabout this probably already get two cipher strings from their\ncompliance teams, because many other applications also have two\nseparate options for specifying both.\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:51:05 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On 12.06.24 10:51, Jelte Fennema-Nio wrote:\n> On Mon, 10 Jun 2024 at 12:31, Daniel Gustafsson <[email protected]> wrote:\n>> Regarding the ciphersuites portion of the patch. I'm not particularly thrilled\n>> about having a GUC for TLSv1.2 ciphers and one for TLSv1.3 ciphersuites, users\n>> not all that familiar with TLS will likely find it confusing to figure out what\n>> to do.\n> \n> I don't think it's easy to create a single GUC because OpenSSL has\n> different APIs for both. So we'd have to add some custom parsing for\n> the combined string, which is likely to cause some problems imho. I\n> think separating them is the best option from the options we have and\n> I don't think it matters much practice for users. Users not familiar\n> with TLS might indeed be confused, but those users shouldn't touch\n> these settings anyway, and just use the defaults. 
The users that care\n> about this probably already get two cipher strings from their\n> compliance teams, because many other applications also have two\n> separate options for specifying both.\n\nMaybe some comparisons with other SSL-enabled server products would be \nuseful.\n\nHere is the Apache httpd setting:\n\nhttps://httpd.apache.org/docs/current/mod/mod_ssl.html#sslciphersuite\n\nThey use a complex syntax to be able to set both via one setting.\n\nHere is the nginx setting:\n\nhttps://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers\n\nThis doesn't appear to support TLS 1.3?\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 15:57:03 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" } ]
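One possible shape for the validation concern raised in this thread, sketched as a standalone function rather than actual patch code: a GUC check hook could simply try the proposed string against a throwaway SSL_CTX and reject the setting if OpenSSL refuses it. The function name check_ssl_ciphersuites_value() is made up, and, as noted above, OpenSSL will not report which list item was bad, so the resulting error message stays generic.

#include <stdbool.h>
#include <openssl/ssl.h>

static bool
check_ssl_ciphersuites_value(const char *value)
{
    SSL_CTX    *ctx = SSL_CTX_new(TLS_method());
    bool        ok;

    if (ctx == NULL)
        return false;           /* out of memory; treat as invalid */

    ok = (SSL_CTX_set_ciphersuites(ctx, value) == 1);
    SSL_CTX_free(ctx);
    return ok;
}

The same trick could work for the groups list via SSL_CTX_set1_groups_list(), with the caveat from above that the call only fails outright when the entire list is unusable.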
[ { "msg_contents": "Hi,\n\nAfter looking into parallel builds for BRIN and GIN indexes, I was\nwondering if there's a way to do parallel builds for GiST too. I knew\nnext to nothing about how GiST works, but I gave it a shot and here's\nwhat I have - the attached patch allows parallel GiST builds for the\n\"unsorted\" case (i.e. when the opclass does not include sortsupport),\nand does not support buffered builds.\n\n\nunsorted builds only\n--------------------\n\nAddressing only the unsorted case may seem a bit weird, but I did it\nthis way for two reasons - parallel sort is a solved problem, and adding\nthis to the patch seems quite straightforward. It's what btree does, for\nexample. But I also was not very sure how common this is - we do have\nsort for points, but I have no idea if the PostGIS indexes define\nsorting etc. My guess was \"no\" but I've been told that's no longer true,\nso I guess sorted builds are more widely applicable than I thought.\n\nIn any case, I'm not in a rush to parallelize sorted builds. It can be\nadded later, as an improvement, IMHO. In fact, it's a well isolated part\nof the patch, which might make it a good choice for someone looking for\nan idea for their first patch ...\n\n\nbuffered builds\n---------------\n\nThe lack of support for buffered builds is a very different thing. The\nbasic idea is that we don't push the index entries all the way to the\nleaf pages right away, but accumulate them in buffers half-way through.\nThis combines writes and reduces random I/O, which is nice.\n\nUnfortunately, the way it's implemented does not work with parallel\nbuilds at all - all the state is in private memory, and it assumes the\nworker is the only possible backend that can split the page (at which\npoint the buffers need to be split too, etc.). But for parallel builds\nthis is obviously not true.\n\nI'm not saying parallel builds can't do similar buffering, but it\nrequires moving the buffers into shared memory, and introducing locking\nto coordinate accesses to the buffers. (Or perhaps it might be enough to\nonly \"notify\" the workers about page splits, with buffers still in\nprivate memory?). Anyway, it seems far too complicated for v1.\n\nIn fact, I'm not sure the buffering is entirely necessary - maybe the\nincrease in amount of RAM makes this less of an issue? If the index can\nfit into shared buffers (or at least page cache), maybe the amount of\nextra I/O is not that bad? I'm sure there may be cases really affected\nby this, but maybe it's OK to tell people to disable parallel builds in\nthose cases?\n\n\ngistGetFakeLSN\n--------------\n\nOne more thing - GiST disables WAL-logging during the build, and only\nlogs it once at the end. For serial builds this is fine, because there\nare no concurrent splits, and so we don't need to rely on page LSNs to\ndetect these cases (in fact, is uses a bogus value).\n\nBut for parallel builds this would not work - we need page LSNs that\nactually change, otherwise we'd miss page splits, and the index build\nwould either fail or produce a broken index. But the existing is_build\nflag affects both things, so I had to introduce a new \"is_parallel\" flag\nwhich only affects the page LSN part, using the gistGetFakeLSN()\nfunction, previously used only for unlogged indexes.\n\nThis means we'll produce WAL during the index build (because\ngistGetFakeLSN() writes a trivial message into WAL). 
Compared to the\nserial builds this produces maybe 25-75% more WAL, but it's an order of\nmagnitude less than with \"full\" WAL logging (is_build=false).\n\nFor example, serial build of 5GB index needs ~5GB of WAL. A parallel\nbuild may need ~7GB, while a parallel build with \"full\" logging would\nuse 50GB. I think this is a reasonable trade off.\n\nThere's one \"strange\" thing, though - the amount of WAL decreases with\nthe number of parallel workers. Consider for example an index on a\nnumeric field, where the index is ~9GB, but the amount of WAL changes\nlike this (0 workers means serial builds):\n\n parallel workers 0 1 3 5 7\n WAL (GB) 5.7 9.2 7.6 7.0 6.8\n\nThe explanation for this is fairly simple (AFAIK) - gistGetFakeLSN\ndetermines if it needs to actually assign a new LSN (and write stuff to\nWAL) by comparing the last LSN assigned (in a given worker) to the\ncurrent insert LSN. But the current insert LSN might have been updated\nby some other worker, in which case we simply use that. Which means that\nmultiple workers may use the same fake LSN, and the likelihood increases\nwith the number of workers - and this is consistent with the observed\nbehavior of the WAL decreasing as the number of workers increases\n(because more workers use the same LSN).\n\nI'm not entirely sure if this is OK or a problem. I was worried two\nworkers might end up using the same LSN for the same page, leading to\nother workers not noticing the split. But after a week of pretty\nintensive stress testing, I haven't seen a single such failure ...\n\nIf this turns out to be a problem, the fix is IMHO quite simple - it\nshould be enough to force gistGetFakeLSN() to produce a new fake LSN\nevery time when is_parallel=true.\n\n\nperformance\n-----------\n\nObviously, the primary goal of the patch is to speed up the builds, so\ndoes it actually do that? For indexes of different sizes I got this\ntimings (in seconds):\n\n scale type 0 1 3 5 7\n ------------------------------------------------------------------\n small inet 13 7 4 4 2\n numeric 239 122 67 46 36\n oid 15 8 5 3 2\n text 71 35 19 13 10\n medium inet 207 111 59 42 32\n numeric 3409 1714 885 618 490\n oid 214 114 60 43 33\n text 940 479 247 180 134\n large inet 2167 1459 865 632 468\n numeric 38125 20256 10454 7487 5846\n oid 2647 1490 808 594 475\n text 10987 6298 3376 2462 1961\n\nHere small is ~100-200MB index, medium is 1-2GB and large 10-20GB index,\ndepending on the data type.\n\nThe raw duration is not particularly easy to interpret, so let's look at\nthe \"actual speedup\" which is calculated as\n\n (serial duration) / (parallel duration)\n\nand the table looks like this:\n\n scale type 1 3 5 7\n --------------------------------------------------------------\n small inet 1.9 3.3 3.3 6.5\n numeric 2.0 3.6 5.2 6.6\n oid 1.9 3.0 5.0 7.5\n text 2.0 3.7 5.5 7.1\n medium inet 1.9 3.5 4.9 6.5\n numeric 2.0 3.9 5.5 7.0\n oid 1.9 3.6 5.0 6.5\n text 2.0 3.8 5.2 7.0\n large inet 1.5 2.5 3.4 4.6\n numeric 1.9 3.6 5.1 6.5\n oid 1.8 3.3 4.5 5.6\n text 1.7 3.3 4.5 5.6\n\nIdeally (if the build scaled linearly with the number of workers), we'd\nget the number of workers + 1 (because the leader participates too).\nObviously, it's not that great - for example for text with 3 workers we\nget 3.3 instead of 4.0, and 5.6 vs. 8 with 7 workers.\n\nBut I think those numbers are actually pretty good - I'd definitely not\ncomplain if my index builds got 5x faster.\n\nBut those are synthetic tests on random data, using the btree_gist\nopclasses. 
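To make the fix proposed earlier in this message concrete (forcing a new fake LSN whenever is_parallel is true), here is a rough sketch - not the actual patch - of how the permanent-relation branch of gistGetFakeLSN() could be adjusted: the backend-local lastlsn cache cannot prove uniqueness across workers, so a parallel build always emits the dummy assign-LSN record. The function name and the is_parallel parameter are illustrative.

#include "postgres.h"

#include "access/gist_private.h"
#include "access/xlog.h"

static XLogRecPtr
gist_get_fake_lsn_sketch(bool is_parallel)
{
    static XLogRecPtr lastlsn = InvalidXLogRecPtr;
    XLogRecPtr  currlsn = GetXLogInsertRecPtr();

    if (is_parallel)
    {
        /* another worker may already have handed out this same LSN */
        currlsn = gistXLogAssignLSN();
    }
    else if (!XLogRecPtrIsInvalid(lastlsn) && lastlsn == currlsn)
    {
        /* serial build: only pay for a dummy record if the LSN didn't move */
        currlsn = gistXLogAssignLSN();
    }

    lastlsn = currlsn;
    return currlsn;
}

This is also why the WAL numbers above shrink as workers are added: with the unmodified logic, several workers can observe the same insert LSN and reuse it instead of generating new dummy records.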
It'd be interesting if people could do their own testing on\nreal-world data sets.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 7 Jun 2024 19:41:10 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "WIP: parallel GiST index builds" }, { "msg_contents": "Hi,\n\nI've done a number of experiments with the GiST parallel builds, both\nwith the sorted and unsorted cases, so let me share some of the results\nand conclusions from that.\n\nIn the first post I did some benchmarks using btree_gist, but that\nseemed not very realistic - there certainly are much more widely used\nGiST indexes in the GIS world. So this time I used OpenStreetMap, loaded\nusing osm2pgsql, with two dataset sizes:\n\n- small - \"north america\" (121GB without indexes)\n- large - \"planet\" (688GB without indexes)\n\nAnd then I created indexes using either gist_geometry_ops_2d (with sort)\nor gist_geometry_ops_nd (no sorting).\n\n\nOn 6/7/24 19:41, Tomas Vondra wrote:\n> Hi,\n> \n> After looking into parallel builds for BRIN and GIN indexes, I was\n> wondering if there's a way to do parallel builds for GiST too. I knew\n> next to nothing about how GiST works, but I gave it a shot and here's\n> what I have - the attached patch allows parallel GiST builds for the\n> \"unsorted\" case (i.e. when the opclass does not include sortsupport),\n> and does not support buffered builds.\n> \n> \n> unsorted builds only\n> --------------------\n> \n> Addressing only the unsorted case may seem a bit weird, but I did it\n> this way for two reasons - parallel sort is a solved problem, and adding\n> this to the patch seems quite straightforward. It's what btree does, for\n> example. But I also was not very sure how common this is - we do have\n> sort for points, but I have no idea if the PostGIS indexes define\n> sorting etc. My guess was \"no\" but I've been told that's no longer true,\n> so I guess sorted builds are more widely applicable than I thought.\n\n\nFor sorted builds, I made the claim that parallelizing sorted builds is\n\"solved problem\" because we can use a parallel tuplesort. I was thinking\nthat maybe it'd be better to do that in the initial patch, and only then\nintroduce the more complex stuff in the unsorted case, so I gave this a\ntry, and it turned to be rather pointless.\n\nYes, parallel tuplesort does improve the duration, but it's not a very\nsignificant improvement - maybe 10% or so. Most of the build time is\nspent in gist_indexsortbuild(), so this is the part that would need to\nbe parallelized for any substantial improvement. Only then is it useful\nto improve the tuplesort, I think.\n\nAnd parallelizing gist_indexsortbuild() is not trivial - most of the\ntime is spent in gist_indexsortbuild_levelstate_flush() / gistSplit(),\nso ISTM a successful parallel implementation would need to divide this\nwork between multiple workers. I don't have a clear idea how, though.\n\nI do have a PoC/WIP patch doing the paralle tuplesort in my github\nbranch at [1] (and then also some ugly experiments on top of that), but\nI'm not going to attach it here because of the reasons I just explained.\nIt'd be just a pointless distraction.\n\n> In any case, I'm not in a rush to parallelize sorted builds. It can be\n> added later, as an improvement, IMHO. 
In fact, it's a well isolated part\n> of the patch, which might make it a good choice for someone looking for\n> an idea for their first patch ...\n> \n\nI still think this assessment is correct - it's fine to not parallelize\nsorted builds. It can be improved in the future, or even not at all.\n\n> \n> buffered builds\n> ---------------\n> \n> The lack of support for buffered builds is a very different thing. The\n> basic idea is that we don't push the index entries all the way to the\n> leaf pages right away, but accumulate them in buffers half-way through.\n> This combines writes and reduces random I/O, which is nice.\n> \n> Unfortunately, the way it's implemented does not work with parallel\n> builds at all - all the state is in private memory, and it assumes the\n> worker is the only possible backend that can split the page (at which\n> point the buffers need to be split too, etc.). But for parallel builds\n> this is obviously not true.\n> \n> I'm not saying parallel builds can't do similar buffering, but it\n> requires moving the buffers into shared memory, and introducing locking\n> to coordinate accesses to the buffers. (Or perhaps it might be enough to\n> only \"notify\" the workers about page splits, with buffers still in\n> private memory?). Anyway, it seems far too complicated for v1.\n> \n> In fact, I'm not sure the buffering is entirely necessary - maybe the\n> increase in amount of RAM makes this less of an issue? If the index can\n> fit into shared buffers (or at least page cache), maybe the amount of\n> extra I/O is not that bad? I'm sure there may be cases really affected\n> by this, but maybe it's OK to tell people to disable parallel builds in\n> those cases?\n> \n\nFor unsorted builds, here's the results from one of the machines for\nduration of CREATE INDEX with the requested number of workers (0 means\nserial build) for different tables in the OSM databases:\n\n db type size (MB) | 0 1 2 3 4\n -----------------------------|----------------------------------\n small line 4889 | 811 429 294 223 186\n point 2625 | 485 262 179 141 125\n polygon 7644 | 1230 623 418 318 261\n roads 273 | 40 22 16 14 12\n -----------------------------|----------------------------------\n large line 20592 | 3916 2157 1479 1137 948\n point 13080 | 2636 1442 981 770 667\n polygon 50598 | 10990 5648 3860 2991 2504\n roads 1322 | 228 123 85 67 56\n\nThere's also the size of the index. If we calculate the speedup compared\nto serial build, we get this:\n\n db type | 1 2 3 4\n -----------------|--------------------------------\n small line | 1.9 2.8 3.6 4.4\n point | 1.9 2.7 3.4 3.9\n polygon | 2.0 2.9 3.9 4.7\n roads | 1.8 2.5 2.9 3.3\n -----------------|--------------------------------\n large line | 1.8 2.6 3.4 4.1\n point | 1.8 2.7 3.4 4.0\n polygon | 1.9 2.8 3.7 4.4\n roads | 1.9 2.7 3.4 4.1\n\nRemember, the leader participates in the build, so K workers means K+1\nprocesses are doing the work. And the speedup is pretty close to the\nideal speedup.\n\nThere's the question about buffering, though - as mentioned in the first\npatch, the parallel builds do not support buffering, so the question is\nhow bad is the impact of that. 
Clearly, the duration improves a lot, so\nthat's good, but maybe it did write out far more buffers and the NVMe\ndrive handled it well?\n\nSo I used pg_stat_statements to track the number of buffer writes\n(shared_blks_written) for the CREATE INDEX, and for the large data set\nit looks like this (this is in MBs written):\n\n size | 0 1 2 3 4\n ----------------|--------------------------------------------\n line 20592 | 43577 47580 49574 50388 50734\n point 13080 | 23331 25721 26399 26745 26889\n polygon 50598 | 113108 125095 129599 130170 131249\n roads 1322 | 1322 1310 1305 1300 1295\n\nThe serial builds (0 workers) are buffered, but the buffering only\napplies for indexes that exceed effective_cache_size (4GB). Which means\nthe \"roads\" buffer is too small to activate buffering, and there should\nbe very little differences - which is the case (but the index also fits\ninto shared buffers in this case).\n\nThe other indexes do activate buffering, so the question is how many\nmore buffers get written out compared to serial builds (with buffering).\nAnd the comparison looks like this:\n\n 1 2 3 4\n ------------------------------------------\n line 109% 114% 116% 116%\n point 110% 113% 115% 115%\n polygon 111% 115% 115% 116%\n roads 99% 99% 98% 98%\n\nSo it writes about 15-20% more buffers during the index build, which is\nnot that much IMHO. I was wondering if this might change with smaller\nshared buffers, so I tried building indexes on the smaller data set with\n128MB shared buffers, but the difference remained to be ~15-20%.\n\nMy conclusion from this is that it's OK to have parallel builds without\nbuffering, and then maybe improve that later. The thing I'm not sure\nabout is how this should interact with the \"buffering\" option. Right now\nwe just ignore that entirely if we decide to do parallel build. But\nmaybe it'd be better to disable parallel builds when the user specifies\n\"buffering=on\" (and only allow parallel builds with off/auto)?\n\nI did check how parallelism affects the amount of WAL produced, but\nthat's pretty much exactly how I described that in the initial message,\nincluding the \"strange\" decrease with more workers due to reusing the\nfake LSN etc.\n\n\nregards\n\n\n[1] https://github.com/tvondra/postgres/tree/parallel-gist-20240625\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 1 Jul 2024 22:20:41 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "Hi Tomas!\n\n> On 7 Jun 2024, at 20:41, Tomas Vondra <[email protected]> wrote:\n> \n> After looking into parallel builds for BRIN and GIN indexes, I was\n> wondering if there's a way to do parallel builds for GiST too. I knew\n> next to nothing about how GiST works, but I gave it a shot and here's\n> what I have - the attached patch allows parallel GiST builds for the\n> \"unsorted\" case (i.e. when the opclass does not include sortsupport),\n> and does not support buffered builds.\n\nI think this totally makes sense. I've took a look into tuples partitioning (for sorted build) in your Github and I see that it's complicated feature. So, probably, we can do it later.\nI'm trying to review the patch as it is now. Currently I have some questions about code.\n\n1. Do I get it right that is_parallel argument for gistGetFakeLSN() is only needed for assertion? And this assertion can be ensured just by inspecting code. Is it really necessary?\n2. 
gistBuildParallelCallback() updates indtuplesSize, but it seems to be not used anywhere. AFAIK it's only needed to buffered build.\n3. I think we need a test that reliably triggers parallel and serial builds.\n\nAs far as I know, there's a well known trick to build better GiST over PostGIS data: randomize input. I think parallel scan is just what is needed, it will shuffle tuples enough...\n\nThanks for working on this!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sun, 21 Jul 2024 22:31:51 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "On 7/21/24 21:31, Andrey M. Borodin wrote:\n> Hi Tomas!\n> \n>> On 7 Jun 2024, at 20:41, Tomas Vondra\n>> <[email protected]> wrote:\n>> \n>> After looking into parallel builds for BRIN and GIN indexes, I was \n>> wondering if there's a way to do parallel builds for GiST too. I\n>> knew next to nothing about how GiST works, but I gave it a shot and\n>> here's what I have - the attached patch allows parallel GiST builds\n>> for the \"unsorted\" case (i.e. when the opclass does not include\n>> sortsupport), and does not support buffered builds.\n> \n> I think this totally makes sense. I've took a look into tuples\n> partitioning (for sorted build) in your Github and I see that it's\n> complicated feature. So, probably, we can do it later. I'm trying to\n> review the patch as it is now. Currently I have some questions about\n> code.\n\nOK. I'm not even sure partitioning is the right approach for sorted\nbuilds. Or how to do it, exactly.\n\n> \n> 1. Do I get it right that is_parallel argument for gistGetFakeLSN()\n> is only needed for assertion? And this assertion can be ensured just\n> by inspecting code. Is it really necessary?\n\nYes, in the patch it's only used for an assert. But it's actually\nincorrect - just as I speculated in my initial message (in the section\nabout gistGetFakeLSN), it sometimes fails to detect a page split. I\nnoticed that while testing the patch adding GiST to amcheck, and I\nreported that in that thread [1]. But I apparently forgot to post an\nupdated version of this patch :-(\n\nI'll post a new version tomorrow, but in short it needs to update the\nfake LSN even if (lastlsn != currlsn) if is_parallel=true. It's a bit\nannoying this means we generate a new fake LSN on every call, and I'm\nnot sure that's actually necessary. But I've been unable to come up with\na better condition when to generate a new LSN.\n\n[1]\nhttps://www.postgresql.org/message-id/79622955-6d1a-4439-b358-ec2b6a9e7cbf%40enterprisedb.com\n\n> 2. gistBuildParallelCallback() updates indtuplesSize, but it seems to be\n> not used anywhere. AFAIK it's only needed to buffered build. 3. I\n> think we need a test that reliably triggers parallel and serial\n> builds.\n> \n\nYeah, it's possible the variable is unused. Agreed on the testing.\n\n> As far as I know, there's a well known trick to build better GiST\n> over PostGIS data: randomize input. I think parallel scan is just\n> what is needed, it will shuffle tuples enough...\n> \n\nI'm not sure I understand this comment. 
What do you mean by \"better\nGiST\" or what does that mean for this patch?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 21 Jul 2024 22:42:22 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "\n\n> On 21 Jul 2024, at 23:42, Tomas Vondra <[email protected]> wrote:\n> \n>> \n>> 1. Do I get it right that is_parallel argument for gistGetFakeLSN()\n>> is only needed for assertion? And this assertion can be ensured just\n>> by inspecting code. Is it really necessary?\n> \n> Yes, in the patch it's only used for an assert. But it's actually\n> incorrect - just as I speculated in my initial message (in the section\n> about gistGetFakeLSN), it sometimes fails to detect a page split. I\n> noticed that while testing the patch adding GiST to amcheck, and I\n> reported that in that thread [1]. But I apparently forgot to post an\n> updated version of this patch :-(\n\nOops, I just though that it's a version with solved FakeLSN problem.\n\n> \n> I'll post a new version tomorrow, but in short it needs to update the\n> fake LSN even if (lastlsn != currlsn) if is_parallel=true. It's a bit\n> annoying this means we generate a new fake LSN on every call, and I'm\n> not sure that's actually necessary. But I've been unable to come up with\n> a better condition when to generate a new LSN.\n\nWhy don't we just use an atomic counter withtin shared build state? \n\n> \n> [1]\n> https://www.postgresql.org/message-id/79622955-6d1a-4439-b358-ec2b6a9e7cbf%40enterprisedb.com\nYes, I'll be back in that thread soon. I'm still on vacation and it's hard to get continuous uninterrupted time here. You did a great review, and I want to address all issues there wholistically. Thank you!\n\n>> 2. gistBuildParallelCallback() updates indtuplesSize, but it seems to be\n>> not used anywhere. AFAIK it's only needed to buffered build. 3. I\n>> think we need a test that reliably triggers parallel and serial\n>> builds.\n>> \n> \n> Yeah, it's possible the variable is unused. Agreed on the testing.\n> \n>> As far as I know, there's a well known trick to build better GiST\n>> over PostGIS data: randomize input. I think parallel scan is just\n>> what is needed, it will shuffle tuples enough...\n>> \n> \n> I'm not sure I understand this comment. What do you mean by \"better\n> GiST\" or what does that mean for this patch?\n\nI think parallel build indexes will have faster IndexScans.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 22 Jul 2024 12:00:32 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "On 7/22/24 11:00, Andrey M. Borodin wrote:\n> \n> \n>> On 21 Jul 2024, at 23:42, Tomas Vondra <[email protected]> wrote:\n>>\n>>>\n>>> 1. Do I get it right that is_parallel argument for gistGetFakeLSN()\n>>> is only needed for assertion? And this assertion can be ensured just\n>>> by inspecting code. Is it really necessary?\n>>\n>> Yes, in the patch it's only used for an assert. But it's actually\n>> incorrect - just as I speculated in my initial message (in the section\n>> about gistGetFakeLSN), it sometimes fails to detect a page split. I\n>> noticed that while testing the patch adding GiST to amcheck, and I\n>> reported that in that thread [1]. 
But I apparently forgot to post an\n>> updated version of this patch :-(\n> \n> Oops, I just though that it's a version with solved FakeLSN problem.\n> \n>>\n>> I'll post a new version tomorrow, but in short it needs to update the\n>> fake LSN even if (lastlsn != currlsn) if is_parallel=true. It's a bit\n>> annoying this means we generate a new fake LSN on every call, and I'm\n>> not sure that's actually necessary. But I've been unable to come up with\n>> a better condition when to generate a new LSN.\n> \n> Why don't we just use an atomic counter withtin shared build state? \n> \n\nI don't understand how would that solve the problem, can you elaborate?\nWhich of the values are you suggesting should be replaced with the\nshared counter? lastlsn?\n\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/79622955-6d1a-4439-b358-ec2b6a9e7cbf%40enterprisedb.com\n> Yes, I'll be back in that thread soon. I'm still on vacation and it's hard to get continuous uninterrupted time here. You did a great review, and I want to address all issues there wholistically. Thank you!\n> \n>>> 2. gistBuildParallelCallback() updates indtuplesSize, but it seems to be\n>>> not used anywhere. AFAIK it's only needed to buffered build. 3. I\n>>> think we need a test that reliably triggers parallel and serial\n>>> builds.\n>>>\n>>\n>> Yeah, it's possible the variable is unused. Agreed on the testing.\n>>\n>>> As far as I know, there's a well known trick to build better GiST\n>>> over PostGIS data: randomize input. I think parallel scan is just\n>>> what is needed, it will shuffle tuples enough...\n>>>\n>>\n>> I'm not sure I understand this comment. What do you mean by \"better\n>> GiST\" or what does that mean for this patch?\n> \n> I think parallel build indexes will have faster IndexScans.\n> \n\nThat's interesting. I haven't thought about measuring stuff like that\n(and it hasn't occurred to me it might have this benefit, or why would\nthat be the case).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Jul 2024 11:26:28 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "\n\n> On 22 Jul 2024, at 12:26, Tomas Vondra <[email protected]> wrote:\n> \n> I don't understand how would that solve the problem, can you elaborate?\n> Which of the values are you suggesting should be replaced with the\n> shared counter? lastlsn?\n\nI think during build we should consider index unlogged and always use GetFakeLSNForUnloggedRel() or something similar. Anyway we will calllog_newpage_range(RelationGetNumberOfBlocks(index)) at the end.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 22 Jul 2024 14:08:39 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "\n\nOn 7/22/24 13:08, Andrey M. Borodin wrote:\n> \n> \n>> On 22 Jul 2024, at 12:26, Tomas Vondra\n>> <[email protected]> wrote:\n>> \n>> I don't understand how would that solve the problem, can you\n>> elaborate? Which of the values are you suggesting should be\n>> replaced with the shared counter? lastlsn?\n> \n> I think during build we should consider index unlogged and always use\n> GetFakeLSNForUnloggedRel() or something similar. 
Anyway we will\n> calllog_newpage_range(RelationGetNumberOfBlocks(index)) at the end.\n> \n\nBut that doesn't update the page LSN, which GiST uses to detect\nconcurrent splits, no?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Jul 2024 13:53:31 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "\n\n> On 22 Jul 2024, at 14:53, Tomas Vondra <[email protected]> wrote:\n> \n> \n> \n> On 7/22/24 13:08, Andrey M. Borodin wrote:\n>> \n>> \n>>> On 22 Jul 2024, at 12:26, Tomas Vondra\n>>> <[email protected]> wrote:\n>>> \n>>> I don't understand how would that solve the problem, can you\n>>> elaborate? Which of the values are you suggesting should be\n>>> replaced with the shared counter? lastlsn?\n>> \n>> I think during build we should consider index unlogged and always use\n>> GetFakeLSNForUnloggedRel() or something similar. Anyway we will\n>> calllog_newpage_range(RelationGetNumberOfBlocks(index)) at the end.\n>> \n> \n> But that doesn't update the page LSN, which GiST uses to detect\n> concurrent splits, no?\n\nDuring inserting tuples we need NSN on page. For NSN we can use just a counter, generated by gistGetFakeLSN() which in turn will call GetFakeLSNForUnloggedRel(). Or any other shared counter.\nAfter inserting tuples we call log_newpage_range() to actually WAL-log pages.\nAll NSNs used during build must be less than LSNs used to insert new tuples after index is built.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 22 Jul 2024 15:08:55 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "On 7/22/24 2:08 PM, Andrey M. Borodin wrote:\n> During inserting tuples we need NSN on page. For NSN we can use just a counter, generated by gistGetFakeLSN() which in turn will call GetFakeLSNForUnloggedRel(). Or any other shared counter.\n> After inserting tuples we call log_newpage_range() to actually WAL-log pages.\n> All NSNs used during build must be less than LSNs used to insert new tuples after index is built.\n\nI feel the tricky part about doing that is that we need to make sure the \nfake LSNs are all less than the current real LSN when the index build \ncompletes and while that normally should be the case we will have a \nalmost never exercised code path for when the fake LSN becomes bigger \nthan the real LSN which may contain bugs. Is that really worth it to \noptimize.\n\nBut if we are going to use fake LSN: since the index being built is not \nvisible to any scans we do not have to use GetFakeLSNForUnloggedRel() \nbut could use an own counter in shared memory in the GISTShared struct \nfor this specific index which starts at FirstNormalUnloggedLSN. 
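A minimal sketch of the "own counter in shared memory" idea being discussed here, using the existing pg_atomic_uint64 primitives; the struct name GISTBuildShared and the field name are illustrative placeholders, not the names used in the actual patch.

#include "postgres.h"

#include "access/xlogdefs.h"
#include "port/atomics.h"

typedef struct GISTBuildShared
{
    /* ... other parallel-build state lives here ... */
    pg_atomic_uint64 fake_lsn_counter;
} GISTBuildShared;

/* leader, while initializing the shared (DSM) build state */
static void
gist_shared_counter_init(GISTBuildShared *shared)
{
    pg_atomic_init_u64(&shared->fake_lsn_counter, FirstNormalUnloggedLSN);
}

/* any participant that needs a fresh fake LSN / NSN */
static XLogRecPtr
gist_next_fake_lsn(GISTBuildShared *shared)
{
    return (XLogRecPtr) pg_atomic_fetch_add_u64(&shared->fake_lsn_counter, 1);
}

Because every participant draws from the same counter, no two pages can receive the same value, which is the property the per-backend fake LSN logic cannot guarantee; whether the values also need to stay below the real insert LSN is the open question in the surrounding messages.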
This \nwould give us slightly less contention plus decrease the risk (for good \nand bad) of the fake LSN being larger than the real LSN.\n\nAndreas\n\n\n", "msg_date": "Fri, 26 Jul 2024 11:30:12 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "\n\n> On 26 Jul 2024, at 14:30, Andreas Karlsson <[email protected]> wrote:\n> \n> I feel the tricky part about doing that is that we need to make sure the fake LSNs are all less than the current real LSN when the index build completes and while that normally should be the case we will have a almost never exercised code path for when the fake LSN becomes bigger than the real LSN which may contain bugs. Is that really worth it to optimize.\n> \n> But if we are going to use fake LSN: since the index being built is not visible to any scans we do not have to use GetFakeLSNForUnloggedRel() but could use an own counter in shared memory in the GISTShared struct for this specific index which starts at FirstNormalUnloggedLSN. This would give us slightly less contention plus decrease the risk (for good and bad) of the fake LSN being larger than the real LSN.\n\n+1 for atomic counter in GISTShared.\nBTW we can just reset LSNs to GistBuildLSN just before doing log_newpage_range().\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 26 Jul 2024 17:13:52 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "On 7/26/24 14:13, Andrey M. Borodin wrote:\n> \n> \n>> On 26 Jul 2024, at 14:30, Andreas Karlsson <[email protected]> wrote:\n>>\n>> I feel the tricky part about doing that is that we need to make sure the fake LSNs are all less than the current real LSN when the index build completes and while that normally should be the case we will have a almost never exercised code path for when the fake LSN becomes bigger than the real LSN which may contain bugs. Is that really worth it to optimize.\n>>\n>> But if we are going to use fake LSN: since the index being built is not visible to any scans we do not have to use GetFakeLSNForUnloggedRel() but could use an own counter in shared memory in the GISTShared struct for this specific index which starts at FirstNormalUnloggedLSN. This would give us slightly less contention plus decrease the risk (for good and bad) of the fake LSN being larger than the real LSN.\n> \n> +1 for atomic counter in GISTShared.\n\nI tried implementing this, see the attached 0002 patch that replaces the\nfake LSN with an atomic counter in shared memory. It seems to work (more\ntesting needed), but I can't say I'm very happy with the code :-(\n\nThe way it passes the shared counter to places that actually need it is\npretty ugly. The thing is - the counter needs to be in shared memory,\nbut places like gistplacetopage() have no idea/need of that. I chose to\nsimply pass a pg_atomic_uint64 pointer, but that's ... not pretty. Is\nthere's a better way to do this?\n\nI thought maybe we could simply increment the counter before each call\nand pass the LSN value - 64bits should be enough, not sure about the\noverhead. But gistplacetopage() also uses the LSN twice, and I'm not\nsure it'd be legal to use the same value twice.\n\nAny better ideas?\n\n\n> BTW we can just reset LSNs to GistBuildLSN just before doing log_newpage_range().\n> \n\nWhy would the reset be necessary? Doesn't log_newpage_range() set page\nLSN to current insert LSN? 
So why would reset that?\n\nI'm not sure about the discussion about NSN and the need to handle the\ncase when NSN / fake LSN values get ahead of LSN. Is that really a\nproblem? If the values generated from the counter are private to the\nindex build, and log_newpage_range() replaces them with current LSN, do\nwe still need to worry about it?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 30 Jul 2024 11:05:56 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "\n\n> On 30 Jul 2024, at 14:05, Tomas Vondra <[email protected]> wrote:\n> \n> \n> \n> On 7/26/24 14:13, Andrey M. Borodin wrote:\n>> \n>> \n>>> On 26 Jul 2024, at 14:30, Andreas Karlsson <[email protected]> wrote:\n>>> \n>>> I feel the tricky part about doing that is that we need to make sure the fake LSNs are all less than the current real LSN when the index build completes and while that normally should be the case we will have a almost never exercised code path for when the fake LSN becomes bigger than the real LSN which may contain bugs. Is that really worth it to optimize.\n>>> \n>>> But if we are going to use fake LSN: since the index being built is not visible to any scans we do not have to use GetFakeLSNForUnloggedRel() but could use an own counter in shared memory in the GISTShared struct for this specific index which starts at FirstNormalUnloggedLSN. This would give us slightly less contention plus decrease the risk (for good and bad) of the fake LSN being larger than the real LSN.\n>> \n>> +1 for atomic counter in GISTShared.\n> \n> I tried implementing this, see the attached 0002 patch that replaces the\n> fake LSN with an atomic counter in shared memory. It seems to work (more\n> testing needed), but I can't say I'm very happy with the code :-(\n\nI agree. Passing this pointer everywhere seems ugly.\n\n> \n> The way it passes the shared counter to places that actually need it is\n> pretty ugly. The thing is - the counter needs to be in shared memory,\n> but places like gistplacetopage() have no idea/need of that. I chose to\n> simply pass a pg_atomic_uint64 pointer, but that's ... not pretty. Is\n> there's a better way to do this?\n> \n> I thought maybe we could simply increment the counter before each call\n> and pass the LSN value - 64bits should be enough, not sure about the\n> overhead. But gistplacetopage() also uses the LSN twice, and I'm not\n> sure it'd be legal to use the same value twice.\n> \n> Any better ideas?\n\nHow about global pointer to fake LSN?\nJust set it to correct pointer when in parallel build, or NULL either way.\n\n>> BTW we can just reset LSNs to GistBuildLSN just before doing log_newpage_range().\n>> \n> \n> Why would the reset be necessary? Doesn't log_newpage_range() set page\n> LSN to current insert LSN? So why would reset that?\n> \n> I'm not sure about the discussion about NSN and the need to handle the\n> case when NSN / fake LSN values get ahead of LSN. Is that really a\n> problem? If the values generated from the counter are private to the\n> index build, and log_newpage_range() replaces them with current LSN, do\n> we still need to worry about it?\n\nStamping pages with new real LSN will do the trick. I didn’t know that log_newpage_range() is already doing so.\n\nHow do we synchronize Shared Fake LSN with global XLogCtl->unloggedLSN? 
Just bump XLogCtl->unloggedLSN if necessary?\nPerhaps, consider using GetFakeLSNForUnloggedRel() instead of shared counter? As long as we do not care about FakeLSN>RealLSN.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 30 Jul 2024 14:39:44 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "\n\nOn 7/30/24 11:39, Andrey M. Borodin wrote:\n> \n> \n>> On 30 Jul 2024, at 14:05, Tomas Vondra <[email protected]> wrote:\n>>\n>>\n>>\n>> On 7/26/24 14:13, Andrey M. Borodin wrote:\n>>>\n>>>\n>>>> On 26 Jul 2024, at 14:30, Andreas Karlsson <[email protected]> wrote:\n>>>>\n>>>> I feel the tricky part about doing that is that we need to make sure the fake LSNs are all less than the current real LSN when the index build completes and while that normally should be the case we will have a almost never exercised code path for when the fake LSN becomes bigger than the real LSN which may contain bugs. Is that really worth it to optimize.\n>>>>\n>>>> But if we are going to use fake LSN: since the index being built is not visible to any scans we do not have to use GetFakeLSNForUnloggedRel() but could use an own counter in shared memory in the GISTShared struct for this specific index which starts at FirstNormalUnloggedLSN. This would give us slightly less contention plus decrease the risk (for good and bad) of the fake LSN being larger than the real LSN.\n>>>\n>>> +1 for atomic counter in GISTShared.\n>>\n>> I tried implementing this, see the attached 0002 patch that replaces the\n>> fake LSN with an atomic counter in shared memory. It seems to work (more\n>> testing needed), but I can't say I'm very happy with the code :-(\n> \n> I agree. Passing this pointer everywhere seems ugly.\n> \n\nYeah.\n\n>>\n>> The way it passes the shared counter to places that actually need it is\n>> pretty ugly. The thing is - the counter needs to be in shared memory,\n>> but places like gistplacetopage() have no idea/need of that. I chose to\n>> simply pass a pg_atomic_uint64 pointer, but that's ... not pretty. Is\n>> there's a better way to do this?\n>>\n>> I thought maybe we could simply increment the counter before each call\n>> and pass the LSN value - 64bits should be enough, not sure about the\n>> overhead. But gistplacetopage() also uses the LSN twice, and I'm not\n>> sure it'd be legal to use the same value twice.\n>>\n>> Any better ideas?\n> \n> How about global pointer to fake LSN?\n> Just set it to correct pointer when in parallel build, or NULL either way.\n> \n\nI'm not sure adding a global variable is pretty either. What if there's\nsome error, for example? Will it get reset to NULL?\n\n>>> BTW we can just reset LSNs to GistBuildLSN just before doing log_newpage_range().\n>>>\n>>\n>> Why would the reset be necessary? Doesn't log_newpage_range() set page\n>> LSN to current insert LSN? So why would reset that?\n>>\n>> I'm not sure about the discussion about NSN and the need to handle the\n>> case when NSN / fake LSN values get ahead of LSN. Is that really a\n>> problem? If the values generated from the counter are private to the\n>> index build, and log_newpage_range() replaces them with current LSN, do\n>> we still need to worry about it?\n> \n> Stamping pages with new real LSN will do the trick. 
I didn’t know that log_newpage_range() is already doing so.\n> \n\nI believe it does, or at least that's what I believe this code at the\nend is meant to do:\n\n recptr = XLogInsert(RM_XLOG_ID, XLOG_FPI);\n\n for (i = 0; i < nbufs; i++)\n {\n PageSetLSN(BufferGetPage(bufpack[i]), recptr);\n UnlockReleaseBuffer(bufpack[i]);\n }\n\nUnless I misunderstood what this does.\n\n> How do we synchronize Shared Fake LSN with global XLogCtl->unloggedLSN? Just bump XLogCtl->unloggedLSN if necessary?\n> Perhaps, consider using GetFakeLSNForUnloggedRel() instead of shared counter? As long as we do not care about FakeLSN>RealLSN.\n> \n\nI'm confused. How is this related to unloggedLSN at all?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 30 Jul 2024 11:57:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "\n\n> On 30 Jul 2024, at 14:57, Tomas Vondra <[email protected]> wrote:\n> \n>> \n>> How do we synchronize Shared Fake LSN with global XLogCtl->unloggedLSN? Just bump XLogCtl->unloggedLSN if necessary?\n>> Perhaps, consider using GetFakeLSNForUnloggedRel() instead of shared counter? As long as we do not care about FakeLSN>RealLSN.\n>> \n> \n> I'm confused. How is this related to unloggedLSN at all?\n\nParallel build should work for both logged and unlogged indexes.\nIf we use fake LSN in shared memory, we have to make sure that FakeLSN < XLogCtl->unloggedLSN after build.\nEither way we can just use XLogCtl->unloggedLSN instead of FakeLSN in shared memory.\n\nIn other words I propose to use GetFakeLSNForUnloggedRel() instead of \"pg_atomic_uint64 *fakelsn;”.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 30 Jul 2024 16:31:03 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "\n\nOn 7/30/24 13:31, Andrey M. Borodin wrote:\n> \n> \n>> On 30 Jul 2024, at 14:57, Tomas Vondra <[email protected]> wrote:\n>>\n>>>\n>>> How do we synchronize Shared Fake LSN with global XLogCtl->unloggedLSN? Just bump XLogCtl->unloggedLSN if necessary?\n>>> Perhaps, consider using GetFakeLSNForUnloggedRel() instead of shared counter? As long as we do not care about FakeLSN>RealLSN.\n>>>\n>>\n>> I'm confused. How is this related to unloggedLSN at all?\n> \n> Parallel build should work for both logged and unlogged indexes.\n\nAgreed, no argument here.\n\n> If we use fake LSN in shared memory, we have to make sure that FakeLSN < XLogCtl->unloggedLSN after build.\n> Either way we can just use XLogCtl->unloggedLSN instead of FakeLSN in shared memory.\n> \n\nAh, right. For unlogged relations we won't invoke log_newpage_range(),\nso we'd end up with the bogus page LSNs ...\n\n> In other words I propose to use GetFakeLSNForUnloggedRel() instead of \"pg_atomic_uint64 *fakelsn;”.\n> \n\nInteresting idea. IIRC you suggested this earlier, but I didn't realize\nit has the benefit of already using an atomic counter - which solves the\n\"ugliness\" of my patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 30 Jul 2024 14:46:39 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "On 7/30/24 1:31 PM, Andrey M. 
Borodin wrote:>> On 30 Jul 2024, at 14:57, \nTomas Vondra <[email protected]> wrote:\n>>\n>>>\n>>> How do we synchronize Shared Fake LSN with global XLogCtl->unloggedLSN? Just bump XLogCtl->unloggedLSN if necessary?\n>>> Perhaps, consider using GetFakeLSNForUnloggedRel() instead of shared counter? As long as we do not care about FakeLSN>RealLSN.\n>>>\n>>\n>> I'm confused. How is this related to unloggedLSN at all?\n> \n> Parallel build should work for both logged and unlogged indexes.\n> If we use fake LSN in shared memory, we have to make sure that FakeLSN < XLogCtl->unloggedLSN after build.\n> Either way we can just use XLogCtl->unloggedLSN instead of FakeLSN in shared memory.\n> \n> In other words I propose to use GetFakeLSNForUnloggedRel() instead of \"pg_atomic_uint64 *fakelsn;”.\n\nYeah,\n\nGreat point, given the ugliness of passing around the fakelsn we might \nas well just use GetFakeLSNForUnloggedRel().\n\nAndreas\n\n\n", "msg_date": "Sat, 3 Aug 2024 11:25:34 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIP: parallel GiST index builds" }, { "msg_contents": "Hi,\n\nHere's an updated patch using GetFakeLSNForUnloggedRel() instead of the\natomic counter. I think this looks much nicer and less invasive, as it\nsimply uses XLogCtl shared memory (instead of having to pass a new\npointer everywhere).\n\nWe still need to pass the is_parallel flag, though. I wonder if we could\nget rid of that too, and just use GetFakeLSNForUnloggedRel() for both\nparallel and non-parallel builds? Why wouldn't that work?\n\nI've spent quite a bit of time testing this, but mostly for correctness.\nI haven't redone the benchmarks, that's on my TODO.\n\n\nregards\n\n-- \nTomas Vondra", "msg_date": "Mon, 5 Aug 2024 17:18:52 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WIP: parallel GiST index builds" } ]
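As a concrete, if rough, way to exercise the invariant the fake-LSN/NSN discussion above is worried about, the following sketch builds an ordinary GiST index and checks that no index page carries an LSN beyond the current WAL insert position. It relies only on the stock pageinspect extension; the table and index names are made up for illustration, and this is a sanity check rather than anything a patch would need to ship:

CREATE EXTENSION IF NOT EXISTS pageinspect;

CREATE TABLE gist_points (p point);
INSERT INTO gist_points
  SELECT point(i, i) FROM generate_series(1, 100000) AS i;
CREATE INDEX gist_points_idx ON gist_points USING gist (p);

-- Once the build has stamped the pages (log_newpage_range() for a logged
-- index), every page LSN should be <= the current insert LSN, so this is
-- expected to return zero rows.
SELECT blkno, lsn
FROM generate_series(0::bigint,
       pg_relation_size('gist_points_idx') / current_setting('block_size')::int - 1) AS blkno,
     LATERAL page_header(get_raw_page('gist_points_idx', blkno::int))
WHERE lsn > pg_current_wal_insert_lsn();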
[ { "msg_contents": "Here is a new function which could produce an array of numbers with a\ncontrollable array length and duplicated elements in these arrays. I\nused it when working with gin index, and I think it would be helpful for\nothers as well, so here it is.\n\nselect * from normal_rand_array(5, 10, 1.8::numeric, 3.5::numeric);\n normal_rand_array \n-----------------------------------------------\n {3.3,2.3,2.7,3.2,2.0,2.7,3.4,2.7,2.3,2.9}\n {3.3,1.8,2.9,3.4,2.0,1.8,2.0,3.5,2.8,2.5}\n {2.1,1.9,2.3,1.9,2.5,2.7,2.4,2.9,1.8}\n {2.3,2.5,2.4,2.7,2.7,2.3,2.9,3.3,3.3,1.9,3.5}\n {2.8,3.4,2.7,1.8,3.3,2.3,2.2,3.5,2.6,2.5}\n(5 rows)\n\nselect * from normal_rand_array(5, 10, 1.8::int4, 3.5::int4);\n normal_rand_array \n-------------------------------------\n {3,2,2,3,4,2}\n {2,4,2,3,3,3,3,2,2,3,3,2,3,2}\n {2,4,3}\n {4,2,3,4,2,4,2,2,3,4,3,3,2,4,4,2,3}\n {4,3,3,4,3,3,4,2,4}\n(5 rows)\n\nthe 5 means it needs to produce 5 rows in total and the 10 is the\naverage array length, and 1.8 is the minvalue for the random function\nand 3.5 is the maxvalue. \n\n-- \nBest Regards\nAndy Fan", "msg_date": "Sat, 08 Jun 2024 14:05:51 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "New function normal_rand_array function to contrib/tablefunc." }, { "msg_contents": "It looks useful, for example, it can be used in sorting tests to make them more interesting. I just have one question. Why are you using SRF_IS_FIRST CALL and not _PG_init?\r\nBest regards, Stepan Neretin.", "msg_date": "Mon, 24 Jun 2024 08:30:32 +0000", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." }, { "msg_contents": "Hi Andy\n\nOn 08.06.24 08:05, Andy Fan wrote:\n> Here is a new function which could produce an array of numbers with a\n> controllable array length and duplicated elements in these arrays. I\n> used it when working with gin index, and I think it would be helpful for\n> others as well, so here it is.\n>\n> select * from normal_rand_array(5, 10, 1.8::numeric, 3.5::numeric);\n> normal_rand_array \n> -----------------------------------------------\n> {3.3,2.3,2.7,3.2,2.0,2.7,3.4,2.7,2.3,2.9}\n> {3.3,1.8,2.9,3.4,2.0,1.8,2.0,3.5,2.8,2.5}\n> {2.1,1.9,2.3,1.9,2.5,2.7,2.4,2.9,1.8}\n> {2.3,2.5,2.4,2.7,2.7,2.3,2.9,3.3,3.3,1.9,3.5}\n> {2.8,3.4,2.7,1.8,3.3,2.3,2.2,3.5,2.6,2.5}\n> (5 rows)\n>\n> select * from normal_rand_array(5, 10, 1.8::int4, 3.5::int4);\n> normal_rand_array \n> -------------------------------------\n> {3,2,2,3,4,2}\n> {2,4,2,3,3,3,3,2,2,3,3,2,3,2}\n> {2,4,3}\n> {4,2,3,4,2,4,2,2,3,4,3,3,2,4,4,2,3}\n> {4,3,3,4,3,3,4,2,4}\n> (5 rows)\n>\n> the 5 means it needs to produce 5 rows in total and the 10 is the\n> average array length, and 1.8 is the minvalue for the random function\n> and 3.5 is the maxvalue. \n>\n\nWhen either minval or maxval exceeds int4 the function cannot be\nexecuted/found\n\nSELECT * FROM normal_rand_array(5, 10, 8, 42::bigint);\n\nERROR:  function normal_rand_array(integer, integer, integer, bigint)\ndoes not exist\nLINE 1: SELECT * FROM normal_rand_array(5, 10, 8, 42::bigint);\n                      ^\nHINT:  No function matches the given name and argument types. 
You might\nneed to add explicit type casts.\n---\n\nSELECT * FROM normal_rand_array(5, 10, 8::bigint, 42);\n\nERROR:  function normal_rand_array(integer, integer, bigint, integer)\ndoes not exist\nLINE 1: SELECT * FROM normal_rand_array(5, 10, 8::bigint, 42);\n                      ^\nHINT:  No function matches the given name and argument types. You might\nneed to add explicit type casts.\n---\n\nHowever, when both are int8 it works fine:\n\nSELECT * FROM normal_rand_array(5, 10, 8::bigint, 42::bigint);\n\n                normal_rand_array                 \n--------------------------------------------------\n {29,38,31,10,23,39,9,32}\n {8,39,19,31,29,15,17,15,36,20,33,19}\n {15,18,42,19}\n {16,31,33,11,14,20,24,9,12,17,22,42,41,24,11,41}\n {15,11,36,8,28,37}\n(5 rows)\n---\n\nIs it the expected behaviour?\n\nIn some cases the function returns an empty array. Is it also expected?\n\nSELECT count(*)\nFROM normal_rand_array(100000, 10, 8, 42) i\nWHERE array_length(i,1) IS NULL;\n\n count\n-------\n  4533\n(1 row)\n\n\nIn both cases, perhaps mentioning these behaviors in the docs would\navoid some confusion.\n\nThanks!\n\nBest,\n\n-- \nJim\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 12:18:21 +0200", "msg_from": "Jim Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." }, { "msg_contents": "On Tue, 2 Jul 2024 at 11:18, Jim Jones <[email protected]> wrote:\n>\n> When either minval or maxval exceeds int4 the function cannot be\n> executed/found\n>\n> SELECT * FROM normal_rand_array(5, 10, 8, 42::bigint);\n>\n> ERROR: function normal_rand_array(integer, integer, integer, bigint)\n> does not exist\n> LINE 1: SELECT * FROM normal_rand_array(5, 10, 8, 42::bigint);\n> ^\n> HINT: No function matches the given name and argument types. You might\n> need to add explicit type casts.\n>\n\nThis could be solved by defining separate functions for each supported\ntype, rather than one function with type anyelement. Internally, they\ncould continue to share common code, of course.\n\n> In some cases the function returns an empty array. Is it also expected?\n>\n\nPerhaps it would be useful to have separate minimum and maximum array\nlength arguments, rather than a mean array length argument.\n\nActually, I find the current behaviour somewhat counterintuitive. Only\nafter reading the source code did I realise what it's actually doing,\nwhich is this:\n\nRow 1: array of random length in range [0, meanarraylen]\nRow 2: array of length 2*meanarraylen - length of previous array\nRow 3: array of random length in range [0, meanarraylen]\nRow 4: array of length 2*meanarraylen - length of previous array\n...\n\nThat's far from obvious (it's certainly not documented) and I don't\nthink it's a particularly good way of achieving a specified mean array\nlength, because only half the lengths are random.\n\nOne thing that's definitely needed is more documentation. It should\nhave its own new subsection, like the other tablefunc functions.\n\nSomething else that confused me is why this function is called\nnormal_rand_array(). The underlying random functions that it's calling\nreturn random values from a uniform distribution, not a normal\ndistribution. 
Arrays of normally distributed random values might also\nbe useful, but that's not what this patch is doing.\n\nAlso, the function accepts float8 minval and maxval arguments, and\nthen simply ignores them and returns random float8 values in the range\n[0,1), which is highly counterintuitive.\n\nMy suggestion would be to mirror the signatures of the core random()\nfunctions more closely, and have this:\n\n1). rand_array(numvals int, minlen int, maxlen int)\n returns setof float8[]\n\n2). rand_array(numvals int, minlen int, maxlen int,\n minval int, maxval int)\n returns setof int[]\n\n3). rand_array(numvals int, minlen int, maxlen int,\n minval bigint, maxval bigint)\n returns setof bigint[]\n\n4). rand_array(numvals int, minlen int, maxlen int,\n minval numeric, maxval numeric)\n returns setof numeric[]\n\nAlso, I'd recommend giving the function arguments names in SQL, like\nthis, since that makes them easier to use.\n\nSomething else that's not obvious is that this patch is relying on the\ncore random functions, which means that it's using the same PRNG state\nas those functions. That's probably OK, but it should be documented,\nbecause it's different from tablefunc's normal_rand() function, which\nuses pg_global_prng_state. In particular, what this means is that\ncalling setseed() will affect the output of these new functions, but\nnot normal_rand(). Incidentally, that makes writing tests much simpler\n-- just call setseed() first and the output will be repeatable.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 15 Jul 2024 11:47:14 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n\n> My suggestion would be to mirror the signatures of the core random()\n> functions more closely, and have this:\n>\n> 1). rand_array(numvals int, minlen int, maxlen int)\n> returns setof float8[]\n>\n> 2). rand_array(numvals int, minlen int, maxlen int,\n> minval int, maxval int)\n> returns setof int[]\n>\n> 3). rand_array(numvals int, minlen int, maxlen int,\n> minval bigint, maxval bigint)\n> returns setof bigint[]\n>\n> 4). rand_array(numvals int, minlen int, maxlen int,\n> minval numeric, maxval numeric)\n> returns setof numeric[]\n\nthis is indeed a more clean and correct APIs, I will use the above ones\nin the next version. Thanks for the suggestion. \n\nIt is just not clear to me how verbose the document should to be, and\nwhere the document should be, tablefunc.sgml, the comment above the\nfunction or the places just the code? In many cases you provides above\nor below are just implementation details, not the API declaimed purpose. \n\n> Something else that's not obvious is that this patch is relying on the\n> core random functions, which means that it's using the same PRNG state\n> as those functions. That's probably OK, but it should be documented,\n> because it's different from tablefunc's normal_rand() function, which\n> uses pg_global_prng_state.\n\nMy above question applies to this comment. \n\n> In particular, what this means is that\n> calling setseed() will affect the output of these new functions, but\n> not normal_rand(). Incidentally, that makes writing tests much simpler\n> -- just call setseed() first and the output will be repeatable.\n\nGood to know this user case. for example, should this be documented?\n\n>> In some cases the function returns an empty array. 
Is it also expected?\n>>\n>\n> Perhaps it would be useful to have separate minimum and maximum array\n> length arguments, rather than a mean array length argument.\n\nI'm not sure which one is better, but main user case of this function\nfor testing pupose, so it I think minimum and maximum array length is\ngood for me too.\n\n> \n> Actually, I find the current behaviour somewhat counterintuitive. Only\n> after reading the source code did I realise what it's actually doing,\n> which is this:\n>\n> Row 1: array of random length in range [0, meanarraylen]\n> Row 2: array of length 2*meanarraylen - length of previous array\n> Row 3: array of random length in range [0, meanarraylen]\n> Row 4: array of length 2*meanarraylen - length of previous array\n> ...\n>\n> That's far from obvious (it's certainly not documented) and I don't\n> think it's a particularly good way of achieving a specified mean array\n> length, because only half the lengths are random.\n\nI'm not sure how does this matter in real user case.\n\n> One thing that's definitely needed is more documentation. It should\n> have its own new subsection, like the other tablefunc functions.\n\nis the documentaion for the '2*meanarraylen - lastarraylen'?\n\nand What is new subsection, do you mean anything wrong in\n'tablefunc.sgml', I did have some issue to run 'make html', but the\nerror exists before my patch, so I change the document carefully without\ntesting it. do you know how to fix the below error in 'make html'?\n\n\n$/usr/bin/xsltproc --nonet --path . --path . --stringparam pg.version '18devel' stylesheet.xsl postgres-full.xml\n\nI/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\nwarning: failed to load external entity \"http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\"\ncompilation error: file stylesheet.xsl line 6 element import\nxsl:import : unable to load http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\nI/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\nstylesheet-html-common.xsl:4: warning: failed to load external entity \"http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\"\n%common.entities;\n ^\nstylesheet-html-common.xsl:124: parser error : Entity 'primary' not defined\n translate(substring(&primary;, 1, 1),\n\n\n> Something else that confused me is why this function is called\n> normal_rand_array(). The underlying random functions that it's calling\n> return random values from a uniform distribution, not a normal\n> distribution. Arrays of normally distributed random values might also\n> be useful, but that's not what this patch is doing.\n\nOK, you are right, your new names should be better. \n\n> Also, the function accepts float8 minval and maxval arguments, and\n> then simply ignores them and returns random float8 values in the range\n> [0,1), which is highly counterintuitive.\n\nThis is a obvious bug and it only exists in float8 case IIUC, will fix\nit in the next version.\n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Wed, 17 Jul 2024 14:29:07 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." 
}, { "msg_contents": "Andy Fan <[email protected]> writes:\n\n\n(just noticed this reply is sent to Jim privately, re-sent it to\npublic.)\n\n> Hi Jim,\n>\n>>\n>> When either minval or maxval exceeds int4 the function cannot be\n>> executed/found\n>>\n>> SELECT * FROM normal_rand_array(5, 10, 8, 42::bigint);\n>>\n>> ERROR:  function normal_rand_array(integer, integer, integer, bigint)\n>> does not exist\n>> LINE 1: SELECT * FROM normal_rand_array(5, 10, 8, 42::bigint);\n>>                       ^\n>> HINT:  No function matches the given name and argument types. You might\n>> need to add explicit type casts.\n>> ---\n>>\n>> SELECT * FROM normal_rand_array(5, 10, 8::bigint, 42);\n>>\n>> ERROR:  function normal_rand_array(integer, integer, bigint, integer)\n>> does not exist\n>> LINE 1: SELECT * FROM normal_rand_array(5, 10, 8::bigint, 42);\n>>                       ^\n>> HINT:  No function matches the given name and argument types. You might\n>> need to add explicit type casts.\n>> ---\n>\n>>\n>> However, when both are int8 it works fine:\n>\n> I defined the function as below:\n>\n> postgres=# \\df normal_rand_array\n> List of functions\n> Schema | Name | Result data type | Argument data types | Type \n> --------+-------------------+------------------+------------------------------------------+------\n> public | normal_rand_array | SETOF anyarray | integer, integer, anyelement, anyelement | func\n> (1 row)\n>\n> so it is required that the 3nd and 4th argument should have the same\n> data type, that's why your first 2 test case failed and the third one\n> works. and I also think we should not add a test case / document for\n> this since the behavior of 'anyelement' system.\n>\n\nThis issue can be fixed with the new API defined suggested by Dean. \n\n>>\n>> SELECT * FROM normal_rand_array(5, 10, 8::bigint, 42::bigint);\n>>\n>>                 normal_rand_array                 \n>> --------------------------------------------------\n>>  {29,38,31,10,23,39,9,32}\n>>  {8,39,19,31,29,15,17,15,36,20,33,19}\n>>  {15,18,42,19}\n>>  {16,31,33,11,14,20,24,9,12,17,22,42,41,24,11,41}\n>>  {15,11,36,8,28,37}\n>> (5 rows)\n>> ---\n>>\n>> Is it the expected behaviour?\n>\n> Yes, see the above statements.\n>\n>>\n>> In some cases the function returns an empty array. Is it also expected?\n>>\n>> SELECT count(*)\n>> FROM normal_rand_array(100000, 10, 8, 42) i\n>> WHERE array_length(i,1) IS NULL;\n>>\n>>  count\n>> -------\n>>   4533\n>> (1 row)\n>\n> Yes, by design I think it is a feature which could generate [] case\n> which should be used a special case for testing, and at the\n> implementation side, the [] means the length is 0 which is caused by I\n> choose the 'len' by random [0 .. len * 2], so 0 is possible and doesn't\n> confict with the declared behavior. \n>\n>> In both cases, perhaps mentioning these behaviors in the docs would\n>> avoid some confusion.\n>\n> hmm, It doesn't take some big effort to add them, but I'm feeling that\n> would make the document a bit of too verbose/detailed.\n>\n> Sorry for the late respone! \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Wed, 17 Jul 2024 14:31:22 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." 
}, { "msg_contents": "On Wed, 17 Jul 2024 at 07:29, Andy Fan <[email protected]> wrote:\n>\n> It is just not clear to me how verbose the document should to be, and\n> where the document should be, tablefunc.sgml, the comment above the\n> function or the places just the code? In many cases you provides above\n> or below are just implementation details, not the API declaimed purpose.\n>\n> > Something else that's not obvious is that this patch is relying on the\n> > core random functions, which means that it's using the same PRNG state\n> > as those functions. That's probably OK, but it should be documented,\n> > because it's different from tablefunc's normal_rand() function, which\n> > uses pg_global_prng_state.\n>\n> My above question applies to this comment.\n>\n> > One thing that's definitely needed is more documentation. It should\n> > have its own new subsection, like the other tablefunc functions.\n>\n\nI was really referring to the SGML docs. Try to follow the style used\nfor the existing functions in tablefunc.sgml -- so in addition to\nadding the row to the table at the top, also add one or more sections\nfurther down the page to give more details, and example output.\nSomething like this:\n\nhttps://www.postgresql.org/docs/current/tablefunc.html#TABLEFUNC-FUNCTIONS-NORMAL-RAND\n\nThat would be a good place to mention that setseed() can be used to\nproduce repeatable results.\n\n\n> I did have some issue to run 'make html', but the\n> error exists before my patch, so I change the document carefully without\n> testing it. do you know how to fix the below error in 'make html'?\n>\n> $/usr/bin/xsltproc --nonet --path . --path . --stringparam pg.version '18devel' stylesheet.xsl postgres-full.xml\n>\n> I/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\n> warning: failed to load external entity \"http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\"\n> compilation error: file stylesheet.xsl line 6 element import\n> xsl:import : unable to load http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\n> I/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\n> stylesheet-html-common.xsl:4: warning: failed to load external entity \"http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\"\n> %common.entities;\n> ^\n> stylesheet-html-common.xsl:124: parser error : Entity 'primary' not defined\n> translate(substring(&primary;, 1, 1),\n>\n\nThis looks like you're missing a required package. Try installing\ndocbook-xsl or docbook-xsl-stylesheets or something similar (the\npackage name varies depending on your distro).\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 17 Jul 2024 09:44:25 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n\nHello Dean,\n\n>> I did have some issue to run 'make html', but the\n>> error exists before my patch, so I change the document carefully without\n>> testing it. do you know how to fix the below error in 'make html'?\n>>\n>> $/usr/bin/xsltproc --nonet --path . --path . 
--stringparam pg.version '18devel' stylesheet.xsl postgres-full.xml\n>>\n>> I/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\n>> warning: failed to load external entity\n>> \"http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\"\n..\n>\n> This looks like you're missing a required package. Try installing\n> docbook-xsl or docbook-xsl-stylesheets or something similar (the\n> package name varies depending on your distro).\n\nThis does work, thank you!\n\n>> My suggestion would be to mirror the signatures of the core random()\n>> functions more closely, and have this:\n>>\n>> 1). rand_array(numvals int, minlen int, maxlen int)\n>> returns setof float8[]\n>>\n..>\n>> 4). rand_array(numvals int, minlen int, maxlen int,\n>> minval numeric, maxval numeric)\n>> returns setof numeric[]\n\n> this is indeed a more clean and correct APIs, I will use the above ones\n> in the next version. Thanks for the suggestion.\n\nI followed your suggestion in the new attached version. They are not\nonly some cleaner APIs for user and but also some cleaner implementation\nin core, Thank for this suggestion as well.\n\nSorry for the late response, just my new posistion is bit of busy that I\ndon't have enough time on community work.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 26 Aug 2024 19:01:23 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." }, { "msg_contents": "Andy Fan <[email protected]> writes:\n\n>>> My suggestion would be to mirror the signatures of the core random()\n>>> functions more closely, and have this:\n>>>\n>>> 1). rand_array(numvals int, minlen int, maxlen int)\n>>> returns setof float8[]\n>>>\n> ..>\n>>> 4). rand_array(numvals int, minlen int, maxlen int,\n>>> minval numeric, maxval numeric)\n>>> returns setof numeric[]\n>\n>> this is indeed a more clean and correct APIs, I will use the above ones\n>> in the next version. Thanks for the suggestion.\n>\n> I followed your suggestion in the new attached version. They are not\n> only some cleaner APIs for user and but also some cleaner implementation\n> in core, Thank for this suggestion as well.\n\nA new version is attached, nothing changed except replace\nPG_GETARG_INT16 with PG_GETARG_INT32. PG_GETARG_INT16 is a copy-paste\nerror.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 27 Aug 2024 16:43:45 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." }, { "msg_contents": "On Tue, 27 Aug 2024 at 16:43, Andy Fan <[email protected]> wrote:\n> Andy Fan <[email protected]> writes:\n>\n>>>> My suggestion would be to mirror the signatures of the core random()\n>>>> functions more closely, and have this:\n>>>>\n>>>> 1). rand_array(numvals int, minlen int, maxlen int)\n>>>> returns setof float8[]\n>>>>\n>> ..>\n>>>> 4). rand_array(numvals int, minlen int, maxlen int,\n>>>> minval numeric, maxval numeric)\n>>>> returns setof numeric[]\n>>\n>>> this is indeed a more clean and correct APIs, I will use the above ones\n>>> in the next version. Thanks for the suggestion.\n>>\n>> I followed your suggestion in the new attached version. They are not\n>> only some cleaner APIs for user and but also some cleaner implementation\n>> in core, Thank for this suggestion as well.\n>\n> A new version is attached, nothing changed except replace\n> PG_GETARG_INT16 with PG_GETARG_INT32. 
PG_GETARG_INT16 is a copy-paste\n> error.\n>\n\nThanks for updating the patch. Here are some comments.\n\n+\tif (minlen >= maxlen)\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\t\t\t\t errmsg(\"minlen must be greater than maxlen.\")));\n\nThere error message should be \"minlen must be smaller than maxlen\", right?\n\n+\tif (minlen < 0)\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\t\t\t\t errmsg(\"minlen and maxlen must be greater than zero.\")));\n\nHere the minlen might be zero, so the error message is incorrect.\nHow about use \"minlen must be greater than or equal to zero\"?\n\n-- \nRegrads,\nJapin Li\n\n\n", "msg_date": "Tue, 27 Aug 2024 22:33:14 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." }, { "msg_contents": "Japin Li <[email protected]> writes:\n\n\n> Thanks for updating the patch. Here are some comments.\n>\n> +\tif (minlen >= maxlen)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"minlen must be greater than maxlen.\")));\n>\n> There error message should be \"minlen must be smaller than maxlen\", right?\n>\n> +\tif (minlen < 0)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"minlen and maxlen must be greater than zero.\")));\n>\n> Here the minlen might be zero, so the error message is incorrect.\n> How about use \"minlen must be greater than or equal to zero\"?\n\nYes, you are right. A new version is attached, thanks for checking! \n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 28 Aug 2024 12:27:16 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." }, { "msg_contents": "On Wed, 28 Aug 2024 at 12:27, Andy Fan <[email protected]> wrote:\n> Japin Li <[email protected]> writes:\n>\n>\n>> Thanks for updating the patch. Here are some comments.\n>>\n>> +\tif (minlen >= maxlen)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> +\t\t\t\t errmsg(\"minlen must be greater than maxlen.\")));\n>>\n>> There error message should be \"minlen must be smaller than maxlen\", right?\n>>\n>> +\tif (minlen < 0)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> +\t\t\t\t errmsg(\"minlen and maxlen must be greater than zero.\")));\n>>\n>> Here the minlen might be zero, so the error message is incorrect.\n>> How about use \"minlen must be greater than or equal to zero\"?\n>\n> Yes, you are right. A new version is attached, thanks for checking! \n>\n\nNitpick, the minlen is smaller than maxlen, so the maxlen cannot be zero.\n\nAfter giving it some more thought, it would also be helpful if maxlen is\nequal to minlen.\n\nFor example, I want have each row has four items, I can use the following\n\nSELECT rand_array(10, 4, 4, 50::int, 80::int);\n\nOTOH, I find the range bound uses \"less than or equal to\", how about\nreplacing \"smaller\" with \"less\"?\n\n-- \nRegrads,\nJapin Li", "msg_date": "Wed, 28 Aug 2024 13:22:42 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." 
}, { "msg_contents": "Japin Li <[email protected]> writes:\n\n> On Wed, 28 Aug 2024 at 12:27, Andy Fan <[email protected]> wrote:\n>> Japin Li <[email protected]> writes:\n>>\n> Nitpick, the minlen is smaller than maxlen, so the maxlen cannot be zero.\n>\n> After giving it some more thought, it would also be helpful if maxlen is\n> equal to minlen.\n>\n> For example, I want have each row has four items, I can use the following\n>\n> SELECT rand_array(10, 4, 4, 50::int, 80::int);\n\nYes, that's a valid usage. the new vesion is attached. I have changed\nthe the commit entry [1] from \"Waiting on Author\" to \"Needs review\".\n\n> OTOH, I find the range bound uses \"less than or equal to\", how about\n> replacing \"smaller\" with \"less\"?\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 29 Aug 2024 12:38:57 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New function normal_rand_array function to contrib/tablefunc." } ]
[ { "msg_contents": "Hackers,\n\nMost of the jsonpath methods auto-unwrap in lax mode:\n\ndavid=# select jsonb_path_query('[-2,5]', '$.abs()');\n jsonb_path_query\n------------------\n 2\n 5\n(2 rows)\n\nThe obvious exceptions are size() and type(), which apply directly to arrays, so no need to unwrap:\n\ndavid=# select jsonb_path_query('[-2,5]', '$.size()');\n jsonb_path_query\n------------------\n 2\n(1 row)\n\ndavid=# select jsonb_path_query('[-2,5]', '$.type()');\n jsonb_path_query\n------------------\n \"array\"\n\nBut what about string()? Is there some reason it doesn’t unwrap?\n\ndavid=# select jsonb_path_query('[-2,5]', '$.string()');\nERROR: jsonpath item method .string() can only be applied to a bool, string, numeric, or datetime value\n\nWhat I expect:\n\ndavid=# select jsonb_path_query('[-2,5]', '$.string()');\n jsonb_path_query\n—————————\n \"2\"\n \"5\"\n(2 rows)\n\nHowever, I do see a test[1] for this behavior, so maybe there’s a reason for it?\n\nBest,\n\nDavid\n\n[1]: https://github.com/postgres/postgres/blob/REL_17_BETA1/src/test/regress/expected/jsonb_jsonpath.out#L2527-L2530\n\n\n\n", "msg_date": "Sat, 8 Jun 2024 18:49:38 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "On Sat, Jun 8, 2024 at 3:50 PM David E. Wheeler <[email protected]>\nwrote:\n\n> Hackers,\n>\n> Most of the jsonpath methods auto-unwrap in lax mode:\n>\n> david=# select jsonb_path_query('[-2,5]', '$.abs()');\n> jsonb_path_query\n> ------------------\n> 2\n> 5\n> (2 rows)\n>\n> The obvious exceptions are size() and type(), which apply directly to\n> arrays, so no need to unwrap:\n>\n> david=# select jsonb_path_query('[-2,5]', '$.size()');\n> jsonb_path_query\n> ------------------\n> 2\n> (1 row)\n>\n> david=# select jsonb_path_query('[-2,5]', '$.type()');\n> jsonb_path_query\n> ------------------\n> \"array\"\n>\n> But what about string()? Is there some reason it doesn’t unwrap?\n>\n> david=# select jsonb_path_query('[-2,5]', '$.string()');\n> ERROR: jsonpath item method .string() can only be applied to a bool,\n> string, numeric, or datetime value\n>\n> What I expect:\n>\n> david=# select jsonb_path_query('[-2,5]', '$.string()');\n> jsonb_path_query\n> —————————\n> \"2\"\n> \"5\"\n> (2 rows)\n>\n> However, I do see a test[1] for this behavior, so maybe there’s a reason\n> for it?\n>\n>\nAdding Andrew.\n\nI'm willing to call this an open item against this feature as I don't see\nany documentation explaining that string() behaves differently than the\nothers.\n\nDavid J.\n\nOn Sat, Jun 8, 2024 at 3:50 PM David E. Wheeler <[email protected]> wrote:Hackers,\n\nMost of the jsonpath methods auto-unwrap in lax mode:\n\ndavid=# select jsonb_path_query('[-2,5]', '$.abs()');\n jsonb_path_query\n------------------\n 2\n 5\n(2 rows)\n\nThe obvious exceptions are size() and type(), which apply directly to arrays, so no need to unwrap:\n\ndavid=# select jsonb_path_query('[-2,5]', '$.size()');\n jsonb_path_query\n------------------\n 2\n(1 row)\n\ndavid=# select jsonb_path_query('[-2,5]', '$.type()');\n jsonb_path_query\n------------------\n \"array\"\n\nBut what about string()? 
Is there some reason it doesn’t unwrap?\n\ndavid=# select jsonb_path_query('[-2,5]', '$.string()');\nERROR:  jsonpath item method .string() can only be applied to a bool, string, numeric, or datetime value\n\nWhat I expect:\n\ndavid=# select jsonb_path_query('[-2,5]', '$.string()');\n jsonb_path_query\n—————————\n \"2\"\n \"5\"\n(2 rows)\n\nHowever, I do see a test[1] for this behavior, so maybe there’s a reason for it?Adding Andrew.I'm willing to call this an open item against this feature as I don't see any documentation explaining that string() behaves differently than the others.David J.", "msg_date": "Wed, 12 Jun 2024 13:02:44 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "On Jun 12, 2024, at 4:02 PM, David G. Johnston <[email protected]> wrote:\n\n> Adding Andrew.\n\nThank you.\n\n> I'm willing to call this an open item against this feature as I don't see any documentation explaining that string() behaves differently than the others.\n\nMaybe there’s some wording in the standard on this topic?\n\nI’m happy to provide a patch to auto-unwrap .string() in lax mode. Seems pretty straightforward.\n\nD\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 16:10:01 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "On 2024-06-12 We 16:02, David G. Johnston wrote:\n> On Sat, Jun 8, 2024 at 3:50 PM David E. Wheeler \n> <[email protected]> wrote:\n>\n> Hackers,\n>\n> Most of the jsonpath methods auto-unwrap in lax mode:\n>\n> david=# select jsonb_path_query('[-2,5]', '$.abs()');\n>  jsonb_path_query\n> ------------------\n>  2\n>  5\n> (2 rows)\n>\n> The obvious exceptions are size() and type(), which apply directly\n> to arrays, so no need to unwrap:\n>\n> david=# select jsonb_path_query('[-2,5]', '$.size()');\n>  jsonb_path_query\n> ------------------\n>  2\n> (1 row)\n>\n> david=# select jsonb_path_query('[-2,5]', '$.type()');\n>  jsonb_path_query\n> ------------------\n>  \"array\"\n>\n> But what about string()? Is there some reason it doesn’t unwrap?\n>\n> david=# select jsonb_path_query('[-2,5]', '$.string()');\n> ERROR:  jsonpath item method .string() can only be applied to a\n> bool, string, numeric, or datetime value\n>\n> What I expect:\n>\n> david=# select jsonb_path_query('[-2,5]', '$.string()');\n>  jsonb_path_query\n> —————————\n>  \"2\"\n>  \"5\"\n> (2 rows)\n>\n> However, I do see a test[1] for this behavior, so maybe there’s a\n> reason for it?\n>\n>\n> Adding Andrew.\n>\n> I'm willing to call this an open item against this feature as I don't \n> see any documentation explaining that string() behaves differently \n> than the others.\n>\n>\n\nHmm. You might be right. Many of these items have this code, but the \nstring() branch does not:\n\n if (unwrap && JsonbType(jb) == jbvArray)\n     return executeItemUnwrapTargetArray(cxt, jsp, jb, found,\n                                         false);\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-06-12 We 16:02, David G.\n Johnston wrote:\n\n\n\n\n\nOn Sat, Jun\n 8, 2024 at 3:50 PM David E. 
Wheeler <[email protected]>\n wrote:\n\n\n\nHackers,\n\n Most of the jsonpath methods auto-unwrap in lax mode:\n\n david=# select jsonb_path_query('[-2,5]', '$.abs()');\n  jsonb_path_query\n ------------------\n  2\n  5\n (2 rows)\n\n The obvious exceptions are size() and type(), which apply\n directly to arrays, so no need to unwrap:\n\n david=# select jsonb_path_query('[-2,5]', '$.size()');\n  jsonb_path_query\n ------------------\n  2\n (1 row)\n\n david=# select jsonb_path_query('[-2,5]', '$.type()');\n  jsonb_path_query\n ------------------\n  \"array\"\n\n But what about string()? Is there some reason it doesn’t\n unwrap?\n\n david=# select\n jsonb_path_query('[-2,5]', '$.string()');\n ERROR:  jsonpath item method .string() can only be applied\n to a bool, string, numeric, or datetime value\n\n What I expect:\n\n david=# select jsonb_path_query('[-2,5]', '$.string()');\n  jsonb_path_query\n —————————\n  \"2\"\n  \"5\"\n (2 rows)\n\n However, I do see a test[1] for this behavior, so maybe\n there’s a reason for it?\n\n\n\n\n\nAdding\n Andrew.\n\n\nI'm willing\n to call this an open item against this feature as I don't\n see any documentation explaining that string() behaves\n differently than the others.\n\n\n\n\n\n\n\n\n\nHmm. You might be right. Many of these items have this code, but\n the string() branch does not:\n\nif (unwrap && JsonbType(jb) == jbvArray)\n    return executeItemUnwrapTargetArray(cxt, jsp, jb, found,\n                                        false);\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 13 Jun 2024 15:53:17 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "On Jun 13, 2024, at 3:53 PM, Andrew Dunstan <[email protected]> wrote:\n\n> Hmm. You might be right. Many of these items have this code, but the string() branch does not:\n> if (unwrap && JsonbType(jb) == jbvArray)\n> return executeItemUnwrapTargetArray(cxt, jsp, jb, found,\n> false);\n\nExactly, would be pretty easy to add. I can work up a patch this weekend.\n\nD\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 18:45:21 -0400", "msg_from": "David E. Wheeler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "On 06/13/24 18:45, David E. Wheeler wrote:\n> On Jun 13, 2024, at 3:53 PM, Andrew Dunstan <[email protected]> wrote:\n> \n>> Hmm. You might be right. Many of these items have this code, but the string() branch does not:\n>> if (unwrap && JsonbType(jb) == jbvArray)\n>> return executeItemUnwrapTargetArray(cxt, jsp, jb, found,\n>> false);\n> \n> Exactly, would be pretty easy to add. I can work up a patch this weekend.\n\nMy opinion is yes, that should be done. 9.46, umm, General\nRule 11 g ii 6) A) says just \"if MODE is lax and <JSON method> is not\ntype or size, then let BASE be Unwrap(BASE).\" No special exemption\nthere for string(), nor further below at C) XV) for the operation\nof string().\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 13 Jun 2024 21:55:04 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "On Jun 13, 2024, at 21:55, Chapman Flack <[email protected]> wrote:\n\n> My opinion is yes, that should be done. 
9.46, umm, General\n> Rule 11 g ii 6) A) says just \"if MODE is lax and <JSON method> is not\n> type or size, then let BASE be Unwrap(BASE).\" No special exemption\n> there for string(), nor further below at C) XV) for the operation\n> of string().\n\nThank you! Cited that bit in the commit message in the attached patch (also available as a GitHub PR[1]).\n\nD\n\n[1]: https://github.com/theory/postgres/pull/5", "msg_date": "Fri, 14 Jun 2024 10:39:36 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "On 06/14/24 10:39, David E. Wheeler wrote:\n> Cited that bit in the commit message in the attached patch (also available as a GitHub PR[1]).\n> \n> [1]: https://github.com/theory/postgres/pull/5\n\nI would s/extepsions/exceptions/ in the added documentation. :)\n\nOffhand (as GitHub PRs aren't really The PG Way), if they were The Way,\nI would find this one a little hard to follow, being based at a point\n28 unrelated commits ahead of the ref it's opened against. I suspect\n'master' on theory/postgres could be fast-forwarded to f1affb6 and then\nthe PR would look much more approachable.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 14 Jun 2024 11:25:53 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "> On Jun 14, 2024, at 11:25, Chapman Flack <[email protected]> wrote:\n> \n> I would s/extepsions/exceptions/ in the added documentation. :)\n\nBah, fixed and attached, thanks.\n\n> Offhand (as GitHub PRs aren't really The PG Way), if they were The Way,\n> I would find this one a little hard to follow, being based at a point\n> 28 unrelated commits ahead of the ref it's opened against. I suspect\n> 'master' on theory/postgres could be fast-forwarded to f1affb6 and then\n> the PR would look much more approachable.\n\nYeah, I pushed the PR and branch before I synced master, and GitHub was taking a while to notice and update the PR. I fixed it with `git commit --all --amend --date now --reedit-message HEAD` and force-pushed (then fixed the typo and fixed again).\n\nD", "msg_date": "Fri, 14 Jun 2024 12:03:54 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "Hi,\n\nSorry, I have missed this in the original patch. I am surprised how that\nhappened. But thanks for catching the same and fixing it.\n\nThe changes are straightforward and look good to me. However, I have kept\nthe existing test of an empty array as is, assuming that it is one of the\nvalid tests. It now returns zero rows instead of an error. Your added test\ncase covers this issue.\n\nThanks\n\n\n\nOn Fri, Jun 14, 2024 at 9:34 PM David E. Wheeler <[email protected]>\nwrote:\n\n>\n>\n> > On Jun 14, 2024, at 11:25, Chapman Flack <[email protected]> wrote:\n> >\n> > I would s/extepsions/exceptions/ in the added documentation. :)\n>\n> Bah, fixed and attached, thanks.\n>\n> > Offhand (as GitHub PRs aren't really The PG Way), if they were The Way,\n> > I would find this one a little hard to follow, being based at a point\n> > 28 unrelated commits ahead of the ref it's opened against. 
I suspect\n> > 'master' on theory/postgres could be fast-forwarded to f1affb6 and then\n> > the PR would look much more approachable.\n>\n> Yeah, I pushed the PR and branch before I synced master, and GitHub was\n> taking a while to notice and update the PR. I fixed it with `git commit\n> --all --amend --date now --reedit-message HEAD` and force-pushed (then\n> fixed the typo and fixed again).\n>\n> D\n>\n>\n>\n>\n\n-- \n\n\n\n*Jeevan Chalke*\n\n*Principal, ManagerProduct Development*\n\nenterprisedb.com <https://www.enterprisedb.com>", "msg_date": "Sat, 15 Jun 2024 19:57:48 +0530", "msg_from": "Jeevan Chalke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "On Jun 15, 2024, at 10:27, Jeevan Chalke <[email protected]> wrote:\n\n> Sorry, I have missed this in the original patch. I am surprised how that happened. But thanks for catching the same and fixing it.\n\nNo worries. :-)\n\n> The changes are straightforward and look good to me. However, I have kept the existing test of an empty array as is, assuming that it is one of the valid tests. It now returns zero rows instead of an error. Your added test case covers this issue.\n\nMakes sense, thank you.\n\nD\n\n\n\n", "msg_date": "Sat, 15 Jun 2024 10:39:06 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "On Jun 15, 2024, at 10:39, David E. Wheeler <[email protected]> wrote:\n\n>> The changes are straightforward and look good to me. However, I have kept the existing test of an empty array as is, assuming that it is one of the valid tests. It now returns zero rows instead of an error. Your added test case covers this issue.\n> \n> Makes sense, thank you.\n\nAdded https://commitfest.postgresql.org/48/5039/.\n\nD\n\n\n\n", "msg_date": "Sat, 15 Jun 2024 10:51:57 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "\nOn 2024-06-15 Sa 10:51, David E. Wheeler wrote:\n> On Jun 15, 2024, at 10:39, David E. Wheeler <[email protected]> wrote:\n>\n>>> The changes are straightforward and look good to me. However, I have kept the existing test of an empty array as is, assuming that it is one of the valid tests. It now returns zero rows instead of an error. Your added test case covers this issue.\n>> Makes sense, thank you.\n> Added https://commitfest.postgresql.org/48/5039/.\n>\n\nNot really needed, I will commit shortly.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 15 Jun 2024 12:48:53 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" }, { "msg_contents": "On Jun 15, 2024, at 12:48, Andrew Dunstan <[email protected]> wrote:\n\n> Not really needed, I will commit shortly.\n\nAh, okay, I wasn’t sure so just defaulted to making sure it was tracked. :-)\n\nThanks Andrew,\n\nD\n\n\n\n", "msg_date": "Sat, 15 Jun 2024 16:28:57 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't jsonpath .string() Unwrap?" } ]
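For reference, a small sketch of the behaviour being settled here, assuming the lax-mode unwrapping change discussed above is what gets committed; the strict-mode expectation is my reading of the SQL-standard rule quoted upthread (unwrapping applies only in lax mode), not something verified against a particular build:

-- lax (the default): the array is unwrapped and each element is stringified
SELECT jsonb_path_query('[-2,5]', '$.string()');
-- expected: "-2" and "5"

-- strict: no unwrapping, so applying .string() to an array should still error
SELECT jsonb_path_query('[-2,5]', 'strict $.string()');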
[ { "msg_contents": "Hello hackers, just as the title says:\r\n1.&nbsp;Remove redundant parentheses.\r\n2.&nbsp;Adjust the position of the break statement.\r\n--\r\nRegards,\r\nChangAo Chen", "msg_date": "Sun, 9 Jun 2024 17:39:25 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Format the code in xact_decode" }, { "msg_contents": "Changes make the code in case XLOG_XACT_INVALIDATIONS block syntactically\nconsistent with the other block. LGTM.\n\nOn Sun, Jun 9, 2024 at 6:57 PM cca5507 <[email protected]> wrote:\n\n> Hello hackers, just as the title says:\n> 1. Remove redundant parentheses.\n> 2. Adjust the position of the break statement.\n> --\n> Regards,\n> ChangAo Chen\n>\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nChanges make the code in case XLOG_XACT_INVALIDATIONS block syntactically consistent with the other block. LGTM.On Sun, Jun 9, 2024 at 6:57 PM cca5507 <[email protected]> wrote:Hello hackers, just as the title says:1. Remove redundant parentheses.2. Adjust the position of the break statement.--Regards,ChangAo Chen-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 10 Jun 2024 14:52:46 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Format the code in xact_decode" }, { "msg_contents": "Thank you for reply! I have new a patch in commitfest:Format the code in xact_decode (postgresql.org)\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\nThank you for reply! I have new a patch in commitfest:Format the code in xact_decode (postgresql.org)--Regards,ChangAo Chen", "msg_date": "Mon, 10 Jun 2024 18:03:40 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Format the code in xact_decode" }, { "msg_contents": "On Mon, Jun 10, 2024 at 06:03:40PM +0800, cca5507 wrote:\n> Thank you for reply!\n\nNo objections here, either. \n\n> I have new a patch in commitfest:Format the code in xact_decode\n> (postgresql.org)\n\nThanks for tracking that. For reference:\nhttps://commitfest.postgresql.org/48/5028/\n--\nMichael", "msg_date": "Tue, 11 Jun 2024 09:56:44 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Format the code in xact_decode" } ]
[ { "msg_contents": "Hi,\n\nI noticed that the error response for a constraint violation only contains the column name for not-null constraints. I'm confused because the field isn't present when other types of constraints are triggered, such as unique, foreign keys, and check constraints. Was this done intentionally because these other constraints may involve multiple columns, while the column_name field expects a single column?\n\nI understand that a client can find out which columns are involved by a constraint name. Alternatively, should it be made more handy so that the column_name field is present for all constraints and includes all involved columns?\n\nС уважением,\nАрхипов Никита\n\n\n\n\n\n\n\nHi,\n\nI noticed that the error response for a constraint violation only contains the column name for not-null constraints. I'm confused because the field isn't present when other types of constraints are triggered, such as unique, foreign keys, and check constraints. Was this done intentionally because these other constraints may involve multiple columns, while the column_name field expects a single column?\n\nI understand that a client can find out which columns are involved by a constraint name. Alternatively, should it be made more handy so that the column_name field is present for all constraints and includes all involved columns?\n\n\n\nС уважением,\nАрхипов Никита", "msg_date": "Sun, 9 Jun 2024 14:04:11 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "The content of the column_name field in the error response for\n a constraint violation" }, { "msg_contents": "[email protected] writes:\n> I noticed that the error response for a constraint violation only contains the column name for not-null constraints. I'm confused because the field isn't present when other types of constraints are triggered, such as unique, foreign keys, and check constraints. Was this done intentionally because these other constraints may involve multiple columns, while the column_name field expects a single column?\n\n> I understand that a client can find out which columns are involved by a constraint name. Alternatively, should it be made more handy so that the column_name field is present for all constraints and includes all involved columns?\n\nThis does not sound like a great idea to me. Our manual clearly\ndefines the contents of the column_name field as a single name:\n\n If the error was associated with a specific table column, the\n name of the column. (When this field is present, the schema\n and table name fields identify the table.)\n\nTo cram more names in there, we'd have to institute some kind of\nquoting convention, which would break all existing clients that\nare doing anything nontrivial with the field. Unique indexes on\nexpressions would present even more challenges.\n\nAlso, the point of reporting the constraint name for unique and FK\nerrors is that that name is unique (among the constraints or indexes\nof a particular table, anyway). No such assumption could be made\nfor the set of columns in an index or FK.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Jun 2024 12:28:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The content of the column_name field in the error response for a\n constraint violation" } ]
[ { "msg_contents": "Hello hackers, I found that we&nbsp;currently don't track txns committed in\r\nBUILDING_SNAPSHOT state because of the code in xact_decode():\r\n\t/*\r\n\t * If the snapshot isn't yet fully built, we cannot decode anything, so\r\n\t * bail out.\r\n\t */\r\n\tif (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)\r\n\t\treturn;\r\n\r\n\r\nThis can cause a txn to take an incorrect historic snapshot and result in an\r\n\r\ninterruption of logical replication. Consider the following scenario:\r\n(pub)create table t1 (id int primary key);\r\n(pub)insert into t1 values (1);\r\n(pub)create publication pub for table t1;\r\n(sub)create table t1 (id int primary key);\r\n(pub)begin; insert into t1 values (2); (txn1 in session1)\r\n(sub)create subscription sub connection 'hostaddr=127.0.0.1 port=5432 user=xxx dbname=postgres' publication pub; (pub will switch to BUILDING_SNAPSHOT state soon)\r\n(pub)begin; insert into t1 values (3); (txn2 in session2)\r\n(pub)create table t2 (id int primary key); (session3)\r\n(pub)commit; (commit txn1, and pub will switch to FULL_SNAPSHOT state soon)\r\n(pub)begin; insert into t2 values (1); (txn3 in session3)\r\n(pub)commit; (commit txn2, and pub will switch to CONSISTENT state soon)\r\n(pub)commit; (commit txn3, and replay txn3 will failed because its snapshot cannot see table t2)\r\n\r\n\r\nThe output of pub's log:\r\nERROR: could not map filenumber \"base/5/16395\" to relation OID\r\n\r\n\r\nIs this a bug? Should we also track the txns committed in BUILDING_SNAPSHOT state?\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\nHello hackers, I found that we currently don't track txns committed inBUILDING_SNAPSHOT state because of the code in xact_decode(): /* * If the snapshot isn't yet fully built, we cannot decode anything, so * bail out. */ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT) return;This can cause a txn to take an incorrect historic snapshot and result in aninterruption of logical replication. Consider the following scenario:(pub)create table t1 (id int primary key);(pub)insert into t1 values (1);(pub)create publication pub for table t1;(sub)create table t1 (id int primary key);(pub)begin; insert into t1 values (2); (txn1 in session1)(sub)create subscription sub connection 'hostaddr=127.0.0.1 port=5432 user=xxx dbname=postgres' publication pub; (pub will switch to BUILDING_SNAPSHOT state soon)(pub)begin; insert into t1 values (3); (txn2 in session2)(pub)create table t2 (id int primary key); (session3)(pub)commit; (commit txn1, and pub will switch to FULL_SNAPSHOT state soon)(pub)begin; insert into t2 values (1); (txn3 in session3)(pub)commit; (commit txn2, and pub will switch to CONSISTENT state soon)(pub)commit; (commit txn3, and replay txn3 will failed because its snapshot cannot see table t2)The output of pub's log:ERROR: could not map filenumber \"base/5/16395\" to relation OIDIs this a bug? 
Should we also track the txns committed in BUILDING_SNAPSHOT state?--Regards,ChangAo Chen", "msg_date": "Sun, 9 Jun 2024 23:21:52 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Historic snapshot doesn't track txns committed in BUILDING_SNAPSHOT\n state" }, { "msg_contents": "On Sun, Jun 09, 2024 at 11:21:52PM +0800, cca5507 wrote:\n> Hello hackers, I found that we&nbsp;currently don't track txns committed in\n> BUILDING_SNAPSHOT state because of the code in xact_decode():\n> \t/*\n> \t * If the snapshot isn't yet fully built, we cannot decode anything, so\n> \t * bail out.\n> \t */\n> \tif (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)\n> \t\treturn;\n> \n> The output of pub's log:\n> ERROR: could not map filenumber \"base/5/16395\" to relation OID\n> \n> Is this a bug? Should we also track the txns committed in BUILDING_SNAPSHOT state?\n\nClearly, this is not an error you should be able to see as a user. So\nyes, that's something that needs to be fixed.\n--\nMichael", "msg_date": "Mon, 10 Jun 2024 16:57:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Thank you for reply!I am trying to fix it. This patch (pass check-world) will track txns\r\ncommitted in BUILDING_SNAPSHOT state and can fix this bug.\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen", "msg_date": "Mon, 10 Jun 2024 22:04:31 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 10, 2024 at 10:04:31PM +0800, cca5507 wrote:\n> Thank you for reply!I am trying to fix it. This patch (pass check-world) will track txns\n> committed in BUILDING_SNAPSHOT state and can fix this bug.\n\nThanks for the report and the patch!\n\nI did not look at the patch in detail but I can see that it does no contain a test\nrelated to this issue.\n\nWould you mind to add a test in say contrib/test_decoding? (see catalog_change_snapshot\nfor example).\n\nFWIW, to ease the writing of the test, I think you don't need pub/sub to produce\nthe issue. 
I think you can reproduce with a single instance and multiple sessions\n: replace in your repro the \"(sub)create subscription\" by a \"logical slot creation\"\nand finish the test by \"pg_logical_slot_get_changes\" on the slot created above.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 10:33:49 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\r\n\r\n\r\nThanks for pointing it out!\r\n\r\n\r\nHere are the new version patches with a test case.\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\r\n\r\n\r\n\r\n------------------&nbsp;Original&nbsp;------------------\r\nFrom: \"Bertrand Drouvot\" <[email protected]&gt;;\r\nDate:&nbsp;Wed, Aug 7, 2024 06:33 PM\r\nTo:&nbsp;\"cca5507\"<[email protected]&gt;;\r\nCc:&nbsp;\"Michael Paquier\"<[email protected]&gt;;\"pgsql-hackers\"<[email protected]&gt;;\r\nSubject:&nbsp;Re: Historic snapshot doesn't track txns committed in BUILDING_SNAPSHOT state\r\n\r\n\r\n\r\nHi,\r\n\r\nOn Mon, Jun 10, 2024 at 10:04:31PM +0800, cca5507 wrote:\r\n&gt; Thank you for reply!I am trying to fix it. This patch (pass check-world) will track txns\r\n&gt; committed in BUILDING_SNAPSHOT state and can fix this bug.\r\n\r\nThanks for the report and the patch!\r\n\r\nI did not look at the patch in detail but I can see that it does no contain a test\r\nrelated to this issue.\r\n\r\nWould you mind to add a test in say contrib/test_decoding? (see catalog_change_snapshot\r\nfor example).\r\n\r\nFWIW, to ease the writing of the test, I think you don't need pub/sub to produce\r\nthe issue. I think you can reproduce with a single instance and multiple sessions\r\n: replace in your repro the \"(sub)create subscription\" by a \"logical slot creation\"\r\nand finish the test by \"pg_logical_slot_get_changes\" on the slot created above.\r\n\r\nRegards,\r\n\r\n-- \r\nBertrand Drouvot\r\nPostgreSQL Contributors Team\r\nRDS Open Source Databases\r\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 8 Aug 2024 15:53:29 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\n\nOn Thu, Aug 08, 2024 at 03:53:29PM +0800, cca5507 wrote:\n> Hi,\n> \n> \n> Thanks for pointing it out!\n> \n> \n> Here are the new version patches with a test case.\n\nThanks!\n\nI think the approach that the patch implements makes sense and that we should\ntrack the transactions that have been commmitted while building the snapshot.\n\nA few random comments:\n\n1 ===\n\n+ * that the xlog in BUILDING_SNAPSHOT is only useful for build\n\ns/for build/to build? (same comment for the commit message)\n\n2 ===\n\n+ * snapshot and will not be decoded.\n\nworth to mention DecodeTXNNeedSkip() here?\n\n3 ===\n\n+ * point in decoding changes. Note that we only handle XLOG_HEAP2_NEW_CID\n+ * which mark a transaction as catalog modifying in BUILDING_SNAPSHOT\n\ndo you mean? Note that during BUILDING_SNAPSHOT, we only handle XLOG_HEAP2_NEW_CID\nas it marks a transaction as catalog modifying. If so, what about making the \n\n- if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n+ if (SnapBuildCurrentState(builder) < SNAPBUILD_BUILDING_SNAPSHOT ||\n\nmore clear about that? 
(to avoid any work when it's not needed)\n\n4 ===\n\n+ * useful for build the snapshot.\n\ns/for/to/?\n\n5 ===\n\n+ * point in decoding changes. Note that we only handle XLOG_HEAP_INPLACE\n+ * which mark a transaction as catalog modifying in BUILDING_SNAPSHOT, it's\n\ns/which mark/which might mark/?\n\ndo you mean? Note that during BUILDING_SNAPSHOT, we only handle XLOG_HEAP_INPLACE\nas it might mark a transaction as catalog modifying. If so, what about making the \n\n- if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n+ if (SnapBuildCurrentState(builder) < SNAPBUILD_BUILDING_SNAPSHOT ||\n\nmore clear about that? (to avoid any work when it's not needed)\n\nIdea of 3 === and 5 === is to proceed further in the SNAPBUILD_BUILDING_SNAPSHOT\ncase only if we know that the transaction is a catalog changing one (or might\nbe one).\n\n6 ===\n\n+ * useful for build the snapshot.\n\ns/for/to/?\n\n7 ===\n\n+# Test snapshot build correctly\n+\n\nwhat about? Test tracking of committed transactions during BUILDING_SNAPSHOT\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 9 Aug 2024 12:07:48 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\r\n\r\n\r\nThanks for the comments!\r\n\r\n\r\nHere are the new version patches, I think it will be more clear.\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen", "msg_date": "Sat, 10 Aug 2024 18:07:30 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\n\nOn Sat, Aug 10, 2024 at 06:07:30PM +0800, cca5507 wrote:\n> Hi,\n> \n> \n> Thanks for the comments!\n> \n> \n> Here are the new version patches, I think it will be more clear.\n\nThanks!\n\n1 ===\n\nWhen applying I get:\n\nApplying: Track transactions committed in BUILDING_SNAPSHOT.\n.git/rebase-apply/patch:71: space before tab in indent.\n */\n.git/rebase-apply/patch:94: space before tab in indent.\n */\nwarning: 2 lines add whitespace errors.\n\n2 ===\n\n+ * have snapshot and the transaction will not be tracked by snapshot\n\ns/have snapshot/have a snapshot/?\n\n3 ===\n\n+ * snapshot and will not be decoded\n\ns/snapshot/a snapshot/?\n\n\n4 ===\n\n if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n ctx->fast_forward)\n+ {\n+ /*\n+ * Note that during or after BUILDING_SNAPSHOT, we need handle the xlog\n+ * that might mark a transaction as catalog modifying because the snapshot\n+ * only tracks catalog modifying transactions. 
The transaction before\n+ * BUILDING_SNAPSHOT will not be tracked anyway(see SnapBuildCommitTxn()\n+ * for details), so just return.\n+ */\n+ if (SnapBuildCurrentState(builder) >= SNAPBUILD_BUILDING_SNAPSHOT)\n+ {\n+ /* Currently only XLOG_HEAP2_NEW_CID means a catalog modifying */\n+ if (info == XLOG_HEAP2_NEW_CID && TransactionIdIsValid(xid))\n\nWhat about?\n\nif (SnapBuildCurrentState(builder) < SNAPBUILD_BUILDING_SNAPSHOT ||\n (SnapBuildCurrentState(builder) == SNAPBUILD_BUILDING_SNAPSHOT && info != XLOG_HEAP2_NEW_CID) ||\n ctx->fast_forward)\n return;\n\nThat way we'd still rely on what's being done in the XLOG_HEAP2_NEW_CID case (\nshould it change in the future).\n\n5 ===\n\n if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n ctx->fast_forward)\n+ {\n+ /*\n+ * Note that during or after BUILDING_SNAPSHOT, we need handle the xlog\n+ * that might mark a transaction as catalog modifying because the snapshot\n+ * only tracks catalog modifying transactions. The transaction before\n+ * BUILDING_SNAPSHOT will not be tracked anyway(see SnapBuildCommitTxn()\n+ * for details), so just return.\n+ */\n+ if (SnapBuildCurrentState(builder) >= SNAPBUILD_BUILDING_SNAPSHOT)\n+ {\n\nWhat about?\n\nif (SnapBuildCurrentState(builder) < SNAPBUILD_BUILDING_SNAPSHOT ||\n (SnapBuildCurrentState(builder) == SNAPBUILD_BUILDING_SNAPSHOT && info != XLOG_HEAP_INPLACE) ||\n ctx->fast_forward)\n return;\n\nThat way we'd still rely on what's being done in the XLOG_HEAP_INPLACE case (\nshould it change in the future).\n\n6 ===\n\nv3-0002 LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 12 Aug 2024 05:47:25 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" } ]
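The condition the reviewer sketches in point 4 above, shown in one piece for readability (a consolidation of the quoted fragment, not the final committed code):

    /*
     * Nothing from before BUILDING_SNAPSHOT is tracked at all, and while
     * the snapshot is still being built only XLOG_HEAP2_NEW_CID is of
     * interest, since it is what marks a transaction as catalog-modifying.
     */
    if (SnapBuildCurrentState(builder) < SNAPBUILD_BUILDING_SNAPSHOT ||
        (SnapBuildCurrentState(builder) == SNAPBUILD_BUILDING_SNAPSHOT &&
         info != XLOG_HEAP2_NEW_CID) ||
        ctx->fast_forward)
        return;

The equivalent check on the heap path keys on XLOG_HEAP_INPLACE instead, per point 5 above.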
[ { "msg_contents": "Hi,\n\nIn [0] Andres suggested enabling -ftrapv in assert enabled builds. While\nI vastly underestimated the complexity of updating `configure` to do\nthis, I was able to enable the flag locally. Enabling this flag causes\nour existing regression tests to trap and fail in multiple different\nspots. The attached patch resolves all of these overflows so that all\nof our existing tests will pass with the -ftrapv flag enabled.\n\nSome notes on the patch itself are:\n\nI originally added the helper functions to int.h thinking I'd find\nmany more instances of overflow due to integer negation, however I\ndidn't find that many. So let me know if you think we'd be better\noff without the functions.\n\nI considered using #ifdef to rely on wrapping when -fwrapv was\nenabled. This would save us some unnecessary branching when we could\nrely on wrapping behavior, but it would mean that we could only enable\n-ftrapv when -fwrapv was disabled, greatly reducing its utility.\n\nThe following comment was in the code for parsing timestamps:\n\n /* check for just-barely overflow (okay except time-of-day wraps) */\n /* caution: we want to allow 1999-12-31 24:00:00 */\n\nI wasn't able to fully understand it even after staring at it for\na while. Is the comment suggesting that it is ok for the months field,\nfor example, to wrap around? That doesn't sound right to me I tested\nthe supplied timestamp, 1999-12-31 24:00:00, and it behaves the same\nbefore and after the patch.\n\nThanks,\nJoe Koshakow\n\n[0]\nhttps://www.postgresql.org/message-id/20240213191401.jjhsic7et4tiahjs%40awork3.anarazel.de", "msg_date": "Sun, 9 Jun 2024 16:59:22 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Remove dependence on integer wrapping" }, { "msg_contents": "On Sun, Jun 09, 2024 at 04:59:22PM -0400, Joseph Koshakow wrote:\n> I originally added the helper functions to int.h thinking I'd find\n> many more instances of overflow due to integer negation, however I\n> didn't find that many. So let me know if you think we'd be better\n> off without the functions.\n\nI'd vote for the functions, even if it's just future-proofing at the\nmoment. IMHO it helps with readability, too.\n\n> The following comment was in the code for parsing timestamps:\n> \n> /* check for just-barely overflow (okay except time-of-day wraps) */\n> /* caution: we want to allow 1999-12-31 24:00:00 */\n> \n> I wasn't able to fully understand it even after staring at it for\n> a while. Is the comment suggesting that it is ok for the months field,\n> for example, to wrap around? That doesn't sound right to me I tested\n> the supplied timestamp, 1999-12-31 24:00:00, and it behaves the same\n> before and after the patch.\n\nI haven't stared at this for a while like you, but I am likewise confused\nat first glance. This dates back to commit 84df54b, and it looks like this\ncomment was present in the first version of the patch in the thread [0]. 
I\nCTRL+F'd for any discussion about this but couldn't immediately find\nanything.\n\n> \t\t/* check the negative equivalent will fit without overflowing */\n> \t\tif (unlikely(tmp > (uint16) (-(PG_INT16_MIN + 1)) + 1))\n> \t\t\tgoto out_of_range;\n> +\n> +\t\t/*\n> +\t\t * special case the minimum integer because its negation cannot be\n> +\t\t * represented\n> +\t\t */\n> +\t\tif (tmp == ((uint16) PG_INT16_MAX) + 1)\n> +\t\t\treturn PG_INT16_MIN;\n> \t\treturn -((int16) tmp);\n\nMy first impression is that there appears to be two overflow checks, one of\nwhich sends us to out_of_range, and another that just returns a special\nresult. Why shouldn't we add a pg_neg_s16_overflow() and replace this\nwhole chunk with something like this?\n\n\tif (unlikely(pg_neg_s16_overflow(tmp, &tmp)))\n\t\tgoto out_of_range;\n\telse\n\t\treturn tmp;\n\n> +\t\treturn ((uint32) INT32_MAX) + 1;\n\n> +\t\treturn ((uint64) INT64_MAX) + 1;\n\nnitpick: Any reason not to use PG_INT32_MAX/PG_INT64_MAX for these?\n\n[0] https://postgr.es/m/flat/CAFj8pRBwqALkzc%3D1WV%2Bh5e%2BDcALY2EizjHCvFi9vHbs%2Bz1OhjA%40mail.gmail.com\n\n-- \nnathan\n\n\n", "msg_date": "Sun, 9 Jun 2024 20:48:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Sun, Jun 09, 2024 at 04:59:22PM -0400, Joseph Koshakow wrote:\n>> The following comment was in the code for parsing timestamps:\n>> /* check for just-barely overflow (okay except time-of-day wraps) */\n>> /* caution: we want to allow 1999-12-31 24:00:00 */\n>> \n>> I wasn't able to fully understand it even after staring at it for\n>> a while. Is the comment suggesting that it is ok for the months field,\n>> for example, to wrap around? That doesn't sound right to me I tested\n>> the supplied timestamp, 1999-12-31 24:00:00, and it behaves the same\n>> before and after the patch.\n\n> I haven't stared at this for a while like you, but I am likewise confused\n> at first glance. This dates back to commit 84df54b, and it looks like this\n> comment was present in the first version of the patch in the thread [0]. I\n> CTRL+F'd for any discussion about this but couldn't immediately find\n> anything.\n\nI believe this is a copy-and-paste from 841b4a2d5, which added this:\n\n+ *result = (date * INT64CONST(86400000000)) + time;\n+ /* check for major overflow */\n+ if ((*result - time) / INT64CONST(86400000000) != date)\n+ return -1;\n+ /* check for just-barely overflow (okay except time-of-day wraps) */\n+ if ((*result < 0) ? (date >= 0) : (date < 0))\n+ return -1;\n\nI think you could replace the whole thing by using overflow-aware\nmultiplication and addition primitives in the result calculation.\nLines 2-4 basically check for mult overflow and 5-7 for addition\noverflow.\n\nBTW, while I approve of trying to get rid of our need for -fwrapv,\nI'm quite scared of actually doing it. Whatever cases you may have\ndiscovered by running the regression tests will be at best the\ntip of the iceberg. 
Is there any chance of using static analysis\nto find all the places of concern?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Jun 2024 21:57:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Sun, Jun 09, 2024 at 09:57:54PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> On Sun, Jun 09, 2024 at 04:59:22PM -0400, Joseph Koshakow wrote:\n>>> The following comment was in the code for parsing timestamps:\n>>> /* check for just-barely overflow (okay except time-of-day wraps) */\n>>> /* caution: we want to allow 1999-12-31 24:00:00 */\n>>> \n>>> I wasn't able to fully understand it even after staring at it for\n>>> a while. Is the comment suggesting that it is ok for the months field,\n>>> for example, to wrap around? That doesn't sound right to me I tested\n>>> the supplied timestamp, 1999-12-31 24:00:00, and it behaves the same\n>>> before and after the patch.\n> \n>> I haven't stared at this for a while like you, but I am likewise confused\n>> at first glance. This dates back to commit 84df54b, and it looks like this\n>> comment was present in the first version of the patch in the thread [0]. I\n>> CTRL+F'd for any discussion about this but couldn't immediately find\n>> anything.\n> \n> I believe this is a copy-and-paste from 841b4a2d5, which added this:\n> \n> + *result = (date * INT64CONST(86400000000)) + time;\n> + /* check for major overflow */\n> + if ((*result - time) / INT64CONST(86400000000) != date)\n> + return -1;\n> + /* check for just-barely overflow (okay except time-of-day wraps) */\n> + if ((*result < 0) ? (date >= 0) : (date < 0))\n> + return -1;\n> \n> I think you could replace the whole thing by using overflow-aware\n> multiplication and addition primitives in the result calculation.\n> Lines 2-4 basically check for mult overflow and 5-7 for addition\n> overflow.\n\nAh, I see. Joe's patch does that in one place. It's probably worth doing\nthat in the other places this \"just-barefly overflow\" comment appears IMHO.\n\nI was still confused by the comment about 1999, but I tracked it down to\ncommit 542eeba [0]. IIUC it literally means that we need special handling\nfor that date because POSTGRES_EPOCH_JDATE is 2000-01-01.\n\n[0] https://postgr.es/m/CABUevEx5zUO%3DKRUg06a9qnQ_e9EvTKscL6HxAM_L3xj71R7AQw%40mail.gmail.com\n\n-- \nnathan\n\n\n", "msg_date": "Sun, 9 Jun 2024 21:37:03 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Sun, Jun 09, 2024 at 09:57:54PM -0400, Tom Lane wrote:\n>> I think you could replace the whole thing by using overflow-aware\n>> multiplication and addition primitives in the result calculation.\n\n> I was still confused by the comment about 1999, but I tracked it down to\n> commit 542eeba [0]. IIUC it literally means that we need special handling\n> for that date because POSTGRES_EPOCH_JDATE is 2000-01-01.\n\nYeah, I think so, and I think we probably don't need any special care\nif we switch to direct tests of overflow-aware primitives. (Though\nit'd be worth checking that '1999-12-31 24:00:00'::timestamp still\nworks. 
It doesn't look like I actually added a test case for that.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Jun 2024 23:03:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Hi,\n\nOn 2024-06-09 21:57:54 -0400, Tom Lane wrote:\n> BTW, while I approve of trying to get rid of our need for -fwrapv,\n> I'm quite scared of actually doing it.\n\nI think that's a quite fair concern. One potentially relevant datapoint is\nthat we actually don't have -fwrapv equivalent on all platforms, and I don't\nrecall a lot of complaints from windows users. Of course it's quite possible\nthat they'd never notice...\n\nI think this is a good argument for enabling -ftrapv in development\nbuilds. That gives us at least a *chance* of seeing these issues.\n\n\n> Whatever cases you may have discovered by running the regression tests will\n> be at best the tip of the iceberg. Is there any chance of using static\n> analysis to find all the places of concern?\n\nThe last time I tried removing -fwrapv both gcc and clang found quite a few\nissues. I think I fixed most of those though, so presumably the issue that\nremain are ones less easily found by simple static analysis.\n\nI wonder if coverity would find more if we built without -fwrapv.\n\n\nGCC has -Wstrict-overflow=n, which at least tells us where the compiler\noptimizes based on the assumption that there cannot be overflow. It does\ngenerate a bunch of noise, but I think it's almost exclusively due to our\nMemSet() macro.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Jun 2024 11:30:31 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": ">> /* check the negative equivalent will fit without\noverflowing */\n>> if (unlikely(tmp > (uint16) (-(PG_INT16_MIN + 1)) + 1))\n>> goto out_of_range;\n>> +\n>> + /*\n>> + * special case the minimum integer because its negation\ncannot be\n>> + * represented\n>> + */\n>> + if (tmp == ((uint16) PG_INT16_MAX) + 1)\n>> + return PG_INT16_MIN;\n>> return -((int16) tmp);\n>\n> My first impression is that there appears to be two overflow checks, one\nof\n> which sends us to out_of_range, and another that just returns a special\n> result. Why shouldn't we add a pg_neg_s16_overflow() and replace this\n> whole chunk with something like this?\n>\n> if (unlikely(pg_neg_s16_overflow(tmp, &tmp)))\n> goto out_of_range;\n> else\n> return tmp;\n\ntmp is an uint16 here, it seems like you might have read it as an\nint16? We would need some helper method like\n\n static inline bool\n pg_neg_u16_overflow(uint16 a, int16 *result);\n\nand then we could replace that whole chunk with something like\n\n if (unlikely(pg_neg_u16_overflow(tmp, &result)))\n goto out_of_range;\n else\n return result;\n\n\nthat pattern shows up a lot in this file, but I was worried that it\nwasn't useful as a general purpose function. 
Happy to add it\nthough if you still feel otherwise.\n\n>> + return ((uint32) INT32_MAX) + 1;\n>>\n>> + return ((uint64) INT64_MAX) + 1;\n>\n> nitpick: Any reason not to use PG_INT32_MAX/PG_INT64_MAX for these?\n\nCarelessness, sorry about that, it's been fixed in the attached patch.\n\n>> I believe this is a copy-and-paste from 841b4a2d5, which added this:\n>>\n>> + *result = (date * INT64CONST(86400000000)) + time;\n>> + /* check for major overflow */\n>> + if ((*result - time) / INT64CONST(86400000000) != date)\n>> + return -1;\n>> + /* check for just-barely overflow (okay except time-of-day wraps) */\n>> + if ((*result < 0) ? (date >= 0) : (date < 0))\n>> + return -1;\n>>\n>> I think you could replace the whole thing by using overflow-aware\n>> multiplication and addition primitives in the result calculation.\n>> Lines 2-4 basically check for mult overflow and 5-7 for addition\n>> overflow.\n>\n> Ah, I see. Joe's patch does that in one place. It's probably worth doing\n> that in the other places this \"just-barefly overflow\" comment appears\nIMHO.\n>\n> I was still confused by the comment about 1999, but I tracked it down to\n>commit 542eeba [0]. IIUC it literally means that we need special handling\n>for that date because POSTGRES_EPOCH_JDATE is 2000-01-01.\n>\n> [0]\nhttps://postgr.es/m/CABUevEx5zUO%3DKRUg06a9qnQ_e9EvTKscL6HxAM_L3xj71R7AQw%40mail.gmail.com\n\n> Yeah, I think so, and I think we probably don't need any special care\n> if we switch to direct tests of overflow-aware primitives. (Though\n>it'd be worth checking that '1999-12-31 24:00:00'::timestamp still\n> works. It doesn't look like I actually added a test case for that.)\n\nThe only other place I found this comment was in\n`make_timestamp_internal`. I've updated that function and added some\ntests. I also manually verified that the behavior matches before and\nafter this patch.\n\n>> BTW, while I approve of trying to get rid of our need for -fwrapv,\n>> I'm quite scared of actually doing it.\n>\n> I think that's a quite fair concern. One potentially relevant datapoint is\n> that we actually don't have -fwrapv equivalent on all platforms, and I\ndon't\n>recall a lot of complaints from windows users. Of course it's quite\npossible\n> that they'd never notice...\n>\n> I think this is a good argument for enabling -ftrapv in development\n> builds. That gives us at least a *chance* of seeing these issues.\n\n+1, I wouldn't recommend removing -fwrapv immediately after this\ncommit. However, if we can enable -ftrapv in development builds, then\nwe can find overflows much more easily.\n\n> Whatever cases you may have discovered by running the regression tests\nwill\n> be at best the tip of the iceberg.\n\nAgreed.\n\n> Is there any chance of using static\n> analysis to find all the places of concern?\n\nI'm not personally familiar with any static analysis tools, but I can\ntry and do some research. Andres had previously suggested SQLSmith. I\nthink any kind of fuzz testing with -ftrapv enabled will reveal a lot\nof issues. Honestly just grepping for +,-,* in certain directories\n(like backend/utils/adt) would probably be fairly fruitful for anyone\nwith the patience. 
My previous overflow patch was the result of looking\nthrough all the arithmetic in datetime.c.\n\nThanks,\nJoe Koshakow", "msg_date": "Tue, 11 Jun 2024 09:31:39 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Tue, Jun 11, 2024 at 09:31:39AM -0400, Joseph Koshakow wrote:\n> tmp is an uint16 here, it seems like you might have read it as an\n> int16? We would need some helper method like\n> \n> static inline bool\n> pg_neg_u16_overflow(uint16 a, int16 *result);\n> \n> and then we could replace that whole chunk with something like\n> \n> if (unlikely(pg_neg_u16_overflow(tmp, &result)))\n> goto out_of_range;\n> else\n> return result;\n> \n> \n> that pattern shows up a lot in this file, but I was worried that it\n> wasn't useful as a general purpose function. Happy to add it\n> though if you still feel otherwise.\n\nI personally find that much easier to read. Since the existing open-coded\noverflow check is apparently insufficient, I think there's a reasonably\nstrong case for centralizing this sort of thing so that we don't continue\nto make the same mistakes.\n\n>> Ah, I see. Joe's patch does that in one place. It's probably worth doing\n>> that in the other places this \"just-barefly overflow\" comment appears\n>> IMHO.\n> \n> The only other place I found this comment was in\n> `make_timestamp_internal`. I've updated that function and added some\n> tests. I also manually verified that the behavior matches before and\n> after this patch.\n\ntm2timestamp() in src/interfaces/ecpg/pgtypeslib/timestamp.c has the same\ncomment. The code there looks very similar to the code for tm2timestamp()\nin the other timestamp.c...\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 11 Jun 2024 11:22:27 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Tue, Jun 11, 2024 at 12:22 PM Nathan Bossart <[email protected]>\nwrote:\n\n> I personally find that much easier to read. Since the existing\nopen-coded\n> overflow check is apparently insufficient, I think there's a reasonably\n> strong case for centralizing this sort of thing so that we don't\ncontinue\n> to make the same mistakes.\n\nSounds good, the attached patch has these changes.\n\n> tm2timestamp() in src/interfaces/ecpg/pgtypeslib/timestamp.c has the\nsame\n> comment. The code there looks very similar to the code for\ntm2timestamp()\n> in the other timestamp.c...\n\nThe attached patch has updated this file too. For some reason I was\nunder the impression that I should leave the ecpg/ files alone, though\nI can't remember why.\n\nThanks,\nJoe Koshakow", "msg_date": "Tue, 11 Jun 2024 21:10:44 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Tue, Jun 11, 2024 at 09:10:44PM -0400, Joseph Koshakow wrote:\n> The attached patch has updated this file too. For some reason I was\n> under the impression that I should leave the ecpg/ files alone, though\n> I can't remember why.\n\nThanks. This looks pretty good to me after a skim, so I'll see about\ncommitting/back-patching it in the near future. 
IIUC there is likely more\nto come in this area, but I see no need to wait.\n\n(I'm chuckling that we are adding Y2K tests in 2024...)\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:02:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Thanks. This looks pretty good to me after a skim, so I'll see about\n> committing/back-patching it in the near future. IIUC there is likely more\n> to come in this area, but I see no need to wait.\n\nUh ... what of this would we back-patch? It seems like v18\nmaterial to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jun 2024 11:45:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Wed, Jun 12, 2024 at 11:45:20AM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> Thanks. This looks pretty good to me after a skim, so I'll see about\n>> committing/back-patching it in the near future. IIUC there is likely more\n>> to come in this area, but I see no need to wait.\n> \n> Uh ... what of this would we back-patch? It seems like v18\n> material to me.\n\nD'oh. I was under the impression that the numutils.c changes were arguably\nbug fixes. Even in that case, we should probably split out the other stuff\nfor v18. But you're probably right that we could just wait for all of it.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 12 Jun 2024 11:01:06 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Mon, Jun 10, 2024 at 2:30 PM Andres Freund <[email protected]> wrote:\n> On 2024-06-09 21:57:54 -0400, Tom Lane wrote:\n> > BTW, while I approve of trying to get rid of our need for -fwrapv,\n> > I'm quite scared of actually doing it.\n\nIMV it's perfectly fine to defensively assume that we need -fwrapv,\neven given a total lack of evidence that removing it would cause harm.\nDoing it to \"be more in line with the standard\" is a terrible argument.\n\n> I think that's a quite fair concern. One potentially relevant datapoint is\n> that we actually don't have -fwrapv equivalent on all platforms, and I don't\n> recall a lot of complaints from windows users.\n\nThat might just be because MSVC inherently doesn't do optimizations\nthat rely on signed wrap being undefined behavior. Last I checked MSVC\njust doesn't rely on the standard's strict aliasing rules, even as an\nopt-in thing.\n\n> Of course it's quite possible\n> that they'd never notice...\n\nRemoving -fwrapv is something that I see as a potential optimization.\nIt should be justified in about the same way as any other\noptimization.\n\nI suspect that it doesn't actually have any clear benefits for us. But\nif I'm wrong about that then the benefit might still be achievable\nthrough other means (something far short of just removing -fwrapv\nglobally).\n\n> I think this is a good argument for enabling -ftrapv in development\n> builds. That gives us at least a *chance* of seeing these issues.\n\n+1. 
I'm definitely prepared to say that code that actively relies on\n-fwrapv is broken code.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 12 Jun 2024 13:02:22 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "10.06.2024 04:57, Tom Lane wrote:\n> BTW, while I approve of trying to get rid of our need for -fwrapv,\n> I'm quite scared of actually doing it. Whatever cases you may have\n> discovered by running the regression tests will be at best the\n> tip of the iceberg. Is there any chance of using static analysis\n> to find all the places of concern?\n\nLet me remind you of bug #18240. Yes, that was about float8, but with\n-ftrapv we can get into the trap with:\nSELECT 1_000_000_000::money * 1_000_000_000::int;\nserver closed the connection unexpectedly\n\nAlso there are several trap-producing cases with date types:\nSELECT to_date('100000000', 'CC');\nSELECT to_timestamp('1000000000,999', 'Y,YYY');\nSELECT make_date(-2147483648, 1, 1);\n\nAnd one more with array...\nCREATE TABLE t (ia int[]);\nINSERT INTO t(ia[2147483647:2147483647]) VALUES ('{}');\n\nI think it's not the whole iceberg too.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 13 Jun 2024 07:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Jun 13, 2024 at 12:00 AM Alexander Lakhin <[email protected]>\nwrote:\n>\n> Let me remind you of bug #18240. Yes, that was about float8, but with\n> -ftrapv we can get into the trap with:\n> SELECT 1_000_000_000::money * 1_000_000_000::int;\n> server closed the connection unexpectedly\n\nInteresting, it looks like there's no overflow handling of any money\narithmetic. I've attached\nv4-0002-Handle-overflow-in-money-arithmetic.patch which adds some\noverflow checks and tests. I didn't address the float multiplication\nbecause I didn't see any helper methods in int.h. I did some some\nuseful helpers in float.h, but they raise an error directly instead\nof returning a bool. Would those be appropriate for use with the\nmoney type? If not I can refactor out the inner parts into a new method\nthat returns a bool.\n\nv4-0001-Remove-dependence-on-integer-wrapping.patch is unchanged, I\njust incremented the version number.\n\n> Also there are several trap-producing cases with date types:\n> SELECT to_date('100000000', 'CC');\n> SELECT to_timestamp('1000000000,999', 'Y,YYY');\n> SELECT make_date(-2147483648, 1, 1);\n>\n> And one more with array...\n> CREATE TABLE t (ia int[]);\n> INSERT INTO t(ia[2147483647:2147483647]) VALUES ('{}');\n\nI'll try and get patches to address these too in the next couple of\nweeks unless someone beats me to it.\n\n> I think it's not the whole iceberg too.\n\n+1\n\nThanks,\nJoe Koshakow", "msg_date": "Thu, 13 Jun 2024 22:48:14 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Jun 13, 2024 at 10:48 PM Joseph Koshakow <[email protected]> wrote:\n> I've attached\n> v4-0002-Handle-overflow-in-money-arithmetic.patch which adds some\n> overflow checks and tests. I didn't address the float multiplication\n> because I didn't see any helper methods in int.h. I did some some\n> useful helpers in float.h, but they raise an error directly instead\n> of returning a bool. 
Would those be appropriate for use with the\n> money type? If not I can refactor out the inner parts into a new method\n> that returns a bool.\n\n> v4-0001-Remove-dependence-on-integer-wrapping.patch is unchanged, I\n> just incremented the version number.\n\nOops I left a careless mistake in that last patch, my apologies. It's\nfixed in the attached patches.\n\nThanks,\nJoe Koshakow", "msg_date": "Thu, 13 Jun 2024 22:56:38 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "14.06.2024 05:48, Joseph Koshakow wrote:\n>\n> v4-0001-Remove-dependence-on-integer-wrapping.patch is unchanged, I\n> just incremented the version number.\n>\n> >    Also there are several trap-producing cases with date types:\n> >    SELECT to_date('100000000', 'CC');\n> >    SELECT to_timestamp('1000000000,999', 'Y,YYY');\n> >    SELECT make_date(-2147483648, 1, 1);\n> >\n> >    And one more with array...\n> >    CREATE TABLE t (ia int[]);\n> >    INSERT INTO t(ia[2147483647:2147483647]) VALUES ('{}');\n>\n> I'll try and get patches to address these too in the next couple of\n> weeks unless someone beats me to it.\n>\n> >    I think it's not the whole iceberg too.\n>\n> +1\n\nAfter sending my message, I toyed with -ftrapv a little time more and\nfound other cases:\nSELECT '[]'::jsonb -> -2147483648;\n\n#4  0x00007efe232d67f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x000055e8fde9f211 in __negvsi2 ()\n#6  0x000055e8fdcca62c in jsonb_array_element (fcinfo=0x55e8fec28220) at jsonfuncs.c:948\n\n(gdb) f 6\n#6  0x000055e14cb9362c in jsonb_array_element (fcinfo=0x55e14d493220) at jsonfuncs.c:948\n948                     if (-element > nelements)\n(gdb) p element\n$1 = -2147483648\n\n---\nSELECT jsonb_delete_path('{\"a\":[]}', '{\"a\",-2147483648}');\n\n#4  0x00007f1873bef7f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x0000564a009d2211 in __negvsi2 ()\n#6  0x0000564a00807c89 in setPathArray (it=0x7fff865c7380, path_elems=0x564a017baf20, path_nulls=0x564a017baf40,\n     path_len=2, st=0x7fff865c7388, level=1, newval=0x0, nelems=2, op_type=2) at jsonfuncs.c:5407\n\n(gdb) f 6\n#6  0x000055985e823c89 in setPathArray (it=0x7ffc22258fe0, path_elems=0x559860286f20, path_nulls=0x559860286f40,\n     path_len=2, st=0x7ffc22258fe8, level=1, newval=0x0, nelems=0, op_type=2) at jsonfuncs.c:5407\n5407                    if (-idx > nelems)\n(gdb) p idx\n$1 = -2147483648\n\n---\nCREATE FUNCTION check_foreign_key () RETURNS trigger AS .../refint.so' LANGUAGE C;\nCREATE TABLE t (i int4 NOT NULL);\nCREATE TRIGGER check_fkey BEFORE DELETE ON t FOR EACH ROW EXECUTE PROCEDURE\n   check_foreign_key (2147483647, 'cascade', 'i', \"ft\", \"i\");\nINSERT INTO t VALUES (1);\nDELETE FROM t;\n\n#4  0x00007f57f0bef7f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x00007f57f1671351 in __addvsi3 () from .../src/test/regress/refint.so\n#6  0x00007f57f1670234 in check_foreign_key (fcinfo=0x7ffebf523650) at refint.c:321\n\n(gdb) f 6\n#6  0x00007f3400ef9234 in check_foreign_key (fcinfo=0x7ffd6e16a600) at refint.c:321\n321             nkeys = (nargs - nrefs) / (nrefs + 1);\n(gdb) p nargs\n$1 = 3\n(gdb) p nrefs\n$2 = 2147483647\n\n---\nAnd the most interesting case to me:\nSET temp_buffers TO 1000000000;\n\nCREATE TEMP TABLE t(i int PRIMARY KEY);\nINSERT INTO t VALUES(1);\n\n#4  0x00007f385cdd37f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x00005620071c4f51 in __addvsi3 ()\n#6  0x0000562007143f3c in init_htab (hashp=0x562008facb20, 
nelem=610070812) at dynahash.c:720\n\n(gdb) f 6\n#6  0x0000560915207f3c in init_htab (hashp=0x560916039930, nelem=1000000000) at dynahash.c:720\n720             hctl->high_mask = (nbuckets << 1) - 1;\n(gdb) p nbuckets\n$1 = 1073741824\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 14 Jun 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Jun 13, 2024 at 10:56 PM Joseph Koshakow <[email protected]> wrote:\n\n On Thu, Jun 13, 2024 at 10:48 PM Joseph Koshakow <[email protected]>\nwrote:\n > I've attached\n > v4-0002-Handle-overflow-in-money-arithmetic.patch which adds some\n > overflow checks and tests. I didn't address the float\nmultiplication\n > because I didn't see any helper methods in int.h. I did some some\n > useful helpers in float.h, but they raise an error directly instead\n > of returning a bool. Would those be appropriate for use with the\n > money type? If not I can refactor out the inner parts into a new\nmethod\n > that returns a bool.\n\n > v4-0001-Remove-dependence-on-integer-wrapping.patch is unchanged, I\n > just incremented the version number.\n\nI added overflow handling for float arithmetic to the `money` type.\nv6-0002-Handle-overflow-in-money-arithmetic.patch is ready for review.\n\nv6-0001-Remove-dependence-on-integer-wrapping.patch is unchanged, I\njust incremented the version number.\n\nThanks,\nJoe Koshakow", "msg_date": "Wed, 19 Jun 2024 17:44:00 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Jun 13, 2024 at 12:00 AM Alexander Lakhin <[email protected]>\nwrote:\n>\n> And one more with array...\n> CREATE TABLE t (ia int[]);\n> INSERT INTO t(ia[2147483647:2147483647]) VALUES ('{}');\n\nI've added another patch, 0003, to resolve this wrap-around. In fact I\ndiscovered a bug that the following statement is accepted and inserts\nan empty array into the table.\n\n INSERT INTO t(ia[-2147483648:2147483647]) VALUES ('{}');\n\nMy patch resolves this bug as well.\n\nThe other patches, 0001 and 0002, are unchanged but have their version\nnumber incremented.\n\nAs a reminder, 0001 is reviewed and waiting for v18 and a committer.\n0002 and 0003 are unreviewed. 
So, I'm going to mark this as waiting for\na reviewer.\n\nThanks,\nJoe Koshakow", "msg_date": "Sat, 6 Jul 2024 14:50:05 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Jun 13, 2024 at 12:00 AM Alexander Lakhin <[email protected]>\nwrote:\n> SELECT '[]'::jsonb -> -2147483648;\n>\n> #4 0x00007efe232d67f3 in __GI_abort () at ./stdlib/abort.c:79\n> #5 0x000055e8fde9f211 in __negvsi2 ()\n> #6 0x000055e8fdcca62c in jsonb_array_element (fcinfo=0x55e8fec28220) at\njsonfuncs.c:948\n>\n> (gdb) f 6\n> #6 0x000055e14cb9362c in jsonb_array_element (fcinfo=0x55e14d493220) at\njsonfuncs.c:948\n> 948 if (-element > nelements)\n> (gdb) p element\n> $1 = -2147483648\n>\n> ---\n> SELECT jsonb_delete_path('{\"a\":[]}', '{\"a\",-2147483648}');\n>\n> #4 0x00007f1873bef7f3 in __GI_abort () at ./stdlib/abort.c:79\n> #5 0x0000564a009d2211 in __negvsi2 ()\n> #6 0x0000564a00807c89 in setPathArray (it=0x7fff865c7380,\npath_elems=0x564a017baf20, path_nulls=0x564a017baf40,\n> path_len=2, st=0x7fff865c7388, level=1, newval=0x0, nelems=2,\nop_type=2) at jsonfuncs.c:5407\n>\n> (gdb) f 6\n> #6 0x000055985e823c89 in setPathArray (it=0x7ffc22258fe0,\npath_elems=0x559860286f20, path_nulls=0x559860286f40,\n> path_len=2, st=0x7ffc22258fe8, level=1, newval=0x0, nelems=0,\nop_type=2) at jsonfuncs.c:5407\n> 5407 if (-idx > nelems)\n> (gdb) p idx\n> $1 = -2147483648\n\nI've added another patch, 0004, to resolve the jsonb wrap-arounds.\n\nThe other patches, 0001, 0002, and 0003 are unchanged but have their\nversion number incremented.\n\nThanks,\nJoe Koshakow", "msg_date": "Sat, 6 Jul 2024 19:04:38 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Sat, Jul 06, 2024 at 07:04:38PM -0400, Joseph Koshakow wrote:\n> I've added another patch, 0004, to resolve the jsonb wrap-arounds.\n> \n> The other patches, 0001, 0002, and 0003 are unchanged but have their\n> version number incremented.\n\nIIUC some of these changes are bug fixes. Can we split out the bug fixes\nto their own patches so that they can be back-patched?\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 12 Jul 2024 11:49:38 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Fri, Jul 12, 2024 at 12:49 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Sat, Jul 06, 2024 at 07:04:38PM -0400, Joseph Koshakow wrote:\n>> I've added another patch, 0004, to resolve the jsonb wrap-arounds.\n>>\n>> The other patches, 0001, 0002, and 0003 are unchanged but have their\n>> version number incremented.\n>\n> IIUC some of these changes are bug fixes. Can we split out the bug fixes\n> to their own patches so that they can be back-patched?\n\nThey happen to already be split out into their own patches. 0002 and\n0003 are both bug fixes (in the sense that they fix queries that\nproduce incorrect results even with -fwrapv). They also both apply\ncleanly to master. 
If it would be useful, I can re-order the patches so\nthat the bug-fixes are first.\n\nThanks,\nJoe Koshakow\n\nOn Fri, Jul 12, 2024 at 12:49 PM Nathan Bossart <[email protected]> wrote:> On Sat, Jul 06, 2024 at 07:04:38PM -0400, Joseph Koshakow wrote:>> I've added another patch, 0004, to resolve the jsonb wrap-arounds.>>>> The other patches, 0001, 0002, and 0003 are unchanged but have their>> version number incremented.>> IIUC some of these changes are bug fixes.  Can we split out the bug fixes> to their own patches so that they can be back-patched?They happen to already be split out into their own patches. 0002 and0003 are both bug fixes (in the sense that they fix queries thatproduce incorrect results even with -fwrapv). They also both applycleanly to master. If it would be useful, I can re-order the patches sothat the bug-fixes are first.Thanks,Joe Koshakow", "msg_date": "Sat, 13 Jul 2024 10:24:16 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Sat, Jul 13, 2024 at 10:24:16AM -0400, Joseph Koshakow wrote:\n> On Fri, Jul 12, 2024 at 12:49 PM Nathan Bossart <[email protected]>\n> wrote:\n>> IIUC some of these changes are bug fixes. Can we split out the bug fixes\n>> to their own patches so that they can be back-patched?\n> \n> They happen to already be split out into their own patches. 0002 and\n> 0003 are both bug fixes (in the sense that they fix queries that\n> produce incorrect results even with -fwrapv). They also both apply\n> cleanly to master. If it would be useful, I can re-order the patches so\n> that the bug-fixes are first.\n\nOh, thanks. I'm planning on taking a closer look at this one next week.\n\n-- \nnathan\n\n\n", "msg_date": "Sat, 13 Jul 2024 10:40:43 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "I took a closer look at 0002.\n\n+ if (unlikely(isinf(f) || isnan(f)))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+ errmsg(\"invalid float value\")));\n+\n+ fresult = rint(f * c);\n\n+ if (unlikely(f == 0.0))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DIVISION_BY_ZERO),\n+ errmsg(\"division by zero\")));\n+ if (unlikely(isinf(f) || isnan(f)))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+ errmsg(\"invalid float value\")));\n+\n+ fresult = rint(c / f);\n\nI'm curious why you aren't using float8_mul/float8_div here, i.e.,\n\n\tfresult = rint(float8_mul((float8) c, f));\n\tfresult = rint(float8_div((float8) c, f));\n\nnitpick: I'd name the functions something like \"cash_mul_float8\" and\n\"cash_div_float8\". 
Perhaps we could also add functions like\n\"cash_mul_int64\" and \"cash_sub_int64\" so that we don't need several copies\nof the same \"money out of range\" ERROR.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 15 Jul 2024 10:31:08 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Thanks for the review!\n\nOn Mon, Jul 15, 2024 at 11:31 AM Nathan Bossart <[email protected]>\nwrote:\n>\n> I took a closer look at 0002.\n>\n> I'm curious why you aren't using float8_mul/float8_div here, i.e.,\n>\n> fresult = rint(float8_mul((float8) c, f));\n> fresult = rint(float8_div((float8) c, f));\n\nI wrongly assumed that it was only meant to be used to implement\nmultiplication and division for the built-in float types. I've updated\nthe patch to use these functions.\n\n> nitpick: I'd name the functions something like \"cash_mul_float8\" and\n> \"cash_div_float8\". Perhaps we could also add functions like\n> \"cash_mul_int64\"\n\nDone in the updated patch.\n\n> and \"cash_sub_int64\"\n\nDid you mean \"cash_div_int64\"? There's only a single function that\nsubtracts cash and an integer, but there's multiple functions that\ndivide cash by an integer. I've added a \"cash_div_int64\" in the updated\npatch.\n\nThe other patches, 0001, 0003, and 0004 are unchanged but have their\nversion number incremented.\n\nThanks,\nJoe Koshakow", "msg_date": "Mon, 15 Jul 2024 19:55:22 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Mon, Jul 15, 2024 at 07:55:22PM -0400, Joseph Koshakow wrote:\n> On Mon, Jul 15, 2024 at 11:31 AM Nathan Bossart <[email protected]>\n> wrote:\n>> I'm curious why you aren't using float8_mul/float8_div here, i.e.,\n>>\n>> fresult = rint(float8_mul((float8) c, f));\n>> fresult = rint(float8_div((float8) c, f));\n> \n> I wrongly assumed that it was only meant to be used to implement\n> multiplication and division for the built-in float types. I've updated\n> the patch to use these functions.\n\nThe reason I suggested this is so that we could omit all the prerequisite\nisinf(), isnan(), etc. checks in the cash_mul_float8() and friends. The\nchecks are slighly different, but from a quick glance it just looks like we\nmight end up relying on the FLOAT8_FITS_IN_INT64 check in more cases.\n\n>> and \"cash_sub_int64\"\n> \n> Did you mean \"cash_div_int64\"? There's only a single function that\n> subtracts cash and an integer, but there's multiple functions that\n> divide cash by an integer. I've added a \"cash_div_int64\" in the updated\n> patch.\n\nMy personal preference would be to add helper functions for each of these\nso that all the overflow, etc. checks are centralized in one place and\ndon't clutter the calling code. Plus, it might help ensure error\nhandling/messages remain consistent.\n\n+static Cash\n+cash_mul_float8(Cash c, float8 f)\n\nnitpick: Can you mark these \"inline\"? 
I imagine most compilers inline them\nwithout any prompting, but we might as well make our intent clear.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 16 Jul 2024 12:57:47 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Tue, Jul 16, 2024 at 1:57 PM Nathan Bossart <[email protected]>\nwrote:\n>>\n>> On Mon, Jul 15, 2024 at 07:55:22PM -0400, Joseph Koshakow wrote:\n>> On Mon, Jul 15, 2024 at 11:31 AM Nathan Bossart <[email protected]\n>\n>> wrote:\n>>> I'm curious why you aren't using float8_mul/float8_div here, i.e.,\n>>>\n>>> fresult = rint(float8_mul((float8) c, f));\n>>> fresult = rint(float8_div((float8) c, f));\n>>\n>> I wrongly assumed that it was only meant to be used to implement\n>> multiplication and division for the built-in float types. I've updated\n>> the patch to use these functions.\n>\n> The reason I suggested this is so that we could omit all the prerequisite\n> isinf(), isnan(), etc. checks in the cash_mul_float8() and friends. The\n> checks are slighly different, but from a quick glance it just looks like\nwe\n> might end up relying on the FLOAT8_FITS_IN_INT64 check in more cases.\n\nI don't think we can omit the prerequisite isnan() checks. Neither\nfloat8_mul() nor float8_div() reject nan inputs/result, and\nFLOAT8_FITS_IN_INT64 has the following to say about inf and nan\n\n These macros will do the right thing for Inf, but not necessarily for\nNaN,\n so check isnan(num) first if that's a possibility.\n\nThough, I think you're right that we can remove the isinf() check from\ncash_mul_float8(). That check is fully covered by FLOAT8_FITS_IN_INT64,\nsince all infinite inputs will result in an infinite output. That also\nmakes the infinite result check in float8_mul() redundant.\nAdditionally, I believe that the underflow check in float8_mul() is\nunnecessary. val1 is an int64 casted to a float8, so it can never be\n-1 < val < 1, so it can never cause an underflow to 0. So I went ahead\nand removed float8_mul() since all of its checks are redundant.\n\nFor cash_div_float8() we have a choice. The isinf() input check\nprotects against the following, which is not rejected by any of\nthe other checks.\n\n test=# SELECT '5'::money / 'inf'::float8;\n ?column?\n ----------\n $0.00\n (1 row)\n\nFor now, I've kept the isinf() input check to reject the above query,\nlet me know if you think we should allow this.\n\nThe infinite check in float8_div() is redundant because it's covered\nby FLOAT8_FITS_IN_INT64. Also, the underflow check in float8_div() is\nunnecessary for similar reasons to float8_mul(). So if we continue to\nhave a divide by zero check in cash_div_float8(), then we can remove\nfloat8_div() as well.\n\n>>> and \"cash_sub_int64\"\n>>\n>> Did you mean \"cash_div_int64\"? There's only a single function that\n>> subtracts cash and an integer, but there's multiple functions that\n>> divide cash by an integer. I've added a \"cash_div_int64\" in the updated\n>> patch.\n>\n> My personal preference would be to add helper functions for each of these\n> so that all the overflow, etc. checks are centralized in one place and\n> don't clutter the calling code. Plus, it might help ensure error\n> handling/messages remain consistent.\n\nAh, OK. I've added helpers for both subtraction and addition then.\n\n> +static Cash\n> +cash_mul_float8(Cash c, float8 f)\n>\n> nitpick: Can you mark these \"inline\"? 
I imagine most compilers inline\nthem\n> without any prompting, but we might as well make our intent clear.\n\nUpdated in the attached patch.\n\nOnce again, the other patches, 0001, 0003, and 0004 are unchanged but\nhave their version number incremented.\n\nThanks,\nJoe Koshakow", "msg_date": "Tue, 16 Jul 2024 21:23:27 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Tue, Jul 16, 2024 at 09:23:27PM -0400, Joseph Koshakow wrote:\n> On Tue, Jul 16, 2024 at 1:57 PM Nathan Bossart <[email protected]>\n> wrote:\n>> The reason I suggested this is so that we could omit all the prerequisite\n>> isinf(), isnan(), etc. checks in the cash_mul_float8() and friends. The\n>> checks are slighly different, but from a quick glance it just looks like\n> we\n>> might end up relying on the FLOAT8_FITS_IN_INT64 check in more cases.\n> \n> I don't think we can omit the prerequisite isnan() checks. Neither\n> float8_mul() nor float8_div() reject nan inputs/result, and\n> FLOAT8_FITS_IN_INT64 has the following to say about inf and nan\n> \n> These macros will do the right thing for Inf, but not necessarily for\n> NaN,\n> so check isnan(num) first if that's a possibility.\n\nMy instinct would be to just let float8_mul(), float8_div(), etc. handle\nthe inputs and then to add an additional isnan() check for the result.\nThis is how it's done elsewhere (e.g., dtoi8() in int8.c). If something\nabout the float8 helper functions is inadequate, let's go fix those\ninstead.\n\n> Though, I think you're right that we can remove the isinf() check from\n> cash_mul_float8(). That check is fully covered by FLOAT8_FITS_IN_INT64,\n> since all infinite inputs will result in an infinite output. That also\n> makes the infinite result check in float8_mul() redundant.\n> Additionally, I believe that the underflow check in float8_mul() is\n> unnecessary. val1 is an int64 casted to a float8, so it can never be\n> -1 < val < 1, so it can never cause an underflow to 0. So I went ahead\n> and removed float8_mul() since all of its checks are redundant.\n\nTBH I'd rather keep this stuff super simple by farming out the corner case\nhandling whenever we can, even if it is redundant in a few cases. That\nkeeps it centralized and consistent with the rest of the tree so that any\nfixes apply everywhere. If there's a performance angle, then maybe it's\nworth considering the added complexity, though...\n\n> For cash_div_float8() we have a choice. 
The isinf() input check\n> protects against the following, which is not rejected by any of\n> the other checks.\n> \n> test=# SELECT '5'::money / 'inf'::float8;\n> ?column?\n> ----------\n> $0.00\n> (1 row)\n> \n> For now, I've kept the isinf() input check to reject the above query,\n> let me know if you think we should allow this.\n\nWe appear to allow this for other data types, so I see no reason to\ndisallow it for the money type.\n\nI've attached an editorialized version of 0002 based on my thoughts above.\n\n-- \nnathan", "msg_date": "Tue, 16 Jul 2024 22:17:54 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Tue, Jul 16, 2024 at 11:17 PM Nathan Bossart <[email protected]>\nwrote:\n> I've attached an editorialized version of 0002 based on my thoughts above.\n\nLooks great, thanks!\n\nThanks,\nJoe Koshakow\n\nOn Tue, Jul 16, 2024 at 11:17 PM Nathan Bossart <[email protected]> wrote:> I've attached an editorialized version of 0002 based on my thoughts above.Looks great, thanks!Thanks,Joe Koshakow", "msg_date": "Wed, 17 Jul 2024 20:31:40 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Wed, Jul 17, 2024 at 9:23 AM Joseph Koshakow <[email protected]> wrote:\n>\n> Updated in the attached patch.\n>\n> Once again, the other patches, 0001, 0003, and 0004 are unchanged but\n> have their version number incremented.\n>\n\n\n+-- Test for overflow in array slicing\n+CREATE temp table arroverflowtest (i int[]);\n+INSERT INTO arroverflowtest(i[-2147483648:2147483647]) VALUES ('{}');\n+INSERT INTO arroverflowtest(i[1:2147483647]) VALUES ('{}');\n+INSERT INTO arroverflowtest(i[2147483647:2147483647]) VALUES ('{}');\n\n+INSERT INTO arroverflowtest(i[2147483647:2147483647]) VALUES ('{}');\nthis example, master error is\nERROR: source array too small\nyour patch error is\nERROR: array size exceeds the maximum allowed (134217727)\n\n\nINSERT INTO arroverflowtest(i[2147483647:2147483647]) VALUES ('{1}');\nmaster:\nERROR: array lower bound is too large: 2147483647\nyou patch\nERROR: array size exceeds the maximum allowed (134217727)\n\ni think \"INSERT INTO arroverflowtest(i[2147483647:2147483647]) VALUES ('{}');\"\nmeans to insert one element (size) to a customized lower/upper bounds.\n\n\n", "msg_date": "Thu, 18 Jul 2024 09:31:15 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Wed, Jul 17, 2024 at 9:31 PM jian he <[email protected]> wrote:\n>\n> i think \"INSERT INTO arroverflowtest(i[2147483647:2147483647]) VALUES\n('{}');\"\n> means to insert one element (size) to a customized lower/upper bounds.\n\nAh, thank you, I mistakenly understood that as an array with size\n2147483647, with the first 2147483646 elements NULL.\n\nI've updated the first calculation (upper_bound + 1) to retrun an error\nsaying \"array upper bound is too large: %d\" when it overflows. This\nwill change some of the existing error messages, but is just as correct\nand doesn't require us to check the source array. Is there backwards\ncompatibility guarantees on error messages or is that acceptable?\n\n\nFor the second calculation ((upper_bound + 1) - lower_bound), I've kept the\nexisting error of \"array size exceeds the maximum allowed (%d)\". 
The\nonly way for that to underflow is if the upper bound is very negative\nand the lower bound is very positive. I'm not entirely sure how to\ninterpret this scenario, but it's consistent with similar scenarios.\n\n # INSERT INTO arroverflowtest(i[10:-999999]) VALUES ('{1,2,3}');\n ERROR: array size exceeds the maximum allowed (134217727)\n\nAs a reminder:\n- 0001 is reviewed.\n- 0002 is reviewed and a bug fix.\n- 0003 is currently under review and a bug fix.\n- 0004 needs a review.\n\nThanks,\nJoe Koshakow", "msg_date": "Thu, 18 Jul 2024 21:08:30 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Jul 18, 2024 at 09:08:30PM -0400, Joseph Koshakow wrote:\n> As a reminder:\n> - 0001 is reviewed.\n> - 0002 is reviewed and a bug fix.\n> - 0003 is currently under review and a bug fix.\n> - 0004 needs a review.\n\nI've committed/back-patched 0002. I plan to review 0003 next.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 19 Jul 2024 12:13:21 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "I took a look at 0003.\n\n+ /* dim[i] = 1 + upperIndx[i] - lowerIndx[i]; */\n+ if (pg_add_s32_overflow(1, upperIndx[i], &dim[i]))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n+ errmsg(\"array upper bound is too large: %d\",\n+ upperIndx[i])));\n+ if (pg_sub_s32_overflow(dim[i], lowerIndx[i], &dim[i]))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n+ errmsg(\"array size exceeds the maximum allowed (%d)\",\n+ (int) MaxArraySize)));\n\nI think the problem with fixing it this way is that it prohibits more than\nis necessary. For example, doing the subtraction first might prevent the\naddition from overflowing, and doing the addition first can prevent the\nsubtraction from overflowing. Granted, this is probably not really worth\nworrying about too much, but we're already dealing with \"absurd slice\nranges,\" so we might as well set an example for elsewhere.\n\nAn easy way to deal with this problem is to first perform the calculation\nwith everything cast to an int64. 
Before setting dim[i], you'd check that\nthe result is in [PG_INT32_MIN, PG_INT32_MAX] and fail if needed.\n\n\tint64 newdim;\n\n\t...\n\n\tnewdim = (int64) 1 + (int64) upperIndx[i] - (int64) lowerIndx[i];\n\tif (unlikely(newdim < PG_INT32_MIN || newdim > PG_INT32_MAX))\n\t\tereport(ERROR,\n\t\t\t\t...\n\tdim[i] = (int32) newdim;\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 19 Jul 2024 13:45:49 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Fri, Jul 19, 2024 at 2:45 PM Nathan Bossart <[email protected]>\nwrote:\n>\n> + /* dim[i] = 1 + upperIndx[i] - lowerIndx[i]; */\n> + if (pg_add_s32_overflow(1, upperIndx[i], &dim[i]))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> + errmsg(\"array upper bound is too large: %d\",\n> + upperIndx[i])));\n> + if (pg_sub_s32_overflow(dim[i], lowerIndx[i], &dim[i]))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> + errmsg(\"array size exceeds the maximum allowed\n(%d)\",\n> + (int) MaxArraySize)));\n>\n> I think the problem with fixing it this way is that it prohibits more than\n> is necessary.\n\nMy understanding is that 2147483647 (INT32_MAX) is not a valid upper\nbound, which is what the first overflow check is checking. Any query of\nthe form\n `INSERT INTO arroverflowtest(i[<lb>:2147483647]) VALUES ('{...}');`\nwill fail with an error of\n `ERROR: array lower bound is too large: <lb>`\n\nThe reason is the following bounds check found in arrayutils.c\n\n /*\n * Verify sanity of proposed lower-bound values for an array\n *\n * The lower-bound values must not be so large as to cause overflow when\n * calculating subscripts, e.g. lower bound 2147483640 with length 10\n * must be disallowed. We actually insist that dims[i] + lb[i] be\n * computable without overflow, meaning that an array with last\nsubscript\n * equal to INT_MAX will be disallowed.\n *\n * It is assumed that the caller already called ArrayGetNItems, so that\n * overflowed (negative) dims[] values have been eliminated.\n */\n void\n ArrayCheckBounds(int ndim, const int *dims, const int *lb)\n {\n (void) ArrayCheckBoundsSafe(ndim, dims, lb, NULL);\n }\n\n /*\n * This entry point can return the error into an ErrorSaveContext\n * instead of throwing an exception.\n */\n bool\n ArrayCheckBoundsSafe(int ndim, const int *dims, const int *lb,\n struct Node *escontext)\n {\n int i;\n\n for (i = 0; i < ndim; i++)\n {\n /* PG_USED_FOR_ASSERTS_ONLY prevents variable-isn't-read warnings */\n int32 sum PG_USED_FOR_ASSERTS_ONLY;\n\n if (pg_add_s32_overflow(dims[i], lb[i], &sum))\n ereturn(escontext, false,\n (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n errmsg(\"array lower bound is too large: %d\",\n lb[i])));\n }\n\n return true;\n }\n\nSpecifically \"We actually insist that dims[i] + lb[i] be computable\nwithout overflow, meaning that an array with last subscript equal to\nINT32_MAX will be disallowed.\" If the upper bound is INT32_MAX,\nthen there's no lower bound where (lower_bound + size) won't overflow.\n\nIt might be possible to remove this restriction, but it's probably\neasier to keep it.\n\n> An easy way to deal with this problem is to first perform the calculation\n> with everything cast to an int64. 
Before setting dim[i], you'd check that\n> the result is in [PG_INT32_MIN, PG_INT32_MAX] and fail if needed.\n>\n> int64 newdim;\n>\n> ...\n>\n> newdim = (int64) 1 + (int64) upperIndx[i] - (int64) lowerIndx[i];\n> if (unlikely(newdim < PG_INT32_MIN || newdim > PG_INT32_MAX))\n> ereport(ERROR,\n> ...\n> dim[i] = (int32) newdim;\n\nI've rebased my patches and updated 0002 with this approach if this is\nstill the approach you want to go with. I went with the array size too\nlarge error for similar reasons as the previous version of the patch.\n\nSince the patches have been renumbered, here's an overview of their\nstatus:\n\n- 0001 is reviewed and waiting for v18.\n- 0002 is under review and a bug fix.\n- 0003 needs review.\n\nThanks,\nJoseph Koshakow", "msg_date": "Fri, 19 Jul 2024 19:32:18 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Fri, Jul 19, 2024 at 07:32:18PM -0400, Joseph Koshakow wrote:\n> On Fri, Jul 19, 2024 at 2:45 PM Nathan Bossart <[email protected]>\n> wrote:\n>> + /* dim[i] = 1 + upperIndx[i] - lowerIndx[i]; */\n>> + if (pg_add_s32_overflow(1, upperIndx[i], &dim[i]))\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n>> + errmsg(\"array upper bound is too large: %d\",\n>> + upperIndx[i])));\n>> + if (pg_sub_s32_overflow(dim[i], lowerIndx[i], &dim[i]))\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n>> + errmsg(\"array size exceeds the maximum allowed\n> (%d)\",\n>> + (int) MaxArraySize)));\n>>\n>> I think the problem with fixing it this way is that it prohibits more than\n>> is necessary.\n> \n> My understanding is that 2147483647 (INT32_MAX) is not a valid upper\n> bound, which is what the first overflow check is checking. Any query of\n> the form\n> `INSERT INTO arroverflowtest(i[<lb>:2147483647]) VALUES ('{...}');`\n> will fail with an error of\n> `ERROR: array lower bound is too large: <lb>`\n> \n> The reason is the following bounds check found in arrayutils.c\n> \n> /*\n> * Verify sanity of proposed lower-bound values for an array\n> *\n> * The lower-bound values must not be so large as to cause overflow when\n> * calculating subscripts, e.g. lower bound 2147483640 with length 10\n> * must be disallowed. We actually insist that dims[i] + lb[i] be\n> * computable without overflow, meaning that an array with last\n> subscript\n> * equal to INT_MAX will be disallowed.\n\nI see. I'm still not sure that this is the right place to enforce this\ncheck, especially given we explicitly check the bounds later on in\nconstruct_md_array(). 
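\n\nFor reference, the later check I have in mind is roughly the following\n(sketching from memory, so the exact call sites may differ):\n\n    /* inside construct_md_array(), before the result is built */\n    nelems = ArrayGetNItems(ndims, dims);\n    ArrayCheckBounds(ndims, dims, lbs);\n\nwhich rejects oversized arrays and out-of-range lower bounds regardless\nof what the caller already did.\n\n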
Am I understanding correctly that the main\nbehavioral difference between these two approaches is that users will see\ndifferent error messages?\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 22 Jul 2024 10:17:55 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Mon, Jul 22, 2024 at 11:17 AM Nathan Bossart <[email protected]>\nwrote:\n> On Fri, Jul 19, 2024 at 07:32:18PM -0400, Joseph Koshakow wrote:\n>> On Fri, Jul 19, 2024 at 2:45 PM Nathan Bossart <[email protected]>\n>> wrote:\n>>> + /* dim[i] = 1 + upperIndx[i] - lowerIndx[i]; */\n>>> + if (pg_add_s32_overflow(1, upperIndx[i], &dim[i]))\n>>> + ereport(ERROR,\n>>> + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n>>> + errmsg(\"array upper bound is too large: %d\",\n>>> + upperIndx[i])));\n>>> + if (pg_sub_s32_overflow(dim[i], lowerIndx[i], &dim[i]))\n>>> + ereport(ERROR,\n>>> + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n>>> + errmsg(\"array size exceeds the maximum allowed\n>> (%d)\",\n>>> + (int) MaxArraySize)));\n>\n> Am I understanding correctly that the main\n> behavioral difference between these two approaches is that users will see\n> different error messages?\n\nYes, you are understanding correctly. The approach written above will\nhave the error message \"array upper bound is too large\", while the\napproach attached in patch\nv13-0002-Remove-overflow-from-array_set_slice.patch will have the error\nmessage \"array lower bound is too large\".\n\nThanks,\nJoseph Koshakow\n\nOn Mon, Jul 22, 2024 at 11:17 AM Nathan Bossart <[email protected]> wrote:> On Fri, Jul 19, 2024 at 07:32:18PM -0400, Joseph Koshakow wrote:>> On Fri, Jul 19, 2024 at 2:45 PM Nathan Bossart <[email protected]>>> wrote:>>> +            /* dim[i] = 1 + upperIndx[i] - lowerIndx[i]; */>>> +            if (pg_add_s32_overflow(1, upperIndx[i], &dim[i]))>>> +                ereport(ERROR,>>> +                        (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),>>> +                         errmsg(\"array upper bound is too large: %d\",>>> +                                upperIndx[i])));>>> +            if (pg_sub_s32_overflow(dim[i], lowerIndx[i], &dim[i]))>>> +                ereport(ERROR,>>> +                        (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),>>> +                         errmsg(\"array size exceeds the maximum allowed>> (%d)\",>>> +                                (int) MaxArraySize)));> > Am I understanding correctly that the main> behavioral difference between these two approaches is that users will see> different error messages?Yes, you are understanding correctly. The approach written above willhave the error message \"array upper bound is too large\", while theapproach attached in patchv13-0002-Remove-overflow-from-array_set_slice.patch will have the errormessage \"array lower bound is too large\".Thanks,Joseph Koshakow", "msg_date": "Mon, 22 Jul 2024 17:20:15 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Mon, Jul 22, 2024 at 05:20:15PM -0400, Joseph Koshakow wrote:\n> On Mon, Jul 22, 2024 at 11:17 AM Nathan Bossart <[email protected]>\n> wrote:\n>> Am I understanding correctly that the main\n>> behavioral difference between these two approaches is that users will see\n>> different error messages?\n> \n> Yes, you are understanding correctly. 
The approach written above will\n> have the error message \"array upper bound is too large\", while the\n> approach attached in patch\n> v13-0002-Remove-overflow-from-array_set_slice.patch will have the error\n> message \"array lower bound is too large\".\n\nOkay. I'll plan on committing v13-0002 in the next couple of days, then.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 22 Jul 2024 16:36:33 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": ">\n> On Mon, Jul 22, 2024 at 5:52 PM Alexander Lakhin <[email protected]>\n> wrote:\n>\n> > Also there are several trap-producing cases with date types:\n> > SELECT to_date('100000000', 'CC');\n>\n> Hi, I’ve attached a patch that fixes the to_date() overflow. Patches 1\n> through 3 remain unchanged.\n>\n> Thank you,\n> Matthew Kim\n>", "msg_date": "Mon, 22 Jul 2024 18:07:34 -0400", "msg_from": "Matthew Kim <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Mon, Jul 22, 2024 at 04:36:33PM -0500, Nathan Bossart wrote:\n> Okay. I'll plan on committing v13-0002 in the next couple of days, then.\n\nActually, I think my concerns about prohibiting more than necessary go away\nif we do the subtraction first. If \"upperIndx[i] - lowerIndx[i]\"\noverflows, we know the array size is too big. Similarly, if adding one to\nthat result overflows, we again know the the array size is too big. This\nappears to be how the surrounding code handles this problem (e.g.,\nReadArrayDimensions()). Thoughts?\n\n-- \nnathan", "msg_date": "Mon, 22 Jul 2024 17:27:13 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Mon, Jul 22, 2024 at 6:27 PM Nathan Bossart <[email protected]>\nwrote:\n>\n> Actually, I think my concerns about prohibiting more than necessary go\naway\n> if we do the subtraction first. If \"upperIndx[i] - lowerIndx[i]\"\n> overflows, we know the array size is too big. Similarly, if adding one to\n> that result overflows, we again know the the array size is too big. This\n> appears to be how the surrounding code handles this problem (e.g.,\n> ReadArrayDimensions()). Thoughts?\n\nI like that approach! It won't reject any valid bounds and is\nconsistent with the surrounding code. Also statements of the following\nformat will maintain the same error messages they had previously:\n\n # INSERT INTO arroverflowtest(i[2147483646:2147483647]) VALUES\n('{1,2}');\n ERROR: array lower bound is too large: 2147483646\n\nThe specific bug that this patch fixes is preventing the following\nstatement:\n\n # INSERT INTO arroverflowtest(i[-2147483648:2147483647]) VALUES ('{1}');\n\nSo we may want to add that test back in.\n\nThanks,\nJoseph Koshakow\n\nOn Mon, Jul 22, 2024 at 6:27 PM Nathan Bossart <[email protected]> wrote:>> Actually, I think my concerns about prohibiting more than necessary go away> if we do the subtraction first.  If \"upperIndx[i] - lowerIndx[i]\"> overflows, we know the array size is too big.  Similarly, if adding one to> that result overflows, we again know the the array size is too big.  This> appears to be how the surrounding code handles this problem (e.g.,> ReadArrayDimensions()).  Thoughts?I like that approach! It won't reject any valid bounds and isconsistent with the surrounding code. 
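\n\nConcretely, I read that as something like this (just a sketch of the\nordering, reusing the existing error text):\n\n    if (pg_sub_s32_overflow(upperIndx[i], lowerIndx[i], &dim[i]) ||\n        pg_add_s32_overflow(dim[i], 1, &dim[i]))\n        ereport(ERROR,\n                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n                 errmsg(\"array size exceeds the maximum allowed (%d)\",\n                        (int) MaxArraySize)));\n\nso the subtraction happens first and the +1 is folded into the same\ncheck.\n\n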
Also statements of the followingformat will maintain the same error messages they had previously:    # INSERT INTO arroverflowtest(i[2147483646:2147483647]) VALUES ('{1,2}');    ERROR:  array lower bound is too large: 2147483646The specific bug that this patch fixes is preventing the followingstatement:    # INSERT INTO arroverflowtest(i[-2147483648:2147483647]) VALUES ('{1}');So we may want to add that test back in.Thanks,Joseph Koshakow", "msg_date": "Mon, 22 Jul 2024 18:56:14 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Tue, Jul 23, 2024 at 6:56 AM Joseph Koshakow <[email protected]> wrote:\n>\n> The specific bug that this patch fixes is preventing the following\n> statement:\n>\n> # INSERT INTO arroverflowtest(i[-2147483648:2147483647]) VALUES ('{1}');\n>\n> So we may want to add that test back in.\n>\nI agree with you.\n\n\n\nalso v13-0003-Remove-dependence-on-integer-wrapping-for-jsonb.patch\nin setPathArray we change to can\n\n if (idx == PG_INT32_MIN || -idx > nelems)\n {\n /*\n * If asked to keep elements position consistent, it's not allowed\n * to prepend the array.\n */\n if (op_type & JB_PATH_CONSISTENT_POSITION)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"path element at position %d is out of\nrange: %d\",\n level + 1, idx)));\n idx = PG_INT32_MIN;\n }\n\n\n", "msg_date": "Tue, 23 Jul 2024 14:14:14 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Mon, Jul 22, 2024 at 6:07 PM Matthew Kim <[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 5:52 PM Alexander Lakhin <[email protected]>\nwrote:\n>\n>> Also there are several trap-producing cases with date types:\n>> SELECT to_date('100000000', 'CC');\n>\n> Hi, I’ve attached a patch that fixes the to_date() overflow. 
Patches 1\nthrough 3 remain unchanged.\n\nThanks for the contribution Mattew!\n\nOn Tue, Jul 23, 2024 at 2:14 AM jian he <[email protected]> wrote:\n>\n> On Tue, Jul 23, 2024 at 6:56 AM Joseph Koshakow <[email protected]> wrote:\n>>\n>> The specific bug that this patch fixes is preventing the following\n>> statement:\n>>\n>> # INSERT INTO arroverflowtest(i[-2147483648:2147483647]) VALUES\n('{1}');\n>>\n>> So we may want to add that test back in.\n>>\n> I agree with you.\n\nI've updated the patch to add this test back in.\n\n> also v13-0003-Remove-dependence-on-integer-wrapping-for-jsonb.patch\n> in setPathArray we change to can\n>\n> if (idx == PG_INT32_MIN || -idx > nelems)\n> {\n> /*\n> * If asked to keep elements position consistent, it's not\nallowed\n> * to prepend the array.\n> */\n> if (op_type & JB_PATH_CONSISTENT_POSITION)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"path element at position %d is out of\n> range: %d\",\n> level + 1, idx)));\n> idx = PG_INT32_MIN;\n> }\n\nDone in the attached patch.", "msg_date": "Tue, 23 Jul 2024 17:41:18 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Tue, Jul 23, 2024 at 05:41:18PM -0400, Joseph Koshakow wrote:\n> On Tue, Jul 23, 2024 at 2:14 AM jian he <[email protected]> wrote:\n>> On Tue, Jul 23, 2024 at 6:56 AM Joseph Koshakow <[email protected]> wrote:\n>>> The specific bug that this patch fixes is preventing the following\n>>> statement:\n>>>\n>>> # INSERT INTO arroverflowtest(i[-2147483648:2147483647]) VALUES\n> ('{1}');\n>>>\n>>> So we may want to add that test back in.\n>>>\n>> I agree with you.\n> \n> I've updated the patch to add this test back in.\n\nI've committed/back-patched the fix for array_set_slice(). I ended up\nfiddling with the test cases a bit more because they generate errors that\ninclude a platform-dependent value (MaxArraySize). To handle that, I moved\nthe new tests to the section added in commit 18b5851 where VERBOSITY is set\nto \"sqlstate\".\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 23 Jul 2024 22:20:50 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Mon, Jul 22, 2024 at 5:52 PM Alexander Lakhin\n<exclusion(at)gmail(dot)com>\nwrote:\n\n> Also there are several trap-producing cases with date types:\n> SELECT make_date(-2147483648, 1, 1);\n\nHi, I’ve attached a patch that fixes the make_date overflow. 
I’ve upgraded\nthe patches accordingly to reflect Joseph’s committed fix.\n\nThank you,\nMatthew Kim", "msg_date": "Wed, 24 Jul 2024 17:49:41 -0400", "msg_from": "Matthew Kim <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Fri, Jun 14, 2024 at 8:00 AM Alexander Lakhin <[email protected]>\nwrote:\n>\n> And the most interesting case to me:\n> SET temp_buffers TO 1000000000;\n>\n> CREATE TEMP TABLE t(i int PRIMARY KEY);\n> INSERT INTO t VALUES(1);\n>\n> #4 0x00007f385cdd37f3 in __GI_abort () at ./stdlib/abort.c:79\n> #5 0x00005620071c4f51 in __addvsi3 ()\n> #6 0x0000562007143f3c in init_htab (hashp=0x562008facb20,\nnelem=610070812) at dynahash.c:720\n>\n> (gdb) f 6\n> #6 0x0000560915207f3c in init_htab (hashp=0x560916039930,\nnelem=1000000000) at dynahash.c:720\n> 720 hctl->high_mask = (nbuckets << 1) - 1;\n> (gdb) p nbuckets\n> $1 = 1073741824\n\nAlex, are you able to get a full stack trace for this panic? I'm unable\nto reproduce this because I don't have enough memory in my system. I've\ntried reducing `BLCKSZ` to 1024, which is the lowest value allowed per\nmy understanding, and I still don't have enough memory.\n\nHere's what it looks like is happening:\n\n1. When inserting into the table, we create a new dynamic hash table\nand set `nelem` equal to `temp_buffers`, which is 1000000000.\n\n2. `nbuckets` is then set to the the next highest power of 2 from\n `nelem`, which is 1073741824.\n\n /*\n * Allocate space for the next greater power of two number of buckets,\n * assuming a desired maximum load factor of 1.\n */\n nbuckets = next_pow2_int(nelem);\n\n3. Shift `nbuckets` to the left by 1. This would equal 2147483648,\nwhich is larger than `INT_MAX`, which causes an overflow.\n\n hctl->high_mask = (nbuckets << 1) - 1;\n\nThe max value allowed for `temp_buffers` is `INT_MAX / 2` (1073741823),\nSo any value of `temp_buffers` in the range (536870912, 1073741823]\nwould cause this overflow. Without `-ftrapv`, `nbuckets` would wrap\naround to -2147483648, which is likely to cause all sorts of havoc, I'm\njust not sure what exactly.\n\nAlso, `nbuckets = next_pow2_int(nelem);`, by itself is a bit sketchy\nconsidering that `nelem` is a `long` and `nbuckets` is an `int`.\nPotentially, the fix here is to just convert `nbuckets` to a `long`. I\nplan on checking if that's feasible.\n\nI also found this commit [0] that increased the max of `nbuckets` from\n`INT_MAX / BLCKSZ` to `INT_MAX / 2`, which introduced the possibility\nof this overflow. So I plan on reading through that as well.\n\nThanks,\nJoseph Koshakow\n\n[0]\nhttps://github.com/postgres/postgres/commit/0007490e0964d194a606ba79bb11ae1642da3372\n\nOn Fri, Jun 14, 2024 at 8:00 AM Alexander Lakhin <[email protected]> wrote:>>    And the most interesting case to me:>    SET temp_buffers TO 1000000000;>>    CREATE TEMP TABLE t(i int PRIMARY KEY);>    INSERT INTO t VALUES(1);>>    #4  0x00007f385cdd37f3 in __GI_abort () at ./stdlib/abort.c:79>    #5  0x00005620071c4f51 in __addvsi3 ()>    #6  0x0000562007143f3c in init_htab (hashp=0x562008facb20, nelem=610070812) at dynahash.c:720>>    (gdb) f 6>    #6  0x0000560915207f3c in init_htab (hashp=0x560916039930, nelem=1000000000) at dynahash.c:720>    720             hctl->high_mask = (nbuckets << 1) - 1;>    (gdb) p nbuckets>    $1 = 1073741824Alex, are you able to get a full stack trace for this panic? I'm unableto reproduce this because I don't have enough memory in my system. 
I'vetried reducing `BLCKSZ` to 1024, which is the lowest value allowed permy understanding, and I still don't have enough memory.Here's what it looks like is happening:1. When inserting into the table, we create a new dynamic hash tableand set `nelem` equal to `temp_buffers`, which is 1000000000.2. `nbuckets` is then set to the the next highest power of 2 from   `nelem`, which is 1073741824.    /*     * Allocate space for the next greater power of two number of buckets,     * assuming a desired maximum load factor of 1.     */    nbuckets = next_pow2_int(nelem);3. Shift `nbuckets` to the left by 1. This would equal 2147483648,which is larger than `INT_MAX`, which causes an overflow.    hctl->high_mask = (nbuckets << 1) - 1;The max value allowed for `temp_buffers` is `INT_MAX / 2` (1073741823),So any value of `temp_buffers` in the range (536870912, 1073741823]would cause this overflow. Without `-ftrapv`, `nbuckets` would wraparound to -2147483648, which is likely to cause all sorts of havoc, I'mjust not sure what exactly.Also, `nbuckets = next_pow2_int(nelem);`, by itself is a bit sketchyconsidering that `nelem` is a `long` and `nbuckets` is an `int`.Potentially, the fix here is to just convert `nbuckets` to a `long`. Iplan on checking if that's feasible.I also found this commit [0] that increased the max of `nbuckets` from`INT_MAX / BLCKSZ` to `INT_MAX / 2`, which introduced the possibilityof this overflow. So I plan on reading through that as well.Thanks,Joseph Koshakow[0] https://github.com/postgres/postgres/commit/0007490e0964d194a606ba79bb11ae1642da3372", "msg_date": "Sun, 4 Aug 2024 19:55:12 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Wed, Jul 24, 2024 at 05:49:41PM -0400, Matthew Kim wrote:\n> Hi, I�ve attached a patch that fixes the make_date overflow. I�ve upgraded\n> the patches accordingly to reflect Joseph�s committed fix.\n\ncfbot is unhappy with this one on Windows [0], and I think I see why.\nWhile the patch uses pg_mul_s32_overflow(), it doesn't check its return\nvalue, so we'll just proceed with a bogus value in the event of an\noverflow. Windows apparently doesn't have HAVE__BUILTIN_OP_OVERFLOW\ndefined, so pg_mul_s32_overflow() sets the result to 0x5eed (24,301 in\ndecimal), which is where I'm guessing the year -24300 in the ERROR is from.\n\n[0] https://api.cirrus-ci.com/v1/artifact/task/5872294914949120/testrun/build/testrun/regress/regress/regression.diffs\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 6 Aug 2024 16:43:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "I started looking at 0001 again with the intent of committing it, and this\ncaught my eye:\n\n- /* make the amount positive for digit-reconstruction loop */\n- value = -value;\n+ /*\n+ * make the amount positive for digit-reconstruction loop, we can\n+ * leave INT64_MIN unchanged\n+ */\n+ pg_neg_s64_overflow(value, &value);\n\nThe comment mentions that we can leave the minimum value unchanged, but it\ndoesn't explain why. 
Can we explain why?\n\n+static inline bool\n+pg_neg_s64_overflow(int64 a, int64 *result)\n+{\n+ if (unlikely(a == PG_INT64_MIN))\n+ {\n+ return true;\n+ }\n+ else\n+ {\n+ *result = -a;\n+ return false;\n+ }\n+}\n\nCan we add a comment that these routines do not set \"result\" when true is\nreturned?\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 7 Aug 2024 10:08:04 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Wed, Aug 7, 2024 at 11:08 AM Nathan Bossart <[email protected]>\nwrote:\n>\n> I started looking at 0001 again with the intent of committing it, and this\n> caught my eye:\n>\n> - /* make the amount positive for digit-reconstruction loop */\n> - value = -value;\n> + /*\n> + * make the amount positive for digit-reconstruction loop, we can\n> + * leave INT64_MIN unchanged\n> + */\n> + pg_neg_s64_overflow(value, &value);\n>\n> The comment mentions that we can leave the minimum value unchanged, but it\n> doesn't explain why. Can we explain why?\n\nI went back to try and figure this out and realized that it would be\nmuch simpler to just convert value to an unsigned integer and not worry\nabout overflow. So I've updated the patch to do that.\n\n> +static inline bool\n> +pg_neg_s64_overflow(int64 a, int64 *result)\n> +{\n> + if (unlikely(a == PG_INT64_MIN))\n> + {\n> + return true;\n> + }\n> + else\n> + {\n> + *result = -a;\n> + return false;\n> + }\n> +}\n>\n> Can we add a comment that these routines do not set \"result\" when true is\n> returned?\n\nI've added a comment to the top of the file where we describe the\nreturn values of the other functions.\n\nI also updated the implementations of the pg_abs_sX() functions to\nsomething a bit simpler. This was based on feedback in another patch\n[0], and more closely matches similar logic in other places.\n\nThanks,\nJoseph Koshakow\n\n[0]\nhttps://postgr.es/m/CAAvxfHdTsMZPWEHUrZ=h3cky9Ccc3Mtx2whUHygY+ABP-mCmUw@mail.gmail.co", "msg_date": "Wed, 7 Aug 2024 21:40:46 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "I've updated patch 0004 to check the return value of pg_mul_s32_overflow().\nSince tm.tm_year overflowed, the error message is hardcoded.\n\nThanks,\nMatthew Kim", "msg_date": "Thu, 8 Aug 2024 15:16:37 -0700", "msg_from": "Matthew Kim <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Fri, Aug 9, 2024 at 6:16 AM Matthew Kim <[email protected]> wrote:\n>\n> I've updated patch 0004 to check the return value of pg_mul_s32_overflow(). 
Since tm.tm_year overflowed, the error message is hardcoded.\n>\n\n--- a/src/backend/utils/adt/date.c\n+++ b/src/backend/utils/adt/date.c\n@@ -257,7 +257,10 @@ make_date(PG_FUNCTION_ARGS)\n if (tm.tm_year < 0)\n {\n bc = true;\n- tm.tm_year = -tm.tm_year;\n+ if (pg_mul_s32_overflow(tm.tm_year, -1, &tm.tm_year))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n+ errmsg(\"date field value out of range\")));\n }\n\nShould the error about integers be out of range?\n\nSELECT make_date(-2147483648, 1, 1);\n\"-2147483648\" is not an allowed integer.\n\n\n\\df make_date\n List of functions\n Schema | Name | Result data type | Argument data\ntypes | Type\n------------+-----------+------------------+------------------------------------------+------\n pg_catalog | make_date | date | year integer, month\ninteger, day integer | func\n\n\n", "msg_date": "Fri, 9 Aug 2024 09:01:27 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Aug 8, 2024 at 9:01 PM jian he <[email protected]> wrote:\n>\n> Should the error about integers be out of range?\n>\n> SELECT make_date(-2147483648, 1, 1);\n> \"-2147483648\" is not an allowed integer.\n>\n> \\df make_date\n> List of functions\n> Schema | Name | Result data type | Argument data\n> types | Type\n>\n------------+-----------+------------------+------------------------------------------+------\n> pg_catalog | make_date | date | year integer, month\n> integer, day integer | func\n\nAre you saying that with the patch applied you're seeing the above\nerror? If so, I see a different error.\n\n test=# SELECT make_date(-2147483648, 1, 1);\n ERROR: date field value out of range\n\nOr are you saying that we should change the code in the patch so that\nit returns the above error? If so, I'm not sure I understand the\nreasoning. -2147483648 is an allowed integer, it's the minimum allowed\nvalue for integers.\n\n test=# SELECT (-2147483648)::integer;\n int4\n -------------\n -2147483648\n (1 row)\n\nThanks,\nJoseph Koshakow\n\nOn Thu, Aug 8, 2024 at 9:01 PM jian he <[email protected]> wrote:>> Should the error about integers be out of range?>> SELECT make_date(-2147483648, 1, 1);> \"-2147483648\" is not an allowed integer.>> \\df make_date>                                       List of functions>    Schema   |   Name    | Result data type |           Argument data> types            | Type> ------------+-----------+------------------+------------------------------------------+------>  pg_catalog | make_date | date             | year integer, month> integer, day integer | funcAre you saying that with the patch applied you're seeing the aboveerror? If so, I see a different error.    test=# SELECT make_date(-2147483648, 1, 1);    ERROR:  date field value out of rangeOr are you saying that we should change the code in the patch so thatit returns the above error? If so, I'm not sure I understand thereasoning. -2147483648 is an allowed integer, it's the minimum allowedvalue for integers.    
test=# SELECT (-2147483648)::integer;        int4         -------------     -2147483648    (1 row)Thanks,Joseph Koshakow", "msg_date": "Sat, 10 Aug 2024 11:41:31 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Mon, Jul 22, 2024 at 5:52 PM Alexander Lakhin\n<exclusion(at)gmail(dot)com>\nwrote:\n\n> Also there are several trap-producing cases with date types:\n> SELECT to_timestamp('1000000000,999', 'Y,YYY');\n\nI attached patch 5 that fixes the to_timestamp overflow.\n\nThank you,\nMatthew Kim", "msg_date": "Sat, 10 Aug 2024 14:54:48 -0700", "msg_from": "Matthew Kim <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Sat, Aug 10, 2024 at 11:41 PM Joseph Koshakow <[email protected]> wrote:\n>\n>\n>\n> On Thu, Aug 8, 2024 at 9:01 PM jian he <[email protected]> wrote:\n> >\n> > Should the error about integers be out of range?\n> >\n> > SELECT make_date(-2147483648, 1, 1);\n> > \"-2147483648\" is not an allowed integer.\n> >\n> > \\df make_date\n> > List of functions\n> > Schema | Name | Result data type | Argument data\n> > types | Type\n> > ------------+-----------+------------------+------------------------------------------+------\n> > pg_catalog | make_date | date | year integer, month\n> > integer, day integer | func\n>\n> Are you saying that with the patch applied you're seeing the above\n> error? If so, I see a different error.\n>\n> test=# SELECT make_date(-2147483648, 1, 1);\n> ERROR: date field value out of range\n>\n> Or are you saying that we should change the code in the patch so that\n> it returns the above error? If so, I'm not sure I understand the\n> reasoning. -2147483648 is an allowed integer, it's the minimum allowed\n> value for integers.\n>\n> test=# SELECT (-2147483648)::integer;\n> int4\n> -------------\n> -2147483648\n> (1 row)\n>\n\nsorry, i mixed up\nselect (-2147483648)::int;\nwith\nselect -2147483648::int;\n\nlooks good to me.\nmaybe make it more explicit:\nerrmsg(\"date field (year) value out of range\")));\ni don't have a huge opinion though.\n\n\n", "msg_date": "Mon, 12 Aug 2024 18:33:13 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "I've been preparing 0001 for commit. I've attached what I have so far.\n\nThe main changes are the implementations of pg_abs_* and pg_neg_*. For the\nformer, I've used abs()/i64abs() for the short/int implementations. For\nthe latter, I've tried to use __builtin_sub_overflow() when possible, as\nthat appears to produce slightly better code. When\n__builtin_sub_overflow() is not available, the values are upcasted before\nnegation, and we check that result before casting to the return type. That\napproach more closely matches the surrounding functions. (One exception is\npg_neg_u64_overflow() when we have neither HAVE__BUILTIN_OP_OVERFLOW nor\nHAVE_INT128. In that case, we have to hand-roll everything.)\n\nThoughts?\n\n-- \nnathan", "msg_date": "Tue, 13 Aug 2024 16:46:34 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Tue, Aug 13, 2024 at 04:46:34PM -0500, Nathan Bossart wrote:\n> I've been preparing 0001 for commit. I've attached what I have so far.\n> \n> The main changes are the implementations of pg_abs_* and pg_neg_*. 
For the\n> former, I've used abs()/i64abs() for the short/int implementations. For\n> the latter, I've tried to use __builtin_sub_overflow() when possible, as\n> that appears to produce slightly better code. When\n> __builtin_sub_overflow() is not available, the values are upcasted before\n> negation, and we check that result before casting to the return type. That\n> approach more closely matches the surrounding functions. (One exception is\n> pg_neg_u64_overflow() when we have neither HAVE__BUILTIN_OP_OVERFLOW nor\n> HAVE_INT128. In that case, we have to hand-roll everything.)\n\nAnd here's a new version of the patch in which I've attempted to fix the\nsilly mistakes.\n\n-- \nnathan", "msg_date": "Tue, 13 Aug 2024 22:07:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On 14/08/2024 06:07, Nathan Bossart wrote:\n> On Tue, Aug 13, 2024 at 04:46:34PM -0500, Nathan Bossart wrote:\n>> I've been preparing 0001 for commit. I've attached what I have so far.\n>>\n>> The main changes are the implementations of pg_abs_* and pg_neg_*. For the\n>> former, I've used abs()/i64abs() for the short/int implementations. For\n>> the latter, I've tried to use __builtin_sub_overflow() when possible, as\n>> that appears to produce slightly better code. When\n>> __builtin_sub_overflow() is not available, the values are upcasted before\n>> negation, and we check that result before casting to the return type. That\n>> approach more closely matches the surrounding functions. (One exception is\n>> pg_neg_u64_overflow() when we have neither HAVE__BUILTIN_OP_OVERFLOW nor\n>> HAVE_INT128. In that case, we have to hand-roll everything.)\n> \n> And here's a new version of the patch in which I've attempted to fix the\n> silly mistakes.\n\nLGTM, just a few small comments:\n\n> * - If a * b overflows, return true, otherwise store the result of a * b\n> * into *result. The content of *result is implementation defined in case of\n> * overflow.\n> + * - If -a overflows, return true, otherwise store the result of -a into\n> + * *result. The content of *result is implementation defined in case of\n> + * overflow.\n> + * - Return the absolute value of a as an unsigned integer of the same\n> + * width.\n> *---------\n> */\n\nThe last \"Return the absolute value of a ...\" sentence feels a bit \nweird. In all the preceding sentences, 'a' is part of an \"If a\" sentence \nthat defines what 'a' is. In the last one, it's kind of just hanging there.\n\n> +static inline uint16\n> +pg_abs_s16(int16 a)\n> +{\n> +\treturn abs(a);\n> +}\n> +\n\nThis is correct, but it took me a while to understand why. Maybe some \ncomments would be in order.\n\nThe function it calls is \"int abs(int)\". So this first widens the int16 \nto int32, and then narrows the result from int32 to uint16.\n\nThe man page for abs() says \"Trying to take the absolute value of the \nmost negative integer is not defined.\" That's OK in this case, because \nthat refers to the most negative int32 value, and the argument here is \nint16. 
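\n\nFor example, with the most negative int16 value (and an int of at least\n32 bits, which we assume anyway):\n\n    int16  a = PG_INT16_MIN;  /* -32768 */\n    int    v = abs(a);        /* a is promoted to int, so abs(-32768) == 32768 is well-defined */\n    uint16 r = (uint16) v;    /* 32768 still fits in uint16 */\n\n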
But that's why the pg_abs_s64(int64) function needs the special \ncheck for the most negative value.\n\n\nThere's also some code in libpq's pqCheckOutBufferSpace() function that \ncould use these functions.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 10:02:28 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Wed, Aug 14, 2024 at 10:02:28AM +0300, Heikki Linnakangas wrote:\n> On 14/08/2024 06:07, Nathan Bossart wrote:\n>> And here's a new version of the patch in which I've attempted to fix the\n>> silly mistakes.\n> \n> LGTM, just a few small comments:\n\nThanks for reviewing.\n\n>> * - If a * b overflows, return true, otherwise store the result of a * b\n>> * into *result. The content of *result is implementation defined in case of\n>> * overflow.\n>> + * - If -a overflows, return true, otherwise store the result of -a into\n>> + * *result. The content of *result is implementation defined in case of\n>> + * overflow.\n>> + * - Return the absolute value of a as an unsigned integer of the same\n>> + * width.\n>> *---------\n>> */\n> \n> The last \"Return the absolute value of a ...\" sentence feels a bit weird. In\n> all the preceding sentences, 'a' is part of an \"If a\" sentence that defines\n> what 'a' is. In the last one, it's kind of just hanging there.\n\nHow about:\n\n\tIf a is negative, return -a, otherwise return a. Overflow cannot occur\n\tbecause the return value is an unsigned integer with the same width as\n\tthe argument.\n\n>> +static inline uint16\n>> +pg_abs_s16(int16 a)\n>> +{\n>> +\treturn abs(a);\n>> +}\n>> +\n> \n> This is correct, but it took me a while to understand why. Maybe some\n> comments would be in order.\n> \n> The function it calls is \"int abs(int)\". So this first widens the int16 to\n> int32, and then narrows the result from int32 to uint16.\n> \n> The man page for abs() says \"Trying to take the absolute value of the most\n> negative integer is not defined.\" That's OK in this case, because that\n> refers to the most negative int32 value, and the argument here is int16. But\n> that's why the pg_abs_s64(int64) function needs the special check for the\n> most negative value.\n\nYeah, I've added some casts/comments to make this clear. I got too excited\nabout trimming it down and ended up obfuscating these important details.\n\n> There's also some code in libpq's pqCheckOutBufferSpace() function that\n> could use these functions.\n\nDuly noted.\n\n-- \nnathan", "msg_date": "Wed, 14 Aug 2024 12:20:40 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Thanks for the improvements Nathan. The current iteration LGTM, just a\nsingle comment on `pg_abs_s64`\n\n> +static inline uint64\n> +pg_abs_s64(int64 a)\n> +{\n> + if (unlikely(a == PG_INT64_MIN))\n> + return (uint64) PG_INT64_MAX + 1;\n> + if (a < 0)\n> + return -a;\n> + return a;\n> +}\n\nSince we know that a does not equal PG_INT64_MIN, could we shorten the\nlast three lines and do the following?\n\n\n static inline uint64\n pg_abs_s64(int64 a)\n {\n if (unlikely(a == PG_INT64_MIN))\n return (uint64) PG_INT64_MAX + 1;\n return i64_abs(a);\n }\n\nThanks,\nJoseph Koshakow\n\nThanks for the improvements Nathan. 
The current iteration LGTM, just asingle comment on `pg_abs_s64`> +static inline uint64> +pg_abs_s64(int64 a)> +{> +     if (unlikely(a == PG_INT64_MIN))> +         return (uint64) PG_INT64_MAX + 1;> +     if (a < 0)> +         return -a;> +     return a;> +}Since we know that a does not equal PG_INT64_MIN, could we shorten thelast three lines and do the following?    static inline uint64    pg_abs_s64(int64 a)    {        if (unlikely(a == PG_INT64_MIN))          return (uint64) PG_INT64_MAX + 1;        return i64_abs(a);    }Thanks,Joseph Koshakow", "msg_date": "Wed, 14 Aug 2024 13:41:40 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Wed, Aug 14, 2024 at 01:41:40PM -0400, Joseph Koshakow wrote:\n> Since we know that a does not equal PG_INT64_MIN, could we shorten the\n> last three lines and do the following?\n> \n> static inline uint64\n> pg_abs_s64(int64 a)\n> {\n> if (unlikely(a == PG_INT64_MIN))\n> return (uint64) PG_INT64_MAX + 1;\n> return i64_abs(a);\n> }\n\nGood call. That actually produces seemingly better code, too.\n\n-- \nnathan", "msg_date": "Wed, 14 Aug 2024 13:16:06 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On 14/08/2024 20:20, Nathan Bossart wrote:\n> On Wed, Aug 14, 2024 at 10:02:28AM +0300, Heikki Linnakangas wrote:\n>> On 14/08/2024 06:07, Nathan Bossart wrote:\n>>> * - If a * b overflows, return true, otherwise store the result of a * b\n>>> * into *result. The content of *result is implementation defined in case of\n>>> * overflow.\n>>> + * - If -a overflows, return true, otherwise store the result of -a into\n>>> + * *result. The content of *result is implementation defined in case of\n>>> + * overflow.\n>>> + * - Return the absolute value of a as an unsigned integer of the same\n>>> + * width.\n>>> *---------\n>>> */\n>>\n>> The last \"Return the absolute value of a ...\" sentence feels a bit weird. In\n>> all the preceding sentences, 'a' is part of an \"If a\" sentence that defines\n>> what 'a' is. In the last one, it's kind of just hanging there.\n> \n> How about:\n> \n> \tIf a is negative, return -a, otherwise return a. Overflow cannot occur\n> \tbecause the return value is an unsigned integer with the same width as\n> \tthe argument.\n\nHmm, that still doesn't say what operation it's referring to. They \nexisting comments say \"a + b\", \"a - b\" or \"a * b\", but this one isn't \nreferring to anything at all. IMHO the existing comments are not too \nclear on that either though. How about something like this:\n\n/*---------\n *\n * The following guidelines apply to all the *_overflow routines:\n *\n * If the result overflows, return true, otherwise store the result into\n * *result. The content of *result is implementation defined in case of\n * overflow\n *\n * bool pg_add_*_overflow(a, b, *result)\n *\n * Calculate a + b\n *\n * bool pg_sub_*_overflow(a, b, *result)\n *\n * Calculate a - b\n *\n * bool pg_mul_*_overflow(a, b, *result)\n *\n * Calculate a * b\n *\n * bool pg_neg_*_overflow(a, *result)\n *\n * Calculate -a\n *\n *\n * In addition, this file contains:\n *\n * <unsigned int type> pg_abs_*(<signed int type> a)\n *\n * Calculate absolute value of a. 
Unlike the standard library abs() and\n * labs() functions, the the return type is unsigned, and the operation\n * cannot overflow.\n *---------\n */\n\n\n>>> +static inline uint16\n>>> +pg_abs_s16(int16 a)\n>>> +{\n>>> +\treturn abs(a);\n>>> +}\n>>> +\n>>\n>> This is correct, but it took me a while to understand why. Maybe some\n>> comments would be in order.\n>>\n>> The function it calls is \"int abs(int)\". So this first widens the int16 to\n>> int32, and then narrows the result from int32 to uint16.\n>>\n>> The man page for abs() says \"Trying to take the absolute value of the most\n>> negative integer is not defined.\" That's OK in this case, because that\n>> refers to the most negative int32 value, and the argument here is int16. But\n>> that's why the pg_abs_s64(int64) function needs the special check for the\n>> most negative value.\n> \n> Yeah, I've added some casts/comments to make this clear. I got too excited\n> about trimming it down and ended up obfuscating these important details.\n\nThat's better, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 22:29:39 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Wed, Aug 14, 2024 at 10:29:39PM +0300, Heikki Linnakangas wrote:\n> Hmm, that still doesn't say what operation it's referring to. They existing\n> comments say \"a + b\", \"a - b\" or \"a * b\", but this one isn't referring to\n> anything at all. IMHO the existing comments are not too clear on that either\n> though. How about something like this:\n\nYeah, this crossed my mind, too. I suppose now is as good a time as any to\nimprove it. Your suggestion looks good to me, so I will modify the patch\naccordingly before committing.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 14 Aug 2024 14:38:59 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Aug 15, 2024 at 2:16 AM Nathan Bossart <[email protected]> wrote:\n>\n\n+static inline bool\n+pg_neg_u64_overflow(uint64 a, int64 *result)\n+{\n+#if defined(HAVE__BUILTIN_OP_OVERFLOW)\n+ return __builtin_sub_overflow(0, a, result);\n+#elif defined(HAVE_INT128)\n+ uint128 res = -((int128) a);\n+\n+ if (unlikely(res < PG_INT64_MIN))\n+ {\n+ *result = 0x5EED; /* to avoid spurious warnings */\n+ return true;\n+ }\n+ *result = res;\n+ return false;\n+#else\n+ if (unlikely(a > (uint64) PG_INT64_MAX + 1))\n+ {\n+ *result = 0x5EED; /* to avoid spurious warnings */\n+ return true;\n+ }\n+ if (unlikely(a == (uint64) PG_INT64_MAX + 1))\n+ *result = PG_INT64_MIN;\n+ else\n+ *result = -((int64) a);\n+ return false;\n+#endif\n\nsorry to bother you.\n\ni am confused with\n\"\n+#elif defined(HAVE_INT128)\n+ uint128 res = -((int128) a);\n\"\nI thought \"unsigned\" means non-negative, therefore uint128 means non-negative.\ntherefore \"int128 res = -((int128) a);\" makes sense to me.\n\n\nalso in HAVE_INT128 branch\ndo we need cast int128 to int64, like\n\n*result = (int64) res;\n\n\n", "msg_date": "Thu, 15 Aug 2024 14:56:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Hello Joe,\n\n05.08.2024 02:55, Joseph Koshakow wrote:\n>\n> On Fri, Jun 14, 2024 at 8:00 AM Alexander Lakhin <[email protected]> wrote:\n> >\n> >    And the most interesting case to me:\n> >    SET 
temp_buffers TO 1000000000;\n> >\n> >    CREATE TEMP TABLE t(i int PRIMARY KEY);\n> >    INSERT INTO t VALUES(1);\n> >\n> ...\n> Alex, are you able to get a full stack trace for this panic? I'm unable\n> to reproduce this because I don't have enough memory in my system. I've\n> tried reducing `BLCKSZ` to 1024, which is the lowest value allowed per\n> my understanding, and I still don't have enough memory.\n\nYes, please take a look at it (sorry for the late reply):\n\n(gdb) bt\n#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=140438687430464) at ./nptl/pthread_kill.c:44\n#1  __pthread_kill_internal (signo=6, threadid=140438687430464) at ./nptl/pthread_kill.c:78\n#2  __GI___pthread_kill (threadid=140438687430464, signo=signo@entry=6) at ./nptl/pthread_kill.c:89\n#3  0x00007fba70025476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26\n#4  0x00007fba7000b7f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x0000563945aed511 in __addvsi3 ()\n#6  0x0000563945a6c106 in init_htab (hashp=0x563947700980, nelem=1000000000) at dynahash.c:720\n#7  0x0000563945a6bd22 in hash_create (tabname=0x563945c591d9 \"Local Buffer Lookup Table\", nelem=1000000000, \ninfo=0x7ffd4d394620, flags=40) at dynahash.c:567\n#8  0x00005639457f2760 in el () at localbuf.c:635\n#9  0x00005639457f19e3 in ExtendBufferedRelLocal (bmr=..., fork=MAIN_FORKNUM, flags=8, extend_by=1, \nextend_upto=4294967295, buffers=0x7ffd4d3948e0, extended_by=0x7ffd4d3947ac) at localbuf.c:326\n#10 0x00005639457e8851 in ExtendBufferedRelCommon (bmr=..., fork=MAIN_FORKNUM, strategy=0x0, flags=8, extend_by=1, \nextend_upto=4294967295, buffers=0x7ffd4d3948e0, extended_by=0x7ffd4d39488c) at bufmgr.c:2175\n#11 0x00005639457e6850 in ExtendBufferedRelBy (bmr=..., fork=MAIN_FORKNUM, strategy=0x0, flags=8, extend_by=1, \nbuffers=0x7ffd4d3948e0, extended_by=0x7ffd4d39488c) at bufmgr.c:923\n#12 0x00005639452d8ae6 in RelationAddBlocks (relation=0x7fba650abd78, bistate=0x0, num_pages=1, use_fsm=true, \ndid_unlock=0x7ffd4d394a3d) at hio.c:341\n#13 0x00005639452d944a in RelationGetBufferForTuple (relation=0x7fba650abd78, len=32, otherBuffer=0, options=0, \nbistate=0x0, vmbuffer=0x7ffd4d394ac4, vmbuffer_other=0x0, num_pages=1) at hio.c:767\n#14 0x00005639452be996 in heap_insert (relation=0x7fba650abd78, tup=0x5639476ecfc0, cid=0, options=0, bistate=0x0) at \nheapam.c:2019\n#15 0x00005639452cee84 in heapam_tuple_insert (relation=0x7fba650abd78, slot=0x5639476ecf30, cid=0, options=0, \nbistate=0x0) at heapam_handler.c:251\n#16 0x00005639455b3b07 in table_tuple_insert (rel=0x7fba650abd78, slot=0x5639476ecf30, cid=0, options=0, bistate=0x0) at \n../../../src/include/access/tableam.h:1405\n#17 0x00005639455b5c60 in ExecInsert (context=0x7ffd4d394d20, resultRelInfo=0x5639476ec390, slot=0x5639476ecf30, \ncanSetTag=true, inserted_tuple=0x0, insert_destrel=0x0) at nodeModifyTable.c:1139\n#18 0x00005639455ba942 in ExecModifyTable (pstate=0x5639476ec180) at nodeModifyTable.c:4077\n#19 0x0000563945575425 in ExecProcNodeFirst (node=0x5639476ec180) at execProcnode.c:469\n#20 0x0000563945568095 in ExecProcNode (node=0x5639476ec180) at ../../../src/include/executor/executor.h:274\n#21 0x000056394556af65 in ExecutePlan (estate=0x5639476ebf00, planstate=0x5639476ec180, use_parallel_mode=false, \noperation=CMD_INSERT, sendTuples=false, numberTuples=0, direction=ForwardScanDirection, dest=0x5639476f5470,\n     execute_once=true) at execMain.c:1646\n#22 0x00005639455687e3 in standard_ExecutorRun (queryDesc=0x5639476f3e70, direction=ForwardScanDirection, 
count=0, \nexecute_once=true) at execMain.c:363\n#23 0x00005639455685b9 in ExecutorRun (queryDesc=0x5639476f3e70, direction=ForwardScanDirection, count=0, \nexecute_once=true) at execMain.c:304\n#24 0x000056394584986e in ProcessQuery (plan=0x5639476f5310, sourceText=0x56394760d610 \"INSERT INTO t VALUES(1);\", \nparams=0x0, queryEnv=0x0, dest=0x5639476f5470, qc=0x7ffd4d395180) at pquery.c:160\n#25 0x000056394584b445 in PortalRunMulti (portal=0x56394768ab20, isTopLevel=true, setHoldSnapshot=false, \ndest=0x5639476f5470, altdest=0x5639476f5470, qc=0x7ffd4d395180) at pquery.c:1278\n#26 0x000056394584a93c in PortalRun (portal=0x56394768ab20, count=9223372036854775807, isTopLevel=true, run_once=true, \ndest=0x5639476f5470, altdest=0x5639476f5470, qc=0x7ffd4d395180) at pquery.c:791\n#27 0x0000563945842fd9 in exec_simple_query (query_string=0x56394760d610 \"INSERT INTO t VALUES(1);\") at postgres.c:1284\n#28 0x0000563945848536 in PostgresMain (dbname=0x563947644900 \"regression\", username=0x5639476448e8 \"law\") at \npostgres.c:4766\n#29 0x000056394583eb67 in BackendMain (startup_data=0x7ffd4d395404 \"\", startup_data_len=4) at backend_startup.c:107\n#30 0x000056394574e00e in postmaster_child_launch (child_type=B_BACKEND, startup_data=0x7ffd4d395404 \"\", \nstartup_data_len=4, client_sock=0x7ffd4d395450) at launch_backend.c:274\n#31 0x0000563945753f74 in BackendStartup (client_sock=0x7ffd4d395450) at postmaster.c:3414\n#32 0x00005639457515eb in ServerLoop () at postmaster.c:1648\n#33 0x0000563945750eaa in PostmasterMain (argc=3, argv=0x5639476087b0) at postmaster.c:1346\n#34 0x00005639455f738a in main (argc=3, argv=0x5639476087b0) at main.c:197\n\n>\n> Here's what it looks like is happening:\n> ...\n>\n> The max value allowed for `temp_buffers` is `INT_MAX / 2` (1073741823),\n> So any value of `temp_buffers` in the range (536870912, 1073741823]\n> would cause this overflow. Without `-ftrapv`, `nbuckets` would wrap\n> around to -2147483648, which is likely to cause all sorts of havoc, I'm\n> just not sure what exactly.\n\nYeah, the minimum value that triggers the trap is 536870913 and the maximum\naccepted is 1073741823.\n\nWithout -ftrapv, hctl->high_mask is set to 2147483647 on my machine,\nwhen nbuckets is 1073741824, and the INSERT apparently succeeds.\n\n>\n> Also, `nbuckets = next_pow2_int(nelem);`, by itself is a bit sketchy\n> considering that `nelem` is a `long` and `nbuckets` is an `int`.\n> Potentially, the fix here is to just convert `nbuckets` to a `long`. I\n> plan on checking if that's feasible.\n\nYes, it works for me; with s/int         nbuckets;/long nbuckets;/\nI see no issue on 64-bit Linux.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 15 Aug 2024 18:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Aug 15, 2024 at 02:56:00PM +0800, jian he wrote:\n> i am confused with\n> \"\n> +#elif defined(HAVE_INT128)\n> + uint128 res = -((int128) a);\n> \"\n> I thought \"unsigned\" means non-negative, therefore uint128 means non-negative.\n> therefore \"int128 res = -((int128) a);\" makes sense to me.\n\nAh, that's a typo, thanks for pointing it out.\n\n> also in HAVE_INT128 branch\n> do we need cast int128 to int64, like\n> \n> *result = (int64) res;\n\nI don't think we need an explicit cast here since *result is known to be an\nint64. 
But it certainly wouldn't hurt anything...\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 15 Aug 2024 13:45:48 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "I've committed 0001. Now to 0002...\n\n-\t\tif (-element > nelements)\n+\t\tif (element == PG_INT32_MIN || -element > nelements)\n\nThis seems like a good opportunity to use our new pg_abs_s32() function,\nand godbolt.org [0] seems to indicate that it might produce better code,\ntoo (at least to my eye). I've attached an updated version of the patch\nwith this change. Barring additional feedback, I plan to commit this one\nshortly.\n\n[0] https://godbolt.org/z/57P4vvGYf\n\n-- \nnathan", "msg_date": "Thu, 15 Aug 2024 16:34:30 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Aug 15, 2024 at 5:34 PM Nathan Bossart <[email protected]>\nwrote:\n\n> Now to 0002...\n>\n> - if (-element > nelements)\n> + if (element == PG_INT32_MIN || -element > nelements)\n>\n> This seems like a good opportunity to use our new pg_abs_s32() function,\n> and godbolt.org [0] seems to indicate that it might produce better code,\n> too (at least to my eye).\n\nThis updated version LGTM, I agree it's a good use of pg_abs_s32().\n\nThanks,\nJoseph Koshakow\n\nOn Thu, Aug 15, 2024 at 5:34 PM Nathan Bossart <[email protected]> wrote:> Now to 0002...>> -               if (-element > nelements)> +               if (element == PG_INT32_MIN || -element > nelements)>> This seems like a good opportunity to use our new pg_abs_s32() function,> and godbolt.org [0] seems to indicate that it might produce better code,> too (at least to my eye).This updated version LGTM, I agree it's a good use of pg_abs_s32().Thanks,Joseph Koshakow", "msg_date": "Thu, 15 Aug 2024 22:49:46 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Thu, Aug 15, 2024 at 10:49:46PM -0400, Joseph Koshakow wrote:\n> This updated version LGTM, I agree it's a good use of pg_abs_s32().\n\nCommitted.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 16 Aug 2024 11:52:16 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Hello Nathan and Joe,\n\n16.08.2024 19:52, Nathan Bossart wrote:\n> On Thu, Aug 15, 2024 at 10:49:46PM -0400, Joseph Koshakow wrote:\n>> This updated version LGTM, I agree it's a good use of pg_abs_s32().\n> Committed.\n\nThank you for working on that issue!\n\nI've tried `make check` with CC=gcc-13 CPPFLAGS=\"-O0 -ftrapv\" and got a\nserver crash:\n2024-08-16 17:14:36.102 UTC postmaster[1982703] LOG:   (PID 1983867) was terminated by signal 6: Aborted\n2024-08-16 17:14:36.102 UTC postmaster[1982703] DETAIL:  Failed process was running: select '[]'::jsonb ->> -2147483648;\nwith the stack trace\n...\n#5  0x0000556aec224a11 in __negvsi2 ()\n#6  0x0000556aec046238 in jsonb_array_element_text (fcinfo=0x556aedd70240) at jsonfuncs.c:993\n#7  0x0000556aebc90b68 in ExecInterpExpr (state=0x556aedd70160, econtext=0x556aedd706a0, isnull=0x7ffdf82211e4)\n     at execExprInterp.c:765\n...\n(gdb) f 6\n#6  0x0000556aec046238 in jsonb_array_element_text (fcinfo=0x556aedd70240) at jsonfuncs.c:993\n993                     if (-element > nelements)\n(gdb) p element\n$1 = 
-2147483648\n\nSp it looks like jsonb_array_element_text() still needs the same\ntreatment as jsonb_array_element().\n\nMoreover, I tried to use \"-ftrapv\" on 32-bit Debian and came across\nanother failure:\nselect '9223372036854775807'::int8 * 2147483648::int8;\nserver closed the connection unexpectedly\n...\n#4  0xb722226a in __GI_abort () at ./stdlib/abort.c:79\n#5  0x004cb2e1 in __mulvdi3.cold ()\n#6  0x00abe7ab in pg_mul_s64_overflow (a=9223372036854775807, b=2147483648, result=0xbff1da68)\n     at ../../../../src/include/common/int.h:264\n#7  0x00abfbff in int8mul (fcinfo=0x14d9d04) at int8.c:496\n#8  0x00782675 in ExecInterpExpr (state=0x14d9c4c, econtext=0x14da15c, isnull=0xbff1dc3f) at execExprInterp.c:765\n\nWhilst\nselect '9223372036854775807'::int8 * 2147483647::int8;\nemits\nERROR:  bigint out of range\n\nI've also discovered another trap-triggering case for a 64-bit platform:\nselect 1 union all select 1 union all select 1 union all select 1 union all\nselect 1 union all select 1 union all select 1 union all select 1 union all\nselect 1 union all select 1 union all select 1 union all select 1 union all\nselect 1 union all select 1 union all select 1 union all select 1 union all\nselect 1 union all select 1 union all select 1 union all select 1 union all\nselect 1 union all select 1 union all select 1 union all select 1 union all\nselect 1 union all select 1 union all select 1 union all select 1 union all\nselect 1 union all select 1 union all select 1;\n\nserver closed the connection unexpectedly\n...\n#5  0x00005576cfb1c9f3 in __negvdi2 ()\n#6  0x00005576cf627c68 in bms_singleton_member (a=0x5576d09f7fb0) at bitmapset.c:691\n#7  0x00005576cf72be0f in fix_append_rel_relids (root=0x5576d09df198, varno=31, subrelids=0x5576d09f7fb0)\n     at prepjointree.c:3830\n#8  0x00005576cf7278c2 in pull_up_simple_subquery (root=0x5576d09df198, jtnode=0x5576d09f7470, rte=0x5576d09de300,\n     lowest_outer_join=0x0, containing_appendrel=0x5576d09f7368) at prepjointree.c:1277\n...\n(gdb) f 6\n#6  0x00005576cf627c68 in bms_singleton_member (a=0x5576d09f7fb0) at bitmapset.c:691\n691                             if (result >= 0 || HAS_MULTIPLE_ONES(w))\n(gdb) p/x w\n$1 = 0x8000000000000000\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 16 Aug 2024 21:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Fri, Aug 16, 2024 at 09:00:00PM +0300, Alexander Lakhin wrote:\n> Sp it looks like jsonb_array_element_text() still needs the same\n> treatment as jsonb_array_element().\n\nD'oh. I added a test for that but didn't actually fix the code. 
I think\nwe just need something like the following.\n\ndiff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c\nindex 1f8ea51e6a..69cdd84393 100644\n--- a/src/backend/utils/adt/jsonfuncs.c\n+++ b/src/backend/utils/adt/jsonfuncs.c\n@@ -990,7 +990,7 @@ jsonb_array_element_text(PG_FUNCTION_ARGS)\n {\n uint32 nelements = JB_ROOT_COUNT(jb);\n\n- if (-element > nelements)\n+ if (pg_abs_s32(element) > nelements)\n PG_RETURN_NULL();\n else\n element += nelements;\n\n> Moreover, I tried to use \"-ftrapv\" on 32-bit Debian and came across\n> another failure:\n> select '9223372036854775807'::int8 * 2147483648::int8;\n> server closed the connection unexpectedly\n> ...\n> #4� 0xb722226a in __GI_abort () at ./stdlib/abort.c:79\n> #5� 0x004cb2e1 in __mulvdi3.cold ()\n> #6� 0x00abe7ab in pg_mul_s64_overflow (a=9223372036854775807, b=2147483648, result=0xbff1da68)\n> ��� at ../../../../src/include/common/int.h:264\n> #7� 0x00abfbff in int8mul (fcinfo=0x14d9d04) at int8.c:496\n> #8� 0x00782675 in ExecInterpExpr (state=0x14d9c4c, econtext=0x14da15c, isnull=0xbff1dc3f) at execExprInterp.c:765\n\nHm. It looks like that is pointing to __builtin_mul_overflow(), which\nseems strange.\n\n> #6� 0x00005576cf627c68 in bms_singleton_member (a=0x5576d09f7fb0) at bitmapset.c:691\n> 691���������������������������� if (result >= 0 || HAS_MULTIPLE_ONES(w))\n\nAt a glance, this appears to be caused by the RIGHTMOST_ONE macro:\n\n\t#define RIGHTMOST_ONE(x) ((signedbitmapword) (x) & -((signedbitmapword) (x)))\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 16 Aug 2024 13:35:01 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Fri, Aug 16, 2024 at 01:35:01PM -0500, Nathan Bossart wrote:\n> On Fri, Aug 16, 2024 at 09:00:00PM +0300, Alexander Lakhin wrote:\n>> #6� 0x00005576cf627c68 in bms_singleton_member (a=0x5576d09f7fb0) at bitmapset.c:691\n>> 691���������������������������� if (result >= 0 || HAS_MULTIPLE_ONES(w))\n> \n> At a glance, this appears to be caused by the RIGHTMOST_ONE macro:\n> \n> \t#define RIGHTMOST_ONE(x) ((signedbitmapword) (x) & -((signedbitmapword) (x)))\n\nI believe hand-rolling the two's complement calculation should be\nsufficient to avoid depending on -fwrapv here. godbolt.org indicates that\nit produces roughly the same code, too.\n\ndiff --git a/src/backend/nodes/bitmapset.c b/src/backend/nodes/bitmapset.c\nindex cd05c642b0..d37a997c0e 100644\n--- a/src/backend/nodes/bitmapset.c\n+++ b/src/backend/nodes/bitmapset.c\n@@ -67,7 +67,7 @@\n * we get zero.\n *----------\n */\n-#define RIGHTMOST_ONE(x) ((signedbitmapword) (x) & -((signedbitmapword) (x)))\n+#define RIGHTMOST_ONE(x) ((bitmapword) (x) & (~((bitmapword) (x)) + 1))\n\n #define HAS_MULTIPLE_ONES(x) ((bitmapword) RIGHTMOST_ONE(x) != (x))\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 16 Aug 2024 13:56:05 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Fri, Aug 16, 2024 at 01:35:01PM -0500, Nathan Bossart wrote:\n> On Fri, Aug 16, 2024 at 09:00:00PM +0300, Alexander Lakhin wrote:\n>> Sp it looks like jsonb_array_element_text() still needs the same\n>> treatment as jsonb_array_element().\n> \n> D'oh. I added a test for that but didn't actually fix the code. 
I think\n> we just need something like the following.\n> \n> diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c\n> index 1f8ea51e6a..69cdd84393 100644\n> --- a/src/backend/utils/adt/jsonfuncs.c\n> +++ b/src/backend/utils/adt/jsonfuncs.c\n> @@ -990,7 +990,7 @@ jsonb_array_element_text(PG_FUNCTION_ARGS)\n> {\n> uint32 nelements = JB_ROOT_COUNT(jb);\n> \n> - if (-element > nelements)\n> + if (pg_abs_s32(element) > nelements)\n> PG_RETURN_NULL();\n> else\n> element += nelements;\n\nI've committed this one.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 16 Aug 2024 15:11:52 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Hi,\n\nI wanted to take this opportunity to provide a brief summary of\noutstanding work.\n\n> Also there are several trap-producing cases with date types:\n> SELECT to_date('100000000', 'CC');\n> SELECT to_timestamp('1000000000,999', 'Y,YYY');\n> SELECT make_date(-2147483648, 1, 1);\n\nThis is resolved with Matthew's patches, which I've rebased, squashed\nand attached to this email. They still require a review.\n\n----\n\n> SET temp_buffers TO 1000000000;\n>\n> CREATE TEMP TABLE t(i int PRIMARY KEY);\n> INSERT INTO t VALUES(1);\n>\n> #4 0x00007f385cdd37f3 in __GI_abort () at ./stdlib/abort.c:79\n> #5 0x00005620071c4f51 in __addvsi3 ()\n> #6 0x0000562007143f3c in init_htab (hashp=0x562008facb20,\nnelem=610070812) at dynahash.c:720\n>\n> (gdb) f 6\n> #6 0x0000560915207f3c in init_htab (hashp=0x560916039930,\nnelem=1000000000) at dynahash.c:720\n> 720 hctl->high_mask = (nbuckets << 1) - 1;\n> (gdb) p nbuckets\n> $1 = 1073741824\n\nI've taken a look at this and my current proposal is to convert\n`nbuckets` to 64 bit integer which would prevent the overflow. 
I'm\nhoping to look into if this is feasible soon.\n\n----\n\n> CREATE FUNCTION check_foreign_key () RETURNS trigger AS .../refint.so'\nLANGUAGE C;\n> CREATE TABLE t (i int4 NOT NULL);\n> CREATE TRIGGER check_fkey BEFORE DELETE ON t FOR EACH ROW EXECUTE\nPROCEDURE\n> check_foreign_key (2147483647, 'cascade', 'i', \"ft\", \"i\");\n> INSERT INTO t VALUES (1);\n> DELETE FROM t;\n>\n> #4 0x00007f57f0bef7f3 in __GI_abort () at ./stdlib/abort.c:79\n> #5 0x00007f57f1671351 in __addvsi3 () from .../src/test/regress/refint.so\n> #6 0x00007f57f1670234 in check_foreign_key (fcinfo=0x7ffebf523650) at\nrefint.c:321\n>\n> (gdb) f 6\n> #6 0x00007f3400ef9234 in check_foreign_key (fcinfo=0x7ffd6e16a600) at\nrefint.c:321\n> 321 nkeys = (nargs - nrefs) / (nrefs + 1);\n> (gdb) p nargs\n> $1 = 3\n> (gdb) p nrefs\n> $2 = 2147483647\n\nI have not looked into this yet, though I was unable to reproduce it\nimmediately.\n\n test=# CREATE FUNCTION check_foreign_key () RETURNS trigger AS\n'.../refint.so' LANGUAGE C;\n ERROR: could not access file \".../refint.so\": No such file or directory\n\nI think I just have to play around with the path.\n\n----\n\n>> Moreover, I tried to use \"-ftrapv\" on 32-bit Debian and came across\n>> another failure:\n>> select '9223372036854775807'::int8 * 2147483648::int8;\n>> server closed the connection unexpectedly\n>> ...\n>> #4 0xb722226a in __GI_abort () at ./stdlib/abort.c:79\n>> #5 0x004cb2e1 in __mulvdi3.cold ()\n>> #6 0x00abe7ab in pg_mul_s64_overflow (a=9223372036854775807,\nb=2147483648, result=0xbff1da68)\n>> at ../../../../src/include/common/int.h:264\n>> #7 0x00abfbff in int8mul (fcinfo=0x14d9d04) at int8.c:496\n>> #8 0x00782675 in ExecInterpExpr (state=0x14d9c4c, econtext=0x14da15c,\nisnull=0xbff1dc3f) at execExprInterp.c:765\n>\n> Hm. It looks like that is pointing to __builtin_mul_overflow(), which\n> seems strange.\n\nAgreed that this looks strange. The docs [0] seem to indicate that this\nshouldn't happen.\n\n> These built-in functions promote the first two operands into infinite\n> precision signed type and perform addition on those promoted\n> operands.\n...\n> As the addition is performed in infinite signed precision, these\n> built-in functions have fully defined behavior for all argument\n> values.\n...\n> The first built-in function allows arbitrary integral types for\n> operands and the result type must be pointer to some integral type\n> other than enumerated or boolean type\n\nThe docs for the mul functions say that they behave the same as\naddition. Alexander, is it possible that you're compiling with\nsomething other than GCC?\n\n----\n\n>>> #6 0x00005576cf627c68 in bms_singleton_member (a=0x5576d09f7fb0) at\nbitmapset.c:691\n>>> 691 if (result >= 0 || HAS_MULTIPLE_ONES(w))\n>>\n>> At a glance, this appears to be caused by the RIGHTMOST_ONE macro:\n>>\n>> #define RIGHTMOST_ONE(x) ((signedbitmapword) (x) &\n-((signedbitmapword) (x)))\n>\n> I believe hand-rolling the two's complement calculation should be\n> sufficient to avoid depending on -fwrapv here. 
godbolt.org indicates that\n> it produces roughly the same code, too.\n>\n> diff --git a/src/backend/nodes/bitmapset.c b/src/backend/nodes/bitmapset.c\n> index cd05c642b0..d37a997c0e 100644\n> --- a/src/backend/nodes/bitmapset.c\n> +++ b/src/backend/nodes/bitmapset.c\n> @@ -67,7 +67,7 @@\n> * we get zero.\n> *----------\n> */\n> -#define RIGHTMOST_ONE(x) ((signedbitmapword) (x) & -((signedbitmapword)\n(x)))\n> +#define RIGHTMOST_ONE(x) ((bitmapword) (x) & (~((bitmapword) (x)) + 1))\n\nThis approach seems to resolve the issue locally for me, and I think it\nfalls out cleanly from the comment in the code above.\n\nThanks,\nJoseph Koshakow\n\n[0] https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins.html", "msg_date": "Sat, 17 Aug 2024 15:16:01 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": ">>> SET temp_buffers TO 1000000000;\n>>>\n>>> CREATE TEMP TABLE t(i int PRIMARY KEY);\n>>> INSERT INTO t VALUES(1);\n>>>\n>>> #4 0x00007f385cdd37f3 in __GI_abort () at ./stdlib/abort.c:79\n>>> #5 0x00005620071c4f51 in __addvsi3 ()\n>>> #6 0x0000562007143f3c in init_htab (hashp=0x562008facb20,\nnelem=610070812) at dynahash.c:720\n>>>\n>>> (gdb) f 6\n>>> #6 0x0000560915207f3c in init_htab (hashp=0x560916039930,\nnelem=1000000000) at dynahash.c:720\n>>> 720 hctl->high_mask = (nbuckets << 1) - 1;\n>>> (gdb) p nbuckets\n>>> $1 = 1073741824\n>>\n>> Here's what it looks like is happening:\n>>\n>> 1. When inserting into the table, we create a new dynamic hash table\n>> and set `nelem` equal to `temp_buffers`, which is 1000000000.\n>>\n>> 2. `nbuckets` is then set to the the next highest power of 2 from\n>> `nelem`, which is 1073741824.\n>>\n>> /*\n>> * Allocate space for the next greater power of two number of\nbuckets,\n>> * assuming a desired maximum load factor of 1.\n>> */\n>> nbuckets = next_pow2_int(nelem);\n>>\n>> 3. Shift `nbuckets` to the left by 1. This would equal 2147483648,\n>> which is larger than `INT_MAX`, which causes an overflow.\n>>\n>> hctl->high_mask = (nbuckets << 1) - 1;\n>>\n>> The max value allowed for `temp_buffers` is `INT_MAX / 2` (1073741823),\n>> So any value of `temp_buffers` in the range (536870912, 1073741823]\n>> would cause this overflow. Without `-ftrapv`, `nbuckets` would wrap\n>> around to -2147483648, which is likely to cause all sorts of havoc, I'm\n>> just not sure what exactly.\n>>\n>> Also, `nbuckets = next_pow2_int(nelem);`, by itself is a bit sketchy\n>> considering that `nelem` is a `long` and `nbuckets` is an `int`.\n>> Potentially, the fix here is to just convert `nbuckets` to a `long`. >>\nI plan on checking if that's feasible.\n> Yeah, the minimum value that triggers the trap is 536870913 and the\nmaximum\n> accepted is 1073741823.\n>\n> Without -ftrapv, hctl->high_mask is set to 2147483647 on my machine,\n> when nbuckets is 1073741824, and the INSERT apparently succeeds.\n\n\n> I've taken a look at this and my current proposal is to convert\n> `nbuckets` to 64 bit integer which would prevent the overflow. I'm\n> hoping to look into if this is feasible soon.\n\nI've both figured out why the INSERT still succeeds and a simple\nsolution to this. After `nbuckets` wraps around to -2147483648, we\nsubtract 1 which causes it to wrap back around to 2147483647. 
Which\nexplains the result seen by Alexander.\n\nBy the way,\n\n>> Also, `nbuckets = next_pow2_int(nelem);`, by itself is a bit sketchy\n>> considering that `nelem` is a `long` and `nbuckets` is an `int`.\n\nIt turns out I was wrong about this, `next_pow2_int` will always return\na value that fits into an `int`.\n\n hctl->high_mask = (nbuckets << 1) - 1;\n\nThis calculation is used to ultimately populate the field\n`uint32 high_mask`. I'm not very familiar with this hash table\nimplementation and I'm not entirely sure what it would take to convert\nthis to a `uint64`, but from poking around it looks like it would have\na huge blast radius.\n\nThe largest possible (theoretical) value for `nbuckets` is\n`1073741824`, the largest power of 2 that fits into an `int`. So, the\nlargest possible value for `nbuckets << 1` is `2147483648`. This can\nfully fit in a `uint32`, so the simple fix for this case is to cast\n`nbuckets` to a `uint32` before shifting. I've attached this fix,\nAlexander if you have time I would appreciate if you were able to test\nit.\n\nI noticed another potential issue with next_pow2_int. The\nimplementation is in dynahash.c and is as follows\n\n /* calculate first power of 2 >= num, bounded to what will fit in an\nint */\n static int\n next_pow2_int(long num)\n {\n if (num > INT_MAX / 2)\n num = INT_MAX / 2;\n return 1 << my_log2(num);\n }\n\nI'm pretty sure that `INT_MAX / 2` is not a power of 2, as `INT_MAX`\nis not a power of 2. It should be `num = INT_MAX / 2 + 1;` I've also\nattached a patch with this fix.\n\nThanks,\nJoseph Koshakow", "msg_date": "Sat, 17 Aug 2024 17:52:48 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Hello Joe,\n\n17.08.2024 22:16, Joseph Koshakow wrote:\n> Hi,\n>\n> I wanted to take this opportunity to provide a brief summary of\n> outstanding work.\n>\n> > Also there are several trap-producing cases with date types:\n> > SELECT to_date('100000000', 'CC');\n> > SELECT to_timestamp('1000000000,999', 'Y,YYY');\n> > SELECT make_date(-2147483648, 1, 1);\n>\n> This is resolved with Matthew's patches, which I've rebased, squashed\n> and attached to this email. They still require a review.\n>\n\nI've filed a separate bug report about date/time conversion issues\nyesterday. Maybe it was excessive, but it also demonstrates other\nproblematic cases:\nhttps://www.postgresql.org/message-id/18585-db646741dd649abd%40postgresql.org\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 18 Aug 2024 06:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "18.08.2024 00:52, Joseph Koshakow wrote:\n> The largest possible (theoretical) value for `nbuckets` is\n> `1073741824`, the largest power of 2 that fits into an `int`. So, the\n> largest possible value for `nbuckets << 1` is `2147483648`. This can\n> fully fit in a `uint32`, so the simple fix for this case is to cast\n> `nbuckets` to a `uint32` before shifting. 
I've attached this fix,\n> Alexander if you have time I would appreciate if you were able to test\n> it.\n>\n\nYes, I've tested v25-0002-*.patch and can confirm that this fix works\nas well.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 18 Aug 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "I've combined all the current proposed changes into one patch. I've also\nintroduced signed versions of the negation functions into int.h to avoid\nrelying on multiplication.\n\n-- \nnathan", "msg_date": "Tue, 20 Aug 2024 16:21:33 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "Hello Nathan,\n\n21.08.2024 00:21, Nathan Bossart wrote:\n> I've combined all the current proposed changes into one patch. I've also\n> introduced signed versions of the negation functions into int.h to avoid\n> relying on multiplication.\n>\n\nThank you for taking care of this!\n\nI'd like to add some info to show how big the iceberg is.\n\nBeside other trap-triggered places in date/time conversion functions, I\nalso discovered:\n1)\nCREATE TABLE jt(j jsonb); INSERT INTO jt VALUES('[]'::jsonb);\nUPDATE jt SET j[0][-2147483648] = '0';\n\n#4  0x00007f15ab00d7f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x00005570113b2591 in __addvsi3 ()\n#6  0x00005570111d55a0 in push_null_elements (ps=0x7fff37385fb8, num=-2147483648) at jsonfuncs.c:1707\n#7  0x00005570111d5749 in push_path (st=0x7fff37385fb8, level=0, path_elems=0x55701300c880, path_nulls=0x55701300d520,\n     path_len=2, newval=0x7fff37386030) at jsonfuncs.c:1770\n\nThe \"problematic\" code:\n         while (num-- > 0)\n                 *ps = 0;\nlooks innocent to me, but is not for good enough for -ftrapv.\nI think there could be other similar places and this raises two questions:\ncan they be reached with INT_MIN and what to do if so?\n\nBy the way, the same can be seen with CC=clang CPPFLAGS=\"-ftrapv\". 
Please\nlook at the code produced by both compilers for x86_64:\nhttps://godbolt.org/z/vjszjf4b3\n(clang generates ud1, while gcc uses call __addvsi3)\n\nThe aside question is: should jsonb subscripting accept negative indexes\nwhen the target array is not initialized yet?\n\nCompare:\nCREATE TABLE jt(j jsonb); INSERT INTO jt VALUES('[]'::jsonb);\nUPDATE jt SET j[0][-1] = '0';\nSELECT * FROM jt;\n    j\n-------\n  [[0]]\n\nwith\nCREATE TABLE jt(j jsonb); INSERT INTO jt VALUES('[[]]'::jsonb);\nUPDATE jt SET j[0][-1] = '0';\nERROR:  path element at position 2 is out of range: -1\n\n2)\nSELECT x, lag(x, -2147483648) OVER (ORDER BY x) FROM (SELECT 1) x;\n\n#4  0x00007fa7d00f47f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x00005623a7336851 in __negvsi2 ()\n#6  0x00005623a726ae35 in leadlag_common (fcinfo=0x7ffd59cca950, forward=false, withoffset=true, withdefault=false)\n     at windowfuncs.c:551\n#7  0x00005623a726af19 in window_lag_with_offset (fcinfo=0x7ffd59cca950) at windowfuncs.c:594\n\nAs to 32-bit Debian, I wrote about before, I use gcc (Debian 12.2.0-14).\nPlease look at the demo code (and it's assembly, produced with\ngcc -S -ftrapv t.c) attached:\ngcc -Wall -Wextra -fsanitize=signed-integer-overflow -Wstrict-overflow=5 \\\n  -O0 -ftrapv t.c -o t && ./t\nAborted (core dumped)\n\n#4  0xb762226a in __GI_abort () at ./stdlib/abort.c:79\n#5  0x00495077 in __mulvdi3.cold ()\n#6  0x00495347 in pg_mul_s64_overflow ()\n\n(It looks like -Wstrict-overflow can't help with the static analysis\ndesired in such cases.)\n\nMoreover, I got `make check` failed with -ftrapv on aarch64 (using gcc 8.3)\nas follows:\n#1  0x0000007e1edc48e8 in __GI_abort () at abort.c:79\n#2  0x0000005ee66b71cc in __subvdi3 ()\n#3  0x0000005ee6560e24 in int8gcd_internal (arg1=-9223372036854775808, arg2=1) at int8.c:623\n#4  0x0000005ee62f576c in ExecInterpExpr (state=0x5eeaba9d18, econtext=0x5eeaba95f0, isnull=<optimized out>)\n     at execExprInterp.c:770\n...\n#13 0x0000005ee64e5d84 in exec_simple_query (\n     query_string=query_string@entry=0x5eeaac7500 \"SELECT a, b, gcd(a, b), gcd(a, -b), gcd(b, a), gcd(-b, a)\\nFROM \n(VALUES (0::int8, 0::int8),\\n\", ' ' <repeats 13 times>, \"(0::int8, 29893644334::int8),\\n\", ' ' <repeats 13 times>, \n\"(288484263558::int8, 29893644334::int8),\\n\", ' ' <repeats 12 times>...) at postgres.c:1284\n\nSo I wonder whether enabling -ftrapv can really help us prepare the code\nfor -fno-wrapv?\n\nBest regards,\nAlexander", "msg_date": "Wed, 21 Aug 2024 10:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Wed, Aug 21, 2024 at 10:00:00AM +0300, Alexander Lakhin wrote:\n> I'd like to add some info to show how big the iceberg is.\n\nHm. It seems pretty clear that removing -fwrapv won't be happening anytime\nsoon. I don't mind trying to fix a handful of cases from time to time, but\nunless there's a live bug, I'm probably not going to treat this stuff as\nhigh priority.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 21 Aug 2024 10:37:25 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dependence on integer wrapping" }, { "msg_contents": "On Wed, Aug 21, 2024 at 11:37 AM Nathan Bossart <[email protected]>\nwrote:\n>\n> Hm. It seems pretty clear that removing -fwrapv won't be happening\nanytime\n> soon. 
I don't mind trying to fix a handful of cases from time to time,\nbut\n> unless there's a live bug, I'm probably not going to treat this stuff as\n> high priority.\n\nI think I'm also going to take a step back because I'm a bit\nfatigued on the overflow work. My goal here wasn't necessarily to\nremove -fwrapv, because I think it will always be a useful safeguard.\nInstead I wanted to add -ftrapv to builds with asserts enabled to try\nand prevent future overflow based bugs. Though, it looks like that\nwon't happen anytime soon either.\n\nFWIW, Matthew's patch actually does resolve a bug with `to_timestamp`\nand `to_date`. It converts the following incorrect queries\n\n test=# SELECT to_timestamp('2147483647,999', 'Y,YYY');\n to_timestamp\n ---------------------------------\n 0001-01-01 00:00:00-04:56:02 BC\n (1 row)\n\n test=# SELECT to_date('-2147483648', 'CC');\n to_date\n ------------\n 0001-01-01\n (1 row)\n\ninto errors\n\n test=# SELECT to_timestamp('2147483647,999', 'Y,YYY');\n ERROR: invalid input string for \"Y,YYY\"\n test=# SELECT to_date('-2147483648', 'CC');\n ERROR: date out of range: \"-2147483648\"\n\nSo, it might be worth committing only his changes before moving on.\n\n\nThanks,\nJoseph Koshakow\n\nOn Wed, Aug 21, 2024 at 11:37 AM Nathan Bossart <[email protected]> wrote:>> Hm.  It seems pretty clear that removing -fwrapv won't be happening anytime> soon.  I don't mind trying to fix a handful of cases from time to time, but> unless there's a live bug, I'm probably not going to treat this stuff as> high priority.I think I'm also going to take a step back because I'm a bitfatigued on the overflow work. My goal here wasn't necessarily toremove -fwrapv, because I think it will always be a useful safeguard.Instead I wanted to add -ftrapv to builds with asserts enabled to tryand prevent future overflow based bugs. Though, it looks like thatwon't happen anytime soon either.FWIW, Matthew's patch actually does resolve a bug with `to_timestamp`and `to_date`. It converts the following incorrect queries    test=# SELECT to_timestamp('2147483647,999', 'Y,YYY');              to_timestamp               ---------------------------------     0001-01-01 00:00:00-04:56:02 BC    (1 row)        test=# SELECT to_date('-2147483648', 'CC');      to_date       ------------     0001-01-01    (1 row)into errors    test=# SELECT to_timestamp('2147483647,999', 'Y,YYY');    ERROR:  invalid input string for \"Y,YYY\"    test=# SELECT to_date('-2147483648', 'CC');    ERROR:  date out of range: \"-2147483648\"So, it might be worth committing only his changes before moving on.Thanks,Joseph Koshakow", "msg_date": "Sat, 24 Aug 2024 08:44:40 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dependence on integer wrapping" } ]
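As a recap of the two recurring fixes in the thread above, the pattern is to compute an absolute value in unsigned arithmetic (so the most negative signed integer needs no signed negation) and to hand-roll the two's complement when isolating the rightmost set bit. The sketch below is a minimal, self-contained illustration in the spirit of pg_abs_s32() and the proposed RIGHTMOST_ONE rewrite; it is not the committed PostgreSQL code, and the helper names, the 64-bit bitmapword typedef, and the demo main() are assumptions made only for this illustration.

/*
 * Minimal sketch (not the committed code) of the overflow-safe idioms
 * discussed in the thread.  sketch_abs_s32() returns |a| as an unsigned
 * value; the subtraction is done in uint32_t, which wraps by definition,
 * so the result is well defined even for INT32_MIN.  The macro keeps only
 * the rightmost set bit, writing x & (-x) as x & (~x + 1) in unsigned
 * arithmetic so no signed negation (and hence no -fwrapv dependence) is
 * involved.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t bitmapword;	/* assumed 64-bit word for the sketch */

static inline uint32_t
sketch_abs_s32(int32_t a)
{
	/* defined for every int32_t input, including INT32_MIN */
	return (a < 0) ? (uint32_t) 0 - (uint32_t) a : (uint32_t) a;
}

/* isolate the rightmost one bit without negating a signed word */
#define SKETCH_RIGHTMOST_ONE(x) ((bitmapword) (x) & (~((bitmapword) (x)) + 1))

int
main(void)
{
	/* prints 2147483648 and 8 (24 = 0b11000, rightmost one bit is 8) */
	printf("%u %llu\n",
		   (unsigned) sketch_abs_s32(INT32_MIN),
		   (unsigned long long) SKETCH_RIGHTMOST_ONE((bitmapword) 24));
	return 0;
}

A caller would then compare the unsigned absolute value against a count, e.g. pg_abs_s32(element) > nelements instead of -element > nelements, which is exactly the shape of the jsonb_array_element_text() fix quoted earlier in the thread.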
[ { "msg_contents": "hi.\nhttps://www.postgresql.org/docs/devel/error-style-guide.html#ERROR-STYLE-GUIDE-FORMATTING\n\"Don't end a message with a newline.\"\n\n\naccidentally, I found some error messages in the function\nCheckMyDatabase spread into two lines.\nso i try to consolidate them into one line.", "msg_date": "Mon, 10 Jun 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "CheckMyDatabase some error messages in two lines." }, { "msg_contents": "On Mon, Jun 10, 2024 at 08:00:00AM +0800, jian he wrote:\n> https://www.postgresql.org/docs/devel/error-style-guide.html#ERROR-STYLE-GUIDE-FORMATTING\n> \"Don't end a message with a newline.\"\n> \n> \n> accidentally, I found some error messages in the function\n> CheckMyDatabase spread into two lines.\n> so i try to consolidate them into one line.\n\n> -\t\t\t\t errdetail(\"The database was initialized with LC_COLLATE \\\"%s\\\", \"\n> -\t\t\t\t\t\t \" which is not recognized by setlocale().\", collate),\n> +\t\t\t\t errdetail(\"The database was initialized with LC_COLLATE \\\"%s\\\", which is not recognized by setlocale().\", collate),\n\nBoth approaches produce the same message. With the existing code, the two\nstring literals will be concatenated without newlines. It is probably\nsplit into two lines to avoid a long line in the source code.\n\n-- \nnathan\n\n\n", "msg_date": "Sun, 9 Jun 2024 21:02:37 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CheckMyDatabase some error messages in two lines." }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Mon, Jun 10, 2024 at 08:00:00AM +0800, jian he wrote:\n>> -\t\t\t\t errdetail(\"The database was initialized with LC_COLLATE \\\"%s\\\", \"\n>> -\t\t\t\t\t\t \" which is not recognized by setlocale().\", collate),\n>> +\t\t\t\t errdetail(\"The database was initialized with LC_COLLATE \\\"%s\\\", which is not recognized by setlocale().\", collate),\n\n> Both approaches produce the same message. With the existing code, the two\n> string literals will be concatenated without newlines. It is probably\n> split into two lines to avoid a long line in the source code.\n\nNo doubt. People have done it both ways in the past, but I think\ncurrently there's a weak consensus in favor of using one line for\nsuch messages even when it runs past 80 columns, mainly because\nthat makes it easier to grep the source code for a message text.\n\nBut: I don't see too much value in changing this particular instance,\nbecause the line break is in a place where it would not likely cause\nyou to miss finding the line. You might grep for the first part of\nthe string or the second part, but probably not for \", which is not\".\nIf the line break were in the middle of a phrase, there'd be more\nargument for collapsing it out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Jun 2024 22:12:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CheckMyDatabase some error messages in two lines." }, { "msg_contents": "On Sun, Jun 09, 2024 at 10:12:53PM -0400, Tom Lane wrote:\n> No doubt. People have done it both ways in the past, but I think\n> currently there's a weak consensus in favor of using one line for\n> such messages even when it runs past 80 columns, mainly because\n> that makes it easier to grep the source code for a message text.\n\nI recall the same consensus here. 
Greppability matters across the\nboard.\n\n> But: I don't see too much value in changing this particular instance,\n> because the line break is in a place where it would not likely cause\n> you to miss finding the line. You might grep for the first part of\n> the string or the second part, but probably not for \", which is not\".\n> If the line break were in the middle of a phrase, there'd be more\n> argument for collapsing it out.\n\nNot sure these ones are worth it, either, so I'd let them be.\n--\nMichael", "msg_date": "Tue, 11 Jun 2024 08:22:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CheckMyDatabase some error messages in two lines." } ]
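To make the greppability trade-off from this thread concrete, here is a small self-contained illustration in plain C rather than the backend's ereport()/errdetail() machinery; the message text is the one quoted in the thread, while the function name, the stderr target, and the sample argument are placeholders invented for the example.

/*
 * Illustration only: with the format string kept on a single source
 * line, grepping for any phrase of the message (e.g. "not recognized by
 * setlocale") lands directly on the call site, at the cost of a source
 * line longer than 80 columns.  The string itself does not end with a
 * newline, matching the style-guide rule quoted at the top of the
 * thread; the newline is emitted separately here.
 */
#include <stdio.h>

static void
report_locale_problem(const char *collate)
{
	fprintf(stderr, "The database was initialized with LC_COLLATE \"%s\", which is not recognized by setlocale().", collate);
	fputc('\n', stderr);
}

int
main(void)
{
	report_locale_problem("xx_XX.UTF-8");	/* placeholder locale name */
	return 0;
}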
[ { "msg_contents": "Hi hackers,\n\nDuring the last pgconf.dev I attended Robert’s presentation about autovacuum and\nit made me remember of an idea I had some time ago: $SUBJECT\n\nPlease find attached a patch doing so by adding a new field (aka \"time_delayed\")\nto the pg_stat_progress_vacuum view. \n\nCurrently one can change [autovacuum_]vacuum_cost_delay and\n[auto vacuum]vacuum_cost_limit but has no reliable way to measure the impact of\nthe changes on the vacuum duration: one could observe the vacuum duration\nvariation but the correlation to the changes is not accurate (as many others\nfactors could impact the vacuum duration (load on the system, i/o latency,...)).\n\nThis new field reports the time that the vacuum has to sleep due to cost delay:\nit could be useful to 1) measure the impact of the current cost_delay and\ncost_limit settings and 2) when experimenting new values (and then help for\ndecision making for those parameters).\n\nThe patch is relatively small thanks to the work that has been done in\nf1889729dd (to allow parallel worker to report to the leader).\n\n[1]: https://www.pgevents.ca/events/pgconfdev2024/schedule/session/29-how-autovacuum-goes-wrong-and-can-we-please-make-it-stop-doing-that/\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 10 Jun 2024 06:05:13 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On Mon, Jun 10, 2024 at 06:05:13AM +0000, Bertrand Drouvot wrote:\n> During the last pgconf.dev I attended Robert�s presentation about autovacuum and\n> it made me remember of an idea I had some time ago: $SUBJECT\n\nThis sounds like useful information to me. I wonder if we should also\nsurface the effective cost limit for each autovacuum worker.\n\n> Currently one can change [autovacuum_]vacuum_cost_delay and\n> [auto vacuum]vacuum_cost_limit but has no reliable way to measure the impact of\n> the changes on the vacuum duration: one could observe the vacuum duration\n> variation but the correlation to the changes is not accurate (as many others\n> factors could impact the vacuum duration (load on the system, i/o latency,...)).\n\nIIUC you'd need to get information from both pg_stat_progress_vacuum and\npg_stat_activity in order to know what percentage of time was being spent\nin cost delay. Is that how you'd expect for this to be used in practice?\n\n> \t\tpgstat_report_wait_start(WAIT_EVENT_VACUUM_DELAY);\n> \t\tpg_usleep(msec * 1000);\n> \t\tpgstat_report_wait_end();\n> +\t\t/* Report the amount of time we slept */\n> +\t\tif (VacuumSharedCostBalance != NULL)\n> +\t\t\tpgstat_progress_parallel_incr_param(PROGRESS_VACUUM_TIME_DELAYED, msec);\n> +\t\telse\n> +\t\t\tpgstat_progress_incr_param(PROGRESS_VACUUM_TIME_DELAYED, msec);\n\nHm. Should we measure the actual time spent sleeping, or is a rough\nestimate good enough? I believe pg_usleep() might return early (e.g., if\nthe process is signaled) or late, so this field could end up being\ninaccurate, although probably not by much. 
If we're okay with millisecond\ngranularity, my first instinct is that what you've proposed is fine, but I\nfigured I'd bring it up anyway.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 10 Jun 2024 10:36:42 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 10, 2024 at 10:36:42AM -0500, Nathan Bossart wrote:\n> On Mon, Jun 10, 2024 at 06:05:13AM +0000, Bertrand Drouvot wrote:\n> > During the last pgconf.dev I attended Robert�s presentation about autovacuum and\n> > it made me remember of an idea I had some time ago: $SUBJECT\n> \n> This sounds like useful information to me.\n\nThanks for looking at it!\n\n> I wonder if we should also\n> surface the effective cost limit for each autovacuum worker.\n\nI'm not sure about it as I think that it could be misleading: one could query\npg_stat_progress_vacuum and conclude that the time_delayed he is seeing is\ndue to _this_ cost_limit. But that's not necessary true as the cost_limit could\nhave changed multiple times since the vacuum started. So, unless there is\nfrequent sampling on pg_stat_progress_vacuum, displaying the time_delayed and\nthe cost_limit could be misleadind IMHO.\n\n> > Currently one can change [autovacuum_]vacuum_cost_delay and\n> > [auto vacuum]vacuum_cost_limit but has no reliable way to measure the impact of\n> > the changes on the vacuum duration: one could observe the vacuum duration\n> > variation but the correlation to the changes is not accurate (as many others\n> > factors could impact the vacuum duration (load on the system, i/o latency,...)).\n> \n> IIUC you'd need to get information from both pg_stat_progress_vacuum and\n> pg_stat_activity in order to know what percentage of time was being spent\n> in cost delay. Is that how you'd expect for this to be used in practice?\n\nYeah, one could use a query such as:\n\nselect p.*, now() - a.xact_start as duration from pg_stat_progress_vacuum p JOIN pg_stat_activity a using (pid)\n\nfor example. Worth to provide an example somewhere in the doc?\n\n> > \t\tpgstat_report_wait_start(WAIT_EVENT_VACUUM_DELAY);\n> > \t\tpg_usleep(msec * 1000);\n> > \t\tpgstat_report_wait_end();\n> > +\t\t/* Report the amount of time we slept */\n> > +\t\tif (VacuumSharedCostBalance != NULL)\n> > +\t\t\tpgstat_progress_parallel_incr_param(PROGRESS_VACUUM_TIME_DELAYED, msec);\n> > +\t\telse\n> > +\t\t\tpgstat_progress_incr_param(PROGRESS_VACUUM_TIME_DELAYED, msec);\n> \n> Hm. Should we measure the actual time spent sleeping, or is a rough\n> estimate good enough? I believe pg_usleep() might return early (e.g., if\n> the process is signaled) or late, so this field could end up being\n> inaccurate, although probably not by much. If we're okay with millisecond\n> granularity, my first instinct is that what you've proposed is fine, but I\n> figured I'd bring it up anyway.\n\nThanks for bringing that up! I had the same thought when writing the code and\ncame to the same conclusion. 
I think that's a good enough estimation and specially\nduring a long running vacuum (which is probably the case where users care the \nmost).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Jun 2024 17:48:22 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On Mon, Jun 10, 2024 at 05:48:22PM +0000, Bertrand Drouvot wrote:\n> On Mon, Jun 10, 2024 at 10:36:42AM -0500, Nathan Bossart wrote:\n>> I wonder if we should also\n>> surface the effective cost limit for each autovacuum worker.\n> \n> I'm not sure about it as I think that it could be misleading: one could query\n> pg_stat_progress_vacuum and conclude that the time_delayed he is seeing is\n> due to _this_ cost_limit. But that's not necessary true as the cost_limit could\n> have changed multiple times since the vacuum started. So, unless there is\n> frequent sampling on pg_stat_progress_vacuum, displaying the time_delayed and\n> the cost_limit could be misleadind IMHO.\n\nWell, that's true for the delay, too, right (at least as of commit\n7d71d3d)?\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 10 Jun 2024 14:20:16 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": ">> This sounds like useful information to me.\r\n\r\n> Thanks for looking at it!\r\n\r\nThe VacuumDelay is the only visibility available to\r\ngauge the cost_delay. Having this information\r\nadvertised by pg_stat_progress_vacuum as is being proposed\r\nis much better. However, I also think that the\r\n\"number of times\" the vacuum went into delay will be needed\r\nas well. Both values will be useful to tune cost_delay and cost_limit. \r\n\r\nIt may also make sense to accumulate the total_time in delay\r\nand the number of times delayed in a cumulative statistics [0]\r\nview to allow a user to trend this information overtime.\r\nI don't think this info fits in any of the existing views, i.e.\r\npg_stat_database, so maybe a new view for cumulative\r\nvacuum stats may be needed. This is likely a separate\r\ndiscussion, but calling it out here.\r\n\r\n>> IIUC you'd need to get information from both pg_stat_progress_vacuum and\r\n>> pg_stat_activity in order to know what percentage of time was being spent\r\n>> in cost delay. Is that how you'd expect for this to be used in practice?\r\n\r\n> Yeah, one could use a query such as:\r\n\r\n> select p.*, now() - a.xact_start as duration from pg_stat_progress_vacuum p JOIN pg_stat_activity a using (pid)\r\n\r\nMaybe all progress views should just expose the \"beentry->st_activity_start_timestamp \" \r\nto let the user know when the current operation began.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n[0] https://www.postgresql.org/docs/current/monitoring-stats.html\r\n\r\n\r\n\r\n\r\n", "msg_date": "Mon, 10 Jun 2024 20:12:46 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On Mon, Jun 10, 2024 at 11:36 AM Nathan Bossart\n<[email protected]> wrote:\n> Hm. Should we measure the actual time spent sleeping, or is a rough\n> estimate good enough? 
I believe pg_usleep() might return early (e.g., if\n> the process is signaled) or late, so this field could end up being\n> inaccurate, although probably not by much. If we're okay with millisecond\n> granularity, my first instinct is that what you've proposed is fine, but I\n> figured I'd bring it up anyway.\n\nI bet you could also sleep for longer than planned, throwing the\nnumbers off in the other direction.\n\nI'm always suspicious of this sort of thing. I tend to find nothing\ngives me the right answer unless I assume that the actual sleep times\nare randomly and systematically different from the intended sleep\ntimes but arbitrarily large amounts. I think we should at least do\nsome testing: if we measure both the intended sleep time and the\nactual sleep time, how close are they? Does it change if the system is\nunder crushing load (which might elongate sleeps) or if we spam\nSIGUSR1 against the vacuum process (which might shorten them)?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 Jun 2024 17:58:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On Mon, Jun 10, 2024 at 02:20:16PM -0500, Nathan Bossart wrote:\n> On Mon, Jun 10, 2024 at 05:48:22PM +0000, Bertrand Drouvot wrote:\n> > On Mon, Jun 10, 2024 at 10:36:42AM -0500, Nathan Bossart wrote:\n> >> I wonder if we should also\n> >> surface the effective cost limit for each autovacuum worker.\n> > \n> > I'm not sure about it as I think that it could be misleading: one could query\n> > pg_stat_progress_vacuum and conclude that the time_delayed he is seeing is\n> > due to _this_ cost_limit. But that's not necessary true as the cost_limit could\n> > have changed multiple times since the vacuum started. So, unless there is\n> > frequent sampling on pg_stat_progress_vacuum, displaying the time_delayed and\n> > the cost_limit could be misleadind IMHO.\n> \n> Well, that's true for the delay, too, right (at least as of commit\n> 7d71d3d)?\n\nYeah right, but the patch exposes the total amount of time the vacuum has\nbeen delayed (not the cost_delay per say) which does not sound misleading to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 06:24:42 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 10, 2024 at 08:12:46PM +0000, Imseih (AWS), Sami wrote:\n> >> This sounds like useful information to me.\n> \n> > Thanks for looking at it!\n> \n> The VacuumDelay is the only visibility available to\n> gauge the cost_delay. Having this information\n> advertised by pg_stat_progress_vacuum as is being proposed\n> is much better.\n\nThanks for looking at it!\n\n> However, I also think that the\n> \"number of times\" the vacuum went into delay will be needed\n> as well. Both values will be useful to tune cost_delay and cost_limit. \n\nYeah, I think that's a good idea. With v1 one could figure out how many times\nthe delay has been triggered but that does not work anymore if: 1) cost_delay\nchanged during the vacuum duration or 2) the patch changes the way time_delayed\nis measured/reported (means get the actual wait time and not the theoritical\ntime as v1 does). 
\n\n> \n> It may also make sense to accumulate the total_time in delay\n> and the number of times delayed in a cumulative statistics [0]\n> view to allow a user to trend this information overtime.\n> I don't think this info fits in any of the existing views, i.e.\n> pg_stat_database, so maybe a new view for cumulative\n> vacuum stats may be needed. This is likely a separate\n> discussion, but calling it out here.\n\n+1\n\n> >> IIUC you'd need to get information from both pg_stat_progress_vacuum and\n> >> pg_stat_activity in order to know what percentage of time was being spent\n> >> in cost delay. Is that how you'd expect for this to be used in practice?\n> \n> > Yeah, one could use a query such as:\n> \n> > select p.*, now() - a.xact_start as duration from pg_stat_progress_vacuum p JOIN pg_stat_activity a using (pid)\n> \n> Maybe all progress views should just expose the \"beentry->st_activity_start_timestamp \" \n> to let the user know when the current operation began.\n\nYeah maybe, I think this is likely a separate discussion too, thoughts?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 06:50:19 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 10, 2024 at 3:05 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi hackers,\n>\n> During the last pgconf.dev I attended Robert’s presentation about autovacuum and\n> it made me remember of an idea I had some time ago: $SUBJECT\n>\n> Please find attached a patch doing so by adding a new field (aka \"time_delayed\")\n> to the pg_stat_progress_vacuum view.\n>\n> Currently one can change [autovacuum_]vacuum_cost_delay and\n> [auto vacuum]vacuum_cost_limit but has no reliable way to measure the impact of\n> the changes on the vacuum duration: one could observe the vacuum duration\n> variation but the correlation to the changes is not accurate (as many others\n> factors could impact the vacuum duration (load on the system, i/o latency,...)).\n>\n> This new field reports the time that the vacuum has to sleep due to cost delay:\n> it could be useful to 1) measure the impact of the current cost_delay and\n> cost_limit settings and 2) when experimenting new values (and then help for\n> decision making for those parameters).\n>\n> The patch is relatively small thanks to the work that has been done in\n> f1889729dd (to allow parallel worker to report to the leader).\n\nThank you for the proposal and the patch. I understand the motivation\nof this patch. Beside the point Nathan mentioned, I'm slightly worried\nthat massive parallel messages could be sent to the leader process\nwhen the cost_limit value is low.\n\nFWIW when I want to confirm the vacuum delay effect, I often use the\ninformation from the DEBUG2 log message in VacuumUpdateCosts()\nfunction. Exposing these data (per-worker dobalance, cost_lmit,\ncost_delay, active, and failsafe) somewhere in a view might also be\nhelpful for users for checking vacuum delay effects. 
It doesn't mean\nto measure the impact of the changes on the vacuum duration, though.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 16:07:05 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 10, 2024 at 05:58:13PM -0400, Robert Haas wrote:\n> On Mon, Jun 10, 2024 at 11:36 AM Nathan Bossart\n> <[email protected]> wrote:\n> > Hm. Should we measure the actual time spent sleeping, or is a rough\n> > estimate good enough? I believe pg_usleep() might return early (e.g., if\n> > the process is signaled) or late, so this field could end up being\n> > inaccurate, although probably not by much. If we're okay with millisecond\n> > granularity, my first instinct is that what you've proposed is fine, but I\n> > figured I'd bring it up anyway.\n> \n> I bet you could also sleep for longer than planned, throwing the\n> numbers off in the other direction.\n\nThanks for looking at it! Agree, that's how I read \"or late\" from Nathan's\ncomment above.\n\n> I'm always suspicious of this sort of thing. I tend to find nothing\n> gives me the right answer unless I assume that the actual sleep times\n> are randomly and systematically different from the intended sleep\n> times but arbitrarily large amounts. I think we should at least do\n> some testing: if we measure both the intended sleep time and the\n> actual sleep time, how close are they? Does it change if the system is\n> under crushing load (which might elongate sleeps) or if we spam\n> SIGUSR1 against the vacuum process (which might shorten them)?\n\nOTOH Sami proposed in [1] to count the number of times the vacuum went into\ndelay. I think that's a good idea. His idea makes me think that (in addition to\nthe number of wait times) it would make sense to measure the \"actual\" sleep time\n(and not the intended one) then (so that one could measure the difference between\nthe intended wait time (number of wait times * cost delay (if it does not change\nduring the vacuum duration)) and the actual measured wait time).\n\nSo I think that in v2 we could: 1) measure the actual wait time instead, 2)\ncount the number of times the vacuum slept. We could also 3) reports the\neffective cost limit (as proposed by Nathan up-thread) (I think that 3) could\nbe misleading but I'll yield to majority opinion if people think it's not).\n\nThoughts?\n\n\n[1]: https://www.postgresql.org/message-id/A0935130-7C4B-4094-B6E4-C7D5086D50EF%40amazon.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 07:25:11 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 11, 2024 at 04:07:05PM +0900, Masahiko Sawada wrote:\n\n> Thank you for the proposal and the patch. 
I understand the motivation\n> of this patch.\n\nThanks for looking at it!\n\n> Beside the point Nathan mentioned, I'm slightly worried\n> that massive parallel messages could be sent to the leader process\n> when the cost_limit value is low.\n\nI see, I can/will do some testing in this area and share the numbers.\n\n> \n> FWIW when I want to confirm the vacuum delay effect, I often use the\n> information from the DEBUG2 log message in VacuumUpdateCosts()\n> function. Exposing these data (per-worker dobalance, cost_lmit,\n> cost_delay, active, and failsafe) somewhere in a view might also be\n> helpful for users for checking vacuum delay effects.\n\nDo you mean add time_delayed in pg_stat_progress_vacuum and cost_limit + the\nother data you mentioned above in another dedicated view?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 08:26:23 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 10, 2024 at 05:58:13PM -0400, Robert Haas wrote:\n> I'm always suspicious of this sort of thing. I tend to find nothing\n> gives me the right answer unless I assume that the actual sleep times\n> are randomly and systematically different from the intended sleep\n> times but arbitrarily large amounts. I think we should at least do\n> some testing: if we measure both the intended sleep time and the\n> actual sleep time, how close are they? Does it change if the system is\n> under crushing load (which might elongate sleeps) or if we spam\n> SIGUSR1 against the vacuum process (which might shorten them)?\n\nThough I (now) think that it would make sense to record the actual delay time \ninstead (see [1]), I think it's interesting to do some testing as you suggested.\n\nWith record_actual_time.txt (attached) applied on top of v1, we can see the\nintended and actual wait time.\n\nOn my system, \"no load at all\" except the vacuum running, I see no diff:\n\n Tue Jun 11 09:22:06 2024 (every 1s)\n\n pid | relid | phase | time_delayed | actual_time_delayed | duration\n-------+-------+---------------+--------------+---------------------+-----------------\n 54754 | 16385 | scanning heap | 41107 | 41107 | 00:00:42.301851\n(1 row)\n\n Tue Jun 11 09:22:07 2024 (every 1s)\n\n pid | relid | phase | time_delayed | actual_time_delayed | duration\n-------+-------+---------------+--------------+---------------------+-----------------\n 54754 | 16385 | scanning heap | 42076 | 42076 | 00:00:43.301848\n(1 row)\n\n Tue Jun 11 09:22:08 2024 (every 1s)\n\n pid | relid | phase | time_delayed | actual_time_delayed | duration\n-------+-------+---------------+--------------+---------------------+-----------------\n 54754 | 16385 | scanning heap | 43045 | 43045 | 00:00:44.301854\n(1 row)\n\nBut if I launch pg_reload_conf() 10 times in a row, I can see:\n\n Tue Jun 11 09:22:09 2024 (every 1s)\n\n pid | relid | phase | time_delayed | actual_time_delayed | duration\n-------+-------+---------------+--------------+---------------------+-----------------\n 54754 | 16385 | scanning heap | 44064 | 44034 | 00:00:45.302965\n(1 row)\n\n Tue Jun 11 09:22:10 2024 (every 1s)\n\n pid | relid | phase | time_delayed | actual_time_delayed | duration\n-------+-------+---------------+--------------+---------------------+-----------------\n 54754 | 16385 | scanning heap | 45033 | 45003 | 
00:00:46.301858\n\n\nAs we can see the actual wait time is 30ms less than the intended wait time with\nthis simple test. So I still think we should go with 1) actual wait time and 2)\nreport the number of waits (as mentioned in [1]). Does that make sense to you?\n\n\n[1]: https://www.postgresql.org/message-id/Zmf712A5xcOM9Hlg%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 11 Jun 2024 09:49:11 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On Tue, Jun 11, 2024 at 07:25:11AM +0000, Bertrand Drouvot wrote:\n> So I think that in v2 we could: 1) measure the actual wait time instead, 2)\n> count the number of times the vacuum slept. We could also 3) reports the\n> effective cost limit (as proposed by Nathan up-thread) (I think that 3) could\n> be misleading but I'll yield to majority opinion if people think it's not).\n\nI still think the effective cost limit would be useful, if for no other\nreason than to help reinforce that it is distributed among the autovacuum\nworkers. We could document that this value may change over the lifetime of\na worker to help avoid misleading folks.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 11 Jun 2024 11:40:36 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On Tue, Jun 11, 2024 at 5:49 AM Bertrand Drouvot\n<[email protected]> wrote:\n> As we can see the actual wait time is 30ms less than the intended wait time with\n> this simple test. So I still think we should go with 1) actual wait time and 2)\n> report the number of waits (as mentioned in [1]). Does that make sense to you?\n\nI like the idea of reporting the actual wait time better, provided\nthat we verify that doing so isn't too expensive. I think it probably\nisn't, because in a long-running VACUUM there is likely to be disk\nI/O, so the CPU overhead of a few extra gettimeofday() calls should be\nfairly low by comparison. I wonder if there's a noticeable hit when\neverything is in-memory. I guess probably not, because with any sort\nof normal configuration, we shouldn't be delaying after every block we\nprocess, so the cost of those gettimeofday() calls should still be\ngetting spread across quite a bit of real work.\n\nThat said, I'm not sure this experiment shows a real problem with the\nidea of showing intended wait time. It does establish the concept that\nrepeated signals can throw our numbers off, but 30ms isn't much of a\ndiscrepancy. I'm worried about being off by a factor of two, or an\norder of magnitude. I think we still don't know if that can happen,\nbut if we're going to show actual wait time anyway, then we don't need\nto explore the problems with other hypothetical systems too much.\n\nI'm not convinced that reporting the number of waits is useful. If we\nwere going to report a possibly-inaccurate amount of actual waiting,\nthen also reporting the number of waits might make it easier to figure\nout when the possibly-inaccurate number was in fact inaccurate. 
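For reference, measuring the sleep we actually got, rather than the one we asked for, is just a thin wrapper around pg_usleep(). A hedged sketch of the idea, using the instr_time.h macros and an invented accumulator name (this is not Bertrand's record_actual_time.txt, just an illustration):

#include "postgres.h"
#include "portability/instr_time.h"

/* invented accumulator for illustration, not the patch's actual field */
static double actual_time_delayed_ms = 0;

static void
sleep_and_record(double msec)
{
    instr_time  delay_start;
    instr_time  delay_end;

    INSTR_TIME_SET_CURRENT(delay_start);
    pg_usleep((long) (msec * 1000));    /* may return early on a signal, or late */
    INSTR_TIME_SET_CURRENT(delay_end);

    /* accumulate what we really slept, not what we asked for */
    INSTR_TIME_SUBTRACT(delay_end, delay_start);
    actual_time_delayed_ms += INSTR_TIME_GET_MILLISEC(delay_end);
}

The only extra cost is two clock reads per delay, and we only get here when we were about to sleep anyway.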
But I\nthink it's way better to report an accurate amount of actual waiting,\nand then I'm not sure what we gain by also reporting the number of\nwaits.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 13:13:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On 6/11/24 13:13, Robert Haas wrote:\n> On Tue, Jun 11, 2024 at 5:49 AM Bertrand Drouvot\n> <[email protected]> wrote:\n>> As we can see the actual wait time is 30ms less than the intended wait time with\n>> this simple test. So I still think we should go with 1) actual wait time and 2)\n>> report the number of waits (as mentioned in [1]). Does that make sense to you?\n> \n> I like the idea of reporting the actual wait time better, provided\n> that we verify that doing so isn't too expensive. I think it probably\n> isn't, because in a long-running VACUUM there is likely to be disk\n> I/O, so the CPU overhead of a few extra gettimeofday() calls should be\n> fairly low by comparison. I wonder if there's a noticeable hit when\n> everything is in-memory. I guess probably not, because with any sort\n> of normal configuration, we shouldn't be delaying after every block we\n> process, so the cost of those gettimeofday() calls should still be\n> getting spread across quite a bit of real work.\n\nDoes it even require a call to gettimeofday()? The code in vacuum \ncalculates an msec value and calls pg_usleep(msec * 1000). I don't think \nit is necessary to measure how long that nap was.\n\n\nRegards, Jan\n\n\n\n", "msg_date": "Tue, 11 Jun 2024 13:26:20 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "> I'm not convinced that reporting the number of waits is useful. If we\r\n> were going to report a possibly-inaccurate amount of actual waiting,\r\n> then also reporting the number of waits might make it easier to figure\r\n> out when the possibly-inaccurate number was in fact inaccurate. But I\r\n> think it's way better to report an accurate amount of actual waiting,\r\n> and then I'm not sure what we gain by also reporting the number of\r\n> waits.\r\n\r\nI think including the number of times vacuum went into sleep \r\nwill help paint a full picture of the effect of tuning the vacuum_cost_delay \r\nand vacuum_cost_limit for the user, even if we are reporting accurate \r\namounts of actual sleeping.\r\n\r\nThis is particularly true for autovacuum in which the cost limit is spread\r\nacross all autovacuum workers, and knowing how many times autovacuum\r\nwent to sleep will be useful along with the total time spent sleeping.\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n", "msg_date": "Tue, 11 Jun 2024 18:19:23 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On Tue, Jun 11, 2024 at 06:19:23PM +0000, Imseih (AWS), Sami wrote:\n>> I'm not convinced that reporting the number of waits is useful. If we\n>> were going to report a possibly-inaccurate amount of actual waiting,\n>> then also reporting the number of waits might make it easier to figure\n>> out when the possibly-inaccurate number was in fact inaccurate. 
But I\n>> think it's way better to report an accurate amount of actual waiting,\n>> and then I'm not sure what we gain by also reporting the number of\n>> waits.\n> \n> I think including the number of times vacuum went into sleep \n> will help paint a full picture of the effect of tuning the vacuum_cost_delay \n> and vacuum_cost_limit for the user, even if we are reporting accurate \n> amounts of actual sleeping.\n> \n> This is particularly true for autovacuum in which the cost limit is spread\n> across all autovacuum workers, and knowing how many times autovacuum\n> went to sleep will be useful along with the total time spent sleeping.\n\nI'm struggling to think of a scenario in which the number of waits would be\nuseful, assuming you already know the amount of time spent waiting. Even\nif the number of waits is huge, it doesn't tell you much else AFAICT. I'd\nbe much more likely to adjust the cost settings based on the percentage of\ntime spent sleeping.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 11 Jun 2024 13:47:29 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On Tue, Jun 11, 2024 at 2:47 PM Nathan Bossart <[email protected]> wrote:\n> I'm struggling to think of a scenario in which the number of waits would be\n> useful, assuming you already know the amount of time spent waiting. Even\n> if the number of waits is huge, it doesn't tell you much else AFAICT. I'd\n> be much more likely to adjust the cost settings based on the percentage of\n> time spent sleeping.\n\nThis is also how I see it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 14:48:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": ">> I'm struggling to think of a scenario in which the number of waits would be\r\n>> useful, assuming you already know the amount of time spent waiting. Even\r\n>> if the number of waits is huge, it doesn't tell you much else AFAICT. I'd\r\n>> be much more likely to adjust the cost settings based on the percentage of\r\n>> time spent sleeping.\r\n\r\n\r\n> This is also how I see it.\r\n\r\nI think it may be useful for a user to be able to answer the \"average\r\nsleep time\" for a vacuum, especially because the vacuum cost \r\nlimit and delay can be adjusted on the fly for a running vacuum.\r\n\r\nIf we only show the total sleep time, the user could make wrong\r\n assumptions about how long each sleep took and they might \r\nassume that all sleep delays for a particular vacuum run have been \r\nuniform in duration, when in-fact they may not have been.\r\n\r\n\r\nRegards,\r\n\r\nSami \r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Tue, 11 Jun 2024 21:04:29 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 11, 2024 at 11:40:36AM -0500, Nathan Bossart wrote:\n> On Tue, Jun 11, 2024 at 07:25:11AM +0000, Bertrand Drouvot wrote:\n> > So I think that in v2 we could: 1) measure the actual wait time instead, 2)\n> > count the number of times the vacuum slept. 
We could also 3) reports the\n> > effective cost limit (as proposed by Nathan up-thread) (I think that 3) could\n> > be misleading but I'll yield to majority opinion if people think it's not).\n> \n> I still think the effective cost limit would be useful, if for no other\n> reason than to help reinforce that it is distributed among the autovacuum\n> workers.\n\nI also think it can be useful, my concern is more to put this information in\npg_stat_progress_vacuum. What about Sawada-san proposal in [1]? (we could\ncreate a new view that would contain those data: per-worker dobalance, cost_lmit,\ncost_delay, active, and failsafe). \n\n[1]: https://www.postgresql.org/message-id/CAD21AoDOu%3DDZcC%2BPemYmCNGSwbgL1s-5OZkZ1Spd5pSxofWNCw%40mail.gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:52:42 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 11, 2024 at 02:48:30PM -0400, Robert Haas wrote:\n> On Tue, Jun 11, 2024 at 2:47 PM Nathan Bossart <[email protected]> wrote:\n> > I'm struggling to think of a scenario in which the number of waits would be\n> > useful, assuming you already know the amount of time spent waiting.\n\nIf we provide the actual time spent waiting, providing the number of waits would\nallow to see if there is a diff between the actual time and the intended time\n(i.e: number of waits * cost_delay, should the cost_delay be the same during\nthe vacuum duration). That should trigger some thoughts if the diff is large\nenough.\n\nI think that what we are doing here is to somehow add instrumentation around the\n\"WAIT_EVENT_VACUUM_DELAY\" wait event. If we were to add instrumentation for wait\nevents (generaly speaking) we'd probably also expose the number of waits per\nwait event (in addition to the time waited).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 11:27:23 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 11, 2024 at 01:13:48PM -0400, Robert Haas wrote:\n> On Tue, Jun 11, 2024 at 5:49 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > As we can see the actual wait time is 30ms less than the intended wait time with\n> > this simple test. So I still think we should go with 1) actual wait time and 2)\n> > report the number of waits (as mentioned in [1]). Does that make sense to you?\n> \n> I like the idea of reporting the actual wait time better,\n\n+1\n\n> provided\n> that we verify that doing so isn't too expensive. I think it probably\n> isn't, because in a long-running VACUUM there is likely to be disk\n> I/O, so the CPU overhead of a few extra gettimeofday() calls should be\n> fairly low by comparison.\n\nAgree.\n\n> I wonder if there's a noticeable hit when\n> everything is in-memory. 
I guess probably not, because with any sort\n> of normal configuration, we shouldn't be delaying after every block we\n> process, so the cost of those gettimeofday() calls should still be\n> getting spread across quite a bit of real work.\n\nI did some testing, with:\n\nshared_buffers = 12GB\nvacuum_cost_delay = 1\nautovacuum_vacuum_cost_delay = 1\nmax_parallel_maintenance_workers = 0\nmax_parallel_workers = 0\n\nadded to a default config file.\n\nA table and all its indexes were fully in memory, the numbers are:\n\npostgres=# SELECT n.nspname, c.relname, count(*) AS buffers\n FROM pg_buffercache b JOIN pg_class c\n ON b.relfilenode = pg_relation_filenode(c.oid) AND\n b.reldatabase IN (0, (SELECT oid FROM pg_database\n WHERE datname = current_database()))\n JOIN pg_namespace n ON n.oid = c.relnamespace\n GROUP BY n.nspname, c.relname\n ORDER BY 3 DESC\n LIMIT 11;\n\n nspname | relname | buffers\n---------+-------------------+---------\n public | large_tbl | 222280\n public | large_tbl_pkey | 5486\n public | large_tbl_filler7 | 1859\n public | large_tbl_filler4 | 1859\n public | large_tbl_filler1 | 1859\n public | large_tbl_filler6 | 1859\n public | large_tbl_filler3 | 1859\n public | large_tbl_filler2 | 1859\n public | large_tbl_filler5 | 1859\n public | large_tbl_filler8 | 1859\n public | large_tbl_version | 1576\n(11 rows)\n\n\nThe observed timings when vacuuming this table are:\n\nOn master:\n\nvacuum phase: cumulative duration\n---------------------------------\n\nscanning heap: 00:00:37.808184\nvacuuming indexes: 00:00:41.808176\nvacuuming heap: 00:00:54.808156\n\nOn master patched with actual time delayed:\n\nvacuum phase: cumulative duration\n---------------------------------\n\nscanning heap: 00:00:36.502104 (time_delayed: 22202)\nvacuuming indexes: 00:00:41.002103 (time_delayed: 23769)\nvacuuming heap: 00:00:54.302096 (time_delayed: 34886)\n\nAs we can see there is no noticeable degradation while the vacuum entered about\n34886 times in this instrumentation code path (cost_delay was set to 1).\n\n> That said, I'm not sure this experiment shows a real problem with the\n> idea of showing intended wait time. It does establish the concept that\n> repeated signals can throw our numbers off, but 30ms isn't much of a\n> discrepancy.\n\nYeah, the idea was just to show how easy it is to create a 30ms discrepancy.\n\n> I'm worried about being off by a factor of two, or an\n> order of magnitude. I think we still don't know if that can happen,\n> but if we're going to show actual wait time anyway, then we don't need\n> to explore the problems with other hypothetical systems too much.\n\nAgree.\n\n> I'm not convinced that reporting the number of waits is useful. If we\n> were going to report a possibly-inaccurate amount of actual waiting,\n> then also reporting the number of waits might make it easier to figure\n> out when the possibly-inaccurate number was in fact inaccurate. But I\n> think it's way better to report an accurate amount of actual waiting,\n> and then I'm not sure what we gain by also reporting the number of\n> waits.\n\nSami shared his thoughts in [1] and [2] and so did I in [3]. 
If some of us still\ndon't think that reporting the number of waits is useful then we can probably\nstart without it.\n\n[1]: https://www.postgresql.org/message-id/0EA474B6-BF88-49AE-82CA-C1A9A3C17727%40amazon.com\n[2]: https://www.postgresql.org/message-id/E12435E2-5FCA-49B0-9ADB-0E7153F95E2D%40amazon.com\n[3]: https://www.postgresql.org/message-id/ZmmGG4e%2BqTBD2kfn%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 12:02:00 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 11, 2024 at 08:26:23AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Tue, Jun 11, 2024 at 04:07:05PM +0900, Masahiko Sawada wrote:\n> \n> > Thank you for the proposal and the patch. I understand the motivation\n> > of this patch.\n> \n> Thanks for looking at it!\n> \n> > Beside the point Nathan mentioned, I'm slightly worried\n> > that massive parallel messages could be sent to the leader process\n> > when the cost_limit value is low.\n> \n> I see, I can/will do some testing in this area and share the numbers.\n\nHere is the result of the test. It has been launched several times and it\nproduced the same (surprising result) each time.\n\n====================== Context ================================================\n\nThe testing has been done with this relation (large_tbl) and its indexes:\n\npostgres=# SELECT n.nspname, c.relname, count(*) AS buffers\n FROM pg_buffercache b JOIN pg_class c\n ON b.relfilenode = pg_relation_filenode(c.oid) AND\n b.reldatabase IN (0, (SELECT oid FROM pg_database\n WHERE datname = current_database()))\n JOIN pg_namespace n ON n.oid = c.relnamespace\n GROUP BY n.nspname, c.relname\n ORDER BY 3 DESC\n LIMIT 22;\n\n nspname | relname | buffers\n---------+--------------------+---------\n public | large_tbl | 222280\n public | large_tbl_filler13 | 125000\n public | large_tbl_filler6 | 125000\n public | large_tbl_filler5 | 125000\n public | large_tbl_filler3 | 125000\n public | large_tbl_filler15 | 125000\n public | large_tbl_filler4 | 125000\n public | large_tbl_filler20 | 125000\n public | large_tbl_filler18 | 125000\n public | large_tbl_filler14 | 125000\n public | large_tbl_filler8 | 125000\n public | large_tbl_filler11 | 125000\n public | large_tbl_filler19 | 125000\n public | large_tbl_filler7 | 125000\n public | large_tbl_filler1 | 125000\n public | large_tbl_filler12 | 125000\n public | large_tbl_filler9 | 125000\n public | large_tbl_filler17 | 125000\n public | large_tbl_filler16 | 125000\n public | large_tbl_filler10 | 125000\n public | large_tbl_filler2 | 125000\n public | large_tbl_pkey | 5486\n(22 rows)\n\nAll of them completly fit in memory (to avoid I/O read latency during the vacuum).\n\nThe config, outside of default is:\n\nmax_wal_size = 4GB\nshared_buffers = 30GB\nvacuum_cost_delay = 1\nautovacuum_vacuum_cost_delay = 1\nmax_parallel_maintenance_workers = 8\nmax_parallel_workers = 10\nvacuum_cost_limit = 10\nautovacuum_vacuum_cost_limit = 10\n\nMy system is not overloaded, has enough resources to run this test and only this\ntest is running.\n\n====================== Results ================================================\n\n========== With v2 (attached) applied on master\n\npostgres=# VACUUM (PARALLEL 8) large_tbl;\nVACUUM\nTime: 1146873.016 ms (19:06.873)\n\nThe 
duration is splitted that way:\n\nVacuum phase: cumulative time (cumulative time delayed)\n=======================================================\nscanning heap: 00:08:16.414628 (time_delayed: 444370)\nvacuuming indexes: 00:14:55.314699 (time_delayed: 2545293)\nvacuuming heap: 00:19:06.814617 (time_delayed: 2767540)\n\nI sampled active sessions from pg_stat_activity (one second interval), here is\nthe summary during the vacuuming indexes phase (ordered by count):\n\n leader_pid | pid | wait_event | count\n------------+--------+----------------+-------\n 452996 | 453225 | VacuumDelay | 366\n 452996 | 453223 | VacuumDelay | 363\n 452996 | 453226 | VacuumDelay | 362\n 452996 | 453224 | VacuumDelay | 361\n 452996 | 453222 | VacuumDelay | 359\n 452996 | 453221 | VacuumDelay | 359\n | 452996 | VacuumDelay | 331\n | 452996 | CPU | 30\n 452996 | 453224 | WALWriteLock | 23\n 452996 | 453222 | WALWriteLock | 20\n 452996 | 453226 | WALWriteLock | 20\n 452996 | 453221 | WALWriteLock | 19\n | 452996 | WalSync | 18\n 452996 | 453225 | WALWriteLock | 18\n 452996 | 453223 | WALWriteLock | 16\n | 452996 | WALWriteLock | 15\n 452996 | 453221 | CPU | 14\n 452996 | 453222 | CPU | 14\n 452996 | 453223 | CPU | 12\n 452996 | 453224 | CPU | 10\n 452996 | 453226 | CPU | 10\n 452996 | 453225 | CPU | 8\n 452996 | 453223 | WalSync | 4\n 452996 | 453221 | WalSync | 2\n 452996 | 453226 | WalWrite | 2\n 452996 | 453221 | WalWrite | 1\n | 452996 | ParallelFinish | 1\n 452996 | 453224 | WalSync | 1\n 452996 | 453225 | WalSync | 1\n 452996 | 453222 | WalWrite | 1\n 452996 | 453225 | WalWrite | 1\n 452996 | 453222 | WalSync | 1\n 452996 | 453226 | WalSync | 1\n\n\n========== On master (v2 not applied)\n\npostgres=# VACUUM (PARALLEL 8) large_tbl;\nVACUUM\nTime: 1322598.087 ms (22:02.598)\n\nSurprisingly it has been longer on master by about 3 minutes.\n\nLet's see how the time is splitted:\n\nVacuum phase: cumulative time\n=============================\nscanning heap: 00:08:07.061196\nvacuuming indexes: 00:17:50.961228\nvacuuming heap: 00:22:02.561199\n\nI sampled active sessions from pg_stat_activity (one second interval), here is\nthe summary during the vacuuming indexes phase (ordered by count):\n\n leader_pid | pid | wait_event | count\n------------+--------+-------------------+-------\n 468682 | 468858 | VacuumDelay | 548\n 468682 | 468862 | VacuumDelay | 547\n 468682 | 468859 | VacuumDelay | 547\n 468682 | 468860 | VacuumDelay | 545\n 468682 | 468857 | VacuumDelay | 543\n 468682 | 468861 | VacuumDelay | 542\n | 468682 | VacuumDelay | 378\n | 468682 | ParallelFinish | 182\n 468682 | 468861 | WALWriteLock | 19\n 468682 | 468857 | WALWriteLock | 19\n 468682 | 468859 | WALWriteLock | 18\n 468682 | 468858 | WALWriteLock | 16\n 468682 | 468860 | WALWriteLock | 15\n 468682 | 468862 | WALWriteLock | 15\n 468682 | 468862 | CPU | 12\n 468682 | 468857 | CPU | 10\n 468682 | 468859 | CPU | 10\n 468682 | 468861 | CPU | 10\n | 468682 | CPU | 9\n 468682 | 468860 | CPU | 9\n 468682 | 468860 | WalSync | 8\n | 468682 | WALWriteLock | 7\n 468682 | 468858 | WalSync | 6\n 468682 | 468858 | CPU | 6\n 468682 | 468862 | WalSync | 3\n 468682 | 468857 | WalSync | 3\n 468682 | 468861 | WalSync | 3\n 468682 | 468859 | WalSync | 2\n 468682 | 468861 | WalWrite | 2\n 468682 | 468857 | WalWrite | 1\n 468682 | 468858 | WalWrite | 1\n 468682 | 468861 | WALBufMappingLock | 1\n 468682 | 468857 | WALBufMappingLock | 1\n | 468682 | WALBufMappingLock | 1\n\n====================== Observations ===========================================\n\nAs compare to 
v2:\n\n1. scanning heap time is about the same\n2. vacuuming indexes time is about 3 minutes longer on master\n3. vacuuming heap time is about the same\n\nOne difference we can see in the sampling, is that on master the \"ParallelFinish\"\nhas been sampled about 182 times (so could be _the_ 3 minutes of interest) for\nthe leader.\n\nOn master the vacuum indexes phase has been running between 2024-06-13 10:11:34\nand 2024-06-13 10:21:15. If I extract the exact minutes and the counts for the\n\"ParallelFinish\" wait event I get:\n\n minute | wait_event | count\n--------+----------------+-------\n 18 | ParallelFinish | 48\n 19 | ParallelFinish | 60\n 20 | ParallelFinish | 60\n 21 | ParallelFinish | 14\n\nSo it's likely that the leader waited on ParallelFinish during about 3 minutes\nat the end of the vacuuming indexes phase (as this wait appeared during\nconsecutives samples).\n\n====================== Conclusion =============================================\n\n1. During the scanning heap and vacuuming heap phases no noticeable performance\ndegradation has been observed with v2 applied (as compare to master) (cc'ing\nRobert as it's also related to his question about noticeable hit when everything\nis in-memory in [1]).\n\n2. During the vacuuming indexes phase, v2 has been faster (as compare to master).\nThe reason is that on master the leader has been waiting during about 3 minutes\non \"ParallelFinish\" at the end.\n\n====================== Remarks ================================================\n\nAs v2 is attached, please find below a summary about the current state of this\nthread:\n\n1. v2 implements delay_time as the actual wait time (and not the intended wait\ntime as proposed in v1).\n\n2. some measurements have been done to check the impact of this new\ninstrumentation (see this email and [2]): no noticeable performance degradation\nhas been observed (and surprisingly that's the opposite as mentioned above).\n\n3. there is an ongoing discussion about exposing the number of waits [2].\n\n4. there is an ongoing discussion about exposing the effective cost limit [3].\n\n5. that could be interesting to have a closer look as to why the leader is waiting\nduring 3 minutes on \"ParallelFinish\" on master and not with v2 applied (but that's\nprobably out of scope for this thread).\n\n[1]: https://www.postgresql.org/message-id/CA%2BTgmoZiC%3DzeCDYuMpB%2BGb2yK%3DrTQCGMu0VoxehocKyHxr9Erg%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/ZmmOOPwMFIltkdsN%40ip-10-97-1-34.eu-west-3.compute.internal\n[3]: https://www.postgresql.org/message-id/Zml9%2Bu37iS7DFkJL%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 13 Jun 2024 11:56:26 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 13, 2024 at 11:56:26AM +0000, Bertrand Drouvot wrote:\n> ====================== Observations ===========================================\n> \n> As compare to v2:\n> \n> 1. scanning heap time is about the same\n> 2. vacuuming indexes time is about 3 minutes longer on master\n> 3. 
vacuuming heap time is about the same\n\nI had a closer look to understand why the vacuuming indexes time has been about\n3 minutes longer on master.\n\nDuring the vacuuming indexes phase, the leader is helping vacuuming the indexes\nuntil it reaches WaitForParallelWorkersToFinish() (means when all the remaining\nindexes are currently handled by the parallel workers, the leader has nothing\nmore to do and so it is waiting for the parallel workers to finish).\n\nDuring the time the leader process is involved in indexes vacuuming it is\nalso subject to wait due to cost delay.\n\nBut with v2 applied, the leader may be interrupted by the parallel workers while\nit is waiting (due to the new pgstat_progress_parallel_incr_param() calls that\nthe patch is adding).\n\nSo, with v2 applied, the leader is waiting less (as interrupted while waiting)\nwhen being involved in indexes vacuuming and that's why v2 is \"faster\" than\nmaster.\n\nTo put some numbers, I did count the number of times the leader's pg_usleep() has\nbeen interrupted (by counting the number of times the nanosleep() did return a\nvalue < 0 in pg_usleep()). Here they are:\n\nv2: the leader has been interrupted about 342605 times\nmaster: the leader has been interrupted about 36 times\n\nThe ones on master are mainly coming from the pgstat_progress_parallel_incr_param() \ncalls in parallel_vacuum_process_one_index().\n\nThe additional ones on v2 are coming from the pgstat_progress_parallel_incr_param()\ncalls added in vacuum_delay_point().\n\n======== Conclusion ======\n\n1. vacuuming indexes time has been longer on master because with v2, the leader\nhas been interrupted 342605 times while waiting, then making v2 \"faster\".\n\n2. the leader being interrupted while waiting is also already happening on master\ndue to the pgstat_progress_parallel_incr_param() calls in\nparallel_vacuum_process_one_index() (that have been added in \n46ebdfe164). It has been the case \"only\" 36 times during my test case.\n\nI think that 2. is less of a concern but I think that 1. is something that needs\nto be addressed because the leader process is not honouring its cost delay wait\ntime in a noticeable way (at least during my test case).\n\nI did not think of a proposal yet, just sharing my investigation as to why\nv2 has been faster than master during the vacuuming indexes phase.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 22 Jun 2024 12:48:33 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Sat, Jun 22, 2024 at 12:48:33PM +0000, Bertrand Drouvot wrote:\n> 1. vacuuming indexes time has been longer on master because with v2, the leader\n> has been interrupted 342605 times while waiting, then making v2 \"faster\".\n> \n> 2. the leader being interrupted while waiting is also already happening on master\n> due to the pgstat_progress_parallel_incr_param() calls in\n> parallel_vacuum_process_one_index() (that have been added in \n> 46ebdfe164). It has been the case \"only\" 36 times during my test case.\n> \n> I think that 2. is less of a concern but I think that 1. 
is something that needs\n> to be addressed because the leader process is not honouring its cost delay wait\n> time in a noticeable way (at least during my test case).\n> \n> I did not think of a proposal yet, just sharing my investigation as to why\n> v2 has been faster than master during the vacuuming indexes phase.\n\nI think that a reasonable approach is to make the reporting from the parallel\nworkers to the leader less aggressive (means occur less frequently).\n\nPlease find attached v3, that:\n\n- ensures that there is at least 1 second between 2 reports, per parallel worker,\nto the leader.\n\n- ensures that the reported delayed time is still correct (keep track of the\ndelayed time between 2 reports).\n\n- does not add any extra pg_clock_gettime_ns() calls (as compare to v2).\n\nRemarks:\n\n1. Having a time based only approach to throttle the reporting of the parallel\nworkers sounds reasonable. I don't think that the number of parallel workers has\nto come into play as:\n\n 1.1) the more parallel workers is used, the less the impact of the leader on\n the vacuum index phase duration/workload is (because the repartition is done\n on more processes).\n\n 1.2) the less parallel workers is, the less the leader will be interrupted (\n less parallel workers would report their delayed time).\n\n2. The throttling is not based on the cost limit as that value is distributed\nproportionally among the parallel workers (so we're back to the previous point).\n\n3. The throttling is not based on the actual cost delay value because the leader\ncould be interrupted at the beginning, the midle or whatever part of the wait and\nwe are more interested about the frequency of the interrupts.\n\n3. A 1 second reporting \"throttling\" looks a reasonable threshold as:\n\n 3.1 the idea is to have a significant impact when the leader could have been\ninterrupted say hundred/thousand times per second.\n\n 3.2 it does not make that much sense for any tools to sample pg_stat_progress_vacuum\nmultiple times per second (so a one second reporting granularity seems ok).\n\nWith this approach in place, v3 attached applied, during my test case:\n\n- the leader has been interrupted about 2500 times (instead of about 345000\ntimes with v2)\n\n- the vacuum index phase duration is very close to the master one (it has been\n4 seconds faster (over a 8 minutes 40 seconds duration time), instead of 3\nminutes faster with v2).\n\nThoughts?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 24 Jun 2024 10:50:13 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": ">> 2. the leader being interrupted while waiting is also already happening on master\r\n>> due to the pgstat_progress_parallel_incr_param() calls in\r\n>> parallel_vacuum_process_one_index() (that have been added in\r\n>> 46ebdfe164). It has been the case \"only\" 36 times during my test case.\r\n\r\n46ebdfe164 will interrupt the leaders sleep every time a parallel workers reports\r\nprogress, and we currently don't handle interrupts by restarting the sleep with\r\nthe remaining time. nanosleep does provide the ability to restart with the remaining\r\ntime [1], but I don't think it's worth the effort to ensure more accurate\r\nvacuum delays for the leader process. \r\n\r\n\r\n> 1. 
Having a time based only approach to throttle \r\n\r\nI do agree with a time based approach overall.\r\n\r\n\r\n> 1.1) the more parallel workers is used, the less the impact of the leader on\r\n> the vacuum index phase duration/workload is (because the repartition is done\r\n> on more processes).\r\n\r\nDid you mean \" because the vacuum is done on more processes\"? \r\n\r\nWhen a leader is operating on a large index(s) during the entirety\r\nof the vacuum operation, wouldn't more parallel workers end up\r\ninterrupting the leader more often? This is why I think reporting even more\r\noften than 1 second (more below) will be better.\r\n\r\n> 3. A 1 second reporting \"throttling\" looks a reasonable threshold as:\r\n\r\n> 3.1 the idea is to have a significant impact when the leader could have been\r\n> interrupted say hundred/thousand times per second.\r\n\r\n> 3.2 it does not make that much sense for any tools to sample pg_stat_progress_vacuum\r\n> multiple times per second (so a one second reporting granularity seems ok).\r\n\r\nI feel 1 second may still be too frequent. \r\nWhat about 10 seconds ( or 30 seconds )? \r\nI think this metric in particular will be mainly useful for vacuum runs that are \r\nrunning for minutes or more, making reporting every 10 or 30 seconds \r\nstill useful.\r\n\r\nIt just occurred to me also that pgstat_progress_parallel_incr_param \r\nshould have a code comment that it will interrupt a leader process and\r\ncause activity such as a sleep to end early.\r\n\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n[1] https://man7.org/linux/man-pages/man2/nanosleep.2.html\r\n\r\n\r\n", "msg_date": "Tue, 25 Jun 2024 01:12:16 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 25, 2024 at 01:12:16AM +0000, Imseih (AWS), Sami wrote:\n\nThanks for the feedback!\n\n> >> 2. the leader being interrupted while waiting is also already happening on master\n> >> due to the pgstat_progress_parallel_incr_param() calls in\n> >> parallel_vacuum_process_one_index() (that have been added in\n> >> 46ebdfe164). It has been the case \"only\" 36 times during my test case.\n> \n> 46ebdfe164 will interrupt the leaders sleep every time a parallel workers reports\n> progress, and we currently don't handle interrupts by restarting the sleep with\n> the remaining time. nanosleep does provide the ability to restart with the remaining\n> time [1], but I don't think it's worth the effort to ensure more accurate\n> vacuum delays for the leader process. \n\n+1. I don't think it's necessary to have a 100% accurate delay for all the\ntimes the delay is involded. I think that's an heuristic parameter (among\nwith cost limit). What matters at the end is by how much you've been able to\npause the whole vacuum (and not by a sleep by sleep basis)).\n\n> > 1. Having a time based only approach to throttle \n> \n> I do agree with a time based approach overall.\n> \n> \n> > 1.1) the more parallel workers is used, the less the impact of the leader on\n> > the vacuum index phase duration/workload is (because the repartition is done\n> > on more processes).\n> \n> Did you mean \" because the vacuum is done on more processes\"? 
\n\nYes.\n\n> When a leader is operating on a large index(s) during the entirety\n> of the vacuum operation, wouldn't more parallel workers end up\n> interrupting the leader more often?\n\nThat's right but my point was about the impact on the \"whole\" duration time and\n\"whole\" workload (leader + workers included) and not about the number of times the\nleader is interrupted. If there is say 100 workers then interrupting the leader\n(1 process out of 101) is probably less of an issue as it means that there is a\nlot of work to be done to have those 100 workers busy. I don't think the size of\nthe index the leader is vacuuming has an impact. I think that having the leader\nvacuuming a 100 GB index or 100 x 1GB indexes is the same (as long as all the\nother workers are actives during all that time).\n\n> > 3. A 1 second reporting \"throttling\" looks a reasonable threshold as:\n> \n> > 3.1 the idea is to have a significant impact when the leader could have been\n> > interrupted say hundred/thousand times per second.\n> \n> > 3.2 it does not make that much sense for any tools to sample pg_stat_progress_vacuum\n> > multiple times per second (so a one second reporting granularity seems ok).\n> \n> I feel 1 second may still be too frequent. \n\nMaybe we'll need more measurements but this is what my test case made of:\n\nvacuum_cost_delay = 1\nvacuum_cost_limit = 10\n8 parallel workers, 1 leader\n21 indexes (about 1GB each, one 40MB), all in memory\n\nlead to:\n\nWith 1 second reporting frequency, the leader has been interruped about 2500\ntimes over 8m39s leading to about the same time as on master (8m43s).\n\n> What about 10 seconds ( or 30 seconds )? \n\nI'm not sure (may need more measurements) but it would probably complicate the\nreporting a bit (as with the current v3 we'd miss reporting the indexes that\ntake less time than the threshold to complete).\n\n> I think this metric in particular will be mainly useful for vacuum runs that are \n> running for minutes or more, making reporting every 10 or 30 seconds \n> still useful.\n\nAgree. OTOH, one could be interested to diagnose what happened during a say 5\nseconds peak on I/O resource consumption/latency. Sampling pg_stat_progress_vacuum\nat 1 second interval and see by how much the vaccum has been paused during that\ntime could help too (specially if it is made of a lot of parallel workers that\ncould lead to a lot of I/O). But it would miss data if we are reporting at a\nlarger rate.\n\n> It just occurred to me also that pgstat_progress_parallel_incr_param \n> should have a code comment that it will interrupt a leader process and\n> cause activity such as a sleep to end early.\n\nGood point, I'll add a comment for it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 08:29:03 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On Mon, Jun 24, 2024 at 7:50 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Sat, Jun 22, 2024 at 12:48:33PM +0000, Bertrand Drouvot wrote:\n> > 1. vacuuming indexes time has been longer on master because with v2, the leader\n> > has been interrupted 342605 times while waiting, then making v2 \"faster\".\n> >\n> > 2. 
the leader being interrupted while waiting is also already happening on master\n> > due to the pgstat_progress_parallel_incr_param() calls in\n> > parallel_vacuum_process_one_index() (that have been added in\n> > 46ebdfe164). It has been the case \"only\" 36 times during my test case.\n> >\n> > I think that 2. is less of a concern but I think that 1. is something that needs\n> > to be addressed because the leader process is not honouring its cost delay wait\n> > time in a noticeable way (at least during my test case).\n> >\n> > I did not think of a proposal yet, just sharing my investigation as to why\n> > v2 has been faster than master during the vacuuming indexes phase.\n\nThank you for the benchmarking and analyzing the results! I agree with\nyour analysis and was surprised by the fact that the more times\nworkers go to sleep, the more times the leader wakes up.\n\n>\n> I think that a reasonable approach is to make the reporting from the parallel\n> workers to the leader less aggressive (means occur less frequently).\n>\n> Please find attached v3, that:\n>\n> - ensures that there is at least 1 second between 2 reports, per parallel worker,\n> to the leader.\n>\n> - ensures that the reported delayed time is still correct (keep track of the\n> delayed time between 2 reports).\n>\n> - does not add any extra pg_clock_gettime_ns() calls (as compare to v2).\n>\n\nSounds good to me. I think it's better to keep the logic for\nthrottling the reporting the delay message simple. It's an important\nconsideration but executing parallel vacuum with delays would be less\nlikely to be used in practice.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 27 Jun 2024 14:17:38 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "> 46ebdfe164 will interrupt the leaders sleep every time a parallel workers reports\r\n> progress, and we currently don't handle interrupts by restarting the sleep with\r\n> the remaining time. nanosleep does provide the ability to restart with the remaining\r\n> time [1], but I don't think it's worth the effort to ensure more accurate\r\n> vacuum delays for the leader process. \r\n\r\nAfter discussing offline with Bertrand, it may be better to have \r\na solution to deal with the interrupts and allows the sleep to continue to\r\ncompletion. This will simplify this patch and will be useful \r\nfor other cases in which parallel workers need to send a message\r\nto the leader. This is the thread [1] for that discussion.\r\n\r\n[1] https://www.postgresql.org/message-id/01000190606e3d2a-116ead16-84d2-4449-8d18-5053da66b1f4-000000%40email.amazonses.com\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Fri, 28 Jun 2024 20:07:39 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Fri, Jun 28, 2024 at 08:07:39PM +0000, Imseih (AWS), Sami wrote:\n> > 46ebdfe164 will interrupt the leaders sleep every time a parallel workers reports\n> > progress, and we currently don't handle interrupts by restarting the sleep with\n> > the remaining time. nanosleep does provide the ability to restart with the remaining\n> > time [1], but I don't think it's worth the effort to ensure more accurate\n> > vacuum delays for the leader process. 
\n> \n> After discussing offline with Bertrand, it may be better to have \n> a solution to deal with the interrupts and allows the sleep to continue to\n> completion. This will simplify this patch and will be useful \n> for other cases in which parallel workers need to send a message\n> to the leader. This is the thread [1] for that discussion.\n> \n> [1] https://www.postgresql.org/message-id/01000190606e3d2a-116ead16-84d2-4449-8d18-5053da66b1f4-000000%40email.amazonses.com\n> \n\nYeah, I think it would make sense to put this thread on hold until we know more\nabout [1] (you mentioned above) outcome.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Jul 2024 04:59:25 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 01, 2024 at 04:59:25AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Fri, Jun 28, 2024 at 08:07:39PM +0000, Imseih (AWS), Sami wrote:\n> > > 46ebdfe164 will interrupt the leaders sleep every time a parallel workers reports\n> > > progress, and we currently don't handle interrupts by restarting the sleep with\n> > > the remaining time. nanosleep does provide the ability to restart with the remaining\n> > > time [1], but I don't think it's worth the effort to ensure more accurate\n> > > vacuum delays for the leader process. \n> > \n> > After discussing offline with Bertrand, it may be better to have \n> > a solution to deal with the interrupts and allows the sleep to continue to\n> > completion. This will simplify this patch and will be useful \n> > for other cases in which parallel workers need to send a message\n> > to the leader. 
This is the thread [1] for that discussion.\n> > \n> > [1] https://www.postgresql.org/message-id/01000190606e3d2a-116ead16-84d2-4449-8d18-5053da66b1f4-000000%40email.amazonses.com\n> > \n> \n> Yeah, I think it would make sense to put this thread on hold until we know more\n> about [1] (you mentioned above) outcome.\n\nAs it looks like we have a consensus not to wait on [0] (as reducing the number\nof interrupts makes sense on its own), then please find attached v4, a rebase\nversion (that also makes clear in the doc that that new field might show slightly\nold values, as mentioned in [1]).\n\n[0]: https://www.postgresql.org/message-id/flat/01000190606e3d2a-116ead16-84d2-4449-8d18-5053da66b1f4-000000%40email.amazonses.com\n[1]: https://www.postgresql.org/message-id/ZruMe-ppopQX4uP8%40nathan\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 20 Aug 2024 12:48:29 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 20, 2024 at 12:48:29PM +0000, Bertrand Drouvot wrote:\n> As it looks like we have a consensus not to wait on [0] (as reducing the number\n> of interrupts makes sense on its own), then please find attached v4, a rebase\n> version (that also makes clear in the doc that that new field might show slightly\n> old values, as mentioned in [1]).\n\nPlease find attached v5, a mandatory rebase.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 2 Sep 2024 05:11:36 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 02, 2024 at 05:11:36AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Tue, Aug 20, 2024 at 12:48:29PM +0000, Bertrand Drouvot wrote:\n> > As it looks like we have a consensus not to wait on [0] (as reducing the number\n> > of interrupts makes sense on its own), then please find attached v4, a rebase\n> > version (that also makes clear in the doc that that new field might show slightly\n> > old values, as mentioned in [1]).\n> \n> Please find attached v5, a mandatory rebase.\n\nPlease find attached v6, a mandatory rebase due to catversion bump conflict.\nI'm removing the catversion bump from the patch as it generates too frequent\nconflicts (just mention it needs to be done in the commit message).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 5 Sep 2024 04:59:54 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "On Thu, Sep 05, 2024 at 04:59:54AM +0000, Bertrand Drouvot wrote:\n> Please find attached v6, a mandatory rebase due to catversion bump conflict.\n> I'm removing the catversion bump from the patch as it generates too frequent\n> conflicts (just mention it needs to be done in the commit message).\n\nv6 looks generally reasonable to me. I think the\nnap_time_since_last_report variable needs to be marked static, though.\n\nOne thing that occurs to me is that this information may not be\nparticularly useful when parallel workers are used. 
Without parallelism,\nit's easy enough to figure out the percentage of time that your VACUUM is\nspending asleep, but when there are parallel workers, it may be hard to\ndeduce much of anything from the value. I'm not sure that this is a\ndeal-breaker for the patch, though, if for no other reason than it'll most\nlikely be used for autovacuum, which doesn't use parallel vacuum yet.\n\nIf there are no other concerns, I'll plan on committing this one soon after\na bit of editorialization.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 18 Sep 2024 16:04:53 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 18, 2024 at 04:04:53PM -0500, Nathan Bossart wrote:\n> On Thu, Sep 05, 2024 at 04:59:54AM +0000, Bertrand Drouvot wrote:\n> > Please find attached v6, a mandatory rebase due to catversion bump conflict.\n> > I'm removing the catversion bump from the patch as it generates too frequent\n> > conflicts (just mention it needs to be done in the commit message).\n> \n> v6 looks generally reasonable to me.\n\nThanks for looking at it!\n\n> I think the\n> nap_time_since_last_report variable needs to be marked static, though.\n\nAgree.\n\n> One thing that occurs to me is that this information may not be\n> particularly useful when parallel workers are used. Without parallelism,\n> it's easy enough to figure out the percentage of time that your VACUUM is\n> spending asleep, but when there are parallel workers, it may be hard to\n> deduce much of anything from the value.\n\nI think that if the number of parallel workers being used are the same across\nruns then one can measure \"accurately\" the impact of some changes (set\nvacuum_cost_delay=... for example) on the delay. Without the patch one could just\nguess as many others factors could impact the vacuum duration (load on the system,\ni/o latency,...).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 19 Sep 2024 07:54:21 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Track the amount of time waiting due to cost_delay" } ]
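Tying the thread above together, here is a hedged sketch of the throttled delay reporting described for v3 and v6. It is not the patch itself: pgstat_progress_parallel_incr_param() (the function added in 46ebdfe164), the static nap_time_since_last_report counter and the one second threshold come from the discussion, while the progress column index, the nanosecond argument and the millisecond units stored in the view are assumptions made for the example.

#include "postgres.h"
#include "access/parallel.h"            /* IsParallelWorker() */
#include "utils/backend_progress.h"

/* assumed column index for the new time_delayed counter, not a committed value */
#define PROGRESS_VACUUM_TIME_DELAYED    9
/* report a worker's accumulated naps at most once per second (v3 behaviour) */
#define WORKER_NAP_REPORT_THRESHOLD_NS  INT64CONST(1000000000)

static int64 nap_time_since_last_report = 0;

static void
report_time_delayed(int64 slept_ns)
{
    if (!IsParallelWorker())
    {
        /* the leader owns the pg_stat_progress_vacuum row, update it directly */
        pgstat_progress_incr_param(PROGRESS_VACUUM_TIME_DELAYED,
                                   slept_ns / 1000000);     /* ns -> ms */
        return;
    }

    /*
     * Parallel workers funnel their delay into the leader's row, but at most
     * once per second, so the leader's own cost-delay sleeps are not
     * interrupted for every worker nap (the effect measured with v2 above).
     */
    nap_time_since_last_report += slept_ns;
    if (nap_time_since_last_report >= WORKER_NAP_REPORT_THRESHOLD_NS)
    {
        pgstat_progress_parallel_incr_param(PROGRESS_VACUUM_TIME_DELAYED,
                                            nap_time_since_last_report / 1000000);
        nap_time_since_last_report = 0;
    }
}

Because worker reports are batched, the value visible in pg_stat_progress_vacuum can lag by whatever a worker has accumulated since its last report, which is the "slightly old values" caveat added to the documentation in v4.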
[ { "msg_contents": "Hi Hackers\n\nI have identified a potential memory leak in the \n`addRangeTableEntryForJoin()` function. The second parameter of \n`addRangeTableEntryForJoin()`, `colnames`, is a `List*` that is \nconcatenated with another `List*`, `eref->colnames`, using \n`list_concat()`. We need to pass only the last `numaliases` elements of \nthe list, for which we use `list_copy_tail`. This function creates a \ncopy of the `colnames` list, resulting in `colnames` pointing to the \ncurrent list that will not be freed. Consequently, a new list is already \nconcatenated.\n\nTo address this issue, I have invoked `list_free(colnames)` afterwards. \nIf anyone is aware of where the list is being freed or has any \nsuggestions for improvement, I would greatly appreciate your input.\n\nBest Regards,\n\nIlia Evdokimov,\n\nTantorLabs LCC", "msg_date": "Mon, 10 Jun 2024 13:35:39 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "list_free in addRangeTableEntryForJoin" }, { "msg_contents": "Em seg., 10 de jun. de 2024 às 07:35, Ilia Evdokimov <\[email protected]> escreveu:\n\n> Hi Hackers\n>\n> I have identified a potential memory leak in the\n> `addRangeTableEntryForJoin()` function. The second parameter of\n> `addRangeTableEntryForJoin()`, `colnames`, is a `List*` that is\n> concatenated with another `List*`, `eref->colnames`, using\n> `list_concat()`. We need to pass only the last `numaliases` elements of\n> the list, for which we use `list_copy_tail`. This function creates a\n> copy of the `colnames` list, resulting in `colnames` pointing to the\n> current list that will not be freed. Consequently, a new list is already\n> concatenated.\n>\n> To address this issue, I have invoked `list_free(colnames)` afterwards.\n> If anyone is aware of where the list is being freed or has any\n> suggestions for improvement, I would greatly appreciate your input.\n>\nlist_copy_tail actually makes a new copy.\nBut callers of addRangeTableEntryForJoin,\nexpects to handle a list or NIL, if we free the memory,\nShouldn't I set the variable *colnames* to NIL, too?\n\nbest regards,\nRanier Vilela\n\nEm seg., 10 de jun. de 2024 às 07:35, Ilia Evdokimov <[email protected]> escreveu:Hi Hackers\n\nI have identified a potential memory leak in the \n`addRangeTableEntryForJoin()` function. The second parameter of \n`addRangeTableEntryForJoin()`, `colnames`, is a `List*` that is \nconcatenated with another `List*`, `eref->colnames`, using \n`list_concat()`. We need to pass only the last `numaliases` elements of \nthe list, for which we use `list_copy_tail`. This function creates a \ncopy of the `colnames` list, resulting in `colnames` pointing to the \ncurrent list that will not be freed. Consequently, a new list is already \nconcatenated.\n\nTo address this issue, I have invoked `list_free(colnames)` afterwards. 
\nIf anyone is aware of where the list is being freed or has any \nsuggestions for improvement, I would greatly appreciate your input.list_copy_tail actually makes a new copy.But callers of addRangeTableEntryForJoin,expects to handle a list or NIL, if we free the memory,Shouldn't I set the variable *colnames* to NIL, too?best regards,Ranier Vilela", "msg_date": "Mon, 10 Jun 2024 08:44:36 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: list_free in addRangeTableEntryForJoin" }, { "msg_contents": ">But callers of addRangeTableEntryForJoin(), expects to handle a list \nor NIL, if we free the memory\nI've thoroughly reviewed all callers of the \n`addRangeTableEntryForJoin()` function and confirmed that the list is \nnot used after this function is called. Since \n`addRangeTableEntryForJoin()` is the last function to use this list, it \nwould be more efficient to free the `List` at the point of its declaration.\n\nI'll attach new patch where I free the lists.\n\nRegards,\n\nIlia Evdokimov,\n\nTantor Labs LCC", "msg_date": "Mon, 10 Jun 2024 15:11:27 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: list_free in addRangeTableEntryForJoin" }, { "msg_contents": "Em seg., 10 de jun. de 2024 às 09:11, Ilia Evdokimov <\[email protected]> escreveu:\n\n> >But callers of addRangeTableEntryForJoin(), expects to handle a list\n> or NIL, if we free the memory\n> I've thoroughly reviewed all callers of the\n> `addRangeTableEntryForJoin()` function and confirmed that the list is\n> not used after this function is called. Since\n> `addRangeTableEntryForJoin()` is the last function to use this list, it\n> would be more efficient to free the `List` at the point of its declaration.\n>\n> I'll attach new patch where I free the lists.\n>\nSeems like a better style.\n\nNow you need to analyze whether the memory in question is not managed by a\nContext and released in a block,\nwhich would make this release useless.\n\nbest regards,\nRanier Vilela\n\nEm seg., 10 de jun. de 2024 às 09:11, Ilia Evdokimov <[email protected]> escreveu: >But callers of addRangeTableEntryForJoin(), expects to handle a list \nor NIL, if we free the memory\nI've thoroughly reviewed all callers of the \n`addRangeTableEntryForJoin()` function and confirmed that the list is \nnot used after this function is called. Since \n`addRangeTableEntryForJoin()` is the last function to use this list, it \nwould be more efficient to free the `List` at the point of its declaration.\n\nI'll attach new patch where I free the lists.Seems like a better style.Now you need to analyze whether the memory in question is not managed by a Context and released in a block, which would make this release useless.best regards,Ranier Vilela", "msg_date": "Mon, 10 Jun 2024 09:18:54 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: list_free in addRangeTableEntryForJoin" }, { "msg_contents": ">Now you need to analyze whether the memory in question is not managed \nby a Context\nI've already analyzed. Let's explain details:\n\n\n1. analyze.c\n1718: List* targetnames;\n1815: targetnames = NIL;\n1848: targetnames = lappend(targetnames, makeString(colName));\n1871: addRangeTableEntryForJoin(...);\n=> list_free(targetnames);\n\n2. 
parse_clause.c\n1163: List* res_colnames;\n1289: res_colnames = NIL;\n1324: foreach(col, res_colnames);\n1396: res_colnames = lappend(res_colnames, lfirst(ucol));\n1520, 1525: extractRemainingColumns(...);\n        |\n     290: *res_colnames = lappend(*res_colnames, lfirst(lc));\n1543: addRangeTableEntryForJoin(...);\n=> list_free(res_colnames);\n\n\nAs you can see, there are no other pointers working with this block of \nmemory, and all operations above are either read-only or append nodes to \nthe lists. If I am mistaken, please correct me.\nFurthermore, I will call `list_free` immediately after \n`addRangeTableEntryForJoin()`. The new patch is attached.\n\nThanks for reviewing. I'm looking forward to new suggestions.\n\nRegards,\n\nIlia Evdokimov,\n\nTantor Labs LCC", "msg_date": "Mon, 10 Jun 2024 16:45:39 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: list_free in addRangeTableEntryForJoin" }, { "msg_contents": "Em seg., 10 de jun. de 2024 às 10:45, Ilia Evdokimov <\[email protected]> escreveu:\n\n> >Now you need to analyze whether the memory in question is not managed\n> by a Context\n> I've already analyzed. Let's explain details:\n>\n>\n> 1. analyze.c\n> 1718: List* targetnames;\n> 1815: targetnames = NIL;\n> 1848: targetnames = lappend(targetnames, makeString(colName));\n> 1871: addRangeTableEntryForJoin(...);\n> => list_free(targetnames);\n>\n> 2. parse_clause.c\n> 1163: List* res_colnames;\n> 1289: res_colnames = NIL;\n> 1324: foreach(col, res_colnames);\n> 1396: res_colnames = lappend(res_colnames, lfirst(ucol));\n> 1520, 1525: extractRemainingColumns(...);\n> |\n> 290: *res_colnames = lappend(*res_colnames, lfirst(lc));\n> 1543: addRangeTableEntryForJoin(...);\n> => list_free(res_colnames);\n>\n>\n> As you can see, there are no other pointers working with this block of\n> memory,\n\nYou can see this line?\nsortnscolumns = (ParseNamespaceColumn *)\npalloc0\n\nAll allocations in the backend occur at Context memory managers.\nResource leak can occur mainly with TopTransactionContext.\n\n\n> and all operations above are either read-only or append nodes to\n> the lists. If I am mistaken, please correct me.\n> Furthermore, I will call `list_free` immediately after\n> `addRangeTableEntryForJoin()`. The new patch is attached.\n>\nThis style is not recommended.\nYou prevent the use of colnames after calling addRangeTableEntryForJoin.\n\nBetter free at the end of the function, like 0002.\n\nTip 0001, 0002, 0003 numerations are to different patchs.\nv1, v2, v3 are new versions of the same patch.\n\nbest regards,\nRanier Vilela\n\nEm seg., 10 de jun. de 2024 às 10:45, Ilia Evdokimov <[email protected]> escreveu: >Now you need to analyze whether the memory in question is not managed \nby a Context\nI've already analyzed. Let's explain details:\n\n\n1. analyze.c\n1718: List* targetnames;\n1815: targetnames = NIL;\n1848: targetnames = lappend(targetnames, makeString(colName));\n1871: addRangeTableEntryForJoin(...);\n=> list_free(targetnames);\n\n2. 
parse_clause.c\n1163: List* res_colnames;\n1289: res_colnames = NIL;\n1324: foreach(col, res_colnames);\n1396: res_colnames = lappend(res_colnames, lfirst(ucol));\n1520, 1525: extractRemainingColumns(...);\n        |\n     290: *res_colnames = lappend(*res_colnames, lfirst(lc));\n1543: addRangeTableEntryForJoin(...);\n=> list_free(res_colnames);\n\n\nAs you can see, there are no other pointers working with this block of \nmemory, You can see this line?sortnscolumns = (ParseNamespaceColumn *)\t\tpalloc0All allocations in the backend occur at Context memory managers.Resource leak can occur mainly with TopTransactionContext. and all operations above are either read-only or append nodes to \nthe lists. If I am mistaken, please correct me.\nFurthermore, I will call `list_free` immediately after \n`addRangeTableEntryForJoin()`. The new patch is attached.This style is not recommended.You prevent the use of colnames after calling addRangeTableEntryForJoin.Better free at the end of the function, like 0002.Tip 0001, 0002, 0003 numerations are to different patchs.v1, v2, v3 are new versions of the same patch.best regards,Ranier Vilela", "msg_date": "Mon, 10 Jun 2024 10:58:16 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: list_free in addRangeTableEntryForJoin" }, { "msg_contents": "Ilia Evdokimov <[email protected]> writes:\n> I have identified a potential memory leak in the \n> `addRangeTableEntryForJoin()` function. The second parameter of \n> `addRangeTableEntryForJoin()`, `colnames`, is a `List*` that is \n> concatenated with another `List*`, `eref->colnames`, using \n> `list_concat()`. We need to pass only the last `numaliases` elements of \n> the list, for which we use `list_copy_tail`. This function creates a \n> copy of the `colnames` list, resulting in `colnames` pointing to the \n> current list that will not be freed. Consequently, a new list is already \n> concatenated.\n\n> To address this issue, I have invoked `list_free(colnames)` afterwards. \n\nSadly, I think this is basically a waste of development effort.\nThe parser, like the planner and rewriter and many other Postgres\nsubsystems, leaks tons of small allocations like this. That's\n*by design*, and *it's okay*, because we run these steps in short-\nlived memory contexts that will be reclaimed in toto as soon as\nthe useful output data structures are no longer needed. It's not\nworth the sort of intellectual effort you've put in in this thread\nto prove whether individual small structures are no longer needed.\nPlus, in many cases that isn't obvious, and/or it'd be notationally\nmessy to reclaim things explicitly at a suitable point.\n\nIf we were talking about a potentially-very-large data structure,\nor one that we might create very many instances of during one\nparsing pass, then it might be worth the trouble to free explicitly;\nbut I don't see that concern applying here.\n\nYou might find src/backend/utils/mmgr/README to be worth reading.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Jun 2024 11:56:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: list_free in addRangeTableEntryForJoin" } ]
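The exchange above turns on how backend memory contexts make most small parse-time leaks harmless. As a rough illustration of that pattern (the function name, context name, and column values below are invented for the example; this is a sketch, not code from the tree), allocations made while a short-lived context is current are reclaimed by a single MemoryContextDelete(), which is why an explicit list_free() buys little in parser code:

#include "postgres.h"

#include "nodes/pg_list.h"
#include "utils/memutils.h"

void
example_parse_scratch(void)
{
    /* throwaway context for parse-time scratch data (illustrative only) */
    MemoryContext parse_cxt = AllocSetContextCreate(CurrentMemoryContext,
                                                    "example parse context",
                                                    ALLOCSET_DEFAULT_SIZES);
    MemoryContext oldcxt = MemoryContextSwitchTo(parse_cxt);
    List       *colnames = NIL;

    /* every lappend()/pstrdup() below allocates inside parse_cxt */
    colnames = lappend(colnames, pstrdup("a"));
    colnames = lappend(colnames, pstrdup("b"));

    /* ... build range table entries and other transient nodes ... */

    MemoryContextSwitchTo(oldcxt);

    /*
     * One call reclaims colnames and everything else allocated above,
     * with no need to free each list individually.
     */
    MemoryContextDelete(parse_cxt);
}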
[ { "msg_contents": "\nTom and Nathan opined recently that providing for non-text mode for \npg_dumpall would be a Good Thing (TM). Not having it has been a \nlong-standing complaint, so I've decided to give it a go.\n\nI think we would need to restrict it to directory mode, at least to \nbegin with. I would have a toc.dat with a different magic block (say \n\"PGGLO\" instead of \"PGDMP\") containing the global entries (roles, \ntablespaces, databases). Then for each database there would be a \nsubdirectory (named for its toc entry) with a standard directory mode \ndump for that database. These could be generated in parallel (possibly \nby pg_dumpall calling pg_dump for each database). pg_restore on \ndetecting a global type toc.data would restore the globals and then each \nof the databases (again possibly in parallel).\n\nI'm sure there are many wrinkles I haven't thought of, but I don't see \nany insurmountable obstacles, just a significant amount of code.\n\nBarring the unforeseen my main is to have a preliminary patch by the \nSeptember CF.\n\nFollowing that I would turn my attention to using it in pg_upgrade.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 10 Jun 2024 08:58:49 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Non-text mode for pg_dumpall" }, { "msg_contents": "On Mon, Jun 10, 2024 at 08:58:49AM -0400, Andrew Dunstan wrote:\n> Tom and Nathan opined recently that providing for non-text mode for\n> pg_dumpall would be a Good Thing (TM). Not having it has been a\n> long-standing complaint, so I've decided to give it a go.\n\nThank you!\n\n> I think we would need to restrict it to directory mode, at least to begin\n> with. I would have a toc.dat with a different magic block (say \"PGGLO\"\n> instead of \"PGDMP\") containing the global entries (roles, tablespaces,\n> databases). Then for each database there would be a subdirectory (named for\n> its toc entry) with a standard directory mode dump for that database. These\n> could be generated in parallel (possibly by pg_dumpall calling pg_dump for\n> each database). pg_restore on detecting a global type toc.data would restore\n> the globals and then each of the databases (again possibly in parallel).\n\nI'm curious why we couldn't also support the \"custom\" format.\n\n> Following that I would turn my attention to using it in pg_upgrade.\n\n+1\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 10 Jun 2024 09:14:08 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-text mode for pg_dumpall" }, { "msg_contents": "\nOn 2024-06-10 Mo 10:14, Nathan Bossart wrote:\n> On Mon, Jun 10, 2024 at 08:58:49AM -0400, Andrew Dunstan wrote:\n>> Tom and Nathan opined recently that providing for non-text mode for\n>> pg_dumpall would be a Good Thing (TM). Not having it has been a\n>> long-standing complaint, so I've decided to give it a go.\n> Thank you!\n>\n>> I think we would need to restrict it to directory mode, at least to begin\n>> with. I would have a toc.dat with a different magic block (say \"PGGLO\"\n>> instead of \"PGDMP\") containing the global entries (roles, tablespaces,\n>> databases). Then for each database there would be a subdirectory (named for\n>> its toc entry) with a standard directory mode dump for that database. These\n>> could be generated in parallel (possibly by pg_dumpall calling pg_dump for\n>> each database). 
pg_restore on detecting a global type toc.data would restore\n>> the globals and then each of the databases (again possibly in parallel).\n> I'm curious why we couldn't also support the \"custom\" format.\n\n\nWe could, but the housekeeping would be a bit harder. We'd need to keep \npointers to the offsets of the per-database TOCs (I don't want to have a \nsingle per-cluster TOC). And we can't produce it in parallel, so I'd \nrather start with something we can produce in parallel.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 10 Jun 2024 10:51:42 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Non-text mode for pg_dumpall" }, { "msg_contents": "On Mon, Jun 10, 2024 at 4:14 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Mon, Jun 10, 2024 at 08:58:49AM -0400, Andrew Dunstan wrote:\n> > Tom and Nathan opined recently that providing for non-text mode for\n> > pg_dumpall would be a Good Thing (TM). Not having it has been a\n> > long-standing complaint, so I've decided to give it a go.\n>\n> Thank you!\n>\n\nIndeed, this has been quite annoying!\n\n\n> I think we would need to restrict it to directory mode, at least to begin\n> > with. I would have a toc.dat with a different magic block (say \"PGGLO\"\n> > instead of \"PGDMP\") containing the global entries (roles, tablespaces,\n> > databases). Then for each database there would be a subdirectory (named\n> for\n> > its toc entry) with a standard directory mode dump for that database.\n> These\n> > could be generated in parallel (possibly by pg_dumpall calling pg_dump\n> for\n> > each database). pg_restore on detecting a global type toc.data would\n> restore\n> > the globals and then each of the databases (again possibly in parallel).\n>\n> I'm curious why we couldn't also support the \"custom\" format.\n>\n\nOr maybe even a combo - a directory of custom format files? Plus that one\nspecial file being globals? I'd say that's what most use cases I've seen\nwould prefer.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Jun 10, 2024 at 4:14 PM Nathan Bossart <[email protected]> wrote:On Mon, Jun 10, 2024 at 08:58:49AM -0400, Andrew Dunstan wrote:\n> Tom and Nathan opined recently that providing for non-text mode for\n> pg_dumpall would be a Good Thing (TM). Not having it has been a\n> long-standing complaint, so I've decided to give it a go.\n\nThank you!Indeed, this has been quite annoying!\n> I think we would need to restrict it to directory mode, at least to begin\n> with. I would have a toc.dat with a different magic block (say \"PGGLO\"\n> instead of \"PGDMP\") containing the global entries (roles, tablespaces,\n> databases). Then for each database there would be a subdirectory (named for\n> its toc entry) with a standard directory mode dump for that database. These\n> could be generated in parallel (possibly by pg_dumpall calling pg_dump for\n> each database). pg_restore on detecting a global type toc.data would restore\n> the globals and then each of the databases (again possibly in parallel).\n\nI'm curious why we couldn't also support the \"custom\" format.Or maybe even a combo - a directory of custom format files? Plus that one special file being globals? 
I'd say that's what most use cases I've seen would prefer.--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Mon, 10 Jun 2024 16:52:06 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-text mode for pg_dumpall" }, { "msg_contents": "On Mon, Jun 10, 2024 at 10:51:42AM -0400, Andrew Dunstan wrote:\n> On 2024-06-10 Mo 10:14, Nathan Bossart wrote:\n>> I'm curious why we couldn't also support the \"custom\" format.\n> \n> We could, but the housekeeping would be a bit harder. We'd need to keep\n> pointers to the offsets of the per-database TOCs (I don't want to have a\n> single per-cluster TOC). And we can't produce it in parallel, so I'd rather\n> start with something we can produce in parallel.\n\nGot it.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 10 Jun 2024 09:52:33 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-text mode for pg_dumpall" }, { "msg_contents": "On Mon, Jun 10, 2024 at 04:52:06PM +0200, Magnus Hagander wrote:\n> On Mon, Jun 10, 2024 at 4:14 PM Nathan Bossart <[email protected]>\n> wrote:\n>> I'm curious why we couldn't also support the \"custom\" format.\n> \n> Or maybe even a combo - a directory of custom format files? Plus that one\n> special file being globals? I'd say that's what most use cases I've seen\n> would prefer.\n\nIs there a particular advantage to that approach as opposed to just using\n\"directory\" mode for everything? I know pg_upgrade uses \"custom\" mode for\neach of the databases, so a combo approach would be a closer match to the\nexisting behavior, but that doesn't strike me as an especially strong\nreason to keep doing it that way.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 10 Jun 2024 10:03:05 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-text mode for pg_dumpall" }, { "msg_contents": "On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Mon, Jun 10, 2024 at 04:52:06PM +0200, Magnus Hagander wrote:\n> > On Mon, Jun 10, 2024 at 4:14 PM Nathan Bossart <[email protected]\n> >\n> > wrote:\n> >> I'm curious why we couldn't also support the \"custom\" format.\n> >\n> > Or maybe even a combo - a directory of custom format files? Plus that one\n> > special file being globals? I'd say that's what most use cases I've seen\n> > would prefer.\n>\n> Is there a particular advantage to that approach as opposed to just using\n> \"directory\" mode for everything? I know pg_upgrade uses \"custom\" mode for\n> each of the databases, so a combo approach would be a closer match to the\n> existing behavior, but that doesn't strike me as an especially strong\n> reason to keep doing it that way.\n>\n\nA gazillion files to deal with? 
Much easier to work with individual custom\nfiles if you're moving databases around and things like that.\nMuch easier to monitor eg sizes/dates if you're using it for backups.\n\nIt's not things that are make-it-or-break-it or anything, but there are\nsome smaller things that definitely can be useful.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <[email protected]> wrote:On Mon, Jun 10, 2024 at 04:52:06PM +0200, Magnus Hagander wrote:\n> On Mon, Jun 10, 2024 at 4:14 PM Nathan Bossart <[email protected]>\n> wrote:\n>> I'm curious why we couldn't also support the \"custom\" format.\n> \n> Or maybe even a combo - a directory of custom format files? Plus that one\n> special file being globals? I'd say that's what most use cases I've seen\n> would prefer.\n\nIs there a particular advantage to that approach as opposed to just using\n\"directory\" mode for everything?  I know pg_upgrade uses \"custom\" mode for\neach of the databases, so a combo approach would be a closer match to the\nexisting behavior, but that doesn't strike me as an especially strong\nreason to keep doing it that way.A gazillion files to deal with? Much easier to work with individual custom files if you're moving databases around and things like that. Much easier to monitor eg sizes/dates if you're using it for backups.It's not things that are make-it-or-break-it or anything, but there are some smaller things that definitely can be useful. --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Mon, 10 Jun 2024 17:45:19 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-text mode for pg_dumpall" }, { "msg_contents": "On Mon, Jun 10, 2024 at 05:45:19PM +0200, Magnus Hagander wrote:\n> On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <[email protected]>\n> wrote:\n>> Is there a particular advantage to that approach as opposed to just using\n>> \"directory\" mode for everything? I know pg_upgrade uses \"custom\" mode for\n>> each of the databases, so a combo approach would be a closer match to the\n>> existing behavior, but that doesn't strike me as an especially strong\n>> reason to keep doing it that way.\n> \n> A gazillion files to deal with? Much easier to work with individual custom\n> files if you're moving databases around and things like that.\n> Much easier to monitor eg sizes/dates if you're using it for backups.\n> \n> It's not things that are make-it-or-break-it or anything, but there are\n> some smaller things that definitely can be useful.\n\nMakes sense, thanks for elaborating.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 10 Jun 2024 11:20:18 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-text mode for pg_dumpall" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <[email protected]>\n> wrote:\n>> Is there a particular advantage to that approach as opposed to just using\n>> \"directory\" mode for everything?\n\n> A gazillion files to deal with? Much easier to work with individual custom\n> files if you're moving databases around and things like that.\n> Much easier to monitor eg sizes/dates if you're using it for backups.\n\nYou can always tar up the directory tree after-the-fact if you want\none file. 
Sure, that step's not parallelized, but I think we'd need\nsome non-parallelized copying to create such a file anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Jun 2024 12:21:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-text mode for pg_dumpall" }, { "msg_contents": "\nOn 2024-06-10 Mo 12:21, Tom Lane wrote:\n> Magnus Hagander <[email protected]> writes:\n>> On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <[email protected]>\n>> wrote:\n>>> Is there a particular advantage to that approach as opposed to just using\n>>> \"directory\" mode for everything?\n>> A gazillion files to deal with? Much easier to work with individual custom\n>> files if you're moving databases around and things like that.\n>> Much easier to monitor eg sizes/dates if you're using it for backups.\n> You can always tar up the directory tree after-the-fact if you want\n> one file. Sure, that step's not parallelized, but I think we'd need\n> some non-parallelized copying to create such a file anyway.\n>\n> \t\t\t\n\n\nYeah.\n\nI think I can probably allow for Magnus' suggestion fairly easily, but \nif I have to choose I'm going to go for the format that can be produced \nwith the maximum parallelism.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 10 Jun 2024 13:27:27 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Non-text mode for pg_dumpall" }, { "msg_contents": "On Mon, Jun 10, 2024 at 6:21 PM Tom Lane <[email protected]> wrote:\n\n> Magnus Hagander <[email protected]> writes:\n> > On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <[email protected]\n> >\n> > wrote:\n> >> Is there a particular advantage to that approach as opposed to just\n> using\n> >> \"directory\" mode for everything?\n>\n> > A gazillion files to deal with? Much easier to work with individual\n> custom\n> > files if you're moving databases around and things like that.\n> > Much easier to monitor eg sizes/dates if you're using it for backups.\n>\n> You can always tar up the directory tree after-the-fact if you want\n> one file. Sure, that step's not parallelized, but I think we'd need\n> some non-parallelized copying to create such a file anyway.\n>\n\nThat would require double the disk space.\n\nBut you can also just run pg_dump manually on each database and a\npg_dumpall -g like people are doing today -- I thought this whole thing was\nabout making it more convenient :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Jun 10, 2024 at 6:21 PM Tom Lane <[email protected]> wrote:Magnus Hagander <[email protected]> writes:\n> On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <[email protected]>\n> wrote:\n>> Is there a particular advantage to that approach as opposed to just using\n>> \"directory\" mode for everything?\n\n> A gazillion files to deal with? Much easier to work with individual custom\n> files if you're moving databases around and things like that.\n> Much easier to monitor eg sizes/dates if you're using it for backups.\n\nYou can always tar up the directory tree after-the-fact if you want\none file.  
Sure, that step's not parallelized, but I think we'd need\nsome non-parallelized copying to create such a file anyway.That would require double the disk space.But you can also just run pg_dump manually on each database and a pg_dumpall -g like people are doing today -- I thought this whole thing was about making it more convenient :) --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Mon, 10 Jun 2024 21:36:37 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-text mode for pg_dumpall" } ]
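For the proposed directory layout, the restore-side dispatch amounts to peeking at the magic bytes of toc.dat. The sketch below is illustrative only: "PGDMP" is the existing archive magic mentioned upthread, "PGGLO" is just the value suggested there, and classify_toc() is an invented helper rather than anything in pg_restore:

#include <stdio.h>
#include <string.h>

/*
 * Returns 1 for a (hypothetical) cluster-globals archive, 0 for an
 * ordinary single-database archive, -1 if unreadable or unrecognized.
 */
int
classify_toc(const char *path)
{
    char    magic[5];
    FILE   *fp = fopen(path, "rb");

    if (fp == NULL)
        return -1;
    if (fread(magic, 1, sizeof(magic), fp) != sizeof(magic))
    {
        fclose(fp);
        return -1;
    }
    fclose(fp);

    if (memcmp(magic, "PGGLO", 5) == 0)
        return 1;   /* restore globals, then each per-database subdirectory */
    if (memcmp(magic, "PGDMP", 5) == 0)
        return 0;   /* archive for a single database */
    return -1;
}

A driver seeing the globals magic would restore roles, tablespaces, and databases first and then walk the per-database subdirectories, possibly in parallel, as described in the thread.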
[ { "msg_contents": "Hi,\n\nTo investigate a report of both postgres and pgbouncer having issues when a\nlot of new connections aree established, I used pgbench -C. Oddly, on an\nearly attempt, the bottleneck wasn't postgres+pgbouncer, it was pgbench. But\nonly when using TCP, not with unix sockets.\n\nc=40;pgbench -C -n -c$c -j$c -T5 -f <(echo 'select 1') 'port=6432 host=127.0.0.1 user=test dbname=postgres password=fake'\n\nhost=127.0.0.1: 16465\nhost=127.0.0.1,gssencmode=disable 20860\nhost=/tmp: 49286\n\nNote that the server does *not* support gss, yet gss has a substantial\nperformance impact.\n\nObviously the connection rates here absurdly high and outside of badly written\napplications likely never practically relevant. However, the number of cores\nin systems are going up, and this quite possibly will become relevant in more\nrealistic scenarios (lock contention kicks in earlier the more cores you\nhave).\n\nAnd it doesn't seem great that something as rarely used as gss introduces\noverhead to very common paths.\n\nHere's a bottom-up profile:\n\n- 32.10% pgbench [kernel.kallsyms] [k] queued_spin_lock_slowpath\n - 32.09% queued_spin_lock_slowpath\n - 16.15% futex_wake\n do_futex\n __x64_sys_futex\n do_syscall_64\n - entry_SYSCALL_64_after_hwframe\n - 16.15% __GI___lll_lock_wake\n - __GI___pthread_mutex_unlock_usercnt\n - 5.12% gssint_select_mech_type\n - 4.36% gss_inquire_attrs_for_mech\n - 2.85% gss_indicate_mechs\n - gss_indicate_mechs_by_attrs\n - 1.58% gss_acquire_cred_from\n gss_acquire_cred\n pg_GSS_have_cred_cache\n select_next_encryption_method (inlined)\n init_allowed_encryption_methods (inlined)\n PQconnectPoll\n pqConnectDBStart (inlined)\n PQconnectStartParams\n PQconnectdbParams\n doConnect\n\n\nAnd a bottom-up profile:\n\n- 32.10% pgbench [kernel.kallsyms] [k] queued_spin_lock_slowpath\n - 32.09% queued_spin_lock_slowpath\n - 16.15% futex_wake\n do_futex\n __x64_sys_futex\n do_syscall_64\n - entry_SYSCALL_64_after_hwframe\n - 16.15% __GI___lll_lock_wake\n - __GI___pthread_mutex_unlock_usercnt\n - 5.12% gssint_select_mech_type\n - 4.36% gss_inquire_attrs_for_mech\n - 2.85% gss_indicate_mechs\n - gss_indicate_mechs_by_attrs\n - 1.58% gss_acquire_cred_from\n gss_acquire_cred\n pg_GSS_have_cred_cache\n select_next_encryption_method (inlined)\n init_allowed_encryption_methods (inlined)\n PQconnectPoll\n pqConnectDBStart (inlined)\n PQconnectStartParams\n PQconnectdbParams\n doConnect\n\n\n\nClearly the contention originates outside of our code, but is triggered by\ndoing pg_GSS_have_cred_cache() every time a connection is established.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Jun 2024 11:12:12 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "libpq contention due to gss even when not using gss" }, { "msg_contents": "> On Mon, Jun 10, 2024 at 11:12:12AM GMT, Andres Freund wrote:\n> Hi,\n>\n> To investigate a report of both postgres and pgbouncer having issues when a\n> lot of new connections aree established, I used pgbench -C. Oddly, on an\n> early attempt, the bottleneck wasn't postgres+pgbouncer, it was pgbench. 
But\n> only when using TCP, not with unix sockets.\n>\n> c=40;pgbench -C -n -c$c -j$c -T5 -f <(echo 'select 1') 'port=6432 host=127.0.0.1 user=test dbname=postgres password=fake'\n>\n> host=127.0.0.1: 16465\n> host=127.0.0.1,gssencmode=disable 20860\n> host=/tmp: 49286\n>\n> Note that the server does *not* support gss, yet gss has a substantial\n> performance impact.\n>\n> Obviously the connection rates here absurdly high and outside of badly written\n> applications likely never practically relevant. However, the number of cores\n> in systems are going up, and this quite possibly will become relevant in more\n> realistic scenarios (lock contention kicks in earlier the more cores you\n> have).\n\nBy not supporting gss I assume you mean having built with --with-gssapi,\nbut only host (not hostgssenc) records in pg_hba, right?\n\n\n", "msg_date": "Thu, 13 Jun 2024 17:33:57 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq contention due to gss even when not using gss" }, { "msg_contents": "Hi,\n\nOn 2024-06-13 17:33:57 +0200, Dmitry Dolgov wrote:\n> > On Mon, Jun 10, 2024 at 11:12:12AM GMT, Andres Freund wrote:\n> > Hi,\n> >\n> > To investigate a report of both postgres and pgbouncer having issues when a\n> > lot of new connections aree established, I used pgbench -C. Oddly, on an\n> > early attempt, the bottleneck wasn't postgres+pgbouncer, it was pgbench. But\n> > only when using TCP, not with unix sockets.\n> >\n> > c=40;pgbench -C -n -c$c -j$c -T5 -f <(echo 'select 1') 'port=6432 host=127.0.0.1 user=test dbname=postgres password=fake'\n> >\n> > host=127.0.0.1: 16465\n> > host=127.0.0.1,gssencmode=disable 20860\n> > host=/tmp: 49286\n> >\n> > Note that the server does *not* support gss, yet gss has a substantial\n> > performance impact.\n> >\n> > Obviously the connection rates here absurdly high and outside of badly written\n> > applications likely never practically relevant. However, the number of cores\n> > in systems are going up, and this quite possibly will become relevant in more\n> > realistic scenarios (lock contention kicks in earlier the more cores you\n> > have).\n> \n> By not supporting gss I assume you mean having built with --with-gssapi,\n> but only host (not hostgssenc) records in pg_hba, right?\n\nYes, the latter. Or not having kerberos set up on the client side.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 13 Jun 2024 10:30:24 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq contention due to gss even when not using gss" }, { "msg_contents": "> On Thu, Jun 13, 2024 at 10:30:24AM GMT, Andres Freund wrote:\n> > > To investigate a report of both postgres and pgbouncer having issues when a\n> > > lot of new connections aree established, I used pgbench -C. Oddly, on an\n> > > early attempt, the bottleneck wasn't postgres+pgbouncer, it was pgbench. But\n> > > only when using TCP, not with unix sockets.\n> > >\n> > > c=40;pgbench -C -n -c$c -j$c -T5 -f <(echo 'select 1') 'port=6432 host=127.0.0.1 user=test dbname=postgres password=fake'\n> > >\n> > > host=127.0.0.1: 16465\n> > > host=127.0.0.1,gssencmode=disable 20860\n> > > host=/tmp: 49286\n> > >\n> > > Note that the server does *not* support gss, yet gss has a substantial\n> > > performance impact.\n> > >\n> > > Obviously the connection rates here absurdly high and outside of badly written\n> > > applications likely never practically relevant. 
However, the number of cores\n> > > in systems are going up, and this quite possibly will become relevant in more\n> > > realistic scenarios (lock contention kicks in earlier the more cores you\n> > > have).\n> >\n> > By not supporting gss I assume you mean having built with --with-gssapi,\n> > but only host (not hostgssenc) records in pg_hba, right?\n>\n> Yes, the latter. Or not having kerberos set up on the client side.\n\nI've been experimenting with both:\n\n* The server is built without gssapi, but the client does support it.\n This produces exactly the contention you're talking about.\n\n* The server is built with gssapi, but do not use it in pg_hba, the\n client does support gssapi. In this case the difference between\n gssencmode=disable/prefer is even more dramatic in my test case\n (milliseconds vs seconds) due to the environment with configured\n kerberos (for other purposes, thus gss_init_sec_context spends huge\n amount of time to still return nothing).\n\nAt the same time after quick look I don't see an easy way to avoid that.\nCurrent implementation tries to initialize gss before getting any\nconfirmation from the server whether it's supported. Doing this other\nway around would probably just shift overhead to the server side.\n\n\n", "msg_date": "Fri, 14 Jun 2024 10:46:04 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq contention due to gss even when not using gss" }, { "msg_contents": "> On 14 Jun 2024, at 10:46, Dmitry Dolgov <[email protected]> wrote:\n> \n>> On Thu, Jun 13, 2024 at 10:30:24AM GMT, Andres Freund wrote:\n>>>> To investigate a report of both postgres and pgbouncer having issues when a\n>>>> lot of new connections aree established, I used pgbench -C. Oddly, on an\n>>>> early attempt, the bottleneck wasn't postgres+pgbouncer, it was pgbench. But\n>>>> only when using TCP, not with unix sockets.\n>>>> \n>>>> c=40;pgbench -C -n -c$c -j$c -T5 -f <(echo 'select 1') 'port=6432 host=127.0.0.1 user=test dbname=postgres password=fake'\n>>>> \n>>>> host=127.0.0.1: 16465\n>>>> host=127.0.0.1,gssencmode=disable 20860\n>>>> host=/tmp: 49286\n>>>> \n>>>> Note that the server does *not* support gss, yet gss has a substantial\n>>>> performance impact.\n>>>> \n>>>> Obviously the connection rates here absurdly high and outside of badly written\n>>>> applications likely never practically relevant. However, the number of cores\n>>>> in systems are going up, and this quite possibly will become relevant in more\n>>>> realistic scenarios (lock contention kicks in earlier the more cores you\n>>>> have).\n>>> \n>>> By not supporting gss I assume you mean having built with --with-gssapi,\n>>> but only host (not hostgssenc) records in pg_hba, right?\n>> \n>> Yes, the latter. Or not having kerberos set up on the client side.\n> \n> I've been experimenting with both:\n> \n> * The server is built without gssapi, but the client does support it.\n> This produces exactly the contention you're talking about.\n> \n> * The server is built with gssapi, but do not use it in pg_hba, the\n> client does support gssapi. 
In this case the difference between\n> gssencmode=disable/prefer is even more dramatic in my test case\n> (milliseconds vs seconds) due to the environment with configured\n> kerberos (for other purposes, thus gss_init_sec_context spends huge\n> amount of time to still return nothing).\n> \n> At the same time after quick look I don't see an easy way to avoid that.\n> Current implementation tries to initialize gss before getting any\n> confirmation from the server whether it's supported. Doing this other\n> way around would probably just shift overhead to the server side.\n\nThe main problem seems to be that we check whether or not there is a credential\ncache when we try to select encryption but not yet authentication, as a way to\nfigure out if gssenc it as all worth trying? I experimented with deferring it\nwith potentially cheaper heuristics in encryption selection, but it seems hard\nto get around since other methods were even more expensive.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 14 Jun 2024 12:12:55 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq contention due to gss even when not using gss" }, { "msg_contents": "> On Fri, Jun 14, 2024 at 12:12:55PM GMT, Daniel Gustafsson wrote:\n> > I've been experimenting with both:\n> >\n> > * The server is built without gssapi, but the client does support it.\n> > This produces exactly the contention you're talking about.\n> >\n> > * The server is built with gssapi, but do not use it in pg_hba, the\n> > client does support gssapi. In this case the difference between\n> > gssencmode=disable/prefer is even more dramatic in my test case\n> > (milliseconds vs seconds) due to the environment with configured\n> > kerberos (for other purposes, thus gss_init_sec_context spends huge\n> > amount of time to still return nothing).\n> >\n> > At the same time after quick look I don't see an easy way to avoid that.\n> > Current implementation tries to initialize gss before getting any\n> > confirmation from the server whether it's supported. Doing this other\n> > way around would probably just shift overhead to the server side.\n>\n> The main problem seems to be that we check whether or not there is a credential\n> cache when we try to select encryption but not yet authentication, as a way to\n> figure out if gssenc it as all worth trying?\n\nYep, this is my understanding as well. Which other methods did you try\nfor checking that?\n\n\n", "msg_date": "Fri, 14 Jun 2024 17:52:27 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq contention due to gss even when not using gss" }, { "msg_contents": "Hi,\n\nOn 2024-06-14 10:46:04 +0200, Dmitry Dolgov wrote:\n> At the same time after quick look I don't see an easy way to avoid that.\n> Current implementation tries to initialize gss before getting any\n> confirmation from the server whether it's supported. Doing this other\n> way around would probably just shift overhead to the server side.\n\nInitializing the gss cache at all isn't so much the problem. It's that we do\nit for every connection. And that doing so requires locking inside gss. 
So\nmaybe we could just globally cache that gss isn't available, instead of\nrediscovering it over and over for every new connection.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 14 Jun 2024 09:02:01 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq contention due to gss even when not using gss" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> Initializing the gss cache at all isn't so much the problem. It's that we do\n> it for every connection. And that doing so requires locking inside gss. So\n> maybe we could just globally cache that gss isn't available, instead of\n> rediscovering it over and over for every new connection.\n\nI had the impression that krb5 already had such a cache internally.\nMaybe they don't cache the \"failed\" state though. I doubt we'd\nwant to either in long-lived processes --- what if the user\ninstalls the credential while we're running?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Jun 2024 12:27:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq contention due to gss even when not using gss" }, { "msg_contents": "Hi,\n\nOn 2024-06-14 12:27:12 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Initializing the gss cache at all isn't so much the problem. It's that we do\n> > it for every connection. And that doing so requires locking inside gss. So\n> > maybe we could just globally cache that gss isn't available, instead of\n> > rediscovering it over and over for every new connection.\n> \n> I had the impression that krb5 already had such a cache internally.\n\nWell, if so, it clearly doesn't seem to work very well, given that it causes\ncontention at ~15k lookups/sec. That's obviously a trivial number for anything\ncached, even with the worst possible locking regimen.\n\n\n> Maybe they don't cache the \"failed\" state though. I doubt we'd\n> want to either in long-lived processes --- what if the user\n> installs the credential while we're running?\n\nIf we can come up with something better - cool. But it doesn't seem great that\ngss introduces contention for the vast majority of folks that use libpq in\nenvironments that never use gss.\n\nI don't think we should cache the set of credentials when gss is actually\navailable on a process-wide basis, just the fact that gss isn't available at\nall. I think it's very unlikely for that fact to change while an application\nis running. And if it happens, requiring a restart in those cases seems an\nacceptable price to pay for what is effectively a niche feature.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 14 Jun 2024 09:45:04 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq contention due to gss even when not using gss" } ]
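A minimal sketch of the process-wide negative cache suggested above (probe_gss_cred_cache() and have_gss_credentials() are invented names standing in for libpq's real, costly credential probe; this is not libpq code). Only the "not available" outcome is remembered, so actual credentials are never cached; the cost is that a credential installed later is not seen until the client restarts, which is the trade-off raised in the thread, and a real patch would also need to make the flag thread-safe:

#include <stdbool.h>

/* Stand-in for the expensive probe; always "no credentials" in this sketch. */
static bool
probe_gss_cred_cache(void)
{
    return false;
}

static bool gss_known_unavailable = false;

static bool
have_gss_credentials(void)
{
    if (gss_known_unavailable)
        return false;                   /* skip the probe entirely */

    if (!probe_gss_cred_cache())
    {
        gss_known_unavailable = true;   /* cache only the negative result */
        return false;
    }

    return true;                        /* positive results are re-probed */
}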
[ { "msg_contents": "Hi all,\n\nFor the v18 cycle, I would like to try to get pytest [1] in as a\nsupported test driver, in addition to the current offerings.\n\n(I'm tempted to end the email there.)\n\nWe had an unconference session at PGConf.dev [2] around this topic.\nThere seemed to be a number of nodding heads and some growing\nmomentum. So here's a thread to try to build wider consensus. If you\nhave a competing or complementary test proposal in mind, heads up!\n\n== Problem Statement(s) ==\n\n1. We'd like to rerun a failing test by itself.\n\n2. It'd be helpful to see _what_ failed without poring over logs.\n\nThese two got the most nodding heads of the points I presented. (#1\nreceived tongue-in-cheek applause.) I think most modern test\nframeworks are going to give you these things, but unfortunately we\ndon't have them.\n\nAdditionally,\n\n3. Many would like to use modern developer tooling during testing\n(language servers! autocomplete! debuggers! type checking!) and we\ncan't right now.\n\n4. It'd be great to split apart client-side tests from server-side\ntests. Driving Postgres via psql all the time is fine for acceptance\ntesting, but it becomes a big problem when you need to test how\nclients talk to servers with incompatible feature sets, or how a peer\nbehaves when talking to something buggy.\n\n5. Personally, I want to implement security features test-first (for\nhigh code coverage and other benefits), and our Perl-based tests are\nusually too cumbersome for that.\n\n== Why pytest? ==\n\n From the small and biased sample at the unconference session, it looks\nlike a number of people have independently settled on pytest in their\nown projects. In my opinion, pytest occupies a nice space where it\nsolves some of the above problems for us, and it gives us plenty of\ntools to solve the other problems without too much pain.\n\nProblem 1 (rerun failing tests): One architectural roadblock to this\nin our Test::More suite is that tests depend on setup that's done by\nprevious tests. pytest allows you to declare each test's setup\nrequirements via pytest fixtures, letting the test runner build up the\nworld exactly as it needs to be for a single isolated test. These\nfixtures may be given a \"scope\" so that multiple tests may share the\nsame setup for performance or other reasons.\n\nProblem 2 (seeing what failed): pytest does this via assertion\nintrospection and very detailed failure reporting. If you haven't seen\nthis before, take a look at the pytest homepage [1]; there's an\nexample of a full log.\n\nProblem 3 (modern tooling): We get this from Python's very active\ndeveloper base.\n\nProblems 4 (splitting client and server tests) and 5 (making it easier\nto write tests first) aren't really Python- or pytest-specific, but I\nhave done both quite happily in my OAuth work [3], and I've since\nadapted that suite multiple times to develop and test other proposals\non this list, like LDAP/SCRAM, client encryption, direct SSL, and\ncompression.\n\nPython's standard library has lots of power by itself, with very good\ndocumentation. And virtualenvs and better package tooling have made it\nmuch easier, IMO, to avoid the XKCD dependency tangle [4] of the\n2010s. When it comes to third-party packages, which I think we're\nprobably going to want in moderation, we would still need to discuss\nsupply chain safety. Python is not as mature here as, say, Go.\n\n== A Plan ==\n\nEven if everyone were on board immediately, there's a lot of work to\ndo. 
I'd like to add pytest in a more probationary status, so we can\niron out the inevitable wrinkles. My proposal would be:\n\n1. Commit bare-bones support in our Meson setup for running pytest, so\neveryone can kick the tires independently.\n2. Add a test for something that we can't currently exercise.\n3. Port a test from a place where the maintenance is terrible, to see\nif we can improve it.\n\nIf we hate it by that point, no harm done; tear it back out. Otherwise\nwe keep rolling forward.\n\nThoughts? Suggestions?\n\nThanks,\n--Jacob\n\n[1] https://docs.pytest.org/\n[2] https://wiki.postgresql.org/wiki/PGConf.dev_2024_Developer_Unconference#New_testing_frameworks\n[3] https://github.com/jchampio/pg-pytest-suite\n[4] https://xkcd.com/1987/\n\n\n", "msg_date": "Mon, 10 Jun 2024 11:46:00 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "RFC: adding pytest as a supported test framework" }, { "msg_contents": "Hi!\n\nOn Mon, Jun 10, 2024 at 9:46 PM Jacob Champion\n<[email protected]> wrote:\n> Thoughts? Suggestions?\n\nThank you for working on this.\nDo you think you could re-use something from testgres[1] package?\n\nLinks.\n1. https://github.com/postgrespro/testgres\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Mon, 10 Jun 2024 22:26:16 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Hi,\n\n\nJust for context for the rest the email: I think we desperately need to move\noff perl for tests. The infrastructure around our testing is basically\nunmaintained and just about nobody that started doing dev stuff in the last 10\nyears learned perl.\n\n\nOn 2024-06-10 11:46:00 -0700, Jacob Champion wrote:\n> 4. It'd be great to split apart client-side tests from server-side\n> tests. Driving Postgres via psql all the time is fine for acceptance\n> testing, but it becomes a big problem when you need to test how\n> clients talk to servers with incompatible feature sets, or how a peer\n> behaves when talking to something buggy.\n\nThat seems orthogonal to using pytest vs something else?\n\n\n> == Why pytest? ==\n> \n> From the small and biased sample at the unconference session, it looks\n> like a number of people have independently settled on pytest in their\n> own projects. In my opinion, pytest occupies a nice space where it\n> solves some of the above problems for us, and it gives us plenty of\n> tools to solve the other problems without too much pain.\n\nWe might be able to alleviate that by simply abstracting it away, but I found\npytest's testrunner pretty painful. Oodles of options that are not very well\ndocumented and that often don't work because they are very specific to some\nsituations, without that being explained.\n\n\n> Problem 1 (rerun failing tests): One architectural roadblock to this\n> in our Test::More suite is that tests depend on setup that's done by\n> previous tests. pytest allows you to declare each test's setup\n> requirements via pytest fixtures, letting the test runner build up the\n> world exactly as it needs to be for a single isolated test. These\n> fixtures may be given a \"scope\" so that multiple tests may share the\n> same setup for performance or other reasons.\n\nOTOH, that's quite likely to increase overall test times very\nsignificantly. 
Yes, sometimes that can be avoided with careful use of various\nfeatures, but often that's hard, and IME is rarely done rigiorously.\n\n\n> Problem 2 (seeing what failed): pytest does this via assertion\n> introspection and very detailed failure reporting. If you haven't seen\n> this before, take a look at the pytest homepage [1]; there's an\n> example of a full log.\n\nThat's not really different than what the perl tap test stuff allows. We\nindeed are bad at utilizing it, but I'm not sure that switching languages will\nchange that.\n\nI think part of the problem is that the information about what precisely\nfailed is often much harder to collect when testing multiple servers\ninteracting than when doing localized unit tests.\n\nI think we ought to invest a bunch in improving that, I'd hope that a lot of\nthat work would be largely independent of the language the tests are written\nin.\n\n\n> Python's standard library has lots of power by itself, with very good\n> documentation. And virtualenvs and better package tooling have made it\n> much easier, IMO, to avoid the XKCD dependency tangle [4] of the\n> 2010s.\n\nUgh, I think this is actually python's weakest area. There's about a dozen\npackage managers and \"python distributions\", that are at best half compatible,\nand the documentation situation around this is *awful*.\n\n\n> When it comes to third-party packages, which I think we're\n> probably going to want in moderation, we would still need to discuss\n> supply chain safety. Python is not as mature here as, say, Go.\n\nWhat external dependencies are you imagining?\n\n\n\n> == A Plan ==\n> \n> Even if everyone were on board immediately, there's a lot of work to\n> do. I'd like to add pytest in a more probationary status, so we can\n> iron out the inevitable wrinkles. My proposal would be:\n> \n> 1. Commit bare-bones support in our Meson setup for running pytest, so\n> everyone can kick the tires independently.\n> 2. Add a test for something that we can't currently exercise.\n> 3. Port a test from a place where the maintenance is terrible, to see\n> if we can improve it.\n> \n> If we hate it by that point, no harm done; tear it back out. Otherwise\n> we keep rolling forward.\n\nI think somewhere between 1 and 4 a *substantial* amount of work would be\nrequired to provide a bunch of the infrastructure that Cluster.pm etc\nprovide. Otherwise we'll end up with a lot of copy pasted code between tests.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Jun 2024 13:04:11 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On 2024-06-10 Mo 16:04, Andres Freund wrote:\n> Hi,\n>\n>\n> Just for context for the rest the email: I think we desperately need to move\n> off perl for tests. The infrastructure around our testing is basically\n> unmaintained and just about nobody that started doing dev stuff in the last 10\n> years learned perl.\n\n\nAndres,\n\nI get that you don't like perl. But it's hard for me to take this \nterribly seriously. \"desperately\" seems like massive overstatement at \nbest. As for what up and coming developers learn, they mostly don't \nlearn C either, and that's far more critical to what we do.\n\nI'm not sure what part of the testing infrastructure you think is \nunmaintained. 
For example, the last release of Test::Simple was all the \nway back on April 25.\n\nMaybe there are some technical superiorities about what Jacob is \nproposing, enough for us to add it to our armory. I'll keep an open mind \non that.\n\nBut let's not throw the baby out with the bathwater. Quite apart from \nanything else, a wholesale rework of the test infrastructure would make \nbackpatching more painful.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-06-10 Mo 16:04, Andres Freund\n wrote:\n\n\nHi,\n\n\nJust for context for the rest the email: I think we desperately need to move\noff perl for tests. The infrastructure around our testing is basically\nunmaintained and just about nobody that started doing dev stuff in the last 10\nyears learned perl.\n\n\n\nAndres,\n\nI get that you don't like perl. But it's hard for me to take this\n terribly seriously. \"desperately\" seems like massive overstatement\n at best. As for what up and coming developers learn, they mostly\n don't learn C either, and that's far more critical to what we do.\nI'm not sure what part of the testing infrastructure you think is\n unmaintained. For example, the last release of Test::Simple was\n all the way back on April 25.\nMaybe there are some technical superiorities about what Jacob is\n proposing, enough for us to add it to our armory. I'll keep an\n open mind on that.\nBut let's not throw the baby out with the bathwater. Quite apart\n from anything else, a wholesale rework of the test infrastructure\n would make backpatching more painful.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 10 Jun 2024 16:46:56 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Mon, 10 Jun 2024 at 20:46, Jacob Champion\n<[email protected]> wrote:\n> For the v18 cycle, I would like to try to get pytest [1] in as a\n> supported test driver, in addition to the current offerings.\n\nHuge +1 from me (but I'm definitely biased here)\n\n> Thoughts? Suggestions?\n\nI think the most important thing is that we make it easy for people to\nuse this thing, and use it in a \"correct\" way. I have met very few\npeople that actually like writing tests, so I think it's very\nimportant to make the barrier to do so as low as possible.\n\nFor the PgBouncer repo I created my own pytest based test suite more\n~1.5 years ago now. I tried to make it as easy as possible to write\ntests there, and it has worked out quite well imho. I don't think it\nmakes sense to copy all things I did there verbatim, because some of\nit is quite specific to testing PgBouncer. But I do think there's\nquite a few things that could probably be copied (or at least inspire\nwhat you do). Some examples:\n\n1. helpers to easily run shell commands, most importantly setting\ncheck=True by default[1]\n2. helper to get a free tcp port[2]\n3. helper to check if the log contains a specific string[3]\n4. automatically show PG logs on test failure[4]\n5. helpers to easily run sql commands (psycopg interface isn't very\nuser friendly imho for the common case)[5]\n6. 
startup/teardown cleanup logic[6]\n\n[1]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L83-L131\n[2]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L210-L233\n[3]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L1125-L1143\n[4]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L1075-L1103\n[5]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L326-L338\n[6]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L546-L642\n\n\nOn Mon, 10 Jun 2024 at 22:04, Andres Freund <[email protected]> wrote:\n> > Problem 1 (rerun failing tests): One architectural roadblock to this\n> > in our Test::More suite is that tests depend on setup that's done by\n> > previous tests. pytest allows you to declare each test's setup\n> > requirements via pytest fixtures, letting the test runner build up the\n> > world exactly as it needs to be for a single isolated test. These\n> > fixtures may be given a \"scope\" so that multiple tests may share the\n> > same setup for performance or other reasons.\n>\n> OTOH, that's quite likely to increase overall test times very\n> significantly. Yes, sometimes that can be avoided with careful use of various\n> features, but often that's hard, and IME is rarely done rigiorously.\n\nYou definitely want to cache things like initdb and \"pg_ctl start\".\nBut that's fairly easy to do with some startup/teardown logic. For\nPgBouncer I create a dedicated schema for each test that needs to\ncreate objects and automatically drop that schema at the end of the\ntest[6] (including any temporary objects outside of schemas like\nusers/replication slots). You can even choose not to clean up certain\nlarge schemas if they are shared across multiple tests.\n\n[6]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L546-L642\n\n> > Problem 2 (seeing what failed): pytest does this via assertion\n> > introspection and very detailed failure reporting. If you haven't seen\n> > this before, take a look at the pytest homepage [1]; there's an\n> > example of a full log.\n>\n> That's not really different than what the perl tap test stuff allows. We\n> indeed are bad at utilizing it, but I'm not sure that switching languages will\n> change that.\n\nIt's not about allowing, it's about doing the thing that you want by\ndefault. The following code\n\nassert a == b\n\nwill show you the actual values of both a and b when the test fails,\ninstead of saying something like \"false is not true\". Ofcourse you can\nprovide a message here too, like with perl its ok function, but even\nwhen you don't the output is helpful.\n\n> I think part of the problem is that the information about what precisely\n> failed is often much harder to collect when testing multiple servers\n> interacting than when doing localized unit tests.\n>\n> I think we ought to invest a bunch in improving that, I'd hope that a lot of\n> that work would be largely independent of the language the tests are written\n> in.\n\nWell, as you already noted no-one that started doing dev stuff in the\nlast 10 years knows Perl nor wants to learn it. So a large part of the\ncommunity tries to touch the current perl test suite as little as\npossible. 
I personally haven't tried to improve anything about our\nperl testing framework, even though I'm normally very much into\nimproving developer tooling.\n\n\n> > Python's standard library has lots of power by itself, with very good\n> > documentation. And virtualenvs and better package tooling have made it\n> > much easier, IMO, to avoid the XKCD dependency tangle [4] of the\n> > 2010s.\n>\n> Ugh, I think this is actually python's weakest area. There's about a dozen\n> package managers and \"python distributions\", that are at best half compatible,\n> and the documentation situation around this is *awful*.\n\nI definitely agree this is Python its weakest area. But since venv is\npart of the python standard library it's much better. I have the\nfollowing short blurb in PgBouncer its test README[7] and it has\nworked for all contributors so far:\n\n# create a virtual environment (only needed once)\npython3 -m venv env\n\n# activate the environment. You will need to activate this environment in\n# your shell every time you want to run the tests. (so it's needed once per\n# shell).\nsource env/bin/activate\n\n# Install the dependencies (only needed once, or whenever extra dependencies\n# get added to requirements.txt)\npip install -r requirements.txt\n\n\n[7]: https://github.com/pgbouncer/pgbouncer/blob/master/test/README.md\n\n> I think somewhere between 1 and 4 a *substantial* amount of work would be\n> required to provide a bunch of the infrastructure that Cluster.pm etc\n> provide. Otherwise we'll end up with a lot of copy pasted code between tests.\n\nTotally agreed, that we should have a fairly decent base to work on\ntop of. I think we should at least port a few tests to show that the\nbase has at least the most basic functionality.\n\n\n", "msg_date": "Mon, 10 Jun 2024 23:40:39 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Mon, 10 Jun 2024 at 22:47, Andrew Dunstan <[email protected]> wrote:\n> As for what up and coming developers learn, they mostly don't learn C either, and that's far more critical to what we do.\n\nI think many up and coming devs have at least touched C somewhere\n(e.g. in university). And because it's more critical to the project\nand also to many other low level projects, they don't mind learning it\nso much if they don't know it yet. But I, for example, try to write as\nfew Perl tests as possible, because getting good at Perl has pretty\nmuch no use to me outside of writing tests for postgres.\n\n(I do personally think that official Rust support in Postgres would\nprobably be a good thing, but that is a whole other discussion that\nI'd like to save for some other day)\n\n> But let's not throw the baby out with the bathwater. Quite apart from anything else, a wholesale rework of the test infrastructure would make backpatching more painful.\n\nBackporting test improvements to decrease backporting pain is\nsomething we don't look badly upon afaict (Citus its test suite breaks\nsemi-regularly on minor PG version updates due to some slight output\nchanges introduced by e.g. 
an updated version of the isolationtester).\n\n\n", "msg_date": "Mon, 10 Jun 2024 23:43:22 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Hi,\n\nOn 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:\n> \n> On 2024-06-10 Mo 16:04, Andres Freund wrote:\n> > Hi,\n> > \n> > \n> > Just for context for the rest the email: I think we desperately need to move\n> > off perl for tests. The infrastructure around our testing is basically\n> > unmaintained and just about nobody that started doing dev stuff in the last 10\n> > years learned perl.\n\n> Andres,\n> \n> I get that you don't like perl.\n\nI indeed don't particularly like perl - but that's really not the main\nissue. I've already learned [some of] it. What is the main issue is that I've\nalso watched several newer folks try to write tests in it, and it was not\npretty.\n\n\n> But it's hard for me to take this terribly seriously. \"desperately\" seems\n> like massive overstatement at best.\n\nShrug.\n\n\n> As for what up and coming developers learn, they mostly don't learn C\n> either, and that's far more critical to what we do.\n\nC is a a lot more useful to to them than perl. And it's actually far more\nwidely known these days than perl. C does teach you some reasonably\nlow-level-ish understanding of hardware. There are gazillions of programs\nwritten in C that we'll have to maintain for decades. I don't think that's\ncomparably true for perl.\n\n\n> I'm not sure what part of the testing infrastructure you think is\n> unmaintained. For example, the last release of Test::Simple was all the way\n> back on April 25.\n\nIPC::Run is quite buggy and basically just maintained by Noah these days.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Jun 2024 18:49:11 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On 2024-06-10 Mo 21:49, Andres Freund wrote:\n> Hi,\n>\n> On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:\n>> On 2024-06-10 Mo 16:04, Andres Freund wrote:\n>>> Hi,\n>>>\n>>>\n>>> Just for context for the rest the email: I think we desperately need to move\n>>> off perl for tests. The infrastructure around our testing is basically\n>>> unmaintained and just about nobody that started doing dev stuff in the last 10\n>>> years learned perl.\n>> Andres,\n>>\n>> I get that you don't like perl.\n> I indeed don't particularly like perl - but that's really not the main\n> issue. I've already learned [some of] it. What is the main issue is that I've\n> also watched several newer folks try to write tests in it, and it was not\n> pretty.\n\n\nHmm. I've done webinars in the past about how to write TAP tests for \nPostgreSQL, maybe I need to beef that up some.\n\n\n>> I'm not sure what part of the testing infrastructure you think is\n>> unmaintained. For example, the last release of Test::Simple was all the way\n>> back on April 25.\n> IPC::Run is quite buggy and basically just maintained by Noah these days.\n>\n\nYes, that's true. I think the biggest pain point is possibly the \nrecovery tests.\n\nSome time ago I did some work on wrapping libpq using the perl FFI \nmodule. It worked pretty well, and would mean we could probably avoid \nmany uses of IPC::Run, and would probably be substantially more \nefficient (no fork required). 
It wouldn't avoid all uses of IPC::Run, \nthough.\n\nBut my point was mainly that while a new framework might have value, I \ndon't think we need to run out and immediately rewrite several hundred \nTAP tests. Let's pick the major pain points and address those.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 08:04:57 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Mon, Jun 10, 2024 at 1:04 PM Andres Freund <[email protected]> wrote:\n> Just for context for the rest the email: I think we desperately need to move\n> off perl for tests. The infrastructure around our testing is basically\n> unmaintained and just about nobody that started doing dev stuff in the last 10\n> years learned perl.\n\nOkay. Personally, I'm going to try to stay out of discussions around\nsubtracting Perl and focus on adding Python, for a bunch of different\nreasons:\n\n- Tests aren't cheap, but in my experience, the maintenance-cost math\nfor tests is a lot different than the math for implementations.\n- I don't personally care for Perl, but having tests in any form is\nusually better than not having them.\n- Trying to convince people to get rid of X while adding Y is a good\nway to make sure Y never happens.\n\n> On 2024-06-10 11:46:00 -0700, Jacob Champion wrote:\n> > 4. It'd be great to split apart client-side tests from server-side\n> > tests. 
Driving Postgres via psql all the time is fine for acceptance\n> > testing, but it becomes a big problem when you need to test how\n> > clients talk to servers with incompatible feature sets, or how a peer\n> > behaves when talking to something buggy.\n>\n> That seems orthogonal to using pytest vs something else?\n\nYes, I think that's fair. It's going to be hard not to talk about\n\"things that pytest+Python don't give us directly but are much easier\nto build\" in all of this (and I tried to call that out in the next\nsection, maybe belatedly). I think I'm going to have to convince both\na group of people who want to ask \"why pytest in particular?\" and a\ngroup of people who ask \"why isn't what we have good enough?\"\n\n> > == Why pytest? ==\n> >\n> > From the small and biased sample at the unconference session, it looks\n> > like a number of people have independently settled on pytest in their\n> > own projects. In my opinion, pytest occupies a nice space where it\n> > solves some of the above problems for us, and it gives us plenty of\n> > tools to solve the other problems without too much pain.\n>\n> We might be able to alleviate that by simply abstracting it away, but I found\n> pytest's testrunner pretty painful. Oodles of options that are not very well\n> documented and that often don't work because they are very specific to some\n> situations, without that being explained.\n\nHm. There are a bunch of them, but I've never needed to go through the\noodles of options. Anything in particular that caused problems?\n\n> > Problem 1 (rerun failing tests): One architectural roadblock to this\n> > in our Test::More suite is that tests depend on setup that's done by\n> > previous tests. pytest allows you to declare each test's setup\n> > requirements via pytest fixtures, letting the test runner build up the\n> > world exactly as it needs to be for a single isolated test. These\n> > fixtures may be given a \"scope\" so that multiple tests may share the\n> > same setup for performance or other reasons.\n>\n> OTOH, that's quite likely to increase overall test times very\n> significantly. Yes, sometimes that can be avoided with careful use of various\n> features, but often that's hard, and IME is rarely done rigiorously.\n\nWell, scopes are pretty front and center when you start building\npytest fixtures, and the complicated longer setups will hopefully\nconverge correctly early on and be reused everywhere else. I imagine\nno one wants to build cluster setup from scratch.\n\nOn a slight tangent, is this not a problem today? I mean... part of my\npersonal long-term goal is in increasing test hygiene, which is going\nto take some shifts in practice. As long as review keeps the quality\nof the tests fairly high, I see the inevitable \"our tests take too\nlong\" problem as a good one. That's true no matter what framework we\nuse, unless the framework is so bad that no one uses it and the\nruntime is trivial. If we're worried that people will immediately\nstart exploding the runtime and no one will notice during review,\nmaybe we can have some infrastructure flag how much a patch increased\nit?\n\n> > Problem 2 (seeing what failed): pytest does this via assertion\n> > introspection and very detailed failure reporting. If you haven't seen\n> > this before, take a look at the pytest homepage [1]; there's an\n> > example of a full log.\n>\n> That's not really different than what the perl tap test stuff allows. 
We\n> indeed are bad at utilizing it, but I'm not sure that switching languages will\n> change that.\n\nJelte already touched on this, but I wanted to hammer on the point: If\nno one, not even the developers who chose and like Perl, is using\nTest::More in a way that's maintainable, I would prefer to use a\nframework that does maintainable things by default so that you have to\ntry really hard to screw it up. It is possible to screw up `assert\nactual == expected`, but it takes more work than doing it the right\nway.\n\n> I think part of the problem is that the information about what precisely\n> failed is often much harder to collect when testing multiple servers\n> interacting than when doing localized unit tests.\n>\n> I think we ought to invest a bunch in improving that, I'd hope that a lot of\n> that work would be largely independent of the language the tests are written\n> in.\n\nWe do a lot more acceptance testing than internal testing, which came\nup as a major complaint from me and others during the unconference.\nOne of the reasons people avoid writing internal tests in Perl is\nbecause it's very painful to find a rhythm with Test::More. From\nexperience test-driving the OAuth work, I'm *very* happy with the\ndevelopment cycle that pytest gave me.\n\nOther languages _could_ do that, sure. It's a simple matter of programming...\n\n> Ugh, I think this is actually python's weakest area. There's about a dozen\n> package managers and \"python distributions\", that are at best half compatible,\n> and the documentation situation around this is *awful*.\n\nSo... don't support the half-compatible stuff? I thought this\nconversation was still going on with Windows Perl (ActiveState?\nStrawberry?) but everyone just seems to pick what works for them and\nmove on to better things to do.\n\nModern CPython includes pip and venv. Done. If someone comes to us\nwith some horrible Anaconda setup wanting to know why their duct tape\ndoesn't work, can't we just tell them no?\n\n> > When it comes to third-party packages, which I think we're\n> > probably going to want in moderation, we would still need to discuss\n> > supply chain safety. Python is not as mature here as, say, Go.\n>\n> What external dependencies are you imagining?\n\nThe OAuth pytest suite makes extensive use of\n- psycopg, to easily drive libpq;\n- construct, for on-the-wire packet representations and manipulation; and\n- pyca/cryptography, for easy generation of certificates and manual\ncrypto testing.\n\nI'd imagine each would need considerable discussion, if there is\ninterest in doing the same things that I do with them.\n\n> I think somewhere between 1 and 4 a *substantial* amount of work would be\n> required to provide a bunch of the infrastructure that Cluster.pm etc\n> provide. Otherwise we'll end up with a lot of copy pasted code between tests.\n\nPossibly, yes. I think it depends on what you want to test first, and\nthere's a green-field aspect of hope/anxiety/ennui, too. Are you\ntrying to port the acceptance-test framework that we already have, or\nare you trying to build a framework that can handle the things we\ncan't currently test? Will it be easier to refactor duplication into\nshared fixtures when the language doesn't encourage an infinite number\nof ways to do things? 
Or will we have to keep on top of it to avoid\npain?\n\n--Jacob\n\n\n", "msg_date": "Tue, 11 Jun 2024 07:28:23 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Mon, Jun 10, 2024 at 12:26 PM Alexander Korotkov\n<[email protected]> wrote:\n> Thank you for working on this.\n> Do you think you could re-use something from testgres[1] package?\n\nPossibly? I think we're all coming at this with our own bags of tricks\nand will need to carve off pieces to port, contribute, or reimplement.\nDoes testgres have something in particular you'd like to see the\nPostgres tests support?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 11 Jun 2024 07:30:56 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Mon, Jun 10, 2024 at 06:49:11PM -0700, Andres Freund wrote:\n> On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:\n> > On 2024-06-10 Mo 16:04, Andres Freund wrote:\n> > > Just for context for the rest the email: I think we desperately need to move\n> > > off perl for tests. The infrastructure around our testing is basically\n> > > unmaintained and just about nobody that started doing dev stuff in the last 10\n> > > years learned perl.\n\n> > As for what up and coming developers learn, they mostly don't learn C\n> > either, and that's far more critical to what we do.\n> \n> C is a a lot more useful to to them than perl. And it's actually far more\n> widely known these days than perl.\n\nIf we're going to test in a non-Perl language, I'd pick C over Python. There\nwould be several other unlikely-community-choice languages I'd pick over\nPython (C#, Java, C++). We'd need a library like today's Perl\nPostgreSQL::Test to make C-language tests nice, but the same would apply to\nany new language.\n\nI also want the initial scope to be the new language coexisting with the\nexisting Perl tests. If a bulk translation ever happens, it should happen\nlong after the debut of the new framework. That said, I don't much trust a\nhuman-written bulk language translation to go through without some tests\naccidentally ceasing to test what they test in Perl today.\n\n\n", "msg_date": "Tue, 11 Jun 2024 16:48:29 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "\nOn 2024-06-11 Tu 19:48, Noah Misch wrote:\n> On Mon, Jun 10, 2024 at 06:49:11PM -0700, Andres Freund wrote:\n>> On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:\n>>> On 2024-06-10 Mo 16:04, Andres Freund wrote:\n>>>> Just for context for the rest the email: I think we desperately need to move\n>>>> off perl for tests. The infrastructure around our testing is basically\n>>>> unmaintained and just about nobody that started doing dev stuff in the last 10\n>>>> years learned perl.\n>>> As for what up and coming developers learn, they mostly don't learn C\n>>> either, and that's far more critical to what we do.\n>> C is a a lot more useful to to them than perl. And it's actually far more\n>> widely known these days than perl.\n> If we're going to test in a non-Perl language, I'd pick C over Python. There\n> would be several other unlikely-community-choice languages I'd pick over\n> Python (C#, Java, C++). 
We'd need a library like today's Perl\n> PostgreSQL::Test to make C-language tests nice, but the same would apply to\n> any new language.\n\n\nIndeed. We've invested quite a lot of effort on that infrastructure. I \nguess people can learn from what we've done so a second language might \nbe easier to support.\n\n(Java would be my pick from your unlikely set, but I can see the \nattraction of Python.)\n\n\n>\n> I also want the initial scope to be the new language coexisting with the\n> existing Perl tests. If a bulk translation ever happens, it should happen\n> long after the debut of the new framework. That said, I don't much trust a\n> human-written bulk language translation to go through without some tests\n> accidentally ceasing to test what they test in Perl today.\n\n\n+1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 06:17:21 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, 12 Jun 2024 at 01:48, Noah Misch <[email protected]> wrote:\n> If we're going to test in a non-Perl language, I'd pick C over Python. There\n> would be several other unlikely-community-choice languages I'd pick over\n> Python (C#, Java, C++).\n\nMy main goals of this thread are:\n1. Allowing people to quickly write tests\n2. Have those tests do what the writer intended them to do\n3. Have good error reporting by default\n\nThose goals indeed don't necesitate python.\n\nBut I think those are really hard to achieve with any C based\nframework, and probably with C++ too. Also manual memory management in\ntests seems to add tons of complexity for basically no benefit.\n\nI think C#, Java, Go, Rust, Kotlin, and Swift would be acceptable\nchoices for me (and possibly some more). They allow some type of\nintrospection, they have a garbage collector, and their general\ntooling is quite good.\n\nBut I think a dynamically typed scripting language is much more\nfitting for writing tests like this. I love static typing for\nproduction code, but imho it really doesn't have much benefit for\ntests.\n\nAs scripting languages go, the ones that are still fairly heavily in\nuse are Javascript, Python, Ruby, and PHP. I think all of those could\nprobably work, but my personal order of preference would be Python,\nRuby, Javascript, PHP.\n\nFinally, I'm definitely biased towards using Python myself. But I\nthink there's good reasons for that:\n1. In the data space, that Postgres in, Python is very heavily used for analysis\n2. Everyone coming out of university these days knows it to some extent\n3. Multiple people in the community have been writing Postgres related\ntests in python and have enjoyed doing so (me[1], Jacob[2],\nAlexander[3])\n\nWhat language people want to write tests in is obviously very\nsubjective. And obviously we cannot allow every language for writing\ntests. But I think if ~25% of the community prefers to write their\ntests in Python. Then that should be enough of a reason to allow them\nto do so.\n\nTO CLARIFY: This thread is not a proposal to replace Perl with Python.\nIt's a proposal to allow people to also write tests in Python.\n\n> I also want the initial scope to be the new language coexisting with the\n> existing Perl tests. If a bulk translation ever happens, it should happen\n> long after the debut of the new framework. 
That said, I don't much trust a\n> human-written bulk language translation to go through without some tests\n> accidentally ceasing to test what they test in Perl today.\n\nI definitely don't think we should rewrite all the tests that we have\nin Perl today into some other language. But I do think that whatever\nlanguage we choose, that language should make it as least as easy to\nwrite tests, as easy to read them and as easy to see that they are\ntesting the intended thing, as is currently the case for Perl.\nRewriting a few Perl tests into the new language, even if not merging\nthe rewrite, is a good way of validating that imho.\n\nPS. For PgBouncer I actually hand-rewrote all the tests that we had in\nbash (which is the worst testing language ever) in Python and doing so\nactually found more bugs in PgBouncer code that our bash tests\nwouldn't catch. So it's not necessarily the case that you lose\ncoverage by rewriting tests.\n\n[1]: https://github.com/pgbouncer/pgbouncer/tree/master/test\n[2]: https://github.com/jchampio/pg-pytest-suite\n[3]: https://github.com/postgrespro/testgres\n\n\n", "msg_date": "Wed, 12 Jun 2024 13:40:30 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Tue, Jun 11, 2024 at 5:31 PM Jacob Champion\n<[email protected]> wrote:\n> On Mon, Jun 10, 2024 at 12:26 PM Alexander Korotkov\n> <[email protected]> wrote:\n> > Thank you for working on this.\n> > Do you think you could re-use something from testgres[1] package?\n>\n> Possibly? I think we're all coming at this with our own bags of tricks\n> and will need to carve off pieces to port, contribute, or reimplement.\n> Does testgres have something in particular you'd like to see the\n> Postgres tests support?\n\nGenerally, testgres was initially designed as Python analogue of what\nwe have in src/test/perl/PostgreSQL/Test. In particular its\ntestgres.PostgresNode is analogue of PostgreSQL::Test::Cluster. It\ncomes under PostgreSQL License. So, I wonder if we could revise it\nand fetch most part of it into our source tree.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:48:45 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, Jun 12, 2024 at 2:48 PM Alexander Korotkov <[email protected]> wrote:\n> On Tue, Jun 11, 2024 at 5:31 PM Jacob Champion\n> <[email protected]> wrote:\n> > On Mon, Jun 10, 2024 at 12:26 PM Alexander Korotkov\n> > <[email protected]> wrote:\n> > > Thank you for working on this.\n> > > Do you think you could re-use something from testgres[1] package?\n> >\n> > Possibly? I think we're all coming at this with our own bags of tricks\n> > and will need to carve off pieces to port, contribute, or reimplement.\n> > Does testgres have something in particular you'd like to see the\n> > Postgres tests support?\n>\n> Generally, testgres was initially designed as Python analogue of what\n> we have in src/test/perl/PostgreSQL/Test. In particular its\n> testgres.PostgresNode is analogue of PostgreSQL::Test::Cluster. It\n> comes under PostgreSQL License. So, I wonder if we could revise it\n> and fetch most part of it into our source tree.\n\nPlus testgres exists from 2016 and already have quite amount of use\ncases. 
This is what I quickly found on github.\n\nhttps://github.com/adjust/pg_querylog\nhttps://github.com/postgrespro/pg_pathman\nhttps://github.com/lanterndata/lantern\nhttps://github.com/orioledb/orioledb\nhttps://github.com/cbandy/pgtwixt\nhttps://github.com/OpenNTI/nti.testing\nhttps://github.com/postgrespro/pg_probackup\nhttps://github.com/postgrespro/rum\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 12 Jun 2024 15:00:54 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, 12 Jun 2024 at 01:48, Noah Misch <[email protected]> wrote:\n> If we're going to test in a non-Perl language, I'd pick C over Python.\n> <snip>\n> We'd need a library like today's Perl\n> PostgreSQL::Test to make C-language tests nice, but the same would apply to\n> any new language.\n\nP.P.S. We already write tests in C, we use it for testing libpq[1].\nI'd personally definitely welcome a C library to make those tests\nnicer to write, because I've written a fair bit of those in the past\nand currently it's not very fun to do.\n\n[1]: https://github.com/postgres/postgres/blob/master/src/test/modules/libpq_pipeline/libpq_pipeline.c\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:09:21 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "\n> On Jun 12, 2024, at 6:40 AM, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> I think C#, Java, Go, Rust, Kotlin, and Swift would be acceptable\n> choices for me (and possibly some more). They allow some type of\n> introspection, they have a garbage collector, and their general\n> tooling is quite good.\n> \n\nHaving used Python for 15+ years and then abandoned it for all projects I would\nsay the single most important points for a long term project like Postgres is,\nnot necessarily in order, package stability, package depth, semantic versioning,\navailable resources, and multiprocessor support.\n\nThe reason I abandoned Python was for the constant API breaks in packages. Yes,\npython is a great language to teach in school for a one-time class project, but\nthat is not Postgres’s use-case. Remember that Fortran and Pascal were the \ndarlings for teaching in school prior to Python and no-one uses them any more.\n\nYes Python innovates fast and is fashionable. But again, not Postgres’s use-case.\n\nI believe that anyone coming out of school these days would have a relatively\neasy transition to any of Go, Rust, Kotlin, Swift, etc. In other words, any of\nthe modern languages. In addition, the language should scale well to\nmultiprocessors, because parallel testing is becoming more important every day.\n\nIf the Postgres project is going to pick a new language for testing, it should\npick a language for the next 50 years based on the projects needs.\n\nPython is good for package depth and resource availability, but fails IMO in the\nother categories. My experience with python where the program flow can change\nbecause of non-visible characters is a terrible way to write robust long term\nmaintainable code. 
Because of this most of the modern languages are going to be\ncloser in style to Postgres’s C code base than Python.\n\n", "msg_date": "Wed, 12 Jun 2024 08:49:32 -0500", "msg_from": "FWS Neil <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Jelte Fennema-Nio:\n> As scripting languages go, the ones that are still fairly heavily in\n> use are Javascript, Python, Ruby, and PHP. I think all of those could\n> probably work, but my personal order of preference would be Python,\n> Ruby, Javascript, PHP.\n> \n> Finally, I'm definitely biased towards using Python myself. But I\n> think there's good reasons for that:\n> 1. In the data space, that Postgres in, Python is very heavily used for analysis\n> 2. Everyone coming out of university these days knows it to some extent\n> 3. Multiple people in the community have been writing Postgres related\n> tests in python and have enjoyed doing so (me[1], Jacob[2],\n> Alexander[3])\n\nPostgREST also uses pytest for integration tests - and that was a very \ngood decision compared to the bash based tests we had before.\n\nOne more argument for Python compared to the other mentioned scripting \nlanguages: Python is already a development dependency via meson. None of \nthe other 3 are. In a future where meson will be the only build system, \nwe will have python \"for free\" already.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Wed, 12 Jun 2024 16:56:49 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Hi,\n\nOn 2024-06-11 08:04:57 -0400, Andrew Dunstan wrote:\n> Some time ago I did some work on wrapping libpq using the perl FFI module.\n> It worked pretty well, and would mean we could probably avoid many uses of\n> IPC::Run, and would probably be substantially more efficient (no fork\n> required). It wouldn't avoid all uses of IPC::Run, though.\n\nFWIW, I'd *love* to see work on this continue. The reduction in test runtime\non windows is substantial and would shorten the hack->CI->fail->hack loop a\ngood bit shorter. And save money.\n\n\n> But my point was mainly that while a new framework might have value, I don't\n> think we need to run out and immediately rewrite several hundred TAP tests.\n\nOh, yea. That's not at all feasible to just do in one go.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Jun 2024 08:28:12 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, 12 Jun 2024 at 15:49, FWS Neil <[email protected]> wrote:\n> I believe that anyone coming out of school these days would have a relatively\n> easy transition to any of Go, Rust, Kotlin, Swift, etc. In other words, any of\n> the modern languages.\n\nAgreed, which is why I said they'd be acceptable to me. But I think\none important advantage of Python is that it's clear that many people\nin the community are willing to write tests in it. At PGConf.dev there\nwere a lot of people in the unconference session about this. Also many\npeople already wrote a Postgres testing framework for python, and are\nusing it (see list of projects that Alexander shared). 
I haven't seen\nthat level of willingness to write tests for any of those other\nlanguages (yet).\n\n> In addition, the language should scale well to\n> multiprocessors, because parallel testing is becoming more important every day.\n> <snip>\n> Python is good for package depth and resource availability, but fails IMO in the\n> other categories.\n\nYou can easily pin packages or call a different function based on the\nversion of the package, so I'm not sure what the problem is with\npackage stability. Also chances are we'll pull in very little external\npackages and rely mostly on the stdlib (which is quite stable).\n\nRegarding parallelised running of tests, I agree that's very\nimportant. And indeed normally parallelism in python can be a pain\n(although async/await makes I/O parallelism a lot better at least).\nBut running pytest tests in parallel is extremely easy by using\npytest-xdist[1], so I don't really think there's an issue for this\nspecific Python usecase.\n\n> My experience with python where the program flow can change\n> because of non-visible characters is a terrible way to write robust long term\n> maintainable code. Because of this most of the modern languages are going to be\n> closer in style to Postgres’s C code base than Python.\n\nI'm assuming this is about spaces vs curly braces for blocks? Now that\nwe have auto formatters for every modern programming language I indeed\nprefer curly braces myself too. But honestly that's pretty much a tabs\nvs spaces discussion.\n\n[1]: https://pypi.org/project/pytest-xdist/\n\nOn Wed, 12 Jun 2024 at 15:49, FWS Neil <[email protected]> wrote:\n>\n>\n> > On Jun 12, 2024, at 6:40 AM, Jelte Fennema-Nio <[email protected]> wrote:\n> >\n> > I think C#, Java, Go, Rust, Kotlin, and Swift would be acceptable\n> > choices for me (and possibly some more). They allow some type of\n> > introspection, they have a garbage collector, and their general\n> > tooling is quite good.\n> >\n>\n> Having used Python for 15+ years and then abandoned it for all projects I would\n> say the single most important points for a long term project like Postgres is,\n> not necessarily in order, package stability, package depth, semantic versioning,\n> available resources, and multiprocessor support.\n>\n> The reason I abandoned Python was for the constant API breaks in packages. Yes,\n> python is a great language to teach in school for a one-time class project, but\n> that is not Postgres’s use-case. Remember that Fortran and Pascal were the\n> darlings for teaching in school prior to Python and no-one uses them any more.\n>\n> Yes Python innovates fast and is fashionable. But again, not Postgres’s use-case.\n>\n> I believe that anyone coming out of school these days would have a relatively\n> easy transition to any of Go, Rust, Kotlin, Swift, etc. In other words, any of\n> the modern languages. In addition, the language should scale well to\n> multiprocessors, because parallel testing is becoming more important every day.\n>\n> If the Postgres project is going to pick a new language for testing, it should\n> pick a language for the next 50 years based on the projects needs.\n>\n> Python is good for package depth and resource availability, but fails IMO in the\n> other categories. My experience with python where the program flow can change\n> because of non-visible characters is a terrible way to write robust long term\n> maintainable code. 
Because of this most of the modern languages are going to be\n> closer in style to Postgres’s C code base than Python.\n\n\n", "msg_date": "Wed, 12 Jun 2024 17:35:22 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Hi,\n\nOn 2024-06-11 07:28:23 -0700, Jacob Champion wrote:\n> On Mon, Jun 10, 2024 at 1:04 PM Andres Freund <[email protected]> wrote:\n> > Just for context for the rest the email: I think we desperately need to move\n> > off perl for tests. The infrastructure around our testing is basically\n> > unmaintained and just about nobody that started doing dev stuff in the last 10\n> > years learned perl.\n> \n> Okay. Personally, I'm going to try to stay out of discussions around\n> subtracting Perl and focus on adding Python, for a bunch of different\n> reasons:\n\nI think I might have formulated my paragraph above badly - I didn't mean that\nwe should move away from perl tests tomorrow, but that we need a path forward\nthat allows folks to write tests without perl.\n\n\n> - Tests aren't cheap, but in my experience, the maintenance-cost math\n> for tests is a lot different than the math for implementations.\n\nAt the moment they tend to be *more* expensive often, due to spurious\nfailures. That's mostly not perl's fault, don't get me wrong, but us not\nhaving better infrastructure for testing complicated behaviour and/or testing\nthings on a more narrow basis.\n\n\n> > > Problem 1 (rerun failing tests): One architectural roadblock to this\n> > > in our Test::More suite is that tests depend on setup that's done by\n> > > previous tests. pytest allows you to declare each test's setup\n> > > requirements via pytest fixtures, letting the test runner build up the\n> > > world exactly as it needs to be for a single isolated test. These\n> > > fixtures may be given a \"scope\" so that multiple tests may share the\n> > > same setup for performance or other reasons.\n> >\n> > OTOH, that's quite likely to increase overall test times very\n> > significantly. Yes, sometimes that can be avoided with careful use of various\n> > features, but often that's hard, and IME is rarely done rigiorously.\n> \n> Well, scopes are pretty front and center when you start building\n> pytest fixtures, and the complicated longer setups will hopefully\n> converge correctly early on and be reused everywhere else. I imagine\n> no one wants to build cluster setup from scratch.\n\nOne (the?) prime source of state in our tap tests is the\ndatabase. Realistically we can't just tear that one down and reset it between\ntests without causing the test times to explode. 
So we'll have to live with\nsome persistent state.\n\n\n> On a slight tangent, is this not a problem today?\n\nIt is, but that doesn't mean making it even bigger is unproblematic :)\n\n\n\n\n> > I think part of the problem is that the information about what precisely\n> > failed is often much harder to collect when testing multiple servers\n> > interacting than when doing localized unit tests.\n> >\n> > I think we ought to invest a bunch in improving that, I'd hope that a lot of\n> > that work would be largely independent of the language the tests are written\n> > in.\n> \n> We do a lot more acceptance testing than internal testing, which came\n> up as a major complaint from me and others during the unconference.\n> One of the reasons people avoid writing internal tests in Perl is\n> because it's very painful to find a rhythm with Test::More.\n\nWhat definition of internal tests are you using here?\n\nI think a lot of our tests are complicated, fragile and slow because we almost\nexclusively do end-to-end tests, because with a few exceptions we don't have a\nway to exercise code in a more granular way.\n\n\n> > > When it comes to third-party packages, which I think we're\n> > > probably going to want in moderation, we would still need to discuss\n> > > supply chain safety. Python is not as mature here as, say, Go.\n> >\n> > What external dependencies are you imagining?\n> \n> The OAuth pytest suite makes extensive use of\n> - psycopg, to easily drive libpq;\n\nThat's probably not going to fly. It introduces painful circular dependencies\nbetween building postgres (for libpq), building psycopg (requiring libpq) and\ntesting postgres (requiring psycopg).\n\nYou *can* solve such issues, but we've debated that in the past, and I doubt\nwe'll find agreement on the awkwardness it introduces.\n\n\n> - construct, for on-the-wire packet representations and manipulation; and\n\nThat seems fairly minimal.\n\n\n> - pyca/cryptography, for easy generation of certificates and manual\n> crypto testing.\n\nThat's a bit more painful, but I guess maybe not unrealistic?\n\n\n> I'd imagine each would need considerable discussion, if there is\n> interest in doing the same things that I do with them.\n\nOne thing worth thinking about is that such dependencies have to work on a\nrelatively large number of platforms / architectures. A lot of projects\ndon't...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Jun 2024 08:50:40 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, 12 Jun 2024 at 17:50, Andres Freund <[email protected]> wrote:\n> > The OAuth pytest suite makes extensive use of\n> > - psycopg, to easily drive libpq;\n>\n> That's probably not going to fly. It introduces painful circular dependencies\n> between building postgres (for libpq), building psycopg (requiring libpq) and\n> testing postgres (requiring psycopg).\n>\n> You *can* solve such issues, but we've debated that in the past, and I doubt\n> we'll find agreement on the awkwardness it introduces.\n\npsycopg has a few implementations binary, c, & pure python. The pure\npython one can be linked to a specific libpq.so file at runtime[1]. As\nlong as we don't break the libpq API (which we shouldn't), we can just\npoint it to the libpq compiled by meson/make. 
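\n\nRoughly, the idea is: install psycopg 3 without its bundled binary libpq, export PSYCOPG_IMPL=python, and put the freshly built libpq on LD_LIBRARY_PATH before launching pytest. A test can then sanity-check which implementation and libpq it got (just a sketch of the idea):\n\nimport psycopg\n\n# PSYCOPG_IMPL=python forces the pure-Python implementation, which\n# loads whatever libpq the dynamic loader finds at run time\nassert psycopg.pq.__impl__ == \"python\"\nprint(psycopg.pq.version())  # version of the libpq that actually got loaded\n\n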
We wouldn't be able to\nuse the newest libpq features that way (because psycopg wouldn't know\nabout them), but that seems totally fine for most usages (i.e. sending\na query over a connection). If we really want to use those from the\npython tests we could write our own tiny CFFI layer specifically for\nthose.\n\n> One thing worth thinking about is that such dependencies have to work on a\n> relatively large number of platforms / architectures. A lot of projects\n> don't...\n\nDo they really? A bunch of the Perl tests we just skip on windows or\nuncommon platforms. I think we could do the same for these.\n\n\n", "msg_date": "Wed, 12 Jun 2024 18:08:16 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, Jun 12, 2024 at 7:08 PM Jelte Fennema-Nio <[email protected]> wrote:\n> On Wed, 12 Jun 2024 at 17:50, Andres Freund <[email protected]> wrote:\n> > > The OAuth pytest suite makes extensive use of\n> > > - psycopg, to easily drive libpq;\n> >\n> > That's probably not going to fly. It introduces painful circular dependencies\n> > between building postgres (for libpq), building psycopg (requiring libpq) and\n> > testing postgres (requiring psycopg).\n> >\n> > You *can* solve such issues, but we've debated that in the past, and I doubt\n> > we'll find agreement on the awkwardness it introduces.\n>\n> psycopg has a few implementations binary, c, & pure python. The pure\n> python one can be linked to a specific libpq.so file at runtime[1]. As\n> long as we don't break the libpq API (which we shouldn't), we can just\n> point it to the libpq compiled by meson/make. We wouldn't be able to\n> use the newest libpq features that way (because psycopg wouldn't know\n> about them), but that seems totally fine for most usages (i.e. sending\n> a query over a connection). If we really want to use those from the\n> python tests we could write our own tiny CFFI layer specifically for\n> those.\n\nI guess you mean pg8000. Note that pg8000 and psycopg2 have some\ndifferences in interpretation of datatypes (AFAIR arrays, jsonb...).\nSo, it would be easier to chose one particular driver. However, with\na bit efforts it's possible to make all the code driver agnostic.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 12 Jun 2024 19:34:19 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "> On 12 Jun 2024, at 18:08, Jelte Fennema-Nio <[email protected]> wrote:\n> On Wed, 12 Jun 2024 at 17:50, Andres Freund <[email protected]> wrote:\n\n>> One thing worth thinking about is that such dependencies have to work on a\n>> relatively large number of platforms / architectures. A lot of projects\n>> don't...\n> \n> Do they really? A bunch of the Perl tests we just skip on windows or\n> uncommon platforms. 
I think we could do the same for these.\n\nFor a project intended to improve on the status quo it seems like a too low bar to just port over the deficincies in the thing we’re trying to improve over.\n\n./daniel\n\n", "msg_date": "Wed, 12 Jun 2024 18:45:52 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, 12 Jun 2024 at 18:08, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Wed, 12 Jun 2024 at 17:50, Andres Freund <[email protected]> wrote:\n> > > The OAuth pytest suite makes extensive use of\n> > > - psycopg, to easily drive libpq;\n> >\n> > That's probably not going to fly. It introduces painful circular dependencies\n> > between building postgres (for libpq), building psycopg (requiring libpq) and\n> > testing postgres (requiring psycopg).\n>\n> psycopg has a few implementations binary, c, & pure python. The pure\n> python one can be linked to a specific libpq.so file at runtime[1]. As\n\nThis is true, but [citation needed] :D I assume the pointer wanted to\nbe https://www.psycopg.org/psycopg3/docs/api/pq.html#pq-impl\n\nI see the following use cases and how I would use psycopg to implement them:\n\n- by installing 'psycopg[binary]' you would get a binary bundle\nshipping with a stable version of the libpq, so you can test the\ndatabase server regardless of libpq instabilities in the same\ncodebase.\n- using the pure Python psycopg (enforced by exporting\n'PSYCOPG_IMPL=python') you would use the libpq found on the\nLD_LIBRARY_PATH, which can be useful to test regressions to the libpq\nitself.\n- if you want to test new libpq functions you can reach them in Python\nby dynamic lookup. See [2] for an example of a function only available\nfrom libpq v17.\n\n[2]: https://github.com/psycopg/psycopg/blob/2bf7783d66ab239a2fa330a842fd461c4bb17c48/psycopg/psycopg/pq/_pq_ctypes.py#L564-L569\n\n-- Daniele\n\n\n", "msg_date": "Wed, 12 Jun 2024 18:46:23 +0200", "msg_from": "Daniele Varrazzo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, Jun 12, 2024 at 7:34 PM Alexander Korotkov <[email protected]> wrote:\n> On Wed, Jun 12, 2024 at 7:08 PM Jelte Fennema-Nio <[email protected]> wrote:\n> > On Wed, 12 Jun 2024 at 17:50, Andres Freund <[email protected]> wrote:\n> > > > The OAuth pytest suite makes extensive use of\n> > > > - psycopg, to easily drive libpq;\n> > >\n> > > That's probably not going to fly. It introduces painful circular dependencies\n> > > between building postgres (for libpq), building psycopg (requiring libpq) and\n> > > testing postgres (requiring psycopg).\n> > >\n> > > You *can* solve such issues, but we've debated that in the past, and I doubt\n> > > we'll find agreement on the awkwardness it introduces.\n> >\n> > psycopg has a few implementations binary, c, & pure python. The pure\n> > python one can be linked to a specific libpq.so file at runtime[1]. As\n> > long as we don't break the libpq API (which we shouldn't), we can just\n> > point it to the libpq compiled by meson/make. We wouldn't be able to\n> > use the newest libpq features that way (because psycopg wouldn't know\n> > about them), but that seems totally fine for most usages (i.e. sending\n> > a query over a connection). If we really want to use those from the\n> > python tests we could write our own tiny CFFI layer specifically for\n> > those.\n>\n> I guess you mean pg8000. 
Note that pg8000 and psycopg2 have some\n> differences in interpretation of datatypes (AFAIR arrays, jsonb...).\n> So, it would be easier to chose one particular driver. However, with\n> a bit efforts it's possible to make all the code driver agnostic.\n\nOps, this is probably outdated due to presence of psycopg3, as pointed\nby Daniele Varrazzo [1].\n\nLinks.\n1. https://www.postgresql.org/message-id/CA%2Bmi_8Zj0gpzPKUEcEx2mPOAsm0zPvznhbcnQDA_eeHVnVqg9Q%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 12 Jun 2024 20:11:55 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "> On 12 Jun 2024, at 17:50, Andres Freund <[email protected]> wrote:\n> On 2024-06-11 07:28:23 -0700, Jacob Champion wrote:\n\n>> The OAuth pytest suite makes extensive use of\n>> - psycopg, to easily drive libpq;\n> \n> That's probably not going to fly. It introduces painful circular dependencies\n> between building postgres (for libpq), building psycopg (requiring libpq) and\n> testing postgres (requiring psycopg).\n\nI might be missing something obvious, but if we use a third-party libpq driver\nin the testsuite doesn't that imply that a patch adding net new functionality\nto libpq also need to add it to the driver in order to write the tests? I'm\nthinking about the SCRAM situation a few years back when drivers weren't up to\ndate.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 12 Jun 2024 19:30:33 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, 12 Jun 2024 at 19:30, Daniel Gustafsson <[email protected]> wrote:\n\n> I might be missing something obvious, but if we use a third-party libpq driver\n> in the testsuite doesn't that imply that a patch adding net new functionality\n> to libpq also need to add it to the driver in order to write the tests? I'm\n> thinking about the SCRAM situation a few years back when drivers weren't up to\n> date.\n\nAs Jelte pointed out, new libpq functions can be tested via CFFI. I\nposted a practical example in a link upthread (pure Python Psycopg is\nentirely implemented on FFI).\n\n-- Daniele\n\n\n", "msg_date": "Wed, 12 Jun 2024 19:38:54 +0200", "msg_from": "Daniele Varrazzo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, Jun 12, 2024 at 1:30 PM Daniel Gustafsson <[email protected]> wrote:\n> > On 12 Jun 2024, at 17:50, Andres Freund <[email protected]> wrote:\n> > On 2024-06-11 07:28:23 -0700, Jacob Champion wrote:\n>\n> >> The OAuth pytest suite makes extensive use of\n> >> - psycopg, to easily drive libpq;\n> >\n> > That's probably not going to fly. It introduces painful circular dependencies\n> > between building postgres (for libpq), building psycopg (requiring libpq) and\n> > testing postgres (requiring psycopg).\n>\n> I might be missing something obvious, but if we use a third-party libpq driver\n> in the testsuite doesn't that imply that a patch adding net new functionality\n> to libpq also need to add it to the driver in order to write the tests? I'm\n> thinking about the SCRAM situation a few years back when drivers weren't up to\n> date.\n\nYeah, I don't think depending on psycopg2 is practical at all. 
We can\neither shell out to psql like we do now, or we can use something like\nCFFI.\n\nOn the overall topic of this thread, I personally find most of the\nrationale articulated in the original message unconvincing. Many of\nthose improvements could be made on top of the Perl framework we have\ntoday, and some of them have been discussed, but nobody's done the\nwork. I also don't understand the argument that assert a == b is some\nnew and wonderful thing; I mean, you can already do is($a,$b,\"test\nname\") which *also* shows you the values when they don't match, and\nincludes a test name, too! I personally think that most of the\nfrustration associated with writing TAP tests has to do with (1)\nWindows behavior being randomly different than on other platforms in\nways that are severely under-documented, (2)\nPostgreSQL::Test::whatever being somewhat clunky and under-designed,\nand (3) the general difficulty of producing race-free test cases. A\nnew test framework isn't going to solve (3), and (1) and (2) could be\nfixed anyway.\n\nHowever, I understand that a lot of people would prefer to code in\nPython than in Perl. I am not one of them: I learned Perl in the early\nnineties, and I haven't learned Python yet. Nonetheless, Python being\nmore popular than Perl is a reasonable reason to consider allowing its\nuse in PostgreSQL. But if that's the reason, let's be up front about\nit.\n\nI do think we want a scripting language here i.e. not C.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 15:07:16 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Tue, Jun 11, 2024 at 4:48 PM Noah Misch <[email protected]> wrote:\n> If we're going to test in a non-Perl language, I'd pick C over Python.\n\nWe already test in C, though? If the complaint is that those tests are\ndriven by Perl, I agree that something like libcheck or GTest or\nwhatever people are using nowadays would be nicer. But that can be\nadded at any point; the activation energy for a new C-based test\nrunner seems pretty small. IMO, there's no reason to pick it _over_\nanother language, when we already support C tests and agree that\ndevelopers need to be fluent in C.\n\n> We'd need a library like today's Perl\n> PostgreSQL::Test to make C-language tests nice, but the same would apply to\n> any new language.\n\nI think the effort required in rebuilding end-to-end tests in C is\ngoing to be a lot different than in pretty much any other modern\nhigh-level language, so I don't agree that \"the same would apply\".\n\nFor the five problem statements I put forward, I think C moves the\nneedle for zero of them. It neither solves the problems we have nor\ngives us stronger tools to solve them ourselves. And for my personally\nmotivating use case of OAuth, where I need to manipulate HTTP and JSON\nand TLS and so on and so forth, implementing all of that in C would be\nan absolute nightmare. Given that choice, I would rather use Perl --\nand that's saying something, because I like C a lot more than I like\nPerl -- because it's the difference between being given a rusty but\nstill functional table saw, and being given a box of Legos to build a\n\"better\" table saw, when all I want to do is cut a 2x4 in half and\nmove on with my work.\n\nI will use the rusty saw if I have to. 
But I want to get a better saw\n-- that somebody else with experience in making saws has constructed,\nand other people are pretty happy with -- as opposed to building my\nown.\n\n> I also want the initial scope to be the new language coexisting with the\n> existing Perl tests.\n\nStrongly agreed.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 12 Jun 2024 12:30:53 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, Jun 12, 2024 at 4:40 AM Jelte Fennema-Nio <[email protected]> wrote:\n> I think C#, Java, Go, Rust, Kotlin, and Swift would be acceptable\n> choices for me (and possibly some more). They allow some type of\n> introspection, they have a garbage collector, and their general\n> tooling is quite good.\n>\n> But I think a dynamically typed scripting language is much more\n> fitting for writing tests like this. I love static typing for\n> production code, but imho it really doesn't have much benefit for\n> tests.\n\n+1. I write mostly protocol mocks and glue code in my authn testing,\nto try to set up the system into some initial state and then break it.\nOf the languages mentioned here, I've only used C#, Java, and Go. If I\nhad to reimplement my tests, I'd probably reach for Go out of all of\nthose, but the glue would still be more painful than it probably needs\nto be.\n\n> As scripting languages go, the ones that are still fairly heavily in\n> use are Javascript, Python, Ruby, and PHP. I think all of those could\n> probably work, but my personal order of preference would be Python,\n> Ruby, Javascript, PHP.\n\n- Python is the easiest language I've personally used to glue things\ntogether, bar none.\n- I like Ruby as a language but have no experience using it for\ntesting. (RSpec did come up during the unconference session and\nsubsequent hallway conversations.)\n- Javascript is a completely different mental model from what we're\nused to, IMO. I think we're likely to spend a lot of time fighting the\nengine unless everyone is very clear on how it works.\n- I don't see a use case for PHP here.\n\n> TO CLARIFY: This thread is not a proposal to replace Perl with Python.\n> It's a proposal to allow people to also write tests in Python.\n\n+1. It doesn't need to replace anything. It just needs to help us do\nmore things than we're currently doing.\n\n--Jacob\n\n\n", "msg_date": "Wed, 12 Jun 2024 12:31:23 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, Jun 12, 2024 at 4:48 AM Alexander Korotkov <[email protected]> wrote:\n> Generally, testgres was initially designed as Python analogue of what\n> we have in src/test/perl/PostgreSQL/Test. In particular its\n> testgres.PostgresNode is analogue of PostgreSQL::Test::Cluster. It\n> comes under PostgreSQL License. So, I wonder if we could revise it\n> and fetch most part of it into our source tree.\n\nOkay. If there's wide interest in a port of PostgreSQL::Test::Cluster,\nthat might be something to take a look at. 
(Since I'm focused on\ntesting things that the current Perl suite can't do at all, I would\nprobably not be the first to volunteer.)\n\n--Jacob\n\n\n", "msg_date": "Wed, 12 Jun 2024 12:31:35 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, Jun 12, 2024 at 8:50 AM Andres Freund <[email protected]> wrote:\n> I think I might have formulated my paragraph above badly - I didn't mean that\n> we should move away from perl tests tomorrow, but that we need a path forward\n> that allows folks to write tests without perl.\n\nOkay, agreed.\n\n> > - Tests aren't cheap, but in my experience, the maintenance-cost math\n> > for tests is a lot different than the math for implementations.\n>\n> At the moment they tend to be *more* expensive often, due to spurious\n> failures. That's mostly not perl's fault, don't get me wrong, but us not\n> having better infrastructure for testing complicated behaviour and/or testing\n> things on a more narrow basis.\n\nWell, okay, but I'm not sure how to respond to this in the frame of\nthis discussion. Bad tests will continue to exist. I am trying to add\na tool that, in my view, has made it easier for me to test complicated\nbehavior than what we currently have. I can't prove that it will solve\nother issues too.\n\n> > Well, scopes are pretty front and center when you start building\n> > pytest fixtures, and the complicated longer setups will hopefully\n> > converge correctly early on and be reused everywhere else. I imagine\n> > no one wants to build cluster setup from scratch.\n>\n> One (the?) prime source of state in our tap tests is the\n> database. Realistically we can't just tear that one down and reset it between\n> tests without causing the test times to explode. So we'll have to live with\n> some persistent state.\n\nYes? If I've given the impression that I disagree, sorry; I agree.\n\n> > On a slight tangent, is this not a problem today?\n>\n> It is, but that doesn't mean making it even bigger is unproblematic :)\n\nGiven all that's been said, I don't understand why you think the\nproblem would get bigger. We would cache expensive state that we need,\nincluding the cluster, and pytest lets us do that, and my test suite\ndoes that. I've never written a suite that spun up a separate cluster\nfor every single test and then immediately threw it away.\n\n(If you want to _enable_ that behavior, to test in extreme isolation,\nthen pytest lets you do that too. But it's not something we should do\nby default.)\n\n> > We do a lot more acceptance testing than internal testing, which came\n> > up as a major complaint from me and others during the unconference.\n> > One of the reasons people avoid writing internal tests in Perl is\n> > because it's very painful to find a rhythm with Test::More.\n>\n> What definition of internal tests are you using here?\n\nThere's a spectrum from unit-testing unexported functions all the way\nto end-to-end acceptance, and personally I believe that anything\nfiner-grained than end-to-end acceptance is unnecessarily painful. My\nOAuth suite sits somewhere in the middle, where it mocks the protocol\nlayer and can test the client and server as independent pieces. Super\nuseful for OAuth, which is asymmetrical.\n\nI'd like to additionally see better support for unit tests of backend\ninternals, but I don't know those seams as well as all of you do and I\nshould not be driving that. 
I don't think Python will necessarily help\nyou with it. But it sure helped me break apart the client and the\nserver while enjoying the testing process, and other people want to do\nthat too, so that's what I'm pushing for.\n\n> I think a lot of our tests are complicated, fragile and slow because we almost\n> exclusively do end-to-end tests, because with a few exceptions we don't have a\n> way to exercise code in a more granular way.\n\nYep.\n\n> That's probably not going to fly. It introduces painful circular dependencies\n> between building postgres (for libpq), building psycopg (requiring libpq) and\n> testing postgres (requiring psycopg).\n\nI am trying very hard not to drag that, which I understand is\ncontroversial and is in no way a linchpin of my proposal, into the\ndiscussion of whether or not we should try supporting pytest.\n\nI get it; I understand that the circular dependency is weird; there\nare alternatives if it's unacceptable; none of that has anything to do\nwith Python+pytest.\n\n> One thing worth thinking about is that such dependencies have to work on a\n> relatively large number of platforms / architectures. A lot of projects\n> don't...\n\nAgreed.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 12 Jun 2024 12:33:53 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, Jun 12, 2024 at 10:30 AM Daniel Gustafsson <[email protected]> wrote:\n> I might be missing something obvious, but if we use a third-party libpq driver\n> in the testsuite doesn't that imply that a patch adding net new functionality\n> to libpq also need to add it to the driver in order to write the tests?\n\nI use the third-party driver to perform the \"basics\" at a high level\n-- connections, queries during cluster setup, things that don't\ninvolve ABI changes. For new ABI I use ctypes, or as other people have\nmentioned CFFI would work.\n\n--Jacob\n\n\n", "msg_date": "Wed, 12 Jun 2024 12:34:04 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, Jun 12, 2024 at 01:40:30PM +0200, Jelte Fennema-Nio wrote:\n> On Wed, 12 Jun 2024 at 01:48, Noah Misch <[email protected]> wrote:\n> > I also want the initial scope to be the new language coexisting with the\n> > existing Perl tests. If a bulk translation ever happens, it should happen\n> > long after the debut of the new framework. That said, I don't much trust a\n> > human-written bulk language translation to go through without some tests\n> > accidentally ceasing to test what they test in Perl today.\n> \n> I definitely don't think we should rewrite all the tests that we have\n> in Perl today into some other language. But I do think that whatever\n> language we choose, that language should make it as least as easy to\n> write tests, as easy to read them and as easy to see that they are\n> testing the intended thing, as is currently the case for Perl.\n> Rewriting a few Perl tests into the new language, even if not merging\n> the rewrite, is a good way of validating that imho.\n\nAgreed.\n\n> PS. For PgBouncer I actually hand-rewrote all the tests that we had in\n> bash (which is the worst testing language ever) in Python and doing so\n> actually found more bugs in PgBouncer code that our bash tests\n> wouldn't catch. 
So it's not necessarily the case that you lose\n> coverage by rewriting tests.\n\nYep.\n\n\n", "msg_date": "Wed, 12 Jun 2024 13:56:57 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Hi,\n\n(I don't have an opinion which language should be selected\nhere.)\n\nIn <CAOYmi+mA7-uNqpY-0jNZY=fE-QsbfeM1j5Mc-vu1Xm+=B8NOXA@mail.gmail.com>\n \"Re: RFC: adding pytest as a supported test framework\" on Wed, 12 Jun 2024 12:31:23 -0700,\n Jacob Champion <[email protected]> wrote:\n\n> - I like Ruby as a language but have no experience using it for\n> testing. (RSpec did come up during the unconference session and\n> subsequent hallway conversations.)\n\nIf we want to select Ruby, I can help. (I'm a Ruby committer\nand a maintainer of a testing framework bundled in Ruby.)\n\nI'm using Ruby for PGroonga's tests that can't be covered by\npg_regress. For example, streaming replication related\ntests. PGroonga has a small utility for it:\nhttps://github.com/pgroonga/pgroonga/blob/main/test/helpers/sandbox.rb\n\nHere is a streaming replication test with it:\nhttps://github.com/pgroonga/pgroonga/blob/main/test/test-streaming-replication.rb\n\nI'm using test-unit as testing framework that is bundled in\nRuby: https://github.com/test-unit/test-unit/\n\nI don't recommend that we use RSpec as testing framework\neven if we select Ruby. RSpec may change API. (RSpec did it\nseveral times in the past.) If testing framework changes API, we\nneed to rewrite our tests to adapt the change.\n\nI'll never change test-unit API because I don't want to\nrewrite existing tests.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 13 Jun 2024 06:40:01 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, 12 Jun 2024 at 18:46, Daniele Varrazzo\n<[email protected]> wrote:\n> This is true, but [citation needed] :D I assume the pointer wanted to\n> be https://www.psycopg.org/psycopg3/docs/api/pq.html#pq-impl\n\nUgh, yes I definitely meant to add a link to that [1]. I meant this one though:\n\n[1]: https://www.psycopg.org/psycopg3/docs/basic/install.html#pure-python-installation\n\n> - using the pure Python psycopg (enforced by exporting\n> 'PSYCOPG_IMPL=python') you would use the libpq found on the\n> LD_LIBRARY_PATH, which can be useful to test regressions to the libpq\n> itself.\n\nThis indeed was the main idea I had in mind.\n\n> - if you want to test new libpq functions you can reach them in Python\n> by dynamic lookup. See [2] for an example of a function only available\n> from libpq v17.\n>\n> [2]: https://github.com/psycopg/psycopg/blob/2bf7783d66ab239a2fa330a842fd461c4bb17c48/psycopg/psycopg/pq/_pq_ctypes.py#L564-L569\n\nYeah, that dynamic lookup would work. But due to the cyclic dependency\non postgres commit vs psycopg PR we couldn't depend on psycopg for\nthose dynamic lookups. So we'd need to have our own dynamic lookup\ncode to do this.\n\nI don't see a huge problem with using psycopg for already released\ncommonly used features (i.e. connecting to postgres and doing\nqueries), but still use our own custom dynamic lookup for the rare\ntests that test newly added features. 
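To make that concrete, the dynamic lookup piece is pretty small with ctypes. Purely as an illustration (PQlibVersion is just a convenient, long-existing libpq function to demonstrate with, and the fallback soname below is an assumption about the platform):\n\nimport ctypes\nimport ctypes.util\n\n# Load whatever libpq is on the library path; pointing LD_LIBRARY_PATH at\n# the build tree makes this pick up the freshly built one.\nlibpq = ctypes.CDLL(ctypes.util.find_library('pq') or 'libpq.so.5')\n\n# Attribute access on a CDLL performs the symbol lookup, so hasattr()\n# doubles as an existence check for functions that only newer libpq\n# versions export.\nif hasattr(libpq, 'PQlibVersion'):\n    libpq.PQlibVersion.restype = ctypes.c_int\n    print('libpq version:', libpq.PQlibVersion())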
But I can definitely see people\nmaking the argument that if we need to write & maintain some dynamic\nlookup code ourselves anyway, we might as well have all the dynamic\nlookup code in our repo to avoid dependencies.\n\nOn Wed, 12 Jun 2024 at 18:46, Daniele Varrazzo\n<[email protected]> wrote:\n>\n> On Wed, 12 Jun 2024 at 18:08, Jelte Fennema-Nio <[email protected]> wrote:\n> >\n> > On Wed, 12 Jun 2024 at 17:50, Andres Freund <[email protected]> wrote:\n> > > > The OAuth pytest suite makes extensive use of\n> > > > - psycopg, to easily drive libpq;\n> > >\n> > > That's probably not going to fly. It introduces painful circular dependencies\n> > > between building postgres (for libpq), building psycopg (requiring libpq) and\n> > > testing postgres (requiring psycopg).\n> >\n> > psycopg has a few implementations binary, c, & pure python. The pure\n> > python one can be linked to a specific libpq.so file at runtime[1]. As\n>\n> This is true, but [citation needed] :D I assume the pointer wanted to\n> be https://www.psycopg.org/psycopg3/docs/api/pq.html#pq-impl\n>\n> I see the following use cases and how I would use psycopg to implement them:\n>\n> - by installing 'psycopg[binary]' you would get a binary bundle\n> shipping with a stable version of the libpq, so you can test the\n> database server regardless of libpq instabilities in the same\n> codebase.\n> - using the pure Python psycopg (enforced by exporting\n> 'PSYCOPG_IMPL=python') you would use the libpq found on the\n> LD_LIBRARY_PATH, which can be useful to test regressions to the libpq\n> itself.\n> - if you want to test new libpq functions you can reach them in Python\n> by dynamic lookup. See [2] for an example of a function only available\n> from libpq v17.\n>\n> [2]: https://github.com/psycopg/psycopg/blob/2bf7783d66ab239a2fa330a842fd461c4bb17c48/psycopg/psycopg/pq/_pq_ctypes.py#L564-L569\n>\n> -- Daniele\n\n\n", "msg_date": "Wed, 12 Jun 2024 23:50:15 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Tue, Jun 11, 2024 at 8:05 AM Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2024-06-10 Mo 21:49, Andres Freund wrote:\n>\n> Hi,\n>\n> On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:\n>\n> I'm not sure what part of the testing infrastructure you think is\n> unmaintained. For example, the last release of Test::Simple was all the way\n> back on April 25.\n>\n> IPC::Run is quite buggy and basically just maintained by Noah these days.\n>\n>\n> Yes, that's true. I think the biggest pain point is possibly the recovery tests.\n>\n> Some time ago I did some work on wrapping libpq using the perl FFI module. It worked pretty well, and would mean we could probably avoid many uses of IPC::Run, and would probably be substantially more efficient (no fork required). It wouldn't avoid all uses of IPC::Run, though.\n>\n> But my point was mainly that while a new framework might have value, I don't think we need to run out and immediately rewrite several hundred TAP tests. Let's pick the major pain points and address those.\n\nFWIW, I felt a lot of pain trying to write recovery TAP tests with\nIPC::Run's pumping functionality. It was especially painful (as\nsomeone who knows even less Perl than the \"street fighting Perl\"\nThomas Munro has described having) before the additional test\ninfrastructure was added in BackgroudPsql.pm last year. 
As an example\nof the \"worst case\", it took me two full work days to go from a repro\n(with psql sessions on a primary and replica node) of the vacuum hang\nissue being explored in [1] to a sort-of working TAP test which\ndemonstrated it - and that was with help from several other\ncommitters. Granted, this is a complex case.\n\nA small part of the issue is that, as Tristan has said elsewhere,\nthere aren't good developer tool integrations that I know about for\nPerl. I use neovim's LSP support for C and Python (in other projects),\nand there is a whole ecosystem of tools I can use for both C and\nPython. I know not everyone likes or needs these, but I find that they\nhelp me write and debug code faster.\n\nI had offered to take a stab at writing some of the BackgroundPsql\ntest infrastructure in Python. I haven't started exploring that yet or\nlooking at what Jacob has done so far, but I am optimistic that this\nis an area where it is worth seeing what is available to us outside of\nIPC::Run.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_bXH2g_pchG7rN_4fs-_6_kVbbJ97gYRoN0Zdb9P04Wag%40mail.gmail.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 18:34:16 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, 12 Jun 2024 at 21:07, Robert Haas <[email protected]> wrote:\n> Yeah, I don't think depending on psycopg2 is practical at all. We can\n> either shell out to psql like we do now, or we can use something like\n> CFFI.\n\nQuick clarification I meant psycopg3, not psycopg2. And I'd very much\nlike to avoid using psql for sending queries, luckily CFFI in python\nis very good.\n\n> Many of\n> those improvements could be made on top of the Perl framework we have\n> today, and some of them have been discussed, but nobody's done the\n> work.\n\nI agree it's not a technical issue. It is a people issue. There are\nvery few people skilled in Perl active in the community. And most of\nthose are very senior hackers that have much more important things to\ndo that make our Perl testing framework significantly better. And the\nless senior people that might see improving tooling as a way to get\nhelp out in the community, are try to stay away from Perl with a 10\nfoot pole. So the result is, nothing gets improved. Especially since\nvery few people outside our community improve this tooling either.\n\n I also don't understand the argument that assert a == b is some\n> new and wonderful thing; I mean, you can already do is($a,$b,\"test\n> name\") which *also* shows you the values when they don't match, and\n> includes a test name, too!\n\nSure you can, if you know the function exists. And clearly not\neveryone knows that it exists, as the quick grep below demonstrates:\n\n❯ grep 'ok(.* == .*' **.pl | wc -l\n41\n\nBut apart from the obvious syntax doing what you want, the output is\nalso much better when looking at a slightly more complex case. 
With\nthe following code:\n\ndef some_returning_func():\n return 1234\n\ndef some_func(val):\n if val > 100:\n return 100\n return val\n\ndef test_mytest():\n assert some_func(some_returning_func()) == 50\n\nPytest will show the following output\n\n def test_mytest():\n> assert some_func(some_returning_func()) == 50\nE assert 100 == 50\nE + where 100 = some_func(1234)\nE + where 1234 = some_returning_func()\n\nI have no clue how you could get output that's even close to that\nclear with Perl.\n\nAnother problem I run into is that, as you probably know, sometimes\nyou need to look at the postgres logs to find out what actually went\nwrong. Currently the only way to find them (for me) is following the\nfollowing steps: hmm, let me figure out what that directory was called\nagain... ah okay it is build/testrun/pg_upgrade/001_basic/... okay\nlet's start opening log files that all have very similar names until\nfind the right one.\n\nWhen a test in pytest fails it automatically outputs all stdout/stderr\nthat was outputted, and hides it on success. So for the PgBouncer test\nsuite. I simply send all the relevant log files to stdout, prefixed by\nsome capitalized identifying line with a few newlines around it.\nSomething like \"PG_LOG: /path/to/actual/logfile\". Then when a test\nfails in my terminal window I can look at the files related to the\nfailed test instantly. This allows me to debug failures much faster.\n\nA related thing that also doesn't help at all is that (afaik) seeing\nany of the perl tap test output in your terminal requires running\n`meson test` with the -v option, and then scrolling up past all the\nsuper verbose output of successfully passing tests to find out what\nexactly failed in the single test that failed. And if you don't want\nto do that you have to navigate to the magic directory path (\nbuild/testrun/pg_upgrade/001_basic/) of the specific tests to look at\nthe stdout file there... Which then turns out not to even be there if\nyou actually had a compilation failure in your perl script (which\nhappens very often to anyone that doesn't use perl often). So now you\nhave to scroll up anyway.\n\nPytest instead is very good at only showing output for the tests that\nfailed, and hiding pretty much all output for the tests that passed.\n\n\n", "msg_date": "Thu, 13 Jun 2024 00:43:07 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "> On 13 Jun 2024, at 00:34, Melanie Plageman <[email protected]> wrote:\n\n> FWIW, I felt a lot of pain trying to write recovery TAP tests with\n> IPC::Run's pumping functionality. It was especially painful (as\n> someone who knows even less Perl than the \"street fighting Perl\"\n> Thomas Munro has described having) before the additional test\n> infrastructure was added in BackgroudPsql.pm last year.\n\nA key aspect of this, which isn't specific to Perl or our use of it, is that\nthis was done in backbranches which doesn't have the (recently) much improved\nBackgroundPsql.pm. The quality of our tools and the ease of use they provide\nis directly related to the investment we make into continuously improving our\ntestharness. 
Regardless of which toolset we adopt, if we don't make this\ninvestment (taking learnings from the past years and toolsets into account)\nwe're bound to repeat this thread in a few years advocating for toolset X+1.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 13:27:42 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Wed, Jun 12, 2024 at 6:43 PM Jelte Fennema-Nio <[email protected]> wrote:\n> I agree it's not a technical issue. It is a people issue. There are\n> very few people skilled in Perl active in the community. And most of\n> those are very senior hackers that have much more important things to\n> do that make our Perl testing framework significantly better. And the\n> less senior people that might see improving tooling as a way to get\n> help out in the community, are try to stay away from Perl with a 10\n> foot pole. So the result is, nothing gets improved. Especially since\n> very few people outside our community improve this tooling either.\n\nI agree with you, but I'm skeptical that solving it will be as easy as\nswitching to Python. For whatever reason, it seems like every piece of\ninfrastructure that the PostgreSQL community has suffers from severe\nneglect. Literally everything I know of either has one or maybe two\nvery senior hackers maintaining it, or no maintainer at all. Andrew\nmaintains the buildfarm and it evolves quite slowly. Andres did all\nthe work on meson, with some help from Peter. Thomas maintains cfbot\nas a skunkworks. The Perl-based TAP test framework gets barely any\nlove at all. The CommitFest application is pretty much totally\nstagnant, and in fact is a great example of what I'm talking about\nhere: I wrote an original version in Perl and somebody -- I think\nMagnus -- rewrote it in a more maintainable framework -- and then the\ndevelopment pace went to basically zero. All of this stuff is critical\nproject infrastructure and yet it feels like nobody wants to work on\nit.\n\nNow, this case may prove to be an exception to that rule and that will\nbe great. But what I think is a lot more likely is that we'll get a\nlot of pressure to commit something as soon as parity with the Perl\nTAP test system has been achieved, or maybe even before that, and then\nthe rate of further improvements will slow to a trickle. That's not to\nsay that sticking with Perl is better. A quick Google search finds a\nweb page that says Python is two orders of magnitude more popular than\nPerl, and that's not something we should just ignore. But I still\nthink it's fair to question whether the preference of many developers\nfor Python over Perl will translate into sustained investment in\nimproving the infrastructure. Again, I will be thrilled if it does,\nbut that just doesn't seem to be the way that things go around here,\nand I bet the reasons go well beyond choice of programming language.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 09:38:24 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Jun 12, 2024 at 6:43 PM Jelte Fennema-Nio <[email protected]> wrote:\n>> I agree it's not a technical issue. It is a people issue. There are\n>> very few people skilled in Perl active in the community. 
And most of\n>> those are very senior hackers that have much more important things to\n>> do that make our Perl testing framework significantly better. And the\n>> less senior people that might see improving tooling as a way to get\n>> help out in the community, are try to stay away from Perl with a 10\n>> foot pole. So the result is, nothing gets improved. Especially since\n>> very few people outside our community improve this tooling either.\n\n> I agree with you, but I'm skeptical that solving it will be as easy as\n> switching to Python. For whatever reason, it seems like every piece of\n> infrastructure that the PostgreSQL community has suffers from severe\n> neglect.\n\nYeah. In this case it's perhaps more useful to look at our external\ndependencies, the large majority of which are suffering from age\nand neglect:\n\n * autoconf & gmake (although meson may get us out from under these)\n * bison\n * flex\n * perl\n * tcl\n * regex library (basically from tcl)\n * libxml2\n * kerberos\n * ldap\n * pam\n * uuid library\n\nI think the basic problem is inherent in being a successful long-lived\nproject. Or maybe we're just spectacularly bad at picking which\nthings to depend on. Whichever it is, we'd better have a 10- or 20-\nyear perspective when thinking about adopting new major dependencies.\n\nIn the case at hand, I share Robert's doubts about Python. Sure it's\nmore popular than Perl, but I don't think it's actually better, and\nin some ways it's worse. (The moving-target package collection was\nmentioned as a problem, for instance.) Is it going to age better\nthan Perl? Doubt it.\n\nI wonder if we should be checking out some of the other newer\nlanguages that were mentioned upthread. It feels like going to\nPython here will lead to having two testing infrastructures with\nmas-o-menos the same capabilities, leaving us with a situation\nwhere people have to know both languages in order to make sense of\nour test suite. I find it hard to picture that as an improvement\nover the status quo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Jun 2024 11:19:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, 13 Jun 2024 at 15:38, Robert Haas <[email protected]> wrote:\n> For whatever reason, it seems like every piece of\n> infrastructure that the PostgreSQL community has suffers from severe\n> neglect. Literally everything I know of either has one or maybe two\n> very senior hackers maintaining it, or no maintainer at all. Andrew\n> maintains the buildfarm and it evolves quite slowly. Andres did all\n> the work on meson, with some help from Peter. Thomas maintains cfbot\n> as a skunkworks. The Perl-based TAP test framework gets barely any\n> love at all. The CommitFest application is pretty much totally\n> stagnant, and in fact is a great example of what I'm talking about\n> here: I wrote an original version in Perl and somebody -- I think\n> Magnus -- rewrote it in a more maintainable framework -- and then the\n> development pace went to basically zero. All of this stuff is critical\n> project infrastructure and yet it feels like nobody wants to work on\n> it.\n\nOverall, I agree with the sentiment of us not maintaining our tooling\nwell (although I think meson maintenance has been pretty decent so\nfar). I think there's a bunch of reasons for this (not all apply to\neach of the tools):\n1. 
pretty much solely maintained by senior community members who don't\nhave time to maintain it\n2. no clear way to contribute. e.g. where should I send a patch/PR for\nthe commitfest app, or the cfbot?\n3. (related to 1) unresponsive when somehow contributions are actually\nsent in (I have two open PRs on the cfbot app from 3 years ago without\nany response)\n\nI think 1 & 3 could be addressed by more easily giving commit/merge\naccess to these tools than to the main PG repo. And I think 2 could be\naddressed by writing on the relevant wiki page where to go, and\nprobably putting a link to the wiki page on the actual website of the\ntool.\n\nBut Perl is at the next level of unmaintained infrastructure. It is\nactually clear how you can contribute to it, but still no new\ncommunity members actually want to contribute to it. Also, it's not\nonly unmaintained by us but it's also pretty much unmaintained by the\nupstream community.\n\n> But I still\n> think it's fair to question whether the preference of many developers\n> for Python over Perl will translate into sustained investment in\n> improving the infrastructure. Again, I will be thrilled if it does,\n> but that just doesn't seem to be the way that things go around here,\n> and I bet the reasons go well beyond choice of programming language.\n\nAs you said, no one in our community wants to maintain our testsuite\nfull time. But our test suite consists partially of upstream\ndependencies and partially of our own code. Right now pretty much\nno-one improves the ustream code, and pretty much no-one improves our\nown code. Using a more modern language gives up much more frequent\nupstream improvements for free, and it will allow new community\nmembers to contribute to our own test suite.\n\nAnd I understand you are sceptical that people will contribute to our\nown test suite, just because it's Python. But as a counterpoint:\npeople are currently already doing exactly that, just outside of the\ncore postgres repo[1][2][3]. I don't see why those people would\nsuddenly stop doing that if we include such a suite in the official\nrepo. Apparently many people hate writing tests in Perl so much that\nthey'd rather build Python test frameworks to test their extensions,\nthan to use/improve the Perl testing framework included in Postgres.\n\n[1]: https://github.com/pgbouncer/pgbouncer/tree/master/test\n[2]: https://github.com/jchampio/pg-pytest-suite\n[3]: https://github.com/postgrespro/testgres\n\n\nPS. I don't think it makes sense to host our tooling like the\ncommitfest app on our own git server instead of github/gitlab. That\nonly makes it harder for community members to contribute and also much\nharder to set up CI. I understand the reasons why we use mailing lists\nfor the development of core postgres, but I don't think those apply\nnearly as much to our tooling repos. And honestly also not to stuff\nlike the website.\n\n\n", "msg_date": "Thu, 13 Jun 2024 19:07:58 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, 13 Jun 2024 at 17:19, Tom Lane <[email protected]> wrote:\n> I wonder if we should be checking out some of the other newer\n> languages that were mentioned upthread.\n\nIf this is actually something that we want to seriously evaluate, I\nthink that's a significant effort. And I think the people that want a\nlanguage would need to step in to make that effort. 
So far Jacob[1],\nAlexander[2] and me[3] seem to be doing that for Python, and Sutou has\ndone that for Ruby[4].\n\n[1]: https://github.com/pgbouncer/pgbouncer/tree/master/test\n[2]: https://github.com/jchampio/pg-pytest-suite\n[3]: https://github.com/postgrespro/testgres\n[4]: https://github.com/pgroonga/pgroonga/blob/main/test/test-streaming-replication.rb\n\n> It feels like going to\n> Python here will lead to having two testing infrastructures with\n> mas-o-menos the same capabilities, leaving us with a situation\n> where people have to know both languages in order to make sense of\n> our test suite. I find it hard to picture that as an improvement\n> over the status quo.\n\nYou don't have to be fluent in writing Python to be able to read and\nunderstand tests written in it. As someone familiar with Python I can\ndefinitely read our test suite, and I expect everyone smart enough to\nbe fluent in Perl to be able to read and understand Python with fairly\nlittle effort too.\n\nI think having significantly more tests being written, and those tests\nbeing written faster and more correctly, is definitely worth the\nslight mental effort of learning to read two very similarly looking\nscripting languages (they both pretty much looking like pseudo code).\n\n\n", "msg_date": "Thu, 13 Jun 2024 19:28:00 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> You don't have to be fluent in writing Python to be able to read and\n> understand tests written in it.\n\n[ shrug... ] I think the same can be said of Perl, with about as\nmuch basis. It matters a lot if you have previous experience with\nthe language.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Jun 2024 13:47:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 11:19 AM Tom Lane <[email protected]> wrote:\n> I wonder if we should be checking out some of the other newer\n> languages that were mentioned upthread. It feels like going to\n> Python here will lead to having two testing infrastructures with\n> mas-o-menos the same capabilities, leaving us with a situation\n> where people have to know both languages in order to make sense of\n> our test suite. I find it hard to picture that as an improvement\n> over the status quo.\n\nAs I see it, one big problem is that if you pick a language that's too\nnew, it's more likely to fade away. Python is very well-established,\ne.g. see\n\nhttps://www.tiobe.com/tiobe-index/\n\nThat gives Python a rating of 15.39%; vs. Perl at 0.69%. There are\nother things that you could pick, for sure, like Javascript, but if\nyou want a scripting language that's popular now, Python is hard to\nbeat. 
And that means it's more likely to still have some life in it 10\nor 20 years from now than many other things.\n\nNot all sites agree on which programming languages are actually the\nmost popular and I'm not strongly against considering other\npossibilities, but Python seems to be pretty high on most lists, often\n#1, and that does matter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 13:51:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 7:27 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 13 Jun 2024, at 00:34, Melanie Plageman <[email protected]> wrote:\n>\n> > FWIW, I felt a lot of pain trying to write recovery TAP tests with\n> > IPC::Run's pumping functionality. It was especially painful (as\n> > someone who knows even less Perl than the \"street fighting Perl\"\n> > Thomas Munro has described having) before the additional test\n> > infrastructure was added in BackgroudPsql.pm last year.\n>\n> A key aspect of this, which isn't specific to Perl or our use of it, is that\n> this was done in backbranches which doesn't have the (recently) much improved\n> BackgroundPsql.pm. The quality of our tools and the ease of use they provide\n> is directly related to the investment we make into continuously improving our\n> testharness. Regardless of which toolset we adopt, if we don't make this\n> investment (taking learnings from the past years and toolsets into account)\n> we're bound to repeat this thread in a few years advocating for toolset X+1.\n\nTrue. And thank you for committing BackgroundPsql.pm (and Andres for\nstarting that work). My specific case is likely one of a poor work\nperson blaming her tools :)\n\n- Melanie\n\n\n", "msg_date": "Thu, 13 Jun 2024 14:09:01 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 1:08 PM Jelte Fennema-Nio <[email protected]> wrote:\n> I think 1 & 3 could be addressed by more easily giving commit/merge\n> access to these tools than to the main PG repo. And I think 2 could be\n> addressed by writing on the relevant wiki page where to go, and\n> probably putting a link to the wiki page on the actual website of the\n> tool.\n\n+1.\n\n> But Perl is at the next level of unmaintained infrastructure. It is\n> actually clear how you can contribute to it, but still no new\n> community members actually want to contribute to it. Also, it's not\n> only unmaintained by us but it's also pretty much unmaintained by the\n> upstream community.\n\nI feel like I already agreed to this in a previous email and you're\ncontinuing to argue with me as if I were disagreeing.\n\n> As you said, no one in our community wants to maintain our testsuite\n> full time. But our test suite consists partially of upstream\n> dependencies and partially of our own code. Right now pretty much\n> no-one improves the ustream code, and pretty much no-one improves our\n> own code. Using a more modern language gives up much more frequent\n> upstream improvements for free, and it will allow new community\n> members to contribute to our own test suite.\n\nI also agree with this. I'm just not super optimistic about how much\nof that will actually happen. 
And I'd like to hear you acknowledge\nthat concern and think about whether it can be addressed in some way,\ninstead of just repeating that we should do it anyway. Because I agree\nwe probably should do it anyway, but that doesn't mean I wouldn't like\nto see the downsides mitigated as much as we can. In particular, if\nthe proposal is exactly \"let's add the smallest possible patch that\nenables people to write tests in Python and then add a few new tests\nin Python while leaving almost everything else in Perl, with no\nmigration plan and no clear vision of how the Python support ever gets\nany better than the minimum stub that is proposed for initial commit,\"\nthen I don't know that I can vote for that plan. Honestly, that sounds\nlike very little work for the person proposing that minimal patch and\na whole lot of work for the rest of the community later on, and the\nevidence is not in favor of volunteers showing up to take care of that\nwork. The plan should be more front-loaded than that: enough initial\ndevelopment should get done by the people making the proposal that if\nthe work stops after, we don't have another big mess on our hands.\n\nOr so I think, anyway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 14:11:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "> On 13 Jun 2024, at 20:09, Melanie Plageman <[email protected]> wrote:\n> \n> On Thu, Jun 13, 2024 at 7:27 AM Daniel Gustafsson <[email protected]> wrote:\n>> \n>>> On 13 Jun 2024, at 00:34, Melanie Plageman <[email protected]> wrote:\n>> \n>>> FWIW, I felt a lot of pain trying to write recovery TAP tests with\n>>> IPC::Run's pumping functionality. It was especially painful (as\n>>> someone who knows even less Perl than the \"street fighting Perl\"\n>>> Thomas Munro has described having) before the additional test\n>>> infrastructure was added in BackgroudPsql.pm last year.\n>> \n>> A key aspect of this, which isn't specific to Perl or our use of it, is that\n>> this was done in backbranches which doesn't have the (recently) much improved\n>> BackgroundPsql.pm. The quality of our tools and the ease of use they provide\n>> is directly related to the investment we make into continuously improving our\n>> testharness. Regardless of which toolset we adopt, if we don't make this\n>> investment (taking learnings from the past years and toolsets into account)\n>> we're bound to repeat this thread in a few years advocating for toolset X+1.\n> \n> True. And thank you for committing BackgroundPsql.pm (and Andres for\n> starting that work). My specific case is likely one of a poor work\n> person blaming her tools :)\n\nI don't think it is since the tools we had then were really hard to use. I\nwrote very similar tests to yours for the online checksums patch and they were\nquite complicated to get right. The point is that the complexity was greatly\nreduced by the community, and that kind of work will be equally important\nregardless of toolset.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 20:39:16 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, 13 Jun 2024 at 20:11, Robert Haas <[email protected]> wrote:\n> > But Perl is at the next level of unmaintained infrastructure. 
It is\n> > actually clear how you can contribute to it, but still no new\n> > community members actually want to contribute to it. Also, it's not\n> > only unmaintained by us but it's also pretty much unmaintained by the\n> > upstream community.\n>\n> I feel like I already agreed to this in a previous email and you're\n> continuing to argue with me as if I were disagreeing.\n\nSorry about that.\n\n> I also agree with this. I'm just not super optimistic about how much\n> of that will actually happen. And I'd like to hear you acknowledge\n> that concern and think about whether it can be addressed in some way,\n> instead of just repeating that we should do it anyway. Because I agree\n> we probably should do it anyway, but that doesn't mean I wouldn't like\n> to see the downsides mitigated as much as we can.\n\nI'm significantly more optimistic than you, but I also definitely\nunderstand and agree with the concern. I also agree that mitigating\nthat concern beforehand would be a good thing.\n\n> In particular, if\n> the proposal is exactly \"let's add the smallest possible patch that\n> enables people to write tests in Python and then add a few new tests\n> in Python while leaving almost everything else in Perl, with no\n> migration plan and no clear vision of how the Python support ever gets\n> any better than the minimum stub that is proposed for initial commit,\"\n> then I don't know that I can vote for that plan. Honestly, that sounds\n> like very little work for the person proposing that minimal patch and\n> a whole lot of work for the rest of the community later on, and the\n> evidence is not in favor of volunteers showing up to take care of that\n> work. The plan should be more front-loaded than that: enough initial\n> development should get done by the people making the proposal that if\n> the work stops after, we don't have another big mess on our hands.\n>\n> Or so I think, anyway.\n\nI understand and agree with your final stated goal of not ending up in\nanother big mess. It's also clear to me that you don't think the\ncurrent proposal achieves that goal. So I assume you have some\nadditional ideas for the proposal to help achieve that goal and/or\nsome specific worries that you'd like to get addressed better in the\nproposal. But currently it's not really clear to me what either of\nthose are. Could you clarify?\n\n\n", "msg_date": "Thu, 13 Jun 2024 20:52:20 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 9:38 AM Robert Haas <[email protected]> wrote:\n\n> I agree with you, but I'm skeptical that solving it will be as easy as\n> switching to Python. For whatever reason, it seems like every piece of\n> infrastructure that the PostgreSQL community has suffers from severe\n> neglect. Literally everything I know of either has one or maybe two\n> very senior hackers maintaining it, or no maintainer at all.\n\n...\n\n> All of this stuff is critical project infrastructure and yet it feels like\n> nobody wants to work on\n> it.\n\n\nI feel at least some of this is a visibility / marketing problem. I've not\nseen any dire requests for help come across on the lists, nor things on the\nvarious todos/road maps/ blog posts people make from time to time. If I\nhad, I would have jumped in. 
And for the record, I'm very proficient with\nPerl.\n\nCheers,\nGreg\n\n", "msg_date": "Thu, 13 Jun 2024 15:16:54 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 2:52 PM Jelte Fennema-Nio <[email protected]> wrote:\n> I understand and agree with your final stated goal of not ending up in\n> another big mess. It's also clear to me that you don't think the\n> current proposal achieves that goal. So I assume you have some\n> additional ideas for the proposal to help achieve that goal and/or\n> some specific worries that you'd like to get addressed better in the\n> proposal. But currently it's not really clear to me what either of\n> those are. Could you clarify?\n\nHmm, I don't know that I have what you're hoping I have, or at least\nnot any more than what I've said already.\n\nI interpreted Jacob's original email as articulating a goal (\"For the\nv18 cycle, I would like to try to get pytest [1] in as a supported\ntest driver, in addition to the current offerings\") rather than a\nplan. There's no patch set yet and, as I understand it, no detailed\nplan for a patch set: that email seemed to focus on the question of\ndesirability, rather than on outlining a plan of work, which I assume\nis still to come. Some things I'd like to see when a patch set does\nshow up are:\n\n- good documentation for people who have no previous experience with\nPython and/or pytest e.g. here's how to set up your environment on\nLinux, Windows, macOS, *BSD so you can run the tests, here's how to\nrun the tests, here's how it's different from the Perl framework we\nhave now\n\n- no external dependencies on PostgreSQL connectors. psql or libpq\nforeign function interface. the latter would be a cool increment of\nprogress over the status quo.\n\n- at least as much in-tree support for writing tests as we have today\nwith PostgreSQL::Test::whatever, but not necessarily a 1:1 reinvention\nof the stuff we have now, and documentation of those facilities that\nis as good or, ideally, better than what we have today.\n\n- high overall code quality and level of maturity, not just something\nsomeone threw together for parity with the Perl system.\n\n- enough tests written for or converted to the new system to give\nreviewers confidence that it's truly usable and fit for purpose.\n\nThe important thing to me here (as it so often is) is to think like a\nmaintainer. Imagine that immediately after the patches for this\nfeature are committed, the developers who did the work all disappear\nfrom the community and are never heard from again.
How much pain does\nthat end us causing? The answer doesn't need to be zero; that is\nunrealistic. But it also shouldn't be \"well, if that happens we're\ngoing to have to rip the feature out\" or \"well, a bunch of committers\nwho didn't want to write tests in Python in the first place are now\ngoing to have to do a lot of work in Python to stabilize the work\nalready committed.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 15:20:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 3:17 PM Greg Sabino Mullane <[email protected]> wrote:\n> I feel at least some of this is a visibility / marketing problem. I've not seen any dire requests for help come across on the lists, nor things on the various todos/road maps/ blog posts people make from time to time. If I had, I would have jumped in. And for the record, I'm very proficient with Perl.\n\nI agree with all of that!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 15:21:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 11:11 AM Robert Haas <[email protected]> wrote:\n> I feel like I already agreed to this in a previous email and you're\n> continuing to argue with me as if I were disagreeing.\n\nI also think that maybe arguments are starting to sail past each\nother, and the temperature is starting to climb. (And Jelte may be\narguing to all readers of the thread, rather than just a single\nindividual. It's hard to tell with the list format.) And now I see\nthat there's another email that came in while I was writing this, but\nI think I'm going to have to send this as-is because I can't write\nemails that fast.\n\n> I also agree with this. I'm just not super optimistic about how much\n> of that will actually happen. And I'd like to hear you acknowledge\n> that concern and think about whether it can be addressed in some way,\n> instead of just repeating that we should do it anyway. Because I agree\n> we probably should do it anyway, but that doesn't mean I wouldn't like\n> to see the downsides mitigated as much as we can.\n\nOkay, +1.\n\n> In particular, if\n> the proposal is exactly \"let's add the smallest possible patch that\n> enables people to write tests in Python and then add a few new tests\n> in Python while leaving almost everything else in Perl, with no\n> migration plan and no clear vision of how the Python support ever gets\n> any better than the minimum stub that is proposed for initial commit,\"\n> then I don't know that I can vote for that plan.\n\n(that's not the proposal and I know/think you know that but having my\noriginal email twisted into that is making me feel a bit crispy)\n\nI do not want to migrate, and I have stated so multiple times, which\nis why I have not proposed a migration plan. Other committers have\nalready expressed resistance to the idea that we would rewrite the\nPerl stuff. I think they're right. I think we should not. 
I think we\nshould accept the cost of both Perl and something else for the\nnear-to-medium future, as long as the \"something else\" gives us value\noffsetting the additional cost.\n\n> Honestly, that sounds\n> like very little work for the person proposing that minimal patch and\n> a whole lot of work for the rest of the community later on, and the\n> evidence is not in favor of volunteers showing up to take care of that\n> work.\n\nOkay, cool. Here: as the person who is 100% signing himself up to do\nthat, time for me to jump in.\n\nI have an entire 6000-something-line suite of protocol tests that has\nbeen linked like four times above. It does something fundamentally\ndifferent from the end-to-end Perl suite; it is not a port. It is far\nfrom perfect and I do not want to pressure people to adopt it as-is,\nwhich is why I have not. In this thread, I am offering it solely as\nevidence that I have follow-up intent.\n\nBut I might get hit by a bus. Or, as far as anyone except me knows, I\nmight lose interest after things get hard, which would be sad. Which\nis why my very first proposal was to add an entry point that can be\nreverted. The suite is not going to infect the codebase any more than\nthe Perl does. A revert will involve pulling the Meson test entry\ncode, and deleting all pytest subdirectories (of which there is only\none, IIRC, in my OAuth suite).\n\n> The plan should be more front-loaded than that: enough initial\n> development should get done by the people making the proposal that if\n> the work stops after, we don't have another big mess on our hands.\n\nFor me personally, the problem is the opposite. I have done _so much_\ninitial development by myself that there's no way it could ever be\nreviewed and accepted. But I had to do that to get meaningful\ndevelopment done in my style of work, which is focused on security and\ntestability and verifiable implementation.\n\nI am trying to carve off pieces of that and say \"hey, does this look\nnice to anyone else?\" That will take time, probably over multiple\ndifferent threads. In the meantime, I don't want to be a serialization\npoint for other people who are excited about trying new testing\nmethods, because very few people are currently doing the exact kind of\nwork I am doing. They may want to do other things, as evidenced by the\nthread contents. At least one committer would have to sign up to be a\nserialization point, unfortunately, but I think that's going to be\ntrue regardless of plan, if we want multiple non-committer members of\nthe community to be involved instead of just one torch-bearer.\n\nBecause of how many moving parts and competing interests and personal\ndisagreements there are, I am firmly in the camp of \"try something\nthat many people think *might* work better, that can be undone if it\nsucks, and iterate on it.\" I want to build community momentum, because\nI think that's a pretty effective way to change the cultural norms\nthat you're saying you're frustrated with. 
That doesn't mean I want to\ndo this without a plan; it just means that the plan can involve saying\n\"this is not working and we can undo it\" which makes the uncertainty\neasier to take.\n\n--Jacob\n\n\n", "msg_date": "Thu, 13 Jun 2024 12:28:42 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 3:28 PM Jacob Champion\n<[email protected]> wrote:\n> (that's not the proposal and I know/think you know that but having my\n> original email twisted into that is making me feel a bit crispy)\n\nI definitely did not mean to imply that. I took your original email as\na goal, rather than a proposal or plan. My statement was strictly\nintended as a hypothetical because I didn't think any plan had been\nproposed - I only meant to say that *if* the plan were to do X, that\nwould be a hard sell for me.\n\n> I do not want to migrate, and I have stated so multiple times, which\n> is why I have not proposed a migration plan. Other committers have\n> already expressed resistance to the idea that we would rewrite the\n> Perl stuff. I think they're right. I think we should not. I think we\n> should accept the cost of both Perl and something else for the\n> near-to-medium future, as long as the \"something else\" gives us value\n> offsetting the additional cost.\n\nI agree. It's not terribly pretty, IMHO, but it's hard to see doing\nthings any other way.\n\n> For me personally, the problem is the opposite. I have done _so much_\n> initial development by myself that there's no way it could ever be\n> reviewed and accepted. But I had to do that to get meaningful\n> development done in my style of work, which is focused on security and\n> testability and verifiable implementation.\n\nI admire this attitude. I think a lot of people who go off and do a\nton of initial work outside core show up and are like \"ok, now take\nall of my code.\" As you say, that's not realistic. One caveat here,\nperhaps, is that the focus of the work you've done up until now and\nthe things that I and other community members may want as a condition\nof merging stuff may be somewhat distinct. You will have naturally\nbeen focused on your goals rather than other people's goals, or so I\nassume.\n\n> I am trying to carve off pieces of that and say \"hey, does this look\n> nice to anyone else?\" That will take time, probably over multiple\n> different threads.\n\nThis makes sense, but I would be a bit wary of splitting it up over\ntoo many different threads. It may well make sense to split it up, but\nit will probably be easier to review if the core work to enable this\nis one patch set on one thread where someone can read just that one\nthread and understand the situation, rather than many threads where\nyou have to read them all.\n\n> Because of how many moving parts and competing interests and personal\n> disagreements there are, I am firmly in the camp of \"try something\n> that many people think *might* work better, that can be undone if it\n> sucks, and iterate on it.\" I want to build community momentum, because\n> I think that's a pretty effective way to change the cultural norms\n> that you're saying you're frustrated with. That doesn't mean I want to\n> do this without a plan; it just means that the plan can involve saying\n> \"this is not working and we can undo it\" which makes the uncertainty\n> easier to take.\n\nAs a community, we're really bad at this. 
Once something gets\ncommitted, getting a consensus to revert it is really hard, especially\nif a major release has happened meanwhile, but most of the time even\nif it hasn't. It might be a little easier in this case, since after\nall it's not a directly user-visible feature. But historically what\nhappens if somebody says \"hey, there are six unfixed problems with\nthis feature!\" is that everybody says \"well, you're free to fix the\nproblems if you want, but you're not allowed to revert the feature.\"\nAnd that is *exactly* how we end up with stuff like the current TAP\ntest framework: ripping that out would mean removing all the TAP tests\nthat depend on it, and that wouldn't have achieved consensus two\nmonths after the feature went in, let alone today.\n\nNow, it has been suggested to me by at least one other person involved\nwith the project that we need to be more open to the kind of thing\nthat you propose here: add experimental things and take them out if it\ndoesn't work out. I can definitely understand that this might be a\nculturally better approach than what we currently do. So maybe that's\nthe way forward, but it is hard (at least for me) to get past the fear\nof being the one left holding the bag, and I suspect that other\ncommitters have similar fears. What exactly we should do about that,\nI'm not sure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:04:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "\nOn 2024-06-12 We 11:28, Andres Freund wrote:\n> Hi,\n>\n> On 2024-06-11 08:04:57 -0400, Andrew Dunstan wrote:\n>> Some time ago I did some work on wrapping libpq using the perl FFI module.\n>> It worked pretty well, and would mean we could probably avoid many uses of\n>> IPC::Run, and would probably be substantially more efficient (no fork\n>> required). It wouldn't avoid all uses of IPC::Run, though.\n> FWIW, I'd *love* to see work on this continue. The reduction in test runtime\n> on windows is substantial and would shorten the hack->CI->fail->hack loop a\n> good bit shorter. And save money.\n\n\nOK, I will put it high on my list. I just did some checking and it seems \nto be feasible on Windows. 
StrawberryPerl at least has FFI::Platypus out \nof the box, so we would not need to turn any great handsprings to make \nprogress on this on a fairly wide variety of platforms.\n\nWhat seems a good place to start would be a simple \nPostgreSQL::Test::Session object that would allow us to get rid of a \nwhole heap of start/pump_until/kill cycles and deal with the backend in \na much more straightforward and comprehensible way, not to mention the \npossibility of removing lots of $node->{safe_}psql calls.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:06:13 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 12:20 PM Robert Haas <[email protected]> wrote:\n> I interpreted Jacob's original email as articulating a goal (\"For the\n> v18 cycle, I would like to try to get pytest [1] in as a supported\n> test driver, in addition to the current offerings\") rather than a\n> plan.\n\nThat's the first part of it.\n\n> There's no patch set yet and, as I understand it, no detailed\n> plan for a patch set: that email seemed to focus on the question of\n> desirability, rather than on outlining a plan of work, which I assume\n> is still to come.\n\nThere was a four-step plan sketch at the end of that email, titled \"A\nPlan\". That was not intended to be \"the final detailed plan\", because\nI was soliciting feedback on the exact pieces people wanted to try to\nimplement first, and I am not the God Emperor of Pytest. But it was\ndefinitely A Plan.\n\n> Some things I'd like to see when a patch set does\n> show up are:\n>\n> - good documentation for people who have no previous experience with\n> Python and/or pytest e.g. here's how to set up your environment on\n> Linux, Windows, macOS, *BSD so you can run the tests, here's how to\n> run the tests, here's how it's different from the Perl framework we\n> have now\n\n+1\n\n> - no external dependencies on PostgreSQL connectors. psql or libpq\n> foreign function interface. the latter would be a cool increment of\n> progress over the status quo.\n\nIf this is a -1 for psycopg, then I will cast my vote for ctypes/CFFI\nand against psql.\n\n> - at least as much in-tree support for writing tests as we have today\n> with PostgreSQL::Test::whatever, but not necessarily a 1:1 reinvention\n> of the stuff we have now, and documentation of those facilities that\n> is as good or, ideally, better than what we have today.\n\nI think this is way too much expectation for a v1 patch. If you were\ncommitting this by yourself, would you agree to develop the entirety\nof PostgreSQL::Test in a single commit, without the benefit of the\nbuildfarm checking you as you went, and other people trying to write\ntests with it?\n\n> - high overall code quality and level of maturity, not just something\n> someone threw together for parity with the Perl system.\n\n+1\n\n> - enough tests written for or converted to the new system to give\n> reviewers confidence that it's truly usable and fit for purpose.\n\nThis is that \"know everything up front\" tax that I think is not\nreasonable for a test framework. If the thing you're trying to avoid\nis the foot-in-the-door phenomenon, I would agree with you for a\nPostgres feature. 
But these are tests; we don't ship them, we have\ndifferent rules for backporting them, they are developed in a very\ndifferent way.\n\n> The important thing to me here (as it so often is) is to think like a\n> maintainer. Imagine that immediately after the patches for this\n> feature are committed, the developers who did the work all disappear\n> from the community and are never heard from again. How much pain does\n> that end us causing? The answer doesn't need to be zero; that is\n> unrealistic. But it also shouldn't be \"well, if that happens we're\n> going to have to rip the feature out\"\n\nCan you elaborate on why that's not an okay outcome?\n\n> or \"well, a bunch of committers\n> who didn't want to write tests in Python in the first place are now\n> going to have to do a lot of work in Python to stabilize the work\n> already committed.\"\n\nIs it that? If the problem is that, we should address that. Because if\nthat is truly the fear, I cannot assuage that fear without showing you\nsomething, and I cannot show you something you do not want to see, if\nyou don't want to write tests in Python in the first place.\n\n--Jacob\n\n\n", "msg_date": "Thu, 13 Jun 2024 13:06:44 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "\nOn 2024-06-12 We 18:34, Melanie Plageman wrote:\n> On Tue, Jun 11, 2024 at 8:05 AM Andrew Dunstan <[email protected]> wrote:\n>>\n>> On 2024-06-10 Mo 21:49, Andres Freund wrote:\n>>\n>> Hi,\n>>\n>> On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:\n>>\n>> I'm not sure what part of the testing infrastructure you think is\n>> unmaintained. For example, the last release of Test::Simple was all the way\n>> back on April 25.\n>>\n>> IPC::Run is quite buggy and basically just maintained by Noah these days.\n>>\n>>\n>> Yes, that's true. I think the biggest pain point is possibly the recovery tests.\n>>\n>> Some time ago I did some work on wrapping libpq using the perl FFI module. It worked pretty well, and would mean we could probably avoid many uses of IPC::Run, and would probably be substantially more efficient (no fork required). It wouldn't avoid all uses of IPC::Run, though.\n>>\n>> But my point was mainly that while a new framework might have value, I don't think we need to run out and immediately rewrite several hundred TAP tests. Let's pick the major pain points and address those.\n> FWIW, I felt a lot of pain trying to write recovery TAP tests with\n> IPC::Run's pumping functionality. It was especially painful (as\n> someone who knows even less Perl than the \"street fighting Perl\"\n> Thomas Munro has described having) before the additional test\n> infrastructure was added in BackgroudPsql.pm last year. As an example\n> of the \"worst case\", it took me two full work days to go from a repro\n> (with psql sessions on a primary and replica node) of the vacuum hang\n> issue being explored in [1] to a sort-of working TAP test which\n> demonstrated it - and that was with help from several other\n> committers. Granted, this is a complex case.\n\n\nThe pump stuff is probably the least comprehensible and most fragile \npart of the whole infrastructure. As I just mentioned to Andres, I'm \nhoping to make a substantial improvement in that area.\n\n\n>\n> A small part of the issue is that, as Tristan has said elsewhere,\n> there aren't good developer tool integrations that I know about for\n> Perl. 
I use neovim's LSP support for C and Python (in other projects),\n> and there is a whole ecosystem of tools I can use for both C and\n> Python. I know not everyone likes or needs these, but I find that they\n> help me write and debug code faster.\n\n\nYou might find this useful: \n<https://climatechangechat.com/setting_up_lsp_nvim-lspconfig_and_perl_in_neovim.html>\n\n(I don't use neovim - I'm an old emacs dinosaur.)\n\n\n>\n> I had offered to take a stab at writing some of the BackgroundPsql\n> test infrastructure in Python. I haven't started exploring that yet or\n> looking at what Jacob has done so far, but I am optimistic that this\n> is an area where it is worth seeing what is available to us outside of\n> IPC::Run.\n\n\nYeah, like I said, I'm working on reducing our reliance on especially \nthe more fragile parts of IPC::Run.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:19:21 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 4:06 PM Jacob Champion\n<[email protected]> wrote:\n> There was a four-step plan sketch at the end of that email, titled \"A\n> Plan\". That was not intended to be \"the final detailed plan\", because\n> I was soliciting feedback on the exact pieces people wanted to try to\n> implement first, and I am not the God Emperor of Pytest. But it was\n> definitely A Plan.\n\nWell, OK, now I feel a bit dumb. I guess I missed that or forgot about it.\n\n> > - at least as much in-tree support for writing tests as we have today\n> > with PostgreSQL::Test::whatever, but not necessarily a 1:1 reinvention\n> > of the stuff we have now, and documentation of those facilities that\n> > is as good or, ideally, better than what we have today.\n>\n> I think this is way too much expectation for a v1 patch. If you were\n> committing this by yourself, would you agree to develop the entirety\n> of PostgreSQL::Test in a single commit, without the benefit of the\n> buildfarm checking you as you went, and other people trying to write\n> tests with it?\n\nEh... I'm confused. PostgreSQL::Test::Cluster is more than half of the\ncode in that directory, and without it you wouldn't be able to write\nmost of the TAP tests that we have today. You would really want to\ncall this project done without having an equivalent?\n\n> > The important thing to me here (as it so often is) is to think like a\n> > maintainer. Imagine that immediately after the patches for this\n> > feature are committed, the developers who did the work all disappear\n> > from the community and are never heard from again. How much pain does\n> > that end us causing? The answer doesn't need to be zero; that is\n> > unrealistic. But it also shouldn't be \"well, if that happens we're\n> > going to have to rip the feature out\"\n>\n> Can you elaborate on why that's not an okay outcome?\n\nWell, you just argued that it should be an okay outcome, and I do sort\nof see your point, but I refer you to my earlier reply about the\ndifficulty of getting anything reverted in the culture as it stands.\n\n> > or \"well, a bunch of committers\n> > who didn't want to write tests in Python in the first place are now\n> > going to have to do a lot of work in Python to stabilize the work\n> > already committed.\"\n>\n> Is it that? If the problem is that, we should address that. 
Because if\n> that is truly the fear, I cannot assuage that fear without showing you\n> something, and I cannot show you something you do not want to see, if\n> you don't want to write tests in Python in the first place.\n\nI have zero desire to write tests in Python. If I could convince\neveryone here to spend their time and energy improving the stuff we\nhave in Perl instead of introducing a whole new test framework, I\nwould 100% do that. But I'm pretty sure that I can't, and I think the\nproject needs to pick from among realistic options rather than\ntheoretical ones. Said differently, it's not all about me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:27:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On 2024-06-12 We 11:50, Andres Freund wrote:\n> Hi,\n>\n> On 2024-06-11 07:28:23 -0700, Jacob Champion wrote:\n>> On Mon, Jun 10, 2024 at 1:04 PM Andres Freund<[email protected]> wrote:\n>>> Just for context for the rest the email: I think we desperately need to move\n>>> off perl for tests. The infrastructure around our testing is basically\n>>> unmaintained and just about nobody that started doing dev stuff in the last 10\n>>> years learned perl.\n>> Okay. Personally, I'm going to try to stay out of discussions around\n>> subtracting Perl and focus on adding Python, for a bunch of different\n>> reasons:\n> I think I might have formulated my paragraph above badly - I didn't mean that\n> we should move away from perl tests tomorrow,\n\n\nOK, glad we're on the same page there. Let's move on.\n\n\n> but that we need a path forward\n> that allows folks to write tests without perl.\n\n\nOK, although to be honest I'm more interested in fixing some of the \nthings that have made testing with perl a pain, especially the IPC::Run \npump stuff.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:32:53 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On 2024-06-13 Th 15:16, Greg Sabino Mullane wrote:\n> I'm very proficient with Perl.\n>\n>\n\nYes you are, and just as well!\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:41:54 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "\nOn 2024-06-12 We 18:43, Jelte Fennema-Nio wrote:\n>\n> I agree it's not a technical issue. It is a people issue. There are\n> very few people skilled in Perl active in the community. And most of\n> those are very senior hackers that have much more important things to\n> do that make our Perl testing framework significantly better. And the\n> less senior people that might see improving tooling as a way to get\n> help out in the community, are try to stay away from Perl with a 10\n> foot pole. So the result is, nothing gets improved. Especially since\n> very few people outside our community improve this tooling either.\n>\n\n\nFTR, I have put a lot of effort into maintaining and improving the \ninfrastructure over the years. And I don't think there is anything much \nmore important. So I'm going to put more effort in. And I'm not alone. \nAndres, Alvaro, Noah and Thomas are some of those who have spent a lot \nof effort on extending and improving our testing.\n\nPeople tend to get a bit hung up about languages. I lost count of the \nvarious languages I had learned when it got somewhere north of 30.\n\nStill, I understand that perl has a few oddities that make people \nscratch their heads (as do most languages). It's probably losing market \nshare, along with some of the other things we rely on. Not sure that \nalone is a reason to move away from it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:54:56 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On Thu, Jun 13, 2024 at 4:54 PM Andrew Dunstan <[email protected]> wrote:\n> FTR, I have put a lot of effort into maintaining and improving the\n> infrastructure over the years. And I don't think there is anything much\n> more important. So I'm going to put more effort in. And I'm not alone.\n> Andres, Alvaro, Noah and Thomas are some of those who have spent a lot\n> of effort on extending and improving our testing.\n\nI appreciate the work you've done, and the work others have done, and\nI'm sorry if my comments about the state of the project's\ninfrastructure came across as a personal attack.
Some of what is wrong\nhere is completely outside of our control e.g. Perl is less popular.\nAnd even there, some people have done heroic work, like Noah stepping\nup to help maintain IPC::Run. And even with the stuff that is under\nour control, it's not that I don't like what you're doing. It's rather\nthat I think we need more people doing it. For example, the fact that\nnobody's helping Thomas maintain this cfbot that we all have come to\nrely on, or helping him get that integrated into\ncommitfest.postgresql.org, is a problem. You're not on the hook to do\nthat, nor is anyone else. Likewise, the PostgreSQL::Test::whatever\nmodules are mostly evolving when it's absolutely necessary to get some\nother patch committed, rather than anyone looking to improve them very\nmuch for their own sake. Maybe part of the problem, as Greg said, is\nthat we don't do a good enough job advertising what the problems are\nor how people can help, but whatever the cause, it's not a very\nenjoyable experience, at least for me.\n\nBut again, I don't blame you for any of that. You're clearly a big\npart of why it's going as well as it is!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 17:23:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "\nOn 2024-06-13 Th 17:23, Robert Haas wrote:\n> On Thu, Jun 13, 2024 at 4:54 PM Andrew Dunstan <[email protected]> wrote:\n>> FTR, I have put a lot of effort into maintaining and improving the\n>> infrastructure over the years. And I don't think there is anything much\n>> more important. So I'm going to put more effort in. And I'm not alone.\n>> Andres, Alvaro, Noah and Thomas are some of those who have spent a lot\n>> of effort on extending and improving our testing.\n> I appreciate the work you've done, and the work others have done, and\n> I'm sorry if my comments about the state of the project's\n> infrastructure came across as a personal attack. Some of what is wrong\n> here is completely outside of our control e.g. Perl is less popular.\n> And even there, some people have done heroic work, like Noah stepping\n> up to help maintain IPC::Run. And even with the stuff that is under\n> our control, it's not that I don't like what you're doing. It's rather\n> that I think we need more people doing it. For example, the fact that\n> nobody's helping Thomas maintain this cfbot that we all have come to\n> rely on, or helping him get that integrated into\n> commitfest.postgresql.org, is a problem. You're not on the hook to do\n> that, nor is anyone else. Likewise, the PostgreSQL::Test::whatever\n> modules are mostly evolving when it's absolutely necessary to get some\n> other patch committed, rather than anyone looking to improve them very\n> much for their own sake. Maybe part of the problem, as Greg said, is\n> that we don't do a good enough job advertising what the problems are\n> or how people can help, but whatever the cause, it's not a very\n> enjoyable experience, at least for me.\n>\n> But again, I don't blame you for any of that. You're clearly a big\n> part of why it's going as well as it is!\n>\n\nThank you, I'm not offended by anything you or anyone else has said. 
\nClearly there are areas we can improve, and we need to be somewhat more \nproactive about it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 17:35:52 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, 13 Jun 2024 at 23:35, Andrew Dunstan <[email protected]> wrote:\n> Clearly there are areas we can improve, and we need to be somewhat more\n> proactive about it.\n\nTo follow that great suggestion, I updated meson wiki[1] after I\nrealized some of the major gripes I had with the Perl tap test output\nwere not actually caused by Perl but by meson:\n\nThe main changes I made was using \"-q --print-errorlogs\" instead \"-v\",\nto reduce the enormous clutter in the output of the commands in the\nwiki to something much more reasonable.\n\nAs well as adding examples on how to run specific tests\n\n[1]: https://wiki.postgresql.org/wiki/Meson#Test_related_commands\n\n\n", "msg_date": "Fri, 14 Jun 2024 00:03:12 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 1:04 PM Robert Haas <[email protected]> wrote:\n> One caveat here,\n> perhaps, is that the focus of the work you've done up until now and\n> the things that I and other community members may want as a condition\n> of merging stuff may be somewhat distinct. You will have naturally\n> been focused on your goals rather than other people's goals, or so I\n> assume.\n\nRight. That's a risk I knew I was taking when I wrote it, so it's not\ngoing to offend me when I need to rewrite things.\n\n> I would be a bit wary of splitting it up over\n> too many different threads. It may well make sense to split it up, but\n> it will probably be easier to review if the core work to enable this\n> is one patch set on one thread where someone can read just that one\n> thread and understand the situation, rather than many threads where\n> you have to read them all.\n\nI'll try to avoid too many threads. But right now there is indeed just\none thread (OAUTHBEARER) and it's way too much:\n\n- the introduction of pytest\n- a Construct-based manipulation of the wire protocol, including\nWireshark-style network traces on failure\n- pytest fixtures which spin up libpq and the server in isolation from\neach other, relying on the Construct implementation to complete the\nseam\n- OAuth, which was one of the motivating use cases (but not the only\none) for all of the previous items\n\nI really don't want to derail this thread with those. There are other\npeople here with their own hopes and dreams (see: unconference notes),\nand I want to give them a platform too.\n\n> > That doesn't mean I want to\n> > do this without a plan; it just means that the plan can involve saying\n> > \"this is not working and we can undo it\" which makes the uncertainty\n> > easier to take.\n>\n> As a community, we're really bad at this. 
[...]\n\nI will carry the response to this to the next email.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Thu, 13 Jun 2024 17:02:11 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 1:27 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Jun 13, 2024 at 4:06 PM Jacob Champion\n> <[email protected]> wrote:\n> > There was a four-step plan sketch at the end of that email, titled \"A\n> > Plan\". That was not intended to be \"the final detailed plan\", because\n> > I was soliciting feedback on the exact pieces people wanted to try to\n> > implement first, and I am not the God Emperor of Pytest. But it was\n> > definitely A Plan.\n>\n> Well, OK, now I feel a bit dumb. I guess I missed that or forgot about it.\n\nNo worries. It's a really long thread. :D\n\nBut also: do you have opinions on what to fill in as steps 2\n(something we have no ability to test today) and 3 (something we do\ntest today, but hate)?\n\nMy vote for step 2 is \"client and server separation\", perhaps by\ntesting libpq fallback against a server that claims support for\ndifferent build-time options. I don't want to have a vote in step 3,\nbecause part of that step is proving that this framework can provide\nvalue for a part of the project I don't really know much about.\n\n> > I think this is way too much expectation for a v1 patch. If you were\n> > committing this by yourself, would you agree to develop the entirety\n> > of PostgreSQL::Test in a single commit, without the benefit of the\n> > buildfarm checking you as you went, and other people trying to write\n> > tests with it?\n>\n> Eh... I'm confused. PostgreSQL::Test::Cluster is more than half of the\n> code in that directory, and without it you wouldn't be able to write\n> most of the TAP tests that we have today.\n\nWell, in my defense, you said \"PostgreSQL::Test::whatever\", which I\nassumed meant all of it, including Kerberos.pm and SSL::Server and\nAdjustUpgrade and... That seemed like way too much to me (and still\ndoes!), but if that's not what you were arguing then never mind.\n\nYes, Cluster.pm seems like a pretty natural thing to ask for. I\nimagine it's one of the first things we're going to need. And yet...\n\n> You would really want to\n> call this project done without having an equivalent?\n\n...I have this really weird sneaking suspicion that, if a replacement\nof the end-to-end Perl acceptance tests can be made an explicit\nanti-goal in the short term, we might not necessarily need an\n\"equivalent\" for v1. I realize that seems bizarre, because of course\nwe need a way to start the server if we want to test the server. But\nfrankly, starting a server is Pretty Easy (tm), and Cluster.pm has to\ndo a lot more than that because IMO it's designed for a variety of\nacceptance-oriented tasks. 3000+ lines!\n\nIf there's widespread interest (as opposed to being just my own\npersonal fever dream) in testing Postgres components as individual\npieces rather than setting up the world, then I wonder if the\nfunctionality from Cluster.pm couldn't be pared down a lot. Maybe you\ndon't need a centralized ->psql() or a ->command_ok() helper, because\nyou're usually not trying to test psql and other utilities during your\nserver-only tests.\n\nMaybe you can just stand up a standby without a primary and drive it\nvia mock replication. 
Do you need quite as many \"poll and wait for\nsome asynchronous result\" type things when you're not waiting for a\nresult to cascade through a multinode system? Does something like (for\nexample) ->pg_recvlogical_upto() really have to be implemented in our\n\"core\" fixtures or can it be done more easily by whoever needs that in\nthe future? Maybe You Ain't Gonna Need It.\n\nIf (he said, atop his naive idealistic soapbox) we can find a way to\nput off writing utilities until we write the tests that need them,\nwithout procrastinating, and without putting all of the negative\nexternalities of that approach on the committers with low-quality\ncopy-paste proliferation, and I'd like a pony while I'm at it, then I\nthink the result might end up being pretty coherent and maintainable.\nThen not having \"at least as much in-tree support for writing tests as\nwe have today\" for the very first commit would be a feature and not a\nbug.\n\nNow, maybe if the collective ability to do that existed, we would have\ndone it already with Perl, but I do actually wonder whether that's\ntrue or not.\n\nOr, maybe, the very first suggestion for Step 3 will be something that\nneeds absolutely everything in Cluster.pm. So be it; I can live\nwithout a pony.\n\n> You would really want to\n> call this project done without having an equivalent?\n\n(A cop-out but not-really-cop-out alternative answer to this question\nis that this project is not going to be \"done\" any more than Postgres\nwill ever be \"done\", and that's part of what I'm arguing should be\nconsidered natural and okay. I understand that it is easier for me to\ntake that stance when I am not on the hook for maintaining it, so I\ndon't expect us to necessarily see eye-to-eye on it.)\n\n> > Can you elaborate on why that's not an okay outcome?\n>\n> Well, you just argued that it should be an okay outcome, and I do sort\n> of see your point, but I refer you to my earlier reply about the\n> difficulty of getting anything reverted in the culture as it stands.\n\nEarlier reply was:\n\n> As a community, we're really bad at this. Once something gets\n> committed, getting a consensus to revert it is really hard, especially\n> if a major release has happened meanwhile, but most of the time even\n> if it hasn't. It might be a little easier in this case, since after\n> all it's not a directly user-visible feature. But historically what\n> happens if somebody says \"hey, there are six unfixed problems with\n> this feature!\" is that everybody says \"well, you're free to fix the\n> problems if you want, but you're not allowed to revert the feature.\"\n> And that is *exactly* how we end up with stuff like the current TAP\n> test framework: ripping that out would mean removing all the TAP tests\n> that depend on it, and that wouldn't have achieved consensus two\n> months after the feature went in, let alone today.\n\nWell... I don't know how to fix that. Here's a draft proposal after a\nfew minutes of thought, which may need to be discarded after a few\nmore minutes of thought.\n\nIf there's agreement that New Tests -- not necessarily written in\nPython, but I selfishly hope they are -- exist on a probationary\nstatus, then maybe part of that is going to have to be an agreement:\nNew features have to be able to have some minimum maintainability\nlevel *on the basis of the Perl tests only*, while the probationary\nperiod is in effect. 
It can't be the equivalent maintainability level,\nbecause that's either proof that the New Tests are giving us nothing,\nor proof that everyone is being forced to implement the exact same\ntests in both Perl and New Test. Neither is good.\n\nSince we're currently focused on end-to-end acceptance with Perl, that\nis probably a lower bar than what we'd maybe prefer, but I think that\nis the bar we have right now. It also exists as a forcing function to\nmake sure that the additional tests are adding value over what we get\nwith the Perl, which may paradoxically increase the chances of New\nTest success. (I can't tell if this is magical thinking or not.)\n\nSo if a committer doesn't want responsibility for the feature if the\nnew tests were deleted, they don't commit. Maybe that's unrealistic\nand too painful. It does increase the review requirements of\ncommitters quite a bit. It might disqualify my OAuth work (which is\nmaybe evidence in its favor?). Maybe it increases the foot-in-the-door\neffect too much. Maybe there would have to be some trust-building\nwhere right now there is not? Not sure.\n\n> Now, it has been suggested to me by at least one other person involved\n> with the project that we need to be more open to the kind of thing\n> that you propose here: add experimental things and take them out if it\n> doesn't work out. I can definitely understand that this might be a\n> culturally better approach than what we currently do. So maybe that's\n> the way forward, but it is hard (at least for me) to get past the fear\n> of being the one left holding the bag, and I suspect that other\n> committers have similar fears. What exactly we should do about that,\n> I'm not sure.\n\nYeah.\n\n> I have zero desire to write tests in Python. If I could convince\n> everyone here to spend their time and energy improving the stuff we\n> have in Perl instead of introducing a whole new test framework, I\n> would 100% do that. But I'm pretty sure that I can't, and I think the\n> project needs to pick from among realistic options rather than\n> theoretical ones. Said differently, it's not all about me.\n\nThen, for what it's worth: I really do want to make sure that your\nlife, and the life of all the other committers, does not get\nsignificantly harder if this goes in. I don't think it will, but if\nI'm wrong, I want it to come back out, and then we can regroup or\npivot entirely and move forward together.\n\n--Jacob\n\n\n", "msg_date": "Thu, 13 Jun 2024 17:12:15 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Thu, Jun 13, 2024 at 8:12 PM Jacob Champion\n<[email protected]> wrote:\n> But also: do you have opinions on what to fill in as steps 2\n> (something we have no ability to test today) and 3 (something we do\n> test today, but hate)?\n>\n> My vote for step 2 is \"client and server separation\", perhaps by\n> testing libpq fallback against a server that claims support for\n> different build-time options. I don't want to have a vote in step 3,\n> because part of that step is proving that this framework can provide\n> value for a part of the project I don't really know much about.\n\nI mean, both Perl and Python are Turing-complete. Anything you can do\nin one, you can do in the other, especially when you consider that\nwe're not likely to accept too many dependencies on external Perl or\nPython modules. 
That's why I see this as nothing more or less than an\nexercise in letting people use the programming language they prefer.\nWe've talked about a libpq FFI interface, but it hasn't been done; now\nwe're talking about maybe having a Python one. Perhaps we'll end up\nwith both. Then you can imagine porting tests from one language to the\nother and the only difference is whether you'd rather have line noise\nbefore all of your variable names or semantically significant\nwhitespace.\n\nI just don't believe in the idea that we're going to write one\ncategory of tests in one language and another category in another\nlanguage. As soon as we open the door to Python tests, people are\ngoing to start writing the TAP tests that they would have written in\nPerl in Python instead. And if the test utilities that we have for\nPerl are not there for Python, then they'll either open code things\nfor which they would have used a module, or they'll write a\nstripped-down version of the module that will then get built-up patch\nby patch until, 50 or 100 or 200 hours of committer-review later, it\nresembles the existing Perl module. And, if the committer pushes back\nand says, hey, why not write this test in Perl, which already has all\nof this test infrastructure in place, then the submitter will\nwander off muttering about how PostgreSQL committers are hidebound\nbackward individuals who just try to ruin everybody's day. So as I see\nit, the only reasonable plan here if we want to introduce testing in\nPython (or C#, or Ruby, or Go, or JavaScript, or Lua, or LOLCODE) is\nto try to achieve a reasonable degree of parity between that language\nand Perl. Because then we can at least review the new infrastructure\nall at once, instead of incrementally spread across many patches\nwritten, reviewed, and committed by many different people.\n\nNow, I completely understand if you're not excited about getting\nsucked down that rabbit-hole, and maybe some other committer is going\nto see this differently than I do, and that's fine. But my view is\nthat if you're not interested in doing all the work to let people do\nmore or less the kind of stuff that they currently do in Perl in\nPython instead, then your alternative is to take the tests that you\nwant to add and rewrite them in Perl. And I am fairly certain that if\nyou choose that option, it will save me, and a bunch of other\ncommitters, a whole lot of work, at least in the short term. If we add\nsupport for Python, we are going to end up having to do a lot of\nthings twice for the next let's say ten to twenty years until somebody\nrewrites the remaining Perl tests in Python or whatever language is\nhot and cool by then. My opinion is that we need to be open to\nenduring that pain because we can't indefinitely hold our breath and\ninsist on using obsolete tools for everything, but that doesn't mean\nthat I don't think it will be painful.\n\nConsider the meson build system project. To get that committed, Andres\nhad to make it do pretty much everything MSVC could do and mostly\neverything that configure could do, and the places where he didn't\nmake it do everything configure could do remain sore spots that I, at\nleast, am not happy about. And in that case, he also managed to get\nMSVC removed entirely, so that we do not have a larger number of build\nsystems in total than we had before.
Furthermore, the amount of code\nin buildsystem files (makefiles, meson.build) in a typical patch\nneeding review is usually none or very little, whereas the amount of\ntest code in a patch is sometimes quite large. I've come around to the\nbelief that the meson work was worthwhile -- running tests is so much\nfaster and nicer now -- but it was a ton of work to get done and\nimpacted everyone in the development community, and I think the blast\nradius for this change is likely to be larger for the reasons\nsuggested earlier in this paragraph.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 Jun 2024 08:10:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On 2024-06-14 Fr 08:10, Robert Haas wrote:\n> We've talked about a libpq FFI interface, but it hasn't been done;\n\n\nHold my beer :-)\n\n\nI just posted a POC for that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 Jun 2024 11:15:04 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
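(Andrew's POC wraps libpq with Perl's FFI::Platypus. Purely to illustrate the parallel route Jacob mentioned upthread -- ctypes/CFFI, with no psycopg dependency and no psql subprocess -- a rough Python sketch could look like the following. The connection string, library lookup and error handling are placeholders, not taken from any posted patch.)

    # libpq_ctypes_sketch.py -- illustrative only: drive libpq directly via ctypes.
    import ctypes
    import ctypes.util

    libpq = ctypes.CDLL(ctypes.util.find_library("pq"))

    # Declare the few entry points used here; pointer-returning functions must
    # not default to C int, or the pointers get truncated on 64-bit platforms.
    libpq.PQconnectdb.restype = ctypes.c_void_p
    libpq.PQconnectdb.argtypes = [ctypes.c_char_p]
    libpq.PQstatus.argtypes = [ctypes.c_void_p]
    libpq.PQerrorMessage.restype = ctypes.c_char_p
    libpq.PQerrorMessage.argtypes = [ctypes.c_void_p]
    libpq.PQexec.restype = ctypes.c_void_p
    libpq.PQexec.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
    libpq.PQresultStatus.argtypes = [ctypes.c_void_p]
    libpq.PQgetvalue.restype = ctypes.c_char_p
    libpq.PQgetvalue.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_int]
    libpq.PQclear.argtypes = [ctypes.c_void_p]
    libpq.PQfinish.argtypes = [ctypes.c_void_p]

    CONNECTION_OK = 0      # ConnStatusType
    PGRES_TUPLES_OK = 2    # ExecStatusType

    conn = libpq.PQconnectdb(b"dbname=postgres")  # placeholder conninfo
    if libpq.PQstatus(conn) != CONNECTION_OK:
        raise RuntimeError(libpq.PQerrorMessage(conn).decode())

    res = libpq.PQexec(conn, b"SELECT version()")
    if libpq.PQresultStatus(res) != PGRES_TUPLES_OK:
        raise RuntimeError(libpq.PQerrorMessage(conn).decode())
    print(libpq.PQgetvalue(res, 0, 0).decode())

    libpq.PQclear(res)
    libpq.PQfinish(conn)

No fork and no external connector package are needed; the same approach would work from inside a pytest fixture.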
{ "msg_contents": "Robert Haas <[email protected]> writes:\n> I mean, both Perl and Python are Turing-complete. Anything you can do\n> in one, you can do in the other, especially when you consider that\n> we're not likely to accept too many dependencies on external Perl or\n> Python modules. That's why I see this as nothing more or less than an\n> exercise in letting people use the programming language they prefer.\n\nI think that's an oversimplified analysis. Sure, the languages are\nboth Turing-complete, but for our purposes here they are both simply\nglue languages around some set of testing facilities. Some of those\nfacilities will be provided by the base languages (plus whatever\nextension modules we choose to require) and some by code we write.\nThe overall experience of writing tests will be determined by the\ntesting facilities far more than by the language used for glue.\n\nThat being the case, I do agree with your point that Python\nequivalents to most of PostgreSQL::Test will need to be built up PDQ.\nMaybe they can be better than the originals, in features or ease of\nuse, but \"not there at all\" is not better.\n\nBut what I'd really like to see is some comparison of the\nlanguage-provided testing facilities that we're proposing\nto depend on. Why is pytest (or whatever) better than Test::More?\n\nI also wonder about integration of python-based testing with what\nwe already have. A significant part of what you called the meson\nwork had to do with persuading pg_regress, isolationtester, etc\nto output test results in the common format established by TAP.\nAm I right in guessing that pytest will have nothing to do with that?\nCan we even manage to dump perl and python test scripts into the same\nsubdirectory and sort things out automatically? I'm definitely going\nto be -1 for a python testing feature that cannot integrate with what\nwe have because it demands its own control and result-collection\ninfrastructure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Jun 2024 11:49:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "Hi,\n\nOn 2024-06-14 11:49:29 -0400, Tom Lane wrote:\n> I also wonder about integration of python-based testing with what\n> we already have. A significant part of what you called the meson\n> work had to do with persuading pg_regress, isolationtester, etc\n> to output test results in the common format established by TAP.\n\nFWIW, meson's testrunner doesn't require TAP, the lowest common denominator is\njust an exit code. However, for things that run many \"sub\" tests, it's a lot\nnicer if the failures can be reported more granularly than just \"the entire\ntestsuite failed\".\n\nMeson currently supports:\n\n exitcode: the executable's exit code is used by the test harness to record the outcome of the test.\n\n tap: Test Anything Protocol.\n\n gtest (since 0.55.0): for Google Tests.\n\n rust (since 0.56.0): for native rust tests\n\n\n> Am I right in guessing that pytest will have nothing to do with that?\n\nLooks like there's a plugin for pytest to support tap as output:\nhttps://pypi.org/project/pytest-tap/\n\nHowever, it's not available as a Debian package. I know that some folks just\nadvocate installing dependencies via venv, but I personally don't think\nthat'll fly. For one, it'll basically prevent tests being run by packagers.\n\n\n> Can we even manage to dump perl and python test scripts into the same\n> subdirectory and sort things out automatically?\n\nThat shouldn't be much of a problem.\n\n\n> I'm definitely going to be -1 for a python testing feature that cannot\n> integrate with what we have because it demands its own control and\n> result-collection infrastructure.\n\nDitto.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 14 Jun 2024 09:11:11 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On 2024-06-14 09:11:11 -0700, Andres Freund wrote:\n> On 2024-06-14 11:49:29 -0400, Tom Lane wrote:\n> > Am I right in guessing that pytest will have nothing to do with that?\n> \n> Looks like there's a plugin for pytest to support tap as output:\n> https://pypi.org/project/pytest-tap/\n> \n> However, it's not available as a Debian package. I know that some folks just\n> advocate installing dependencies via venv, but I personally don't think\n> that'll fly. For one, it'll basically prevent tests being run by packagers.\n\nIf this were the blocker, I think we could just ship an output adapter\nourselves. pytest-tap is not a lot of code:\nhttps://github.com/python-tap/pytest-tap/blob/main/src/pytest_tap/plugin.py\n\nSo either vendoring it or just writing an even simpler version ourselves seems\nentirely feasible.\n\n\n", "msg_date": "Fri, 14 Jun 2024 09:17:25 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
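(To give a sense of how small such a home-grown adapter could be, here is a rough, untested sketch of a conftest.py that prints TAP-style result lines from pytest's standard hooks. The hook names are real pytest hooks; the output handling is deliberately simplified -- marker-based skips, setup/teardown failures and description escaping would all need real treatment before meson's 'tap' protocol could rely on it.)

    # conftest.py -- minimal illustrative TAP emitter for pytest; not a
    # drop-in replacement for pytest-tap.

    test_counter = 0

    def pytest_runtest_logreport(report):
        # Print one TAP result line for each finished test body. 'call' is the
        # phase that runs the test function itself; failures in setup/teardown
        # and marker-based skips would need extra handling here.
        global test_counter
        if report.when != "call":
            return
        test_counter += 1
        if report.passed:
            print(f"ok {test_counter} - {report.nodeid}")
        elif report.skipped:
            print(f"ok {test_counter} - {report.nodeid} # SKIP")
        else:
            print(f"not ok {test_counter} - {report.nodeid}")

    def pytest_sessionfinish(session, exitstatus):
        # Emit the plan once the run is over; a trailing plan is valid TAP.
        print(f"1..{test_counter}")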
{ "msg_contents": "On Thu, Jun 13, 2024 at 1:08 PM Jelte Fennema-Nio <[email protected]>\nwrote:\n\n> But Perl is at the next level of unmaintained infrastructure. It is\n> actually clear how you can contribute to it, but still no new\n> community members actually want to contribute to it. Also, it's not\n> only unmaintained by us but it's also pretty much unmaintained by the\n> upstream community.\n\n\nI am not happy with the state of Perl, as it has made some MAJOR missteps\nalong the way, particularly in the last 5 years. But can we dispel this\nstrawman? There is a difference between \"unpopular\" and \"unmaintained\". The\nlatest version of Perl was released May 20, 2024. The latest release of\nTest::More was April 25, 2024. Both are heavily used. Just not as heavily\nas they used to be. :)\n\nCheers,\nGreg\n\n", "msg_date": "Fri, 14 Jun 2024 16:33:00 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On Fri, 14 Jun 2024 at 22:33, Greg Sabino Mullane <[email protected]> wrote:\n> I am not happy with the state of Perl, as it has made some MAJOR missteps along the way, particularly in the last 5 years. But can we dispel this strawman? There is a difference between \"unpopular\" and \"unmaintained\". The latest version of Perl was released May 20, 2024. The latest release of Test::More was April 25, 2024. Both are heavily used. Just not as heavily as they used to be. :)\n\nSorry, yes I exaggerated here. Looking at the last Perl changelog[1]\nit's definitely getting more new features and improvements than I had\nthought.\n\nTest::More on the other hand, while indeed still maintained, it's\ndefinitely not getting significant new feature development or\nimprovements[2].
Especially when comparing it to pytest[3].\n\n[1]: https://perldoc.perl.org/perldelta\n[2]: https://github.com/Test-More/test-more/blob/master/Changes\n[3]: https://docs.pytest.org/en/stable/changelog.html\n\n\n", "msg_date": "Fri, 14 Jun 2024 23:08:57 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On Fri, Jun 14, 2024 at 9:24 AM Robert Haas <[email protected]> wrote:\n> For example, the fact that\n> nobody's helping Thomas maintain this cfbot that we all have come to\n> rely on, or helping him get that integrated into\n> commitfest.postgresql.org, is a problem.\n\nI've been talking to Magnus and Jelte about cfbot and we're hoping to\nhave some good news soon...\n\n\n", "msg_date": "Sat, 15 Jun 2024 09:24:00 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On Fri, 14 Jun 2024 at 17:49, Tom Lane <[email protected]> wrote:\n> But what I'd really like to see is some comparison of the\n> language-provided testing facilities that we're proposing\n> to depend on. Why is pytest (or whatever) better than Test::More?\n\nSome advantages of pytest over Test::More:\n\n1. It's much easier to debug failing tests using the output that\npytest gives. A good example of this is on pytest's homepage[1]\n(i.e. it shows the value given to the call to inc in the error)\n2. No need to remember a specific comparison DSL\n(is/isnt/is_deeply/like/ok/cmp_ok/isa_ok), just put assert in front of\na boolean expression and your output is great (if you want to add a\nmessage too for clarity you can use: assert a == b, \"the world is\nending\")\n3. Very easy to print postgres log files on stderr/stdout when a test fails.\nThis might be possible/easy with Perl too, but we currently don't do\nthat. So right now for many failures you're forced to traverse the\nbuild/testrun/... directory tree to find the logs.\n4. Pytest has autodiscovery of test files and functions, so we\nprobably wouldn't have to specify all of the exact test files anymore\nin the meson.build files.\n\nRegarding 2, there are ~150 checks that are using a suboptimal way of\ntesting for a comparison. Mostly a lot that could use \"like(..., ...)\"\ninstead of \"ok(... ~= ...)\"\n❯ grep '\\bok(.*=' **.pl | wc -l\n151\n\n[1]: https://docs.pytest.org/en/8.2.x/#a-quick-example\n\n\n", "msg_date": "Sat, 15 Jun 2024 00:11:04 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
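(To make points 1 and 2 above concrete, a self-contained sketch of what such a test file looks like -- the file and helper names are invented for illustration and are not taken from any proposed patch set.)

    # test_sketch.py -- standalone illustration; run with: pytest test_sketch.py

    def parse_major_version(version_string):
        # Toy helper: pull the major version out of a server_version string.
        return int(version_string.split(".")[0])

    def test_major_version():
        # The only comparison primitive is a plain assert; pytest rewrites it
        # so that a failure prints the evaluated operands, roughly:
        #     assert 16 == 17
        #      +  where 16 = parse_major_version('16.3')
        assert parse_major_version("16.3") == 16

    def test_version_list():
        # Structured comparisons get a readable diff too (no is_deeply needed),
        # and an optional message works the same way:
        #     assert a == b, "the world is ending"
        assert [parse_major_version(v) for v in ("16.3", "15.7")] == [16, 15]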
{ "msg_contents": "\nOn 2024-06-14 Fr 18:11, Jelte Fennema-Nio wrote:\n> On Fri, 14 Jun 2024 at 17:49, Tom Lane<[email protected]> wrote:\n>> But what I'd really like to see is some comparison of the\n>> language-provided testing facilities that we're proposing\n>> to depend on. Why is pytest (or whatever) better than Test::More?\n> Some advantages of pytest over Test::More:\n>\n> 1. It's much easier to debug failing tests using the output that\n> pytest gives. A good example of this is on pytest's homepage[1]\n> (i.e. it shows the value given to the call to inc in the error)\n> 2. No need to remember a specific comparison DSL\n> (is/isnt/is_deeply/like/ok/cmp_ok/isa_ok), just put assert in front of\n> a boolean expression and your output is great (if you want to add a\n> message too for clarity you can use: assert a == b, \"the world is\n> ending\")\n> 3. Very easy to print postgres log files on stderr/stdout when a test fails.\n> This might be possible/easy with Perl too, but we currently don't do\n> that. So right now for many failures you're forced to traverse the\n> build/testrun/... directory tree to find the logs.\n\n\nI see the fact that we stash the output in a file as a feature. Without \nit, capturing failure information in the buildfarm client would be more \ndifficult, especially if there are multiple failures. So this is \nactually something I think we would need for any alternative framework.\n\nMaybe we need an environment setting that would output the \nregress_log_00whatever file to stderr on failure. That should be pretty \neasy to arrange in the END handler for PostgreSQL::Test::Utils.\n\n\n> 4. Pytest has autodiscovery of test files and functions, so we\n> probably wouldn't have to specify all of the exact test files anymore\n> in the meson.build files.\n\n\nI find this comment a bit ironic.
We don't need to do that with the\n> Makefiles, and the requirement to do so was promoted as a meson feature\n> rather than a limitation, ISTR.\n\nNow, I'm very curious why that would be considered a feature. I\ncertainly have had many cases where I forgot to add the test file to\nthe meson.build file.\n\n> > Regarding 2, there are ~150 checks that are using a suboptimal way of\n> > testing for a comparison. Mostly a lot that could use \"like(..., ...)\"\n> > instead of \"ok(... ~= ...)\"\n> > ❯ grep '\\bok(.*=' **.pl | wc -l\n> > 151\n>\n>\n> Well, let's fix those. I would be tempted to use cmp_ok() for just about\n> all of them.\n\nSounds great to me.\n\n> But the fact that Test::More has a handful of test primitives rather\n> than just one strikes me as a relatively minor complaint.\n\nIt is indeed a minor paper cut, but paper-cuts add up.\n\nHonestly, my primary *objective* complaint about our current test\nsuite, is that when a test fails, it's very often impossible for me to\nunderstand why the test failed, by only looking at the output of\n\"meson test\". I think logging the postgres log to stderr for Perl, as\nyou proposed, would significantly improve that situation. I think the\nonly thing that we cannot get from Perl Test::More that we can from\npytest, is the fancy recursive introspection of the expression that\npytest shows on error.\n\n\nApart from that my major *subjective* complaint is that I very much\ndislike writing Perl code. I'm slow at writing it and I don't (want\nto) improve at it because I don't have reasons to use it except for\nPostgres tests. So currently I'm not really incentivised to write more\ntests than the bare minimum, help improve the current test tooling, or\nadd new testing frameworks for things we currently cannot test.\nAfaict, there's a significant part of our current community who feel\nthe same way (and I'm pretty sure every sub-30 year old person who\nnewly joins the community would feel the exact same way too).\n\nAs a project I think we would like to have more tests, and to have\nmore custom tooling to test things that we currently cannot (e.g.\noauth or manually messing with the wire-protocol). I think the only\nway to achieve that is by encouraging more people to work on these\nthings. I very much appreciate that you and others are improving our\nPerl tooling, because that makes our current tests easier to work\nwith. But I don't think it significantly increases the willingness to\nwrite tests or test-tooling for people that don't want to write Perl\nin the first place.\n\nSo I think the only way to get more people involved in contributing\ntests and test-tooling is by allowing testing in another language than\nPerl (but also still allow writing tests in Perl). Even if that means\nthat we have two partially-overlapping test frameworks, that are both\neasy to use for different things. In my view that's even a positive\nthing, because that means we are able to test more with two languages\nthan we would be able to with either one (and it's thus useful to have\nboth).\n\nAnd I agree with Robbert that Python seems like the best choice for\nthis other language, given its current popularity level. 
But as I said\nbefore, I'm open to other languages as well.\n\n\n", "msg_date": "Sat, 15 Jun 2024 18:48:33 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Sat, Jun 15, 2024 at 12:48 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Honestly, my primary *objective* complaint about our current test\n> suite, is that when a test fails, it's very often impossible for me to\n> understand why the test failed, by only looking at the output of\n> \"meson test\". I think logging the postgres log to stderr for Perl, as\n> you proposed, would significantly improve that situation. I think the\n> only thing that we cannot get from Perl Test::More that we can from\n> pytest, is the fancy recursive introspection of the expression that\n> pytest shows on error.\n\nThis surprises me. I agree that the current state of affairs is kind\nof annoying, but the contents of regress_log_whatever are usually\nquite long. Printing all of that out to standard output seems like\nit's just going to flood the terminal with output. I don't think I'd\nbe a fan of that change.\n\nI think I basically agree with all the nearby comments about how the\nadvantages you cite for Python aren't, I don't know, entirely\ncompelling. Switching from ok() to is() or cmp_ok() or like() is minor\nstuff. Where the output goes is minor stuff. The former can be fixed,\nand the latter can be worked around with scripts and aliases. The one\nthing I know about that *I* think is a pretty big problem about Perl\nis that IPC::Run is not really maintained. But I wonder if the\nsolution to that is to do something ourselves instead of depending on\nIPC::Run. Beyond that, I think this is just a language popularity\ncontest.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 15 Jun 2024 13:26:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Hi,\n\nOn 2024-06-15 10:45:16 -0400, Andrew Dunstan wrote:\n> > 4. Pytest has autodiscovery of test files and functions, so we\n> > probably wouldn't have to specify all of the exact test files anymore\n> > in the meson.build files.\n> \n> \n> I find this comment a bit ironic. We don't need to do that with the\n> Makefiles, and the requirement to do so was promoted as a meson feature\n> rather than a limitation, ISTR.\n\nThe reason its good to have the list of tests somewhere explicit is that we\nhave different types of test. With make, there is a single target for all tap\ntests. If you want to run tests concurrently, make can only schedule the tap\ntests at the granularity of a directory. If you want concurrency below that,\nyou need to use concurrency on the prove level. But that means that you have\nextremely varying concurrency, depending on whether make runs targets that\nhave no internal concurrency or make runs e.g. the recovery tap tests.\n\nI don't think we should rely on global test discovery via pytest. That'll lead\nto uncontrollable concurrency again, which means much longer test times. We'll\nalways have different types of tests, just scheduling running them via\n\"top-level\" tools for different test types just won't work well. 
That's not\ntrue for many projects where tests have vastly lower resource usage.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 15 Jun 2024 10:33:20 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On Sat, 15 Jun 2024 at 19:27, Robert Haas <[email protected]> wrote:\n> This surprises me. I agree that the current state of affairs is kind\n> of annoying, but the contents of regress_log_whatever are usually\n> quite long. Printing all of that out to standard output seems like\n> it's just going to flood the terminal with output. I don't think I'd\n> be a fan of that change.\n\nI think at the very least the locations of the different logs should\nbe listed in the output.\n\n\n", "msg_date": "Sat, 15 Jun 2024 19:39:57 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On Sat, Jun 15, 2024 at 12:48 PM Jelte Fennema-Nio <[email protected]>\nwrote:\n\n> Afaict, there's a significant part of our current community who feel the\n> same way (and I'm pretty sure every sub-30 year old person who\n> newly joins the community would feel the exact same way too).\n\n\nThose young-uns are also the same group who hold their nose when coding in\nC, and are always clamoring for rewriting Postgres in Rust. And before\nthat, C++. And next year, some other popular language that is clearly\nbetter and more popular than C.\n\nAnd I agree with Robbert that Python seems like the best choice for this\n> other language, given its current popularity level. But as I said\n> before, I'm open to other languages as well.\n\n\nDespite my previous posts, I am open to other languages too, including\nPython, but the onus is really on the new language promoters to prove that\nthe very large amount of time and trouble is worth it, and worth it for\nlanguage X.\n\nCheers,\nGreg\n\n", "msg_date": "Sat, 15 Jun 2024 17:52:32 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" },
{ "msg_contents": "On Fri, Jun 14, 2024 at 5:09 PM Jelte Fennema-Nio <[email protected]>\nwrote:\n\n> Test::More on the other hand, while indeed still maintained, it's\n> definitely not getting significant new feature development or\n> improvements[2].
Especially when comparing it to pytest[3].\n>\n\nThat's fair, although it's a little hard to tell if the lack of new\nfeatures is because they are not needed for a stable, mature project, or\nbecause few people are asking for and developing new features. Probably a\nbit of both. But I'll be the first to admit Perl is dying; I just don't\nknow what should replace it (or how - or when). Python has its quirks, but\nall languages do, and your claim that it will encourage more and easier\ntest writing by developers is a good one.\n\nCheers,\nGreg\n\nOn Fri, Jun 14, 2024 at 5:09 PM Jelte Fennema-Nio <[email protected]> wrote:Test::More on the other hand, while indeed still maintained, it's\ndefinitely not getting significant new feature development or\nimprovements[2]. Especially when comparing it to pytest[3].That's fair, although it's a little hard to tell if the lack of new features is because they are not needed for a stable, mature project, or because few people are asking for and developing new features. Probably a bit of both. But I'll be the first to admit Perl is dying; I just don't know what should replace it (or how - or when). Python has its quirks, but all languages do, and your claim that it will encourage more and easier test writing by developers is a good one.Cheers,Greg", "msg_date": "Sat, 15 Jun 2024 17:57:29 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Sat, Jun 15, 2024 at 5:53 PM Greg Sabino Mullane <[email protected]> wrote:\n>\n> On Sat, Jun 15, 2024 at 12:48 PM Jelte Fennema-Nio <[email protected]> wrote:\n>>\n>> Afaict, there's a significant part of our current community who feel the same way (and I'm pretty sure every sub-30 year old person who\n>> newly joins the community would feel the exact same way too).\n>\n>\n> Those young-uns are also the same group who hold their nose when coding in C, and are always clamoring for rewriting Postgres in Rust. And before that, C++. And next year, some other popular language that is clearly better and more popular than C.\n\nWriting a new test framework in a popular language that makes it more\nlikely that more people will write more tests and test infrastructure\nis such a completely different thing than suggesting we rewrite\nPostgres in Rust that I feel that this comparison is unfair and,\nfrankly, a distraction from the discussion at hand.\n\n- Melanie\n\n\n", "msg_date": "Sat, 15 Jun 2024 18:00:43 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Sat, Jun 15, 2024 at 6:00 PM Melanie Plageman\n<[email protected]> wrote:\n> > Those young-uns are also the same group who hold their nose when coding in C, and are always clamoring for rewriting Postgres in Rust. And before that, C++. And next year, some other popular language that is clearly better and more popular than C.\n>\n> Writing a new test framework in a popular language that makes it more\n> likely that more people will write more tests and test infrastructure\n> is such a completely different thing than suggesting we rewrite\n> Postgres in Rust that I feel that this comparison is unfair and,\n> frankly, a distraction from the discussion at hand.\n\nI don't really agree with this. 
We have been told before that we would\nattract more developers to our community if only we allowed backend\ncode to be written in C++ or Rust, and that is not altogether a\ndifferent thing than saying that we would attract more test developers\nif only we allowed test code to be written in Python or whatever. The\ndifference is one of degree rather than of kind. We have a lot more\nbackend code than we do test code, I'm fairly sure, and our tests are\nmore self-contained: it's not *as* problematic if some tests are\nwritten in one language and others in another as it would be if\ndifferent parts of the backend used different languages, and it\nwouldn't be *as* hard if at some point we decided we wanted to convert\nall remaining code to the new language. So, I have a much harder time\nimagining that we would start allowing a new language for backend code\nthan that we would start allowing a new language for tests, but I\ndon't think the issues are fundamentally different.\n\nBut that said, I'm not sure the programming language is the real\nissue. If I really wanted to participate in an open source project,\nI'd probably be willing to learn a new programming language to do\nthat. Maybe some people wouldn't, but I had to learn a whole bunch of\nthem in college, and learning one more doesn't sound like the biggest\nof deals. But, would I feel respected and valued as a participant in\nthat project? Would I have to use weird tools and follow arcane and\nfrustrating processes? If I did, *that* would make me give up. I don't\nwant to say that the choice of programming language doesn't matter at\nall, but it seems to me that it might matter more because it's a\nsymptom of being unwilling to modernize things rather than for its own\nsake.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 16 Jun 2024 17:04:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Hi Greg, Jelte,\n\nOn Sat, 15 Jun 2024 at 23:53, Greg Sabino Mullane <[email protected]>\nwrote:\n>\n> On Sat, Jun 15, 2024 at 12:48 PM Jelte Fennema-Nio <[email protected]>\nwrote:\n>>\n>> Afaict, there's a significant part of our current community who feel the\nsame way (and I'm pretty sure every sub-30 year old person who\n>> newly joins the community would feel the exact same way too).\n\nI feel I'm still relatively new (started in the past 4 years) and I have\nquite some time left before I hit 30 years of age.\n\nI don't specifically feel the way you describe, nor do I think I've ever\nreally felt like that in the previous nearly 4 years of hacking. Then\nagain, I'm not interested in testing frameworks, so I don't feel much at\nall about which test frameworks we use.\n\n> Those young-uns are also the same group who hold their nose when coding\nin C, and are always clamoring for rewriting Postgres in Rust.\n\nCould you point me to one occasion I have 'always' clamored for this, or\nany of \"those young-uns\" in the community? I may not be a huge fan of C,\nbut rewriting PostgreSQL in [other language] is not on the list of things\nI'm clamoring for. 
I may have given off-hand mentions that [other language]\nwould've helped in certain cases, sure, but I'd hardly call that clamoring.\n\nKind regards,\n\nMatthias van de Meent\n\nHi Greg, Jelte,On Sat, 15 Jun 2024 at 23:53, Greg Sabino Mullane <[email protected]> wrote:\n>\n> On Sat, Jun 15, 2024 at 12:48 PM Jelte Fennema-Nio <[email protected]> wrote:\n>>\n>> Afaict, there's a significant part of our current community who feel the same way (and I'm pretty sure every sub-30 year old person who\n>> newly joins the community would feel the exact same way too).\nI feel I'm still relatively new (started in the past 4 years) and I have quite some time left before I hit 30 years of age.I don't specifically feel the way you describe, nor do I think I've ever really felt like that in the previous nearly 4 years of hacking. Then again, I'm not interested in testing frameworks, so I don't feel much at all about which test frameworks we use.\n\n> Those young-uns are also the same group who hold their nose when coding in C, and are always clamoring for rewriting Postgres in Rust.\nCould you point me to one occasion I have 'always' clamored for this, or any of \"those young-uns\" in the community? I may not be a huge fan of C, but rewriting PostgreSQL in [other language] is not on the list of things I'm clamoring for. I may have given off-hand mentions that [other language] would've helped in certain cases, sure, but I'd hardly call that clamoring.Kind regards,Matthias van de Meent", "msg_date": "Mon, 17 Jun 2024 10:27:09 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "Hi Jacob,\n\n> For the v18 cycle, I would like to try to get pytest [1] in as a\n> supported test driver, in addition to the current offerings.\n>\n> (I'm tempted to end the email there.)\n\nHuge +1 from me and many thanks for working on this.\n\nTwo cents from me.\n\nI spent a fair part of my career writing in Perl. Personally I like\nthe language and often find it more convenient for the tasks I'm\nworking on than Python.\n\nThis being said, there were several projects I was involved in where\nwe had to choose a scripting language. In all the cases there was a\nstrong push-back against Perl and Python always seemed to be a common\ndenominator for everyone. So I ended up with the rule of thumb to use\nPerl for projects I'm working on alone and Python otherwise. Although\nthe objective reality in the entire industry is unknown to me I spoke\nto many people whose observations were similar.\n\nWe could speculate about the reasons why people seem to prefer Python\n(good IDE support*, unique** libraries like Matplotlib / NumPy /\nSciPy, ...) 
but honestly I don't think they are extremely relevant in\nthis discussion.\n\nI believe supporting Python in our test infrastructure will attract\nmore contributors and thus would be a good step for the project in the\nlong run.\n\n* including PyTest integration\n** citation needed\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 17 Jun 2024 15:19:05 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Sun, 16 Jun 2024 at 23:04, Robert Haas <[email protected]> wrote:\n>\n> On Sat, Jun 15, 2024 at 6:00 PM Melanie Plageman\n> <[email protected]> wrote:\n> > Writing a new test framework in a popular language that makes it more\n> > likely that more people will write more tests and test infrastructure\n> > is such a completely different thing than suggesting we rewrite\n> > Postgres in Rust that I feel that this comparison is unfair and,\n> > frankly, a distraction from the discussion at hand.\n>\n> I don't really agree with this.\n> <snip>\n> it's not *as* problematic if some tests are\n> written in one language and others in another as it would be if\n> different parts of the backend used different languages, and it\n> wouldn't be *as* hard if at some point we decided we wanted to convert\n> all remaining code to the new language.\n\nHonestly, it sounds like you actually do agree with each other. It\nseems you interpreted Melanie her use of \"thing\" as \"people wanting to\nuse Rust/Python in the Postgres codebase\" while I believe she probably\nmeant \"all the problems and effort involved in the task making that\npossible''. And afaict from your response, you definitely agree that\nmaking it possible to use Rust in our main codebase is a lot more\ndifficult than for Python for our testing code.\n\n> But, would I feel respected and valued as a participant in\n> that project? Would I have to use weird tools and follow arcane and\n> frustrating processes? If I did, *that* would make me give up. I don't\n> want to say that the choice of programming language doesn't matter at\n> all, but it seems to me that it might matter more because it's a\n> symptom of being unwilling to modernize things rather than for its own\n> sake.\n\nI can personally definitely relate to this (although I wouldn't frame\nit as strongly as you did). Postgres development definitely requires\nweird tools and arcane processes (imho) when compared to most other\nopen source projects. The elephant in the room is of course the\nmailing list development flow. But we have some good reasons for using\nthat[^1]. But most people have some limit on the amount of weirdness\nthey are willing to accept when wanting to contribute, and the mailing\nlist pushes us quite close to that limit for a bunch of people\nalready. Any additional weird tools/arcane processes might push some\npeople over that limit.\n\nWe've definitely made big improvements in modernizing our development\nworkflow over the last few years though: We now have CI (cfbot), a\nmodern build system (meson), and working autoformatting (requiring\npgindent on commit). These improvements have been very noticeable to\nme, and I think we should continue such efforts. 
I think allowing\npeople to write tests in Python is one of the easier improvements that\nwe can make.\n\n[^1]: Although I think those reasons apply much less to the\ndocumentation, maybe we could allow github contributions for just\nthose.\n\n\n", "msg_date": "Mon, 17 Jun 2024 15:39:47 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "\nOn 2024-06-17 Mo 4:27 AM, Matthias van de Meent wrote:\n> Hi Greg, Jelte,\n>\n> On Sat, 15 Jun 2024 at 23:53, Greg Sabino Mullane <[email protected]> \n> wrote:\n>\n> > Those young-uns are also the same group who hold their nose when \n> coding in C, and are always clamoring for rewriting Postgres in Rust.\n>\n> Could you point me to one occasion I have 'always' clamored for this, \n> or any of \"those young-uns\" in the community? I may not be a huge fan \n> of C, but rewriting PostgreSQL in [other language] is not on the list \n> of things I'm clamoring for. I may have given off-hand mentions that \n> [other language] would've helped in certain cases, sure, but I'd \n> hardly call that clamoring.\n>\n>\n\nGreg was being a but jocular here. I didn't take him seriously. But \nthere's maybe a better case to make the point he was making. Back in the \ndark ages we used a source code control system called CVS. It's quite \nunlike git and has a great many limitations and uglinesses, and there \nwas some pressure for us to move off it. If we had done so when it was \nfirst suggested, we would probably have moved to using Subversion, which \nis rather like CVS with many of the warts knocked off. Before long, some \ndistributed systems like Mercurial and git came along, and we, like most \nof the world, chose git. Thus by waiting and not immediately doing what \nwas suggested we got a better solution. Moving twice would have been ... \npainful.\n\nI have written Python in the past. Not a huge amount, but it doesn't \nfeel like a foreign country to me, just the next town over instead of my \nimmediate neighbourhood. We even have a python script in the buildfarm \nserver code (not written by me). I'm sure if we started writing tests in \nPython I would adjust. But I think we need to know what the advantages \nare, beyond simple language preference. And to get to an equivalent \nplace for Python that we are at with perl will involve some work.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 12:21:10 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "(slowly catching up from the weekend email backlog)\n\nOn Fri, Jun 14, 2024 at 5:10 AM Robert Haas <[email protected]> wrote:\n> I mean, both Perl and Python are Turing-complete.\n\nTom responded to this better than I could have, but I don't think this\nis a helpful statement. In fact I opened the unconference session with\nit and immediately waved it away as not-the-point.\n\n> I just don't believe in the idea that we're going to write one\n> category of tests in one language and another category in another\n> language.\n\nYou and I will probably not agree, then, because IMO we already do\nthat. SQL behavior is tested in SQL via pg_regress characterization\ntests. End-to-end tests are written in Perl. 
Lower-level tests are\noften written in C (and, unfortunately, then driven in Perl instead of\nC; see above mail to Noah).\n\nI'm fundamentally a polyglot tester by philosophy, so I don't see\ncareful use of multiple languages as an inherent problem to be solved.\nThey increase complexity (considerably so!) but generally provide\nsufficient value to offset the cost.\n\n> As soon as we open the door to Python tests, people are\n> going to start writing the TAP tests that they would have written in\n> Perl in Python instead.\n\nThere's a wide spectrum of opinions between yours (which I will\ncheekily paraphrase as \"people will love testing in Python so much\nthey'll be willing to reinvent all of the wheels\" -- which in the\nshort term would increase maintenance cost but in the long term sounds\nlike a very good problem to have), and people who seem to think that\nnew test suite infrastructure would sit unused because no one wants to\nwrite tests anyway (to pull a strawman out of some hallway\nconversations at PGConf.dev). I think the truth is probably somewhere\nin the middle?\n\nMy prior mail was an attempt to bridge the gap between today and the\nmedium term, by introducing a series of compromises and incremental\nsteps in response to specific fears. We can jump forward to the end\nstate and try to talk about it, but I don't control the end state and\nI don't have a crystal ball.\n\n> So as I see\n> it, the only reasonable plan here if we want to introduce testing in\n> Python (or C#, or Ruby, or Go, or JavaScript, or Lua, or LOLCODE) is\n> to try to achieve a reasonable degree of parity between that language\n> and Perl. Because then we can at least review the new infrastructure\n> all at once, instead of incrementally spread across many patches\n> written, reviewed, and committed by many different people.\n\nI don't at all believe that a test API which is ported en masse as a\nhorizontal layer, without motivating vertical slices of test\nfunctionality, is going to be fit for purpose.\n\nAnd \"written, reviewed, and committed by many different people\" is a\nfeature for me, not a bug. One of the goals of the thread is to\nencourage more community test writing than we currently have.\nOtherwise, I could have kept silent (I am very happy with my personal\nsuite and have been comfortably maintaining it for a while). I am\ntrying to build community momentum around a pain point that is\ncurrently rusted in place.\n\n> Consider the meson build system project. To get that committed, Andres\n> had to make it do pretty much everything MSVC could do and mostly\n> everything that configure could do\n\nI think some lessons can be pulled from that, but fundamentally that's\na port of the build infrastructure done by a person with a commit bit.\nThere are some pretty considerable differences. (And even then, it\nwasn't \"done\" with the first volley of patches, right?)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 18 Jun 2024 07:34:17 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" }, { "msg_contents": "On Fri, Jun 14, 2024 at 8:49 AM Tom Lane <[email protected]> wrote:\n> I think that's an oversimplified analysis. Sure, the languages are\n> both Turing-complete, but for our purposes here they are both simply\n> glue languages around some set of testing facilities. 
Some of those\n> facilities will be provided by the base languages (plus whatever\n> extension modules we choose to require) and some by code we write.\n> The overall experience of writing tests will be determined by the\n> testing facilities far more than by the language used for glue.\n\n+1. As an example, the more extensive (and high-quality) a language's\nstandard library, the more tests you'll be able to write. Convincing\ncommitters to adopt a new third-party dependency is hard (for good\nreason); the strength of the standard library should be considered as\na point of technical comparison.\n\n> That being the case, I do agree with your point that Python\n> equivalents to most of PostgreSQL::Test will need to be built up PDQ.\n> Maybe they can be better than the originals, in features or ease of\n> use, but \"not there at all\" is not better.\n\nThere's a wide gulf between \"not there at all\" and \"reimplement it all\nas a prerequisite for v1\" as Robert proposed. I'm arguing for a middle\nground.\n\n> But what I'd really like to see is some comparison of the\n> language-provided testing facilities that we're proposing\n> to depend on. Why is pytest (or whatever) better than Test::More?\n\nPeople are focusing a lot on failure reporting, and I agree with them,\nbut I did try to include more than just that in my OP.\n\nI'll requote what I personally think is the #1 killer feature of\npytest, which prove and Test::More appear to be unable to accomplish\non their own: configurable isolation of tests from each other via\nfixtures [1].\n\n Problem 1 (rerun failing tests): One architectural roadblock to this\n in our Test::More suite is that tests depend on setup that's done by\n previous tests. pytest allows you to declare each test's setup\n requirements via pytest fixtures, letting the test runner build up the\n world exactly as it needs to be for a single isolated test. These\n fixtures may be given a \"scope\" so that multiple tests may share the\n same setup for performance or other reasons.\n\nWhen I'm doing red-to-green feature development (e.g. OAuth) or\ngreen-to-green refactoring (e.g. replacement of libiddawc with libcurl\nin OAuth), quick cycle time and reduction of noise is extremely\nimportant. I want to be able to rerun just the single red test I care\nabout before moving on.\n\n(Tests may additionally be organized with custom attributes. My OAuth\nsuite contains tests that must run slowly due to mandatory timeouts;\nI've marked them as slow, and they can be easily skipped from the test\nrunner.)\n\n2. The ability to break into a test case with the built-in debugger\n[2] is also fantastic for quick red-green work. Much better than\nprint() statements.\n\n(Along similar lines, even the ability to use Python's built-in REPL\nincreases velocity. Python users understand that they can execute\n`python3` and be dropped into a sandbox to try out syntax or some\nunfamiliar library. Searching for how to do this in Perl results in a\nhandful of custom-built scripts; people here may know which to use as\na Perl monk, sure, but the point is to make it easy for newcomers to\nwrite tests.)\n\n> I also wonder about integration of python-based testing with what\n> we already have. A significant part of what you called the meson\n> work had to do with persuading pg_regress, isolationtester, etc\n> to output test results in the common format established by TAP.\n> Am I right in guessing that pytest will have nothing to do with that?\n\nAndres covered this pretty well. 
I will note that I had problems with\npytest-tap itself [3], and I'm unclear whether that represents a bug\nin pytest-tap or a bug in pytest.\n\nThanks,\n--Jacob\n\n[1] https://docs.pytest.org/en/stable/how-to/fixtures.html\n[2] https://docs.pytest.org/en/stable/how-to/failures.html\n[3] https://github.com/python-tap/pytest-tap/issues/30\n\n\n", "msg_date": "Tue, 18 Jun 2024 07:35:35 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: adding pytest as a supported test framework" } ]
[ { "msg_contents": "Is there an issue with the ODBC Source downloads today?\n\nThe source download URL isn't working: https://www.postgresql.org/ftp/odbc/versions/src/\n\nThanks, Mark\n\n\n\n\n\n\n\n\n\nIs there an issue with the ODBC Source downloads today?  \n\nThe source download URL isn’t working:  https://www.postgresql.org/ftp/odbc/versions/src/      \n\n \nThanks, Mark", "msg_date": "Mon, 10 Jun 2024 19:33:30 +0000", "msg_from": "Mark Hill <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC Source Downloads Missing" }, { "msg_contents": "Getting the same issue at my end, the error message is \"The URL you\nspecified does not exist.\".\n\nOn Tue, Jun 11, 2024 at 12:33 AM Mark Hill <[email protected]> wrote:\n\n> Is there an issue with the ODBC Source downloads today?\n>\n> The source download URL isn’t working:\n> https://www.postgresql.org/ftp/odbc/versions/src/\n>\n>\n>\n> Thanks, Mark\n>\n\nGetting the same issue at my end, the error message is \"The URL you specified does not exist.\".On Tue, Jun 11, 2024 at 12:33 AM Mark Hill <[email protected]> wrote:\n\n\nIs there an issue with the ODBC Source downloads today?  \n\nThe source download URL isn’t working:  https://www.postgresql.org/ftp/odbc/versions/src/      \n\n \nThanks, Mark", "msg_date": "Tue, 11 Jun 2024 08:36:11 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC Source Downloads Missing" }, { "msg_contents": "Hello Mark,\n\nYou can found psqlodbc on Link below\n\nhttps://www.postgresql.org/ftp/odbc/releases/REL-16_00_0005/\n\nKind Regards,\nFahar Abbas\n\nOn Tuesday, June 11, 2024, Mark Hill <[email protected]> wrote:\n\n> Is there an issue with the ODBC Source downloads today?\n>\n> The source download URL isn’t working: https://www.postgresql.org/\n> ftp/odbc/versions/src/\n>\n>\n>\n> Thanks, Mark\n>\n\nHello Mark,You can found psqlodbc on Link belowhttps://www.postgresql.org/ftp/odbc/releases/REL-16_00_0005/Kind Regards,Fahar AbbasOn Tuesday, June 11, 2024, Mark Hill <[email protected]> wrote:\n\n\nIs there an issue with the ODBC Source downloads today?  \n\nThe source download URL isn’t working:  https://www.postgresql.org/ftp/odbc/versions/src/      \n\n \nThanks, Mark", "msg_date": "Tue, 11 Jun 2024 16:52:44 +0500", "msg_from": "Fahar Abbas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC Source Downloads Missing" }, { "msg_contents": "On Tue, Jun 11, 2024 at 4:52 PM Fahar Abbas <[email protected]>\nwrote:\n\n> Hello Mark,\n>\n> You can found psqlodbc on Link below\n>\n> https://www.postgresql.org/ftp/odbc/releases/REL-16_00_0005/\n>\n> He is talking about Source code whereas this link contains installer.\n\nRegards\nKashif Zeeshan\nBitnine Global\n\n> Kind Regards,\n> Fahar Abbas\n>\n> On Tuesday, June 11, 2024, Mark Hill <[email protected]> wrote:\n>\n>> Is there an issue with the ODBC Source downloads today?\n>>\n>> The source download URL isn’t working:\n>> https://www.postgresql.org/ftp/odbc/versions/src/\n>>\n>>\n>>\n>> Thanks, Mark\n>>\n>\n\nOn Tue, Jun 11, 2024 at 4:52 PM Fahar Abbas <[email protected]> wrote:Hello Mark,You can found psqlodbc on Link belowhttps://www.postgresql.org/ftp/odbc/releases/REL-16_00_0005/He is talking about Source code whereas this link contains installer.RegardsKashif ZeeshanBitnine Global Kind Regards,Fahar AbbasOn Tuesday, June 11, 2024, Mark Hill <[email protected]> wrote:\n\n\nIs there an issue with the ODBC Source downloads today?  
\n\nThe source download URL isn’t working:  https://www.postgresql.org/ftp/odbc/versions/src/      \n\n \nThanks, Mark", "msg_date": "Tue, 11 Jun 2024 16:54:56 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC Source Downloads Missing" }, { "msg_contents": "Thanks All!\r\n\r\nFrom: Kashif Zeeshan <[email protected]>\r\nSent: Tuesday, June 11, 2024 7:55 AM\r\nTo: Fahar Abbas <[email protected]>\r\nCc: Mark Hill <[email protected]>; [email protected]\r\nSubject: Re: ODBC Source Downloads Missing\r\n\r\n\r\nEXTERNAL\r\n\r\n\r\nOn Tue, Jun 11, 2024 at 4:52 PM Fahar Abbas <[email protected]<mailto:[email protected]>> wrote:\r\nHello Mark,\r\n\r\nYou can found psqlodbc on Link below\r\n\r\nhttps://www.postgresql.org/ftp/odbc/releases/REL-16_00_0005/<https://protect.checkpoint.com/v2/___https://www.postgresql.org/ftp/odbc/releases/REL-16_00_0005/___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NmIwNTRiY2I0NGNhNjAzZDRjZDFmNDg0YjVhZjNmNDA6Njo3ODJhOmRmOTY5ODM1MjgyMDBhYTcxOGJjYjk5YTdhMTcwZjdjZTIyNGVkZjEzNGNjMzYxYjgxOGZjYzY4ZjhiNGM3Y2Q6aDpU>\r\n\r\nHe is talking about Source code whereas this link contains installer.\r\n\r\nRegards\r\nKashif Zeeshan\r\nBitnine Global\r\nKind Regards,\r\nFahar Abbas\r\n\r\nOn Tuesday, June 11, 2024, Mark Hill <[email protected]<mailto:[email protected]>> wrote:\r\nIs there an issue with the ODBC Source downloads today?\r\n\r\nThe source download URL isn’t working: https://www.postgresql.org/ftp/odbc/versions/src/<https://protect.checkpoint.com/v2/___https://www.postgresql.org/ftp/odbc/versions/src/___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NmIwNTRiY2I0NGNhNjAzZDRjZDFmNDg0YjVhZjNmNDA6Njo1NGExOjQ5OTcyMWZmZGUwZmUzODYxOThhYzU2YWM5ZDRlMTA5ODFlZDNjODMxZjAwOTMyYWIxNjM3NWNjZDk4ZWVmNjU6aDpU>\r\n\r\nThanks, Mark\r\n\n\n\n\n\n\n\n\n\nThanks All!\n \n\n\nFrom: Kashif Zeeshan <[email protected]> \nSent: Tuesday, June 11, 2024 7:55 AM\nTo: Fahar Abbas <[email protected]>\nCc: Mark Hill <[email protected]>; [email protected]\nSubject: Re: ODBC Source Downloads Missing\n\n\n \nEXTERNAL\n\n\n\n\n \n\n \n\n\nOn Tue, Jun 11, 2024 at 4:52 PM Fahar Abbas <[email protected]> wrote:\n\n\nHello Mark, \n\n \n\n\nYou can found psqlodbc on Link below\n\n\n \n\n\nhttps://www.postgresql.org/ftp/odbc/releases/REL-16_00_0005/\n\n\n \n\n\n\nHe is talking about Source code whereas this link contains installer.\n\n\n \n\n\nRegards\n\n\nKashif Zeeshan\n\n\nBitnine Global \n\n\n\nKind Regards,\n\n\nFahar Abbas\n\r\nOn Tuesday, June 11, 2024, Mark Hill <[email protected]> wrote:\n\n\n\nIs there an issue with the ODBC Source downloads today? 
\r\n\n\r\nThe source download URL isn’t working:  \nhttps://www.postgresql.org/ftp/odbc/versions/src/      \r\n\n \nThanks, Mark", "msg_date": "Tue, 11 Jun 2024 13:47:32 +0000", "msg_from": "Mark Hill <[email protected]>", "msg_from_op": true, "msg_subject": "RE: ODBC Source Downloads Missing" }, { "msg_contents": "On 6/10/24 15:33, Mark Hill wrote:\n>\n> Is there an issue with the ODBC Source downloads today?\n>\n> The source download URL isn’t working: \n> https://www.postgresql.org/ftp/odbc/versions/src/\n>\n> Thanks, Mark\n>\n\nHere is github repo https://github.com/postgresql-interfaces/psqlodbc\n\n-- \nKind Regards,\nYogesh Sharma\nPostgreSQL, Linux, and Networking Expert\nOpen Source Enthusiast and Advocate\nPostgreSQL Contributors Team @ RDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 11 Jun 2024 09:57:22 -0400", "msg_from": "Yogesh Sharma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC Source Downloads Missing" }, { "msg_contents": "In case source code is not available, he can use the linked already shared.\n\nOn Tuesday, June 11, 2024, Kashif Zeeshan <[email protected]> wrote:\n\n>\n>\n> On Tue, Jun 11, 2024 at 4:52 PM Fahar Abbas <[email protected]>\n> wrote:\n>\n>> Hello Mark,\n>>\n>> You can found psqlodbc on Link below\n>>\n>> https://www.postgresql.org/ftp/odbc/releases/REL-16_00_0005/\n>>\n>> He is talking about Source code whereas this link contains installer.\n>\n> Regards\n> Kashif Zeeshan\n> Bitnine Global\n>\n>> Kind Regards,\n>> Fahar Abbas\n>>\n>> On Tuesday, June 11, 2024, Mark Hill <[email protected]> wrote:\n>>\n>>> Is there an issue with the ODBC Source downloads today?\n>>>\n>>> The source download URL isn’t working: https://www.postgresql.org/\n>>> ftp/odbc/versions/src/\n>>>\n>>>\n>>>\n>>> Thanks, Mark\n>>>\n>>\n\nIn case source code is not available, he can use the linked already shared.On Tuesday, June 11, 2024, Kashif Zeeshan <[email protected]> wrote:On Tue, Jun 11, 2024 at 4:52 PM Fahar Abbas <[email protected]> wrote:Hello Mark,You can found psqlodbc on Link belowhttps://www.postgresql.org/ftp/odbc/releases/REL-16_00_0005/He is talking about Source code whereas this link contains installer.RegardsKashif ZeeshanBitnine Global Kind Regards,Fahar AbbasOn Tuesday, June 11, 2024, Mark Hill <[email protected]> wrote:\n\n\nIs there an issue with the ODBC Source downloads today?  \n\nThe source download URL isn’t working:  https://www.postgresql.org/ftp/odbc/versions/src/      \n\n \nThanks, Mark", "msg_date": "Tue, 11 Jun 2024 18:57:53 +0500", "msg_from": "Fahar Abbas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC Source Downloads Missing" } ]
[ { "msg_contents": "In [1] Dominique Devienne complained that PQsocketPoll would be\nfar more useful to him if it had better-than-one-second timeout\nresolution. I initially pushed back on that on the grounds that\npost-beta1 is a bit late to be redefining public APIs. Which it is,\nbut if we don't fix it now then we'll be stuck supporting that API\nindefinitely. And it's not like one-second resolution is great\nfor our internal usage either --- for example, I see that psql\nis now doing\n\n\t\tend_time = time(NULL) + 1;\n\t\trc = PQsocketPoll(sock, forRead, !forRead, end_time);\n\nwhich claims to be waiting one second, but actually it's waiting\nsomewhere between 0 and 1 second. So I thought I'd look into\nwhether we can still change it without too much pain, and I think\nwe can.\n\nThe $64 question is how to represent the end_time if not as time_t.\nThe only alternative POSIX offers AFAIK is gettimeofday's \"struct\ntimeval\", which is painful to compute with and I don't think it's\nnative on Windows. What I suggest is that we use int64 microseconds\nsince the epoch, which is the same idea as the backend's TimestampTz\nexcept I think we'd better use the Unix epoch not 2000-01-01.\nThen converting code is just a matter of changing variable types\nand adding some zeroes to constants.\n\nThe next question is how to spell \"int64\" in libpq-fe.h. As a\nclient-exposed header, the portability constraints on it are pretty\nstringent, so even in 2024 I'm loath to make it depend on <stdint.h>;\nand certainly depending on our internal int64 typedef won't do.\nWhat I did in the attached is to write \"long long int\", which is\nrequired to be at least 64 bits by C99. Other opinions are possible\nof course.\n\nLastly, we need a way to get current time in this form. My first\ndraft of the attached patch had the callers calling gettimeofday\nand doing arithmetic from that, but it seems a lot better to provide\na function that just parallels time(2).\n\nBTW, I think this removes the need for libpq-fe.h to #include <time.h>,\nbut I didn't remove that because it seems likely that some callers are\nindirectly relying on it to be present. Removing it wouldn't gain\nvery much anyway.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CAFCRh-8hf%3D7V8UoF63aLxSkeFmXX8-1O5tRxHL61Pngb7V9rcw%40mail.gmail.com", "msg_date": "Mon, 10 Jun 2024 17:39:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "On Mon, 2024-06-10 at 17:39 -0400, Tom Lane wrote:\n> What I suggest is that we use int64 microseconds\n> since the epoch, which is the same idea as the backend's TimestampTz\n> except I think we'd better use the Unix epoch not 2000-01-01.\n> Then converting code is just a matter of changing variable types\n> and adding some zeroes to constants.\n\n...\n\n> Lastly, we need a way to get current time in this form.  My first\n> draft of the attached patch had the callers calling gettimeofday\n> and doing arithmetic from that, but it seems a lot better to provide\n> a function that just parallels time(2).\n\nI briefly skimmed the thread and didn't find the reason why the API\nrequires an absolute time.\n\nMy expectation would be for the last parameter to be a relative timeout\n(\"wait up to X microseconds\"). 
That avoids the annoyance of creating a\nnew definition of absolute time and exposing a new function to retrieve\nit.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 10 Jun 2024 16:47:28 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> I briefly skimmed the thread and didn't find the reason why the API\n> requires an absolute time.\n\nBecause a common call pattern is to loop around PQsocketPoll calls.\nIn that scenario you generally want to nail down the timeout time\nbefore starting the loop, not have it silently move forward after\nany random event that breaks the current wait (EINTR for example).\npqSocketCheck and pqConnectDBComplete both rely on this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Jun 2024 19:57:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "On Mon, 2024-06-10 at 19:57 -0400, Tom Lane wrote:\n> Because a common call pattern is to loop around PQsocketPoll calls.\n> In that scenario you generally want to nail down the timeout time\n> before starting the loop, not have it silently move forward after\n> any random event that breaks the current wait (EINTR for example).\n> pqSocketCheck and pqConnectDBComplete both rely on this.\n\nI agree it makes things easier for a caller following that pattern,\nbecause it doesn't need to recalculate the timeout each time through\nthe loop.\n\nBut:\n\n1. If your clock goes backwards, you can end up waiting for an\narbitrarily long time. To prevent that you need to do some\nrecalculation each time through the loop anyway.\n\n2. Inventing a new absolute time type just for this single purpose\nseems strange to me. Would it be useful in other places? Are we going\nto define what kinds of operations/transformations are supported?\n\n3. I can't recall another API that uses absolute time for a timeout;\nare you aware of a precedent?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 10 Jun 2024 21:34:40 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> I agree it makes things easier for a caller following that pattern,\n> because it doesn't need to recalculate the timeout each time through\n> the loop.\n\n> But:\n\n> 1. If your clock goes backwards, you can end up waiting for an\n> arbitrarily long time. To prevent that you need to do some\n> recalculation each time through the loop anyway.\n\n[ shrug... ] The only reason this has come up is that f5e4dedfa\nexposed what was previously just libpq-private code. Given that\nthat code has operated in this way for a couple of decades with\napproximately zero trouble reports, I'm not very interested in\nre-litigating its theory of operation. The more so if you don't\nhave a concrete better alternative to propose.\n\n> 2. Inventing a new absolute time type just for this single purpose\n> seems strange to me. Would it be useful in other places? Are we going\n> to define what kinds of operations/transformations are supported?\n\nI'm not that thrilled with inventing a new time type just for this,\neither. 
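Whatever the type ends up being, the call pattern it has to serve is
roughly the following minimal sketch (written against the proposed
microsecond deadline API; the wrapper and its use of the proposed
current-time helper are purely illustrative, not code from the patch):

#include <errno.h>
#include "libpq-fe.h"           /* assuming the proposed microsecond API */

/*
 * Convert a relative timeout to a fixed deadline once, then keep passing
 * the same deadline on retries, so an EINTR or other early wakeup does
 * not silently stretch the total wait.
 */
static int
wait_readable(int sock, long long int timeout_us)
{
    long long int deadline = PQgetCurrentTimeUSec() + timeout_us;

    for (;;)
    {
        int     rc = PQsocketPoll(sock, 1 /* forRead */, 0, deadline);

        if (rc < 0 && errno == EINTR)
            continue;           /* retry with the same deadline */
        return rc;              /* >0 ready, 0 timed out, <0 error */
    }
}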
However, time_t is not very fit for purpose, so do you\nhave a different suggestion?\n\nWe could make it a bit nicer-looking by wrapping \"long long int\"\nin a typedef, but that's only cosmetic.\n\n> 3. I can't recall another API that uses absolute time for a timeout;\n> are you aware of a precedent?\n\nThe other thing that I've seen done is for select(2) to treat the\ntimeout as an in/out parameter, decrementing it by the amount of\ntime slept. I hope you'll agree that that's a monstrous kluge.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Jun 2024 00:52:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "On Tue, 2024-06-11 at 00:52 -0400, Tom Lane wrote:\n> \n> I'm not that thrilled with inventing a new time type just for this,\n> either.  However, time_t is not very fit for purpose, so do you\n> have a different suggestion?\n\nNo, I don't have a great alternative, so I don't object to your\nsolutions for f5e4dedfa8.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 10 Jun 2024 23:27:30 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "On Mon, Jun 10, 2024 at 11:39 PM Tom Lane <[email protected]> wrote:\n> The next question is how to spell \"int64\" in libpq-fe.h.\n\nHi. Out-of-curiosity, I grep'd for it in my 16.1 libpq:\n\n[ddevienne@marsu include]$ grep 'long long' *.h\necpg_config.h:/* Define to 1 if the system has the type `long long int'. */\necpg_config.h:/* Define to 1 if `long long int' works and is 64 bits. */\npg_config.h:/* The normal alignment of `long long int', in bytes. */\npg_config.h:/* Define to 1 if `long long int' works and is 64 bits. */\npgtypes_interval.h:typedef long long int int64;\n\nAnd the relevant snippet of pgtypes_interval.h is:\n\n#ifdef HAVE_LONG_INT_64\n#ifndef HAVE_INT64\ntypedef long int int64;\n#endif\n#elif defined(HAVE_LONG_LONG_INT_64)\n#ifndef HAVE_INT64\ntypedef long long int int64;\n#endif\n#else\n/* neither HAVE_LONG_INT_64 nor HAVE_LONG_LONG_INT_64 */\n#error must have a working 64-bit integer datatype\n#endif\n\nGiven this precedent, can't the same be done?\n\nAnd if a 64-bit integer is too troublesome, why not just two 32-bit\nparameters instead?\nEither a (time_t + int usec), microsecond offset, clamped to [0, 1M),\nor (int sec + int usec)?\n\nI'm fine with any portable solution that allows sub-second timeouts, TBH.\nJust thinking aloud here. Thanks, --DD\n\n\n", "msg_date": "Tue, 11 Jun 2024 08:55:00 +0200", "msg_from": "Dominique Devienne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Em seg., 10 de jun. de 2024 às 18:39, Tom Lane <[email protected]> escreveu:\n\n> In [1] Dominique Devienne complained that PQsocketPoll would be\n> far more useful to him if it had better-than-one-second timeout\n> resolution. I initially pushed back on that on the grounds that\n> post-beta1 is a bit late to be redefining public APIs. Which it is,\n> but if we don't fix it now then we'll be stuck supporting that API\n> indefinitely. 
And it's not like one-second resolution is great\n> for our internal usage either --- for example, I see that psql\n> is now doing\n>\n> end_time = time(NULL) + 1;\n> rc = PQsocketPoll(sock, forRead, !forRead, end_time);\n>\n> which claims to be waiting one second, but actually it's waiting\n> somewhere between 0 and 1 second. So I thought I'd look into\n> whether we can still change it without too much pain, and I think\n> we can.\n>\n> The $64 question is how to represent the end_time if not as time_t.\n> The only alternative POSIX offers AFAIK is gettimeofday's \"struct\n> timeval\", which is painful to compute with and I don't think it's\n> native on Windows. What I suggest is that we use int64 microseconds\n> since the epoch, which is the same idea as the backend's TimestampTz\n> except I think we'd better use the Unix epoch not 2000-01-01.\n> Then converting code is just a matter of changing variable types\n> and adding some zeroes to constants.\n>\n> The next question is how to spell \"int64\" in libpq-fe.h. As a\n> client-exposed header, the portability constraints on it are pretty\n> stringent, so even in 2024 I'm loath to make it depend on <stdint.h>;\n> and certainly depending on our internal int64 typedef won't do.\n> What I did in the attached is to write \"long long int\", which is\n> required to be at least 64 bits by C99. Other opinions are possible\n> of course.\n>\n> Lastly, we need a way to get current time in this form. My first\n> draft of the attached patch had the callers calling gettimeofday\n> and doing arithmetic from that, but it seems a lot better to provide\n> a function that just parallels time(2).\n>\n> BTW, I think this removes the need for libpq-fe.h to #include <time.h>,\n> but I didn't remove that because it seems likely that some callers are\n> indirectly relying on it to be present. Removing it wouldn't gain\n> very much anyway.\n>\n> Thoughts?\n>\nHi Tom.\n\nWhy not use uint64?\nI think it's available in (fe-misc.c)\n\nIMO, gettimeofday It also seems to me that it is deprecated.\n\nCan I suggest a version using *clock_gettime*,\nwhich I made based on versions available on the web?\n\n/*\n * PQgetCurrentTimeUSec: get current time with nanosecond precision\n *\n * This provides a platform-independent way of producing a reference\n * value for PQsocketPoll's timeout parameter.\n */\n\nuint64\nPQgetCurrentTimeUSec(void)\n{\n#ifdef __MACH__\nstruct timespec ts;\nclock_serv_t cclock;\nmach_timespec_t mts;\n\nhost_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);\nclock_get_time(cclock, &mts);\nmach_port_deallocate(mach_task_self(), cclock);\nts.tv_sec = mts.tv_sec;\nts.tv_nsec = mts.tv_nsec;\n#eldef _WIN32_\nstruct timespec ts { long tv_sec; long tv_nsec; };\n__int64 wintime;\n\nGetSystemTimeAsFileTime((FILETIME*) &wintime);\nwintime -= 116444736000000000i64; // 1jan1601 to 1jan1970\nts.tv_sec = wintime / 10000000i64; // seconds\nts.tv_nsec = wintime % 10000000i64 * 100; // nano-seconds\n#else\nstruct timespec ts;\n\nclock_gettime(CLOCK_MONOTONIC, &ts);\n#endif\n\nreturn (ts.tv_sec * 1000000000L) + ts.tv_nsec;\n}\n\nbest regards,\nRanier Vilela\n\nEm seg., 10 de jun. de 2024 às 18:39, Tom Lane <[email protected]> escreveu:In [1] Dominique Devienne complained that PQsocketPoll would be\nfar more useful to him if it had better-than-one-second timeout\nresolution.  I initially pushed back on that on the grounds that\npost-beta1 is a bit late to be redefining public APIs.  
Which it is,\nbut if we don't fix it now then we'll be stuck supporting that API\nindefinitely.  And it's not like one-second resolution is great\nfor our internal usage either --- for example, I see that psql\nis now doing\n\n                end_time = time(NULL) + 1;\n                rc = PQsocketPoll(sock, forRead, !forRead, end_time);\n\nwhich claims to be waiting one second, but actually it's waiting\nsomewhere between 0 and 1 second.  So I thought I'd look into\nwhether we can still change it without too much pain, and I think\nwe can.\n\nThe $64 question is how to represent the end_time if not as time_t.\nThe only alternative POSIX offers AFAIK is gettimeofday's \"struct\ntimeval\", which is painful to compute with and I don't think it's\nnative on Windows.  What I suggest is that we use int64 microseconds\nsince the epoch, which is the same idea as the backend's TimestampTz\nexcept I think we'd better use the Unix epoch not 2000-01-01.\nThen converting code is just a matter of changing variable types\nand adding some zeroes to constants.\n\nThe next question is how to spell \"int64\" in libpq-fe.h.  As a\nclient-exposed header, the portability constraints on it are pretty\nstringent, so even in 2024 I'm loath to make it depend on <stdint.h>;\nand certainly depending on our internal int64 typedef won't do.\nWhat I did in the attached is to write \"long long int\", which is\nrequired to be at least 64 bits by C99.  Other opinions are possible\nof course.\n\nLastly, we need a way to get current time in this form.  My first\ndraft of the attached patch had the callers calling gettimeofday\nand doing arithmetic from that, but it seems a lot better to provide\na function that just parallels time(2).\n\nBTW, I think this removes the need for libpq-fe.h to #include <time.h>,\nbut I didn't remove that because it seems likely that some callers are\nindirectly relying on it to be present.  Removing it wouldn't gain\nvery much anyway.\n\nThoughts?Hi Tom.Why not use uint64?I think it's available in (fe-misc.c)IMO, gettimeofday It also seems to me that it is deprecated.Can I suggest a version using *clock_gettime*, which I made based on versions available on the web?/* * PQgetCurrentTimeUSec: get current time with nanosecond precision * * This provides a platform-independent way of producing a reference * value for PQsocketPoll's timeout parameter. */uint64PQgetCurrentTimeUSec(void){#ifdef __MACH__\tstruct timespec ts;\tclock_serv_t cclock;\tmach_timespec_t mts;\thost_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);\tclock_get_time(cclock, &mts);\tmach_port_deallocate(mach_task_self(), cclock);\tts.tv_sec = mts.tv_sec;\tts.tv_nsec = mts.tv_nsec;#eldef _WIN32_\tstruct timespec ts { long tv_sec; long tv_nsec; };\t__int64 wintime; \tGetSystemTimeAsFileTime((FILETIME*) &wintime);\twintime   -= 116444736000000000i64;             // 1jan1601 to 1jan1970\tts.tv_sec  = wintime / 10000000i64;             // seconds\tts.tv_nsec = wintime % 10000000i64 * 100;      // nano-seconds#else\tstruct timespec ts;\tclock_gettime(CLOCK_MONOTONIC, &ts);#endif\t\treturn (ts.tv_sec * 1000000000L) + ts.tv_nsec;}best regards,Ranier Vilela", "msg_date": "Tue, 11 Jun 2024 10:01:25 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Em seg., 10 de jun. 
de 2024 às 18:39, Tom Lane <[email protected]> escreveu:\n\n> In [1] Dominique Devienne complained that PQsocketPoll would be\n> far more useful to him if it had better-than-one-second timeout\n> resolution. I initially pushed back on that on the grounds that\n> post-beta1 is a bit late to be redefining public APIs. Which it is,\n> but if we don't fix it now then we'll be stuck supporting that API\n> indefinitely. And it's not like one-second resolution is great\n> for our internal usage either --- for example, I see that psql\n> is now doing\n>\n> end_time = time(NULL) + 1;\n> rc = PQsocketPoll(sock, forRead, !forRead, end_time);\n>\n> which claims to be waiting one second, but actually it's waiting\n> somewhere between 0 and 1 second. So I thought I'd look into\n> whether we can still change it without too much pain, and I think\n> we can.\n>\n> The $64 question is how to represent the end_time if not as time_t.\n> The only alternative POSIX offers AFAIK is gettimeofday's \"struct\n> timeval\", which is painful to compute with and I don't think it's\n> native on Windows. What I suggest is that we use int64 microseconds\n> since the epoch, which is the same idea as the backend's TimestampTz\n> except I think we'd better use the Unix epoch not 2000-01-01.\n> Then converting code is just a matter of changing variable types\n> and adding some zeroes to constants.\n>\n> The next question is how to spell \"int64\" in libpq-fe.h. As a\n> client-exposed header, the portability constraints on it are pretty\n> stringent, so even in 2024 I'm loath to make it depend on <stdint.h>;\n> and certainly depending on our internal int64 typedef won't do.\n> What I did in the attached is to write \"long long int\", which is\n> required to be at least 64 bits by C99. Other opinions are possible\n> of course.\n>\n> Lastly, we need a way to get current time in this form. My first\n> draft of the attached patch had the callers calling gettimeofday\n> and doing arithmetic from that, but it seems a lot better to provide\n> a function that just parallels time(2).\n>\n> BTW, I think this removes the need for libpq-fe.h to #include <time.h>,\n> but I didn't remove that because it seems likely that some callers are\n> indirectly relying on it to be present. Removing it wouldn't gain\n> very much anyway.\n>\n> Thoughts?\n>\nRegarding your patch:\n\n1. I think can remove *int64* in comments:\n+ * The timeout is specified by end_time_us, which is the number of\n+ * microseconds since the Unix epoch (that is, time_t times 1 million).\n+ * Timeout is infinite if end_time is -1. Timeout is immediate (no\nblocking)\n+ * if end_time is 0 (or indeed, any time before now).\n\n+ * The timeout is specified by end_time_us, which is the number of\n+ * microseconds since the Unix epoch (that is, time_t times 1 million).\n\n2. I think it's worth testing whether end_time_ns equals zero,\nwhich can avoid a call to PQgetCurrentTimeNSec()\n\n@@ -1103,14 +1113,16 @@ PQsocketPoll(int sock, int forRead, int forWrite,\ntime_t end_time)\n input_fd.events |= POLLOUT;\n\n /* Compute appropriate timeout interval */\n- if (end_time == ((time_t) -1))\n+ if (end_time_ns == -1)\n timeout_ms = -1;\n+ else if (end_time_ns == 0)\n+ timeout_ms = 0;\n\n3. 
I think it's worth testing whether end_time_ns equals zero,\nwhich can avoid a call to PQgetCurrentTimeNSec()\n\n@@ -1138,17 +1150,29 @@ PQsocketPoll(int sock, int forRead, int forWrite,\ntime_t end_time)\n FD_SET(sock, &except_mask);\n\n /* Compute appropriate timeout interval */\n- if (end_time == ((time_t) -1))\n+ if (end_time_ns == -1)\n ptr_timeout = NULL;\n+ else if (end_time_ns == 0)\n+ {\n+ timeout.tv_sec = 0;\n+ timeout.tv_usec = 0;\n+\n+ ptr_timeout = &timeout;\n+ }\n\nbest regards,\nRanier Vilela\n\nEm seg., 10 de jun. de 2024 às 18:39, Tom Lane <[email protected]> escreveu:In [1] Dominique Devienne complained that PQsocketPoll would be\nfar more useful to him if it had better-than-one-second timeout\nresolution.  I initially pushed back on that on the grounds that\npost-beta1 is a bit late to be redefining public APIs.  Which it is,\nbut if we don't fix it now then we'll be stuck supporting that API\nindefinitely.  And it's not like one-second resolution is great\nfor our internal usage either --- for example, I see that psql\nis now doing\n\n                end_time = time(NULL) + 1;\n                rc = PQsocketPoll(sock, forRead, !forRead, end_time);\n\nwhich claims to be waiting one second, but actually it's waiting\nsomewhere between 0 and 1 second.  So I thought I'd look into\nwhether we can still change it without too much pain, and I think\nwe can.\n\nThe $64 question is how to represent the end_time if not as time_t.\nThe only alternative POSIX offers AFAIK is gettimeofday's \"struct\ntimeval\", which is painful to compute with and I don't think it's\nnative on Windows.  What I suggest is that we use int64 microseconds\nsince the epoch, which is the same idea as the backend's TimestampTz\nexcept I think we'd better use the Unix epoch not 2000-01-01.\nThen converting code is just a matter of changing variable types\nand adding some zeroes to constants.\n\nThe next question is how to spell \"int64\" in libpq-fe.h.  As a\nclient-exposed header, the portability constraints on it are pretty\nstringent, so even in 2024 I'm loath to make it depend on <stdint.h>;\nand certainly depending on our internal int64 typedef won't do.\nWhat I did in the attached is to write \"long long int\", which is\nrequired to be at least 64 bits by C99.  Other opinions are possible\nof course.\n\nLastly, we need a way to get current time in this form.  My first\ndraft of the attached patch had the callers calling gettimeofday\nand doing arithmetic from that, but it seems a lot better to provide\na function that just parallels time(2).\n\nBTW, I think this removes the need for libpq-fe.h to #include <time.h>,\nbut I didn't remove that because it seems likely that some callers are\nindirectly relying on it to be present.  Removing it wouldn't gain\nvery much anyway.\n\nThoughts?Regarding your patch: 1. I think can remove *int64* in comments:+ * The timeout is specified by end_time_us, which is the number of+ * microseconds since the Unix epoch (that is, time_t times 1 million).+ * Timeout is infinite if end_time is -1.  Timeout is immediate (no blocking)+ * if end_time is 0 (or indeed, any time before now).+ * The timeout is specified by end_time_us, which is the number of+ * microseconds since the Unix epoch (that is, time_t times 1 million).2. 
I think it's worth testing whether end_time_ns equals zero,which can avoid a call to PQgetCurrentTimeNSec()@@ -1103,14 +1113,16 @@ PQsocketPoll(int sock, int forRead, int forWrite, time_t end_time) \t\tinput_fd.events |= POLLOUT;  \t/* Compute appropriate timeout interval */-\tif (end_time == ((time_t) -1))+\tif (end_time_ns == -1) \t\ttimeout_ms = -1;+\telse if (end_time_ns == 0)+\t\ttimeout_ms = 0;3. \nI think it's worth testing whether end_time_ns equals zero,\nwhich can avoid a call to PQgetCurrentTimeNSec() \n@@ -1138,17 +1150,29 @@ PQsocketPoll(int sock, int forRead, int forWrite, time_t end_time) \tFD_SET(sock, &except_mask);  \t/* Compute appropriate timeout interval */-\tif (end_time == ((time_t) -1))+\tif (end_time_ns == -1) \t\tptr_timeout = NULL;+\telse if (end_time_ns == 0)+\t{+\t\ttimeout.tv_sec = 0;+\t\ttimeout.tv_usec = 0;++\t\tptr_timeout = &timeout;+\t}best regards,Ranier Vilela", "msg_date": "Tue, 11 Jun 2024 13:34:21 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "On Mon, Jun 10, 2024 at 5:39 PM Tom Lane <[email protected]> wrote:\n> In [1] Dominique Devienne complained that PQsocketPoll would be\n> far more useful to him if it had better-than-one-second timeout\n> resolution. I initially pushed back on that on the grounds that\n> post-beta1 is a bit late to be redefining public APIs. Which it is,\n> but if we don't fix it now then we'll be stuck supporting that API\n> indefinitely. And it's not like one-second resolution is great\n> for our internal usage either --- for example, I see that psql\n> is now doing\n>\n> end_time = time(NULL) + 1;\n> rc = PQsocketPoll(sock, forRead, !forRead, end_time);\n\nI agree this is not great. I guess I didn't think about it very hard\nbecause, after all, we were just exposing an API that we'd already\nbeen using internally. But I think it's reasonable to adjust the API\nto allow for better resolution, as you propose. A second is a very\nlong amount of time, and it's entirely reasonable for someone to want\nbetter granularity.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 13:02:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I agree this is not great. I guess I didn't think about it very hard\n> because, after all, we were just exposing an API that we'd already\n> been using internally. But I think it's reasonable to adjust the API\n> to allow for better resolution, as you propose. A second is a very\n> long amount of time, and it's entirely reasonable for someone to want\n> better granularity.\n\nHere's a v2 responding to some of the comments.\n\n* People pushed back against not using \"int64\", but the difficulty\nwith that is that we'd have to #include c.h or at least pg_config.h\nin libpq-fe.h, and that would be a totally disastrous invasion of\napplication namespace. However, I'd forgotten that libpq-fe.h\ndoes include postgres_ext.h, and there's just enough configure\ninfrastructure behind that to allow defining pg_int64, which indeed\nlibpq-fe.h is already relying on. So we can use that.\n\n* I decided to invent a typedef \n\n\ttypedef pg_int64 PGusec_time_t;\n\ninstead of writing \"pg_int64\" explicitly everywhere. 
This is perhaps\nnot as useful as it was when I was thinking the definition would be\n\"long long int\", but it still seems to add some readability. In my\neyes anyway ... anyone think differently?\n\n* I also undid changes like s/end_time/end_time_us/. I'd done that\nmostly to ensure I looked at/fixed every reference to those variables,\nbut on reflection I don't think it's doing anything for readability.\n\n* I took Ranier's suggestion to make fast paths for end_time == 0.\nI'm not sure this will make any visible performance difference, but\nit's simple and shouldn't hurt. We do have some code paths that use\nthat behavior.\n\n* Ranier also suggested using clock_gettime instead of gettimeofday,\nbut I see no reason to do that. libpq already relies on gettimeofday,\nbut not on clock_gettime, and anyway post-beta1 isn't a great time to\nstart experimenting with portability-relevant changes.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 12 Jun 2024 13:53:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "On Wed, Jun 12, 2024 at 1:53 PM Tom Lane <[email protected]> wrote:\n> * I decided to invent a typedef\n>\n> typedef pg_int64 PGusec_time_t;\n>\n> instead of writing \"pg_int64\" explicitly everywhere. This is perhaps\n> not as useful as it was when I was thinking the definition would be\n> \"long long int\", but it still seems to add some readability. In my\n> eyes anyway ... anyone think differently?\n\nI don't think it's a bad idea to have a typedef, but that particular\none is pretty unreadable. Mmm, let's separate some things with\nunderscores and others by a change in the capitalization conventIon!\n\nI assume you're following an existing convention and therefore this is\nthe Right Thing To Do, but if there's some other approach that is less\nlike line noise, that would be great.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 13:56:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Em qua., 12 de jun. de 2024 às 14:53, Tom Lane <[email protected]> escreveu:\n\n> Robert Haas <[email protected]> writes:\n> > I agree this is not great. I guess I didn't think about it very hard\n> > because, after all, we were just exposing an API that we'd already\n> > been using internally. But I think it's reasonable to adjust the API\n> > to allow for better resolution, as you propose. A second is a very\n> > long amount of time, and it's entirely reasonable for someone to want\n> > better granularity.\n>\n> Here's a v2 responding to some of the comments.\n>\n> * People pushed back against not using \"int64\", but the difficulty\n> with that is that we'd have to #include c.h or at least pg_config.h\n> in libpq-fe.h, and that would be a totally disastrous invasion of\n> application namespace. However, I'd forgotten that libpq-fe.h\n> does include postgres_ext.h, and there's just enough configure\n> infrastructure behind that to allow defining pg_int64, which indeed\n> libpq-fe.h is already relying on. So we can use that.\n>\n> * I decided to invent a typedef\n>\n> typedef pg_int64 PGusec_time_t;\n>\nPerhaps pg_timeusec?\nI think it combines with PQgetCurrentTimeUSec\n\n\n>\n> instead of writing \"pg_int64\" explicitly everywhere. 
This is perhaps\n> not as useful as it was when I was thinking the definition would be\n> \"long long int\", but it still seems to add some readability. In my\n> eyes anyway ... anyone think differently?\n>\n> * I also undid changes like s/end_time/end_time_us/. I'd done that\n> mostly to ensure I looked at/fixed every reference to those variables,\n> but on reflection I don't think it's doing anything for readability.\n>\nend_time seems much better to me.\n\n\n>\n> * I took Ranier's suggestion to make fast paths for end_time == 0.\n> I'm not sure this will make any visible performance difference, but\n> it's simple and shouldn't hurt. We do have some code paths that use\n> that behavior.\n>\nThanks.\n\n\n>\n> * Ranier also suggested using clock_gettime instead of gettimeofday,\n> but I see no reason to do that. libpq already relies on gettimeofday,\n> but not on clock_gettime, and anyway post-beta1 isn't a great time to\n> start experimenting with portability-relevant changes.\n>\nI agree.\nFor v18, it would be a case of thinking about not using it anymore\ngettimeofday, as it appears to be deprecated.\n\nbest regards,\nRanier Vilela\n\nEm qua., 12 de jun. de 2024 às 14:53, Tom Lane <[email protected]> escreveu:Robert Haas <[email protected]> writes:\n> I agree this is not great. I guess I didn't think about it very hard\n> because, after all, we were just exposing an API that we'd already\n> been using internally. But I think it's reasonable to adjust the API\n> to allow for better resolution, as you propose. A second is a very\n> long amount of time, and it's entirely reasonable for someone to want\n> better granularity.\n\nHere's a v2 responding to some of the comments.\n\n* People pushed back against not using \"int64\", but the difficulty\nwith that is that we'd have to #include c.h or at least pg_config.h\nin libpq-fe.h, and that would be a totally disastrous invasion of\napplication namespace.  However, I'd forgotten that libpq-fe.h\ndoes include postgres_ext.h, and there's just enough configure\ninfrastructure behind that to allow defining pg_int64, which indeed\nlibpq-fe.h is already relying on.  So we can use that.\n\n* I decided to invent a typedef \n\n        typedef pg_int64 PGusec_time_t;Perhaps pg_timeusec?I think it combines with PQgetCurrentTimeUSec \n\ninstead of writing \"pg_int64\" explicitly everywhere.  This is perhaps\nnot as useful as it was when I was thinking the definition would be\n\"long long int\", but it still seems to add some readability.  In my\neyes anyway ... anyone think differently?\n\n* I also undid changes like s/end_time/end_time_us/.  I'd done that\nmostly to ensure I looked at/fixed every reference to those variables,\nbut on reflection I don't think it's doing anything for readability.end_time seems much better to me. \n\n* I took Ranier's suggestion to make fast paths for end_time == 0.\nI'm not sure this will make any visible performance difference, but\nit's simple and shouldn't hurt.  We do have some code paths that use\nthat behavior.Thanks. \n\n* Ranier also suggested using clock_gettime instead of gettimeofday,\nbut I see no reason to do that.  libpq already relies on gettimeofday,\nbut not on clock_gettime, and anyway post-beta1 isn't a great time to\nstart experimenting with portability-relevant changes.I agree.For v18, it would be a case of thinking about not using it anymoregettimeofday, as it appears to be deprecated. 
best regards,Ranier Vilela", "msg_date": "Wed, 12 Jun 2024 15:14:33 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Jun 12, 2024 at 1:53 PM Tom Lane <[email protected]> wrote:\n>> * I decided to invent a typedef\n>> typedef pg_int64 PGusec_time_t;\n\n> I don't think it's a bad idea to have a typedef, but that particular\n> one is pretty unreadable. Mmm, let's separate some things with\n> underscores and others by a change in the capitalization conventIon!\n\n\"PG\" as a prefix for typedefs in libpq-fe.h is a pretty ancient\nprecedent. I'm not wedded to any of the rest of it --- do you\nhave a better idea?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:25:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "On Wed, Jun 12, 2024 at 2:25 PM Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > On Wed, Jun 12, 2024 at 1:53 PM Tom Lane <[email protected]> wrote:\n> >> * I decided to invent a typedef\n> >> typedef pg_int64 PGusec_time_t;\n>\n> > I don't think it's a bad idea to have a typedef, but that particular\n> > one is pretty unreadable. Mmm, let's separate some things with\n> > underscores and others by a change in the capitalization conventIon!\n>\n> \"PG\" as a prefix for typedefs in libpq-fe.h is a pretty ancient\n> precedent. I'm not wedded to any of the rest of it --- do you\n> have a better idea?\n\nHmm, well, one thing I notice is that most of the other typedefs in\nsrc/interfaces/libpq seem to do PGWordsLikeThis or PGwordsLikeThis\nrather than PGwords_like_this. There are a few that randomly do\npg_words_like_this, too. But I know of no specific precedent for how a\nmicrosecond type should be named.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:45:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Jun 12, 2024 at 2:25 PM Tom Lane <[email protected]> wrote:\n>> \"PG\" as a prefix for typedefs in libpq-fe.h is a pretty ancient\n>> precedent. I'm not wedded to any of the rest of it --- do you\n>> have a better idea?\n\n> Hmm, well, one thing I notice is that most of the other typedefs in\n> src/interfaces/libpq seem to do PGWordsLikeThis or PGwordsLikeThis\n> rather than PGwords_like_this. There are a few that randomly do\n> pg_words_like_this, too. But I know of no specific precedent for how a\n> microsecond type should be named.\n\nHmm ... pg_int64 is the only such typedef I'm seeing in that file.\nBut okay, it's a precedent. The thing I'm having difficulty with\nis that I'd like the typedef name to allude to time_t, and I don't\nthink fooling with the casing of that will be helpful in making\nthe allusion stick. So how about one of\n\n\tpg_usec_time_t\n\tpg_time_t_usec\n\n?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jun 2024 15:00:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" 
}, { "msg_contents": "On Wed, Jun 12, 2024 at 3:00 PM Tom Lane <[email protected]> wrote:\n> Hmm ... pg_int64 is the only such typedef I'm seeing in that file.\n\nI grepped the whole directory for '^} '.\n\n> But okay, it's a precedent. The thing I'm having difficulty with\n> is that I'd like the typedef name to allude to time_t, and I don't\n> think fooling with the casing of that will be helpful in making\n> the allusion stick. So how about one of\n>\n> pg_usec_time_t\n> pg_time_t_usec\n>\n> ?\n\nThe former seems better to me, since having _t not at the end does not\nseem too intuitive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 15:08:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Jun 12, 2024 at 3:00 PM Tom Lane <[email protected]> wrote:\n>> So how about one of\n>> \tpg_usec_time_t\n>> \tpg_time_t_usec\n>> ?\n\n> The former seems better to me, since having _t not at the end does not\n> seem too intuitive.\n\nTrue. We can guess about how POSIX might spell this if they ever\ninvent the concept, but one choice they certainly would not make\nis time_t_usec.\n\nv3 attached uses pg_usec_time_t, and fixes one brown-paper-bag\nbug the cfbot noticed in v2.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 12 Jun 2024 15:29:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "I wrote:\n> v3 attached uses pg_usec_time_t, and fixes one brown-paper-bag\n> bug the cfbot noticed in v2.\n\nOh, I just remembered that there's a different bit of\npqConnectDBComplete that we could simplify now:\n\n if (timeout > 0)\n {\n /*\n * Rounding could cause connection to fail unexpectedly quickly;\n * to prevent possibly waiting hardly-at-all, insist on at least\n * two seconds.\n */\n if (timeout < 2)\n timeout = 2;\n }\n else /* negative means 0 */\n timeout = 0;\n\nWith this infrastructure, there's no longer any need to discriminate\nagainst timeout == 1 second, so we might as well reduce this to\n\n if (timeout < 0)\n timeout = 0;\n\nas it's done elsewhere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jun 2024 15:45:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Em qua., 12 de jun. 
de 2024 às 16:45, Tom Lane <[email protected]> escreveu:\n\n> I wrote:\n> > v3 attached uses pg_usec_time_t, and fixes one brown-paper-bag\n> > bug the cfbot noticed in v2.\n>\n> Oh, I just remembered that there's a different bit of\n> pqConnectDBComplete that we could simplify now:\n>\n> if (timeout > 0)\n> {\n> /*\n> * Rounding could cause connection to fail unexpectedly\n> quickly;\n> * to prevent possibly waiting hardly-at-all, insist on at\n> least\n> * two seconds.\n> */\n> if (timeout < 2)\n> timeout = 2;\n> }\n> else /* negative means 0 */\n> timeout = 0;\n>\n> With this infrastructure, there's no longer any need to discriminate\n> against timeout == 1 second, so we might as well reduce this to\n>\n> if (timeout < 0)\n> timeout = 0;\n>\n> as it's done elsewhere.\n>\nI'm unsure if the documentation matches the code?\n\" connect_timeout #\n<https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-CONNECT-TIMEOUT>\n\nMaximum time to wait while connecting, in seconds (write as a decimal\ninteger, e.g., 10). Zero, negative, or not specified means wait\nindefinitely. The minimum allowed timeout is 2 seconds, therefore a value\nof 1 is interpreted as 2. This timeout applies separately to each host name\nor IP address. For example, if you specify two hosts and connect_timeout is\n5, each host will time out if no connection is made within 5 seconds, so\nthe total time spent waiting for a connection might be up to 10 seconds.\n\"\nThe comments says that timeout = 0, means *Timeout is immediate (no\nblocking)*\n\nDoes the word \"indefinitely\" mean infinite?\nIf yes, connect_timeout = -1, mean infinite?\n\nbest regards,\nRanier Vilela\n\nEm qua., 12 de jun. de 2024 às 16:45, Tom Lane <[email protected]> escreveu:I wrote:\n> v3 attached uses pg_usec_time_t, and fixes one brown-paper-bag\n> bug the cfbot noticed in v2.\n\nOh, I just remembered that there's a different bit of\npqConnectDBComplete that we could simplify now:\n\n        if (timeout > 0)\n        {\n            /*\n             * Rounding could cause connection to fail unexpectedly quickly;\n             * to prevent possibly waiting hardly-at-all, insist on at least\n             * two seconds.\n             */\n            if (timeout < 2)\n                timeout = 2;\n        }\n        else                    /* negative means 0 */\n            timeout = 0;\n\nWith this infrastructure, there's no longer any need to discriminate\nagainst timeout == 1 second, so we might as well reduce this to\n\n        if (timeout < 0)\n            timeout = 0;\n\nas it's done elsewhere.I'm unsure if the documentation matches the code?\"\n\n \n\nconnect_timeout #\nMaximum time to wait while connecting, in seconds (write as a decimal integer, e.g., 10). Zero, negative, or not specified means wait indefinitely. The minimum allowed timeout is 2 seconds, therefore a value of 1 is interpreted as 2. This timeout applies separately to each host name or IP address. 
For example, if you specify two hosts and connect_timeout\n is 5, each host will time out if no connection is made within 5 \nseconds, so the total time spent waiting for a connection might be up to\n 10 seconds.\"The comments says that timeout = 0, means *Timeout is immediate (no blocking)*Does the word \"indefinitely\" mean infinite?If yes, connect_timeout = -1, mean infinite?best regards,Ranier Vilela", "msg_date": "Thu, 13 Jun 2024 08:43:50 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Ranier Vilela <[email protected]> writes:\n> I'm unsure if the documentation matches the code?\n> \" connect_timeout #\n> <https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-CONNECT-TIMEOUT>\n\n> Maximum time to wait while connecting, in seconds (write as a decimal\n> integer, e.g., 10). Zero, negative, or not specified means wait\n> indefinitely. The minimum allowed timeout is 2 seconds, therefore a value\n> of 1 is interpreted as 2. This timeout applies separately to each host name\n> or IP address. For example, if you specify two hosts and connect_timeout is\n> 5, each host will time out if no connection is made within 5 seconds, so\n> the total time spent waiting for a connection might be up to 10 seconds.\n> \"\n> The comments says that timeout = 0, means *Timeout is immediate (no\n> blocking)*\n\n> Does the word \"indefinitely\" mean infinite?\n> If yes, connect_timeout = -1, mean infinite?\n\nThe sentence about \"minimum allowed timeout is 2 seconds\" has to go\naway, but the rest of that seems fine.\n\nBut now that you mention it, we could drop the vestigial\n\n>> if (timeout < 0)\n>> timeout = 0;\n\nas well, because the rest of the function only applies the timeout\nwhen \"timeout > 0\". Otherwise end_time (nee finish_time) stays at -1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Jun 2024 11:25:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Em qui., 13 de jun. de 2024 às 12:25, Tom Lane <[email protected]> escreveu:\n\n> Ranier Vilela <[email protected]> writes:\n> > I'm unsure if the documentation matches the code?\n> > \" connect_timeout #\n> > <\n> https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-CONNECT-TIMEOUT\n> >\n>\n> > Maximum time to wait while connecting, in seconds (write as a decimal\n> > integer, e.g., 10). Zero, negative, or not specified means wait\n> > indefinitely. The minimum allowed timeout is 2 seconds, therefore a value\n> > of 1 is interpreted as 2. This timeout applies separately to each host\n> name\n> > or IP address. For example, if you specify two hosts and connect_timeout\n> is\n> > 5, each host will time out if no connection is made within 5 seconds, so\n> > the total time spent waiting for a connection might be up to 10 seconds.\n> > \"\n> > The comments says that timeout = 0, means *Timeout is immediate (no\n> > blocking)*\n>\n> > Does the word \"indefinitely\" mean infinite?\n> > If yes, connect_timeout = -1, mean infinite?\n>\n> The sentence about \"minimum allowed timeout is 2 seconds\" has to go\n> away, but the rest of that seems fine.\n>\n> But now that you mention it, we could drop the vestigial\n>\n> >> if (timeout < 0)\n> >> timeout = 0;\n>\n> as well, because the rest of the function only applies the timeout\n> when \"timeout > 0\". 
Otherwise end_time (nee finish_time) stays at -1.\n>\nI think that's OK Tom.\n\n+1 for push.\n\nbest regards,\nRanier Vilela\n\nEm qui., 13 de jun. de 2024 às 12:25, Tom Lane <[email protected]> escreveu:Ranier Vilela <[email protected]> writes:\n> I'm unsure if the documentation matches the code?\n> \" connect_timeout #\n> <https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-CONNECT-TIMEOUT>\n\n> Maximum time to wait while connecting, in seconds (write as a decimal\n> integer, e.g., 10). Zero, negative, or not specified means wait\n> indefinitely. The minimum allowed timeout is 2 seconds, therefore a value\n> of 1 is interpreted as 2. This timeout applies separately to each host name\n> or IP address. For example, if you specify two hosts and connect_timeout is\n> 5, each host will time out if no connection is made within 5 seconds, so\n> the total time spent waiting for a connection might be up to 10 seconds.\n> \"\n> The comments says that timeout = 0, means *Timeout is immediate (no\n> blocking)*\n\n> Does the word \"indefinitely\" mean infinite?\n> If yes, connect_timeout = -1, mean infinite?\n\nThe sentence about \"minimum allowed timeout is 2 seconds\" has to go\naway, but the rest of that seems fine.\n\nBut now that you mention it, we could drop the vestigial\n\n>>         if (timeout < 0)\n>>             timeout = 0;\n\nas well, because the rest of the function only applies the timeout\nwhen \"timeout > 0\".  Otherwise end_time (nee finish_time) stays at -1.I think that's OK Tom.+1 for push.best regards,Ranier Vilela", "msg_date": "Thu, 13 Jun 2024 13:30:06 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "Ranier Vilela <[email protected]> writes:\n> +1 for push.\n\nDone. I noticed in final review that libpq-fe.h's \"#include <time.h>\",\nwhich I'd feared to remove because I thought we'd shipped that\nalready, actually was new in f5e4dedfa. So there shouldn't be\nanything depending on it, and I thought it best to take it out again.\nWidely-used headers shouldn't have unnecessary inclusions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Jun 2024 15:18:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" }, { "msg_contents": "On Thu, Jun 13, 2024 at 9:18 PM Tom Lane <[email protected]> wrote:\n> Ranier Vilela <[email protected]> writes:\n> > +1 for push.\n>\n> Done. [...]\n\nThanks a lot Tom (and reviewers)! --DD\n\n\n", "msg_date": "Fri, 14 Jun 2024 09:16:25 +0200", "msg_from": "Dominique Devienne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the granularity of PQsocketPoll's timeout parameter?" } ]
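To tie the thread above together, here is a minimal sketch of how a libpq client can use the microsecond-granularity API it converged on: the pg_usec_time_t typedef (microseconds since the Unix epoch), PQgetCurrentTimeUSec(), and PQsocketPoll() taking a pg_usec_time_t end_time, where -1 waits indefinitely and 0 returns immediately. The helper name, connection string, and 250 ms budget are illustrative placeholders rather than anything specified in the thread.

/*
 * Minimal client-side sketch, assuming the API as committed for this
 * thread (pg_usec_time_t, PQgetCurrentTimeUSec, PQsocketPoll taking a
 * pg_usec_time_t end_time).  Names and values here are illustrative.
 */
#include <stdio.h>
#include <libpq-fe.h>

static int
wait_for_readable(PGconn *conn, pg_usec_time_t budget_us)
{
	int			sock = PQsocket(conn);
	pg_usec_time_t end_time;

	if (sock < 0)
		return -1;

	/* end_time = -1 waits indefinitely, 0 polls without blocking */
	if (budget_us < 0)
		end_time = -1;
	else
		end_time = PQgetCurrentTimeUSec() + budget_us;

	/* > 0: condition met, 0: timeout expired, < 0: error */
	return PQsocketPoll(sock, 1 /* forRead */ , 0 /* forWrite */ , end_time);
}

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=postgres");
	int			rc;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	/* Wait up to 250 milliseconds for the server socket to become readable */
	rc = wait_for_readable(conn, 250 * 1000);
	printf("PQsocketPoll returned %d\n", rc);

	PQfinish(conn);
	return 0;
}

Computing end_time from PQgetCurrentTimeUSec() rather than time(NULL) avoids the somewhere-between-0-and-1-second rounding problem that motivated the change in the first place.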
[ { "msg_contents": "Hi hackers,\n\nBuilding on bf279ddd1c, this patch introduces a GUC\n'standby_slot_names_from_syncrep' which allows logical failover slots\nto wait for changes to have been synchronously replicated before sending\nthe decoded changes to logical subscribers.\n\nThe existing 'standby_slot_names' isn't great for users who are running\nclusters with quorum-based synchronous replicas. For instance, if\nthe user has synchronous_standby_names = 'ANY 3 (A,B,C,D,E)' it's a\nbit tedious to have to reconfigure the standby_slot_names to set it to\nthe most updated 3 sync replicas whenever different sync replicas start\nlagging. In the event that both GUCs are set, 'standby_slot_names' takes\nprecedence.\n\nI did some very brief pgbench runs to compare the latency. Client instance\nwas running pgbench and 10 logical clients while the Postgres box hosted\nthe writer and 5 synchronous replicas.\n\nThere's a hit to TPS, which I'm thinking is due to more contention on the\nSyncRepLock, and that scales with the number of logical walsenders. I'm\nguessing we can avoid this if we introduce another set of\nlsn[NUM_SYNC_REP_WAIT_MODE] and have the logical walsenders check\nand wait on that instead but I wasn't sure if that's the right approach.\n\npgbench numbers:\n\n// Empty standby_slot_names_from_syncrep\nquery mode: simple\nnumber of clients: 8\nnumber of threads: 8\nmaximum number of tries: 1\nduration: 1800 s\nnumber of transactions actually processed: 1720173\nnumber of failed transactions: 0 (0.000%)\nlatency average = 8.371 ms\ninitial connection time = 7.963 ms\ntps = 955.651025 (without initial connection time)\n\n// standby_slot_names_from_syncrep = 'true'\nscaling factor: 200\nquery mode: simple\nnumber of clients: 8\nnumber of threads: 8\nmaximum number of tries: 1\nduration: 1800 s\nnumber of transactions actually processed: 1630105\nnumber of failed transactions: 0 (0.000%)\nlatency average = 8.834 ms\ninitial connection time = 7.670 ms\ntps = 905.610230 (without initial connection time)\n\nThanks,\n\n--\nJohn Hsu - Amazon Web Services", "msg_date": "Mon, 10 Jun 2024 15:51:05 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Mon, Jun 10, 2024 at 03:51:05PM -0700, John H wrote:\n> The existing 'standby_slot_names' isn't great for users who are running\n> clusters with quorum-based synchronous replicas. For instance, if\n> the user has synchronous_standby_names = 'ANY 3 (A,B,C,D,E)' it's a\n> bit tedious to have to reconfigure the standby_slot_names to set it to\n> the most updated 3 sync replicas whenever different sync replicas start\n> lagging. In the event that both GUCs are set, 'standby_slot_names' takes\n> precedence.\n\nHm. IIUC you'd essentially need to set standby_slot_names to \"A,B,C,D,E\"\nto get the desired behavior today. That might ordinarily be okay, but it\ncould cause logical replication to be held back unnecessarily if one of the\nreplicas falls behind for whatever reason. 
A way to tie standby_slot_names\nto synchronous replication instead does seem like it would be useful in\nthis case.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 10 Jun 2024 21:25:10 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 10, 2024 at 09:25:10PM -0500, Nathan Bossart wrote:\n> On Mon, Jun 10, 2024 at 03:51:05PM -0700, John H wrote:\n> > The existing 'standby_slot_names' isn't great for users who are running\n> > clusters with quorum-based synchronous replicas. For instance, if\n> > the user has synchronous_standby_names = 'ANY 3 (A,B,C,D,E)' it's a\n> > bit tedious to have to reconfigure the standby_slot_names to set it to\n> > the most updated 3 sync replicas whenever different sync replicas start\n> > lagging. In the event that both GUCs are set, 'standby_slot_names' takes\n> > precedence.\n> \n> Hm. IIUC you'd essentially need to set standby_slot_names to \"A,B,C,D,E\"\n> to get the desired behavior today. That might ordinarily be okay, but it\n> could cause logical replication to be held back unnecessarily if one of the\n> replicas falls behind for whatever reason. A way to tie standby_slot_names\n> to synchronous replication instead does seem like it would be useful in\n> this case.\n\nFWIW, I have the same understanding and also think your proposal would be\nuseful in this case.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Jun 2024 10:00:46 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 11, 2024 at 10:00:46AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Mon, Jun 10, 2024 at 09:25:10PM -0500, Nathan Bossart wrote:\n> > On Mon, Jun 10, 2024 at 03:51:05PM -0700, John H wrote:\n> > > The existing 'standby_slot_names' isn't great for users who are running\n> > > clusters with quorum-based synchronous replicas. For instance, if\n> > > the user has synchronous_standby_names = 'ANY 3 (A,B,C,D,E)' it's a\n> > > bit tedious to have to reconfigure the standby_slot_names to set it to\n> > > the most updated 3 sync replicas whenever different sync replicas start\n> > > lagging. In the event that both GUCs are set, 'standby_slot_names' takes\n> > > precedence.\n> > \n> > Hm. IIUC you'd essentially need to set standby_slot_names to \"A,B,C,D,E\"\n> > to get the desired behavior today. That might ordinarily be okay, but it\n> > could cause logical replication to be held back unnecessarily if one of the\n> > replicas falls behind for whatever reason. A way to tie standby_slot_names\n> > to synchronous replication instead does seem like it would be useful in\n> > this case.\n> \n> FWIW, I have the same understanding and also think your proposal would be\n> useful in this case.\n\nA few random comments about v1:\n\n1 ====\n\n+ int mode = SyncRepWaitMode;\n\nIt's set to SyncRepWaitMode and then never change. Worth to get rid of \"mode\"?\n\n2 ====\n\n+ static XLogRecPtr lsn[NUM_SYNC_REP_WAIT_MODE] = {InvalidXLogRecPtr};\n\nI did some testing and saw that the lsn[] values were not always set to\nInvalidXLogRecPtr right after. It looks like that, in that case, we should\navoid setting the lsn[] values at compile time. Then, what about?\n\n1. remove the \"static\". 
\n\nor\n\n2. keep the static but set the lsn[] values after its declaration.\n\n3 ====\n\n- /*\n- * Return false if not all the standbys have caught up to the specified\n- * WAL location.\n- */\n- if (caught_up_slot_num != standby_slot_names_config->nslotnames)\n- return false;\n+ if (!XLogRecPtrIsInvalid(lsn[mode]) && lsn[mode] >= wait_for_lsn)\n+ return true;\n\nlsn[] values are/(should have been, see 2 above) just been initialized to\nInvalidXLogRecPtr so that XLogRecPtrIsInvalid(lsn[mode]) will always return\ntrue. I think this check is not needed then.\n\n4 ====\n\n> > > I did some very brief pgbench runs to compare the latency. Client instance\n> > > was running pgbench and 10 logical clients while the Postgres box hosted\n> > > the writer and 5 synchronous replicas.\n\n> > > There's a hit to TPS\n\nOut of curiosity, did you compare with standby_slot_names_from_syncrep set to off\nand standby_slot_names not empty?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 17 Jun 2024 15:19:50 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Tue, Jun 11, 2024 at 4:21 AM John H <[email protected]> wrote:\n>\n> Building on bf279ddd1c, this patch introduces a GUC\n> 'standby_slot_names_from_syncrep' which allows logical failover slots\n> to wait for changes to have been synchronously replicated before sending\n> the decoded changes to logical subscribers.\n>\n> The existing 'standby_slot_names' isn't great for users who are running\n> clusters with quorum-based synchronous replicas. For instance, if\n> the user has synchronous_standby_names = 'ANY 3 (A,B,C,D,E)' it's a\n> bit tedious to have to reconfigure the standby_slot_names to set it to\n> the most updated 3 sync replicas whenever different sync replicas start\n> lagging. In the event that both GUCs are set, 'standby_slot_names' takes\n> precedence.\n>\n> I did some very brief pgbench runs to compare the latency. Client instance\n> was running pgbench and 10 logical clients while the Postgres box hosted\n> the writer and 5 synchronous replicas.\n>\n> There's a hit to TPS, which I'm thinking is due to more contention on the\n> SyncRepLock, and that scales with the number of logical walsenders. I'm\n> guessing we can avoid this if we introduce another set of\n> lsn[NUM_SYNC_REP_WAIT_MODE] and have the logical walsenders check\n> and wait on that instead but I wasn't sure if that's the right approach.\n>\n> pgbench numbers:\n>\n> // Empty standby_slot_names_from_syncrep\n> query mode: simple\n..\n> latency average = 8.371 ms\n> initial connection time = 7.963 ms\n> tps = 955.651025 (without initial connection time)\n>\n> // standby_slot_names_from_syncrep = 'true'\n> scaling factor: 200\n...\n> latency average = 8.834 ms\n> initial connection time = 7.670 ms\n> tps = 905.610230 (without initial connection time)\n>\n\nThe reading indicates when you set 'standby_slot_names_from_syncrep',\nthe TPS reduces as compared to when it is not set. It would be better\nto see the data comparing 'standby_slot_names_from_syncrep' and the\nexisting parameter 'standby_slot_names'.\n\nI see the value in your idea but was wondering if can we do something\nwithout introducing a new GUC for this. 
Can we make it a default\nbehavior that logical slots marked with a failover option will wait\nfor 'synchronous_standby_names' as per your patch's idea unless\n'standby_slot_names' is specified? I don't know if there is any value\nin setting the 'failover' option for a slot without specifying\n'standby_slot_names', so was wondering if we can additionally tie it\nto 'synchronous_standby_names'. Any better ideas?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 Jun 2024 11:04:34 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi,\n\nThanks Bertrand for taking a look at the patch.\n\nOn Mon, Jun 17, 2024 at 8:19 AM Bertrand Drouvot\n<[email protected]> wrote:\n\n>\n> + int mode = SyncRepWaitMode;\n>\n> It's set to SyncRepWaitMode and then never change. Worth to get rid of \"mode\"?\n>\n\nI took a deeper look at this with GDB and I think it's necessary to\ncache the value of \"mode\".\nWe first check:\n\nif (mode == SYNC_REP_NO_WAIT)\nreturn true;\n\nHowever after this check it's possible to receive a SIGHUP changing\nSyncRepWaitMode\nto SYNC_REP_NO_WAIT (e.g. synchronous_commit = 'on' -> 'off'), leading\nto lsn[-1].\n\n> 2 ====\n>\n> + static XLogRecPtr lsn[NUM_SYNC_REP_WAIT_MODE] = {InvalidXLogRecPtr};\n>\n> I did some testing and saw that the lsn[] values were not always set to\n> InvalidXLogRecPtr right after. It looks like that, in that case, we should\n> avoid setting the lsn[] values at compile time. Then, what about?\n>\n> 1. remove the \"static\".\n>\n> or\n>\n> 2. keep the static but set the lsn[] values after its declaration.\n\nI'd prefer to keep the static because it reduces unnecessary\ncontention on SyncRepLock if logical client has fallen behind.\nI'll add a change with your second suggestion.\n\n\n\n> 3 ====\n>\n> - /*\n> - * Return false if not all the standbys have caught up to the specified\n> - * WAL location.\n> - */\n> - if (caught_up_slot_num != standby_slot_names_config->nslotnames)\n> - return false;\n> + if (!XLogRecPtrIsInvalid(lsn[mode]) && lsn[mode] >= wait_for_lsn)\n> + return true;\n>\n> lsn[] values are/(should have been, see 2 above) just been initialized to\n> InvalidXLogRecPtr so that XLogRecPtrIsInvalid(lsn[mode]) will always return\n> true. 
I think this check is not needed then.\n\nRemoved.\n\n> Out of curiosity, did you compare with standby_slot_names_from_syncrep set to off\n> and standby_slot_names not empty?\n\nI didn't think 'standby_slot_names' would impact TPS as much since\nit's not grabbing the SyncRepLock but here's a quick test.\nWriter with 5 synchronous replicas, 10 pg_recvlogical clients and\npgbench all running from the same server.\n\nCommand: pgbench -c 4 -j 4 -T 600 -U \"ec2-user\" -d postgres -r -P 5\n\nResult with: standby_slot_names =\n'replica_1,replica_2,replica_3,replica_4,replica_5'\n\nlatency average = 5.600 ms\nlatency stddev = 2.854 ms\ninitial connection time = 5.503 ms\ntps = 714.148263 (without initial connection time)\n\nResult with: standby_slot_names_from_syncrep = 'true',\nsynchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n\nlatency average = 5.740 ms\nlatency stddev = 2.543 ms\ninitial connection time = 4.093 ms\ntps = 696.776249 (without initial connection time)\n\nResult with nothing set:\n\nlatency average = 5.090 ms\nlatency stddev = 3.467 ms\ninitial connection time = 4.989 ms\ntps = 785.665963 (without initial connection time)\n\nAgain I think it's possible to improve the synchronous numbers if we\ncache but I'll try that out in a bit.\n\n\n\n-- \nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Mon, 8 Jul 2024 12:08:58 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi Amit,\n\nThanks for taking a look.\n\nOn Tue, Jun 18, 2024 at 10:34 PM Amit Kapila <[email protected]> wrote:\n>\n\n>\n> The reading indicates when you set 'standby_slot_names_from_syncrep',\n> the TPS reduces as compared to when it is not set. It would be better\n> to see the data comparing 'standby_slot_names_from_syncrep' and the\n> existing parameter 'standby_slot_names'.\n\nI added new benchmark numbers in the reply to Bertrand, but I'll\ninclude in this thread for posterity.\n\nWriter with 5 synchronous replicas, 10 pg_recvlogical clients and\npgbench all running from the same server.\nCommand: pgbench -c 4 -j 4 -T 600 -U \"ec2-user\" -d postgres -r -P 5\n\nResult with: standby_slot_names =\n'replica_1,replica_2,replica_3,replica_4,replica_5'\n\nlatency average = 5.600 ms\nlatency stddev = 2.854 ms\ninitial connection time = 5.503 ms\ntps = 714.148263 (without initial connection time)\n\nResult with: standby_slot_names_from_syncrep = 'true',\nsynchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n\nlatency average = 5.740 ms\nlatency stddev = 2.543 ms\ninitial connection time = 4.093 ms\ntps = 696.776249 (without initial connection time)\n\nResult with nothing set:\n\nlatency average = 5.090 ms\nlatency stddev = 3.467 ms\ninitial connection time = 4.989 ms\ntps = 785.665963 (without initial connection time)\n\n> Can we make it a default\n> behavior that logical slots marked with a failover option will wait\n> for 'synchronous_standby_names' as per your patch's idea unless\n> 'standby_slot_names' is specified? I don't know if there is any value\n> in setting the 'failover' option for a slot without specifying\n> 'standby_slot_names', so was wondering if we can additionally tie it\n> to 'synchronous_standby_names'. Any better ideas?\n>\n\nNo, I think that works pretty cleanly actually. Reserving some special\nkeyword isn't great\nwhich is the only other thing I can think of. 
I've updated the patch\nand tests to reflect that.\n\nAttached the patch that addresses these changes.\n\n\n\n-- \nJohn Hsu - Amazon Web Services", "msg_date": "Mon, 8 Jul 2024 12:12:13 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 08, 2024 at 12:08:58PM -0700, John H wrote:\n> I took a deeper look at this with GDB and I think it's necessary to\n> cache the value of \"mode\".\n> We first check:\n> \n> if (mode == SYNC_REP_NO_WAIT)\n> return true;\n> \n> However after this check it's possible to receive a SIGHUP changing\n> SyncRepWaitMode\n> to SYNC_REP_NO_WAIT (e.g. synchronous_commit = 'on' -> 'off'), leading\n> to lsn[-1].\n\nWhat about adding \"SyncRepWaitMode\" as a third StandbySlotsHaveCaughtup()\nparameter then? (so that the function will used whatever value was passed during\nthe call).\n\n> > 2 ====\n> >\n> > + static XLogRecPtr lsn[NUM_SYNC_REP_WAIT_MODE] = {InvalidXLogRecPtr};\n> >\n> > I did some testing and saw that the lsn[] values were not always set to\n> > InvalidXLogRecPtr right after. It looks like that, in that case, we should\n> > avoid setting the lsn[] values at compile time. Then, what about?\n> >\n> > 1. remove the \"static\".\n> >\n> > or\n> >\n> > 2. keep the static but set the lsn[] values after its declaration.\n> \n> I'd prefer to keep the static because it reduces unnecessary\n> contention on SyncRepLock if logical client has fallen behind.\n> I'll add a change with your second suggestion.\n\nGot it, you want lsn[] to be initialized only one time and that each call to\nStandbySlotsHaveCaughtup() relies on the values that were previously stored in \nlsn[] and then return if lsn[mode] >= wait_for_lsn.\n\nThen I think that:\n\n1 ===\n\nThat's worth additional comments in the code.\n\n2 ===\n\n+ if (!initialized)\n+ {\n+ for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n+ {\n+ lsn[i] = InvalidXLogRecPtr;\n+ }\n+ }\n\nLooks like setting initialized to true is missing once done.\n\nAlso,\n\n3 ===\n\n+ /* Cache values to reduce contention on lock */\n+ for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n+ {\n+ lsn[i] = walsndctl->lsn[i];\n+ }\n\nNUM_SYNC_REP_WAIT_MODE is small but as the goal is the keep the lock time as\nshort as possible I wonder if it wouldn't be better to use memcpy() here instead\nof this for loop.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jul 2024 16:19:12 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Tue, Jul 9, 2024 at 12:42 AM John H <[email protected]> wrote:\n>\n>\n> > Can we make it a default\n> > behavior that logical slots marked with a failover option will wait\n> > for 'synchronous_standby_names' as per your patch's idea unless\n> > 'standby_slot_names' is specified? I don't know if there is any value\n> > in setting the 'failover' option for a slot without specifying\n> > 'standby_slot_names', so was wondering if we can additionally tie it\n> > to 'synchronous_standby_names'. Any better ideas?\n> >\n>\n> No, I think that works pretty cleanly actually. Reserving some special\n> keyword isn't great\n> which is the only other thing I can think of. 
I've updated the patch\n> and tests to reflect that.\n>\n> Attached the patch that addresses these changes.\n\nThank You for the patch. I like the overall idea, it is a useful one\nand will make user experience better. Please find few comments.\n\n1)\nI agree with the idea that instead of introducing new GUC, we can make\nfailover logical slots wait on 'synchronous_standby_names', but this\nwill leave user no option to unlink failover-enabled logical\nsubscribers from the wait on synchronous standbys. We provide user a\nway to switch off automatic slot-sync by disabling\n'sync_replication_slots' and we also provide user a way to manually\nsync the slots using function 'pg_sync_replication_slots()' and\nnowhere we make it mandatory to set 'synchronized_standby_slots'; in\nfact in docs, it is a recommended setting and not a mandatory one.\nUser can always create failover slots, switch off automatic slot sync\nand disable wait on standbys by not specifying\n'synchronized_standby_slots' and do the slot-sync and consistency\nchecks manually when needed. I feel, for worst case scenarios, we\nshould provide user an option to delink failover-enabled logical\nsubscriptions from 'synchronous_standby_names'.\n\nWe can have 'synchronized_standby_slots' (standby_slot_names) to\naccept one such option as in 'SAME_AS_SYNCREP_STANDBYS' or say\n'DEFAULT'. So when 'synchronous_standby_names' is comma separated\nlist, we pick those slots; if it is empty, then no wait on standbys,\nand if its value is 'DEFAULT' as configured by user, then go with\n'synchronous_standby_names'. Thoughts?\n\n\n2)\nWhen 'synchronized_standby_slots' is configured but standby named in\nit is down blocking logical replication, then we get a WARNING in\nsubscriber's log file:\n\nWARNING: replication slot \"standby_2\" specified in parameter\nsynchronized_standby_slots does not have active_pid.\nDETAIL: Logical replication is waiting on the standby associated with\n\"standby_2\".\nHINT: Consider starting standby associated with \"standby_2\" or amend\nparameter synchronized_standby_slots.\n\nBut OTOH, when 'synchronous_standby_names' is configured instead of\n'synchronized_standby_slots' and any of the standbys listed is down\nblocking logical replication, we do not get any sort of warning. It is\ninconsistent behavior. Also user might be left clueless on why\nsubscribers are not getting changes.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 18 Jul 2024 09:25:37 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Thu, Jul 18, 2024 at 9:25 AM shveta malik <[email protected]> wrote:\n>\n> On Tue, Jul 9, 2024 at 12:42 AM John H <[email protected]> wrote:\n> >\n> >\n> > > Can we make it a default\n> > > behavior that logical slots marked with a failover option will wait\n> > > for 'synchronous_standby_names' as per your patch's idea unless\n> > > 'standby_slot_names' is specified? I don't know if there is any value\n> > > in setting the 'failover' option for a slot without specifying\n> > > 'standby_slot_names', so was wondering if we can additionally tie it\n> > > to 'synchronous_standby_names'. Any better ideas?\n> > >\n> >\n> > No, I think that works pretty cleanly actually. Reserving some special\n> > keyword isn't great\n> > which is the only other thing I can think of. 
I've updated the patch\n> > and tests to reflect that.\n> >\n> > Attached the patch that addresses these changes.\n>\n> Thank You for the patch. I like the overall idea, it is a useful one\n> and will make user experience better. Please find few comments.\n>\n> 1)\n> I agree with the idea that instead of introducing new GUC, we can make\n> failover logical slots wait on 'synchronous_standby_names', but this\n> will leave user no option to unlink failover-enabled logical\n> subscribers from the wait on synchronous standbys. We provide user a\n> way to switch off automatic slot-sync by disabling\n> 'sync_replication_slots' and we also provide user a way to manually\n> sync the slots using function 'pg_sync_replication_slots()' and\n> nowhere we make it mandatory to set 'synchronized_standby_slots'; in\n> fact in docs, it is a recommended setting and not a mandatory one.\n> User can always create failover slots, switch off automatic slot sync\n> and disable wait on standbys by not specifying\n> 'synchronized_standby_slots' and do the slot-sync and consistency\n> checks manually when needed. I feel, for worst case scenarios, we\n> should provide user an option to delink failover-enabled logical\n> subscriptions from 'synchronous_standby_names'.\n>\n> We can have 'synchronized_standby_slots' (standby_slot_names) to\n> accept one such option as in 'SAME_AS_SYNCREP_STANDBYS' or say\n> 'DEFAULT'. So when 'synchronous_standby_names' is comma separated\n> list, we pick those slots; if it is empty, then no wait on standbys,\n> and if its value is 'DEFAULT' as configured by the user, then go with\n> 'synchronous_standby_names'. Thoughts?\n\nOne correction here\n('synchronous_standby_names-->synchronized_standby_slots). Corrected\nversion:\n\nSo when 'synchronized_standby_slots' is comma separated list, we pick\nthose slots; if it is empty, then no wait on standbys, and if its\nvalue is 'DEFAULT' as configured by user, then go with\n'synchronous_standby_names'. Thoughts?\n\n>\n> 2)\n> When 'synchronized_standby_slots' is configured but standby named in\n> it is down blocking logical replication, then we get a WARNING in\n> subscriber's log file:\n>\n> WARNING: replication slot \"standby_2\" specified in parameter\n> synchronized_standby_slots does not have active_pid.\n> DETAIL: Logical replication is waiting on the standby associated with\n> \"standby_2\".\n> HINT: Consider starting standby associated with \"standby_2\" or amend\n> parameter synchronized_standby_slots.\n>\n> But OTOH, when 'synchronous_standby_names' is configured instead of\n> 'synchronized_standby_slots' and any of the standbys listed is down\n> blocking logical replication, we do not get any sort of warning. It is\n> inconsistent behavior. Also user might be left clueless on why\n> subscribers are not getting changes.\n>\n> thanks\n> Shveta\n\n\n", "msg_date": "Thu, 18 Jul 2024 09:57:17 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi Shveta,\n\nThanks for taking a look at the patch.\n\n> > will leave user no option to unlink failover-enabled logical\n> > subscribers from the wait on synchronous standbys.\n\nThat's a good point. 
I'm a bit biased in that I don't think there's a\ngreat reason why someone would\nwant to replicate logical changes out of the synchronous cluster\nwithout it having been synchronously replicated\n but yes this would be different behavior compared to strictly the slot one.\n\n> ...\n> So when 'synchronized_standby_slots' is comma separated list, we pick\n> those slots; if it is empty, then no wait on standbys, and if its\n> value is 'DEFAULT' as configured by user, then go with\n> 'synchronous_standby_names'. Thoughts?\n\nI think I'd prefer having a separate GUC if the alternative is to reserve\nspecial keywords in 'synchronized_standby_slots' but I'm not sure if I\nfeel strongly about that.\n\n\n> > 2)\n> > When 'synchronized_standby_slots' is configured but standby named in\n> > it is down blocking logical replication, then we get a WARNING in\n> > subscriber's log file:\n> >\n> > WARNING: replication slot \"standby_2\" specified in parameter\n> > synchronized_standby_slots does not have active_pid.\n> > DETAIL: Logical replication is waiting on the standby associated with\n> > \"standby_2\".\n> > HINT: Consider starting standby associated with \"standby_2\" or amend\n> > parameter synchronized_standby_slots.\n> >\n> > But OTOH, when 'synchronous_standby_names' is configured instead of\n> > 'synchronized_standby_slots' and any of the standbys listed is down\n> > blocking logical replication, we do not get any sort of warning. It is\n> > inconsistent behavior. Also user might be left clueless on why\n> > subscribers are not getting changes.\n\nAh that's a gap. Let me add some logging/warning in a similar fashion.\nAlthough I think I'd have the warning be relatively generic (e.g.\nchanges are blocked because\nthey're not synchronously committed)\n\nThanks,\n\n--\nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Thu, 18 Jul 2024 14:21:59 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi Bertrand,\n\n> 1 ===\n> ...\n> That's worth additional comments in the code.\n\nThere's this comment already about caching the value already, not sure\nif you prefer something more?\n\n/* Cache values to reduce contention on lock */\n\n> 2 ===\n> ...\n> Looks like setting initialized to true is missing once done.\n\nThanks, will update.\n\n> 3 ===\n> ...\n> NUM_SYNC_REP_WAIT_MODE is small but as the goal is the keep the lock time as\n> short as possible I wonder if it wouldn't be better to use memcpy() here instead\n> of this for loop.\n>\n\nIt results in a \"Wdiscarded-qualifiers\" which is safe given we take\nthe lock, but adds noise?\nWhat do you think?\n\n\"slot.c:2756:46: warning: passing argument 2 of ‘memcpy’ discards\n‘volatile’ qualifier from pointer target type\n[-Wdiscarded-qualifiers]\"\n\nThanks,\n\n\n--\nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Thu, 18 Jul 2024 14:22:08 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Fri, Jul 19, 2024 at 2:52 AM John H <[email protected]> wrote:\n>\n> Hi Shveta,\n>\n> Thanks for taking a look at the patch.\n>\n> > > will leave user no option to unlink failover-enabled logical\n> > > subscribers from the wait on synchronous standbys.\n>\n> That's a good point. 
I'm a bit biased in that I don't think there's a\n> great reason why someone would\n> want to replicate logical changes out of the synchronous cluster\n> without it having been synchronously replicated\n> but yes this would be different behavior compared to strictly the slot one.\n>\n> > ...\n> > So when 'synchronized_standby_slots' is comma separated list, we pick\n> > those slots; if it is empty, then no wait on standbys, and if its\n> > value is 'DEFAULT' as configured by user, then go with\n> > 'synchronous_standby_names'. Thoughts?\n>\n> I think I'd prefer having a separate GUC if the alternative is to reserve\n> special keywords in 'synchronized_standby_slots' but I'm not sure if I\n> feel strongly about that.\n\nMy only concern is, earlier we provided a way to set the failover\nproperty of slots even without mandatorily wait on physical standbys.\nBut now we will be changing this behaviour. Okay, we can see what\nother comments. If we plan to go this way, we can change docs to\nclearly mention this.\n\n\n> > > 2)\n> > > When 'synchronized_standby_slots' is configured but standby named in\n> > > it is down blocking logical replication, then we get a WARNING in\n> > > subscriber's log file:\n> > >\n> > > WARNING: replication slot \"standby_2\" specified in parameter\n> > > synchronized_standby_slots does not have active_pid.\n> > > DETAIL: Logical replication is waiting on the standby associated with\n> > > \"standby_2\".\n> > > HINT: Consider starting standby associated with \"standby_2\" or amend\n> > > parameter synchronized_standby_slots.\n> > >\n> > > But OTOH, when 'synchronous_standby_names' is configured instead of\n> > > 'synchronized_standby_slots' and any of the standbys listed is down\n> > > blocking logical replication, we do not get any sort of warning. It is\n> > > inconsistent behavior. Also user might be left clueless on why\n> > > subscribers are not getting changes.\n>\n> Ah that's a gap. Let me add some logging/warning in a similar fashion.\n> Although I think I'd have the warning be relatively generic (e.g.\n> changes are blocked because\n> they're not synchronously committed)\n>\n\nokay, sounds good.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 22 Jul 2024 09:11:48 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Mon, Jul 22, 2024 at 9:12 AM shveta malik <[email protected]> wrote:\n>\n> On Fri, Jul 19, 2024 at 2:52 AM John H <[email protected]> wrote:\n> >\n> > Hi Shveta,\n> >\n> > Thanks for taking a look at the patch.\n> >\n> > > > will leave user no option to unlink failover-enabled logical\n> > > > subscribers from the wait on synchronous standbys.\n> >\n> > That's a good point. I'm a bit biased in that I don't think there's a\n> > great reason why someone would\n> > want to replicate logical changes out of the synchronous cluster\n> > without it having been synchronously replicated\n> > but yes this would be different behavior compared to strictly the slot one.\n> >\n> > > ...\n> > > So when 'synchronized_standby_slots' is comma separated list, we pick\n> > > those slots; if it is empty, then no wait on standbys, and if its\n> > > value is 'DEFAULT' as configured by user, then go with\n> > > 'synchronous_standby_names'. 
Thoughts?\n> >\n> > I think I'd prefer having a separate GUC if the alternative is to reserve\n> > special keywords in 'synchronized_standby_slots' but I'm not sure if I\n> > feel strongly about that.\n>\n> My only concern is, earlier we provided a way to set the failover\n> property of slots even without mandatorily wait on physical standbys.\n> But now we will be changing this behaviour.\n>\n\nAdding a new GUC as John suggests addressing this concern is one way\nto go but we should think some more before adding a new GUC. Then\nsecond as you are proposing to add a special value for GUC\nsynchronized_standby_slots will also have a downside in that it will\ncreate dependency among two GUCs (synchronized_standby_slots and\nsynchronous_standby_names) which can also make the code complex.\n\nYet another possibility is to have a slot-level parameter (something\nlike failover_wait_for) which can be used to decide the GUC preference\nfor failover-enabled slots.\n\nAs this is a new feature and we haven't got much feedback from users\nso like John, I am also not very sure how much merit we have in\nretaining the old behavior where failover slots don't need to wait for\nany of the standbys. But anyway, we have at least some escape route\nwhere logical subscribers keep on waiting for some physical standby\nthat is down to come back and one may want to use that in some\nsituations, so there is clearly some value in retaining old behavior.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 23 Jul 2024 09:51:10 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Tue, Jul 9, 2024 at 12:39 AM John H <[email protected]> wrote:\n>\n> > Out of curiosity, did you compare with standby_slot_names_from_syncrep set to off\n> > and standby_slot_names not empty?\n>\n> I didn't think 'standby_slot_names' would impact TPS as much since\n> it's not grabbing the SyncRepLock but here's a quick test.\n> Writer with 5 synchronous replicas, 10 pg_recvlogical clients and\n> pgbench all running from the same server.\n>\n> Command: pgbench -c 4 -j 4 -T 600 -U \"ec2-user\" -d postgres -r -P 5\n>\n> Result with: standby_slot_names =\n> 'replica_1,replica_2,replica_3,replica_4,replica_5'\n>\n> latency average = 5.600 ms\n> latency stddev = 2.854 ms\n> initial connection time = 5.503 ms\n> tps = 714.148263 (without initial connection time)\n>\n> Result with: standby_slot_names_from_syncrep = 'true',\n> synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n>\n> latency average = 5.740 ms\n> latency stddev = 2.543 ms\n> initial connection time = 4.093 ms\n> tps = 696.776249 (without initial connection time)\n>\n> Result with nothing set:\n>\n> latency average = 5.090 ms\n> latency stddev = 3.467 ms\n> initial connection time = 4.989 ms\n> tps = 785.665963 (without initial connection time)\n>\n> Again I think it's possible to improve the synchronous numbers if we\n> cache but I'll try that out in a bit.\n>\n\nOkay, so the tests done till now conclude that we won't get the\nbenefit by using 'standby_slot_names_from_syncrep'. Now, if we\nincrease the number of standby's in both lists and still keep ANY 3 in\nsynchronous_standby_names then the results may vary. 
We should try to\nfind out if there is a performance benefit with the use of\nsynchronous_standby_names in the normal configurations like the one\nyou used in the above tests to prove the value of this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 23 Jul 2024 10:35:07 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Tue, Jul 23, 2024 at 10:35 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jul 9, 2024 at 12:39 AM John H <[email protected]> wrote:\n> >\n> > > Out of curiosity, did you compare with standby_slot_names_from_syncrep set to off\n> > > and standby_slot_names not empty?\n> >\n> > I didn't think 'standby_slot_names' would impact TPS as much since\n> > it's not grabbing the SyncRepLock but here's a quick test.\n> > Writer with 5 synchronous replicas, 10 pg_recvlogical clients and\n> > pgbench all running from the same server.\n> >\n> > Command: pgbench -c 4 -j 4 -T 600 -U \"ec2-user\" -d postgres -r -P 5\n> >\n> > Result with: standby_slot_names =\n> > 'replica_1,replica_2,replica_3,replica_4,replica_5'\n> >\n> > latency average = 5.600 ms\n> > latency stddev = 2.854 ms\n> > initial connection time = 5.503 ms\n> > tps = 714.148263 (without initial connection time)\n> >\n> > Result with: standby_slot_names_from_syncrep = 'true',\n> > synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n> >\n> > latency average = 5.740 ms\n> > latency stddev = 2.543 ms\n> > initial connection time = 4.093 ms\n> > tps = 696.776249 (without initial connection time)\n> >\n> > Result with nothing set:\n> >\n> > latency average = 5.090 ms\n> > latency stddev = 3.467 ms\n> > initial connection time = 4.989 ms\n> > tps = 785.665963 (without initial connection time)\n> >\n> > Again I think it's possible to improve the synchronous numbers if we\n> > cache but I'll try that out in a bit.\n> >\n>\n> Okay, so the tests done till now conclude that we won't get the\n> benefit by using 'standby_slot_names_from_syncrep'. Now, if we\n> increase the number of standby's in both lists and still keep ANY 3 in\n> synchronous_standby_names then the results may vary. We should try to\n> find out if there is a performance benefit with the use of\n> synchronous_standby_names in the normal configurations like the one\n> you used in the above tests to prove the value of this patch.\n>\n\nI didn't fully understand the parameters mentioned above, specifically\nwhat 'latency stddev' and 'latency average' represent.. 
But shouldn't\nwe see the benefit/value of this patch by having a setup where a\nparticular standby is slow in sending the response back to primary\n(could be due to network lag or other reasons) and then measuring the\nlatency in receiving changes on failover-enabled logical subscribers?\nWe can perform this test with both of the below settings and say make\nD and E slow in sending responses:\n1) synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n2) standby_slot_names = A_slot, B_slot, C_slot, D_slot, E_slot.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 26 Jul 2024 15:28:08 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Fri, Jul 26, 2024 at 3:28 PM shveta malik <[email protected]> wrote:\n>\n> On Tue, Jul 23, 2024 at 10:35 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jul 9, 2024 at 12:39 AM John H <[email protected]> wrote:\n> > >\n> > > > Out of curiosity, did you compare with standby_slot_names_from_syncrep set to off\n> > > > and standby_slot_names not empty?\n> > >\n> > > I didn't think 'standby_slot_names' would impact TPS as much since\n> > > it's not grabbing the SyncRepLock but here's a quick test.\n> > > Writer with 5 synchronous replicas, 10 pg_recvlogical clients and\n> > > pgbench all running from the same server.\n> > >\n> > > Command: pgbench -c 4 -j 4 -T 600 -U \"ec2-user\" -d postgres -r -P 5\n> > >\n> > > Result with: standby_slot_names =\n> > > 'replica_1,replica_2,replica_3,replica_4,replica_5'\n> > >\n> > > latency average = 5.600 ms\n> > > latency stddev = 2.854 ms\n> > > initial connection time = 5.503 ms\n> > > tps = 714.148263 (without initial connection time)\n> > >\n> > > Result with: standby_slot_names_from_syncrep = 'true',\n> > > synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n> > >\n> > > latency average = 5.740 ms\n> > > latency stddev = 2.543 ms\n> > > initial connection time = 4.093 ms\n> > > tps = 696.776249 (without initial connection time)\n> > >\n> > > Result with nothing set:\n> > >\n> > > latency average = 5.090 ms\n> > > latency stddev = 3.467 ms\n> > > initial connection time = 4.989 ms\n> > > tps = 785.665963 (without initial connection time)\n> > >\n> > > Again I think it's possible to improve the synchronous numbers if we\n> > > cache but I'll try that out in a bit.\n> > >\n> >\n> > Okay, so the tests done till now conclude that we won't get the\n> > benefit by using 'standby_slot_names_from_syncrep'. Now, if we\n> > increase the number of standby's in both lists and still keep ANY 3 in\n> > synchronous_standby_names then the results may vary. We should try to\n> > find out if there is a performance benefit with the use of\n> > synchronous_standby_names in the normal configurations like the one\n> > you used in the above tests to prove the value of this patch.\n> >\n>\n> I didn't fully understand the parameters mentioned above, specifically\n> what 'latency stddev' and 'latency average' represent.. 
But shouldn't\n> we see the benefit/value of this patch by having a setup where a\n> particular standby is slow in sending the response back to primary\n> (could be due to network lag or other reasons) and then measuring the\n> latency in receiving changes on failover-enabled logical subscribers?\n> We can perform this test with both of the below settings and say make\n> D and E slow in sending responses:\n> 1) synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n> 2) standby_slot_names = A_slot, B_slot, C_slot, D_slot, E_slot.\n>\n\nYes, I also expect the patch should perform better in such a scenario\nbut it is better to test it. Also, irrespective of that, we should\ninvestigate why the reported case is slower for\nsynchronous_standby_names and see if we can improve it.\n\nBTW, you for 2), I think you wanted to say synchronized_standby_slots,\nnot standby_slot_names. We have recently changed the GUC name.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 26 Jul 2024 17:11:45 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Fri, Jul 26, 2024 at 5:11 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jul 26, 2024 at 3:28 PM shveta malik <[email protected]> wrote:\n> >\n> > On Tue, Jul 23, 2024 at 10:35 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Tue, Jul 9, 2024 at 12:39 AM John H <[email protected]> wrote:\n> > > >\n> > > > > Out of curiosity, did you compare with standby_slot_names_from_syncrep set to off\n> > > > > and standby_slot_names not empty?\n> > > >\n> > > > I didn't think 'standby_slot_names' would impact TPS as much since\n> > > > it's not grabbing the SyncRepLock but here's a quick test.\n> > > > Writer with 5 synchronous replicas, 10 pg_recvlogical clients and\n> > > > pgbench all running from the same server.\n> > > >\n> > > > Command: pgbench -c 4 -j 4 -T 600 -U \"ec2-user\" -d postgres -r -P 5\n> > > >\n> > > > Result with: standby_slot_names =\n> > > > 'replica_1,replica_2,replica_3,replica_4,replica_5'\n> > > >\n> > > > latency average = 5.600 ms\n> > > > latency stddev = 2.854 ms\n> > > > initial connection time = 5.503 ms\n> > > > tps = 714.148263 (without initial connection time)\n> > > >\n> > > > Result with: standby_slot_names_from_syncrep = 'true',\n> > > > synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n> > > >\n> > > > latency average = 5.740 ms\n> > > > latency stddev = 2.543 ms\n> > > > initial connection time = 4.093 ms\n> > > > tps = 696.776249 (without initial connection time)\n> > > >\n> > > > Result with nothing set:\n> > > >\n> > > > latency average = 5.090 ms\n> > > > latency stddev = 3.467 ms\n> > > > initial connection time = 4.989 ms\n> > > > tps = 785.665963 (without initial connection time)\n> > > >\n> > > > Again I think it's possible to improve the synchronous numbers if we\n> > > > cache but I'll try that out in a bit.\n> > > >\n> > >\n> > > Okay, so the tests done till now conclude that we won't get the\n> > > benefit by using 'standby_slot_names_from_syncrep'. Now, if we\n> > > increase the number of standby's in both lists and still keep ANY 3 in\n> > > synchronous_standby_names then the results may vary. 
We should try to\n> > > find out if there is a performance benefit with the use of\n> > > synchronous_standby_names in the normal configurations like the one\n> > > you used in the above tests to prove the value of this patch.\n> > >\n> >\n> > I didn't fully understand the parameters mentioned above, specifically\n> > what 'latency stddev' and 'latency average' represent.. But shouldn't\n> > we see the benefit/value of this patch by having a setup where a\n> > particular standby is slow in sending the response back to primary\n> > (could be due to network lag or other reasons) and then measuring the\n> > latency in receiving changes on failover-enabled logical subscribers?\n> > We can perform this test with both of the below settings and say make\n> > D and E slow in sending responses:\n> > 1) synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n> > 2) standby_slot_names = A_slot, B_slot, C_slot, D_slot, E_slot.\n> >\n>\n> Yes, I also expect the patch should perform better in such a scenario\n> but it is better to test it. Also, irrespective of that, we should\n> investigate why the reported case is slower for\n> synchronous_standby_names and see if we can improve it.\n\n+1\n\n> BTW, you for 2), I think you wanted to say synchronized_standby_slots,\n> not standby_slot_names. We have recently changed the GUC name.\n\nyes, sorry, synchronized_standby_slots it is.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 29 Jul 2024 08:41:37 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi John,\n\nOn Thu, Jul 18, 2024 at 02:22:08PM -0700, John H wrote:\n> Hi Bertrand,\n> \n> > 1 ===\n> > ...\n> > That's worth additional comments in the code.\n> \n> There's this comment already about caching the value already, not sure\n> if you prefer something more?\n> \n> /* Cache values to reduce contention on lock */\n\nYeah, at the same place as the static lsn[] declaration, something like:\n\nstatic XLogRecPtr lsn[NUM_SYNC_REP_WAIT_MODE]; /* cached LSNs */\n\nbut that may just be a matter of taste.\n\n> > 3 ===\n> > ...\n> > NUM_SYNC_REP_WAIT_MODE is small but as the goal is the keep the lock time as\n> > short as possible I wonder if it wouldn't be better to use memcpy() here instead\n> > of this for loop.\n> >\n> \n> It results in a \"Wdiscarded-qualifiers\" which is safe given we take\n> the lock, but adds noise?\n> What do you think?\n> \n> \"slot.c:2756:46: warning: passing argument 2 of ‘memcpy’ discards\n> ‘volatile’ qualifier from pointer target type\n> [-Wdiscarded-qualifiers]\"\n\nRight, we may want to cast it then but given that the for loop is \"small\" I think\nthat's also fine to keep the for loop.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 29 Jul 2024 05:00:55 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi Shveta,\n\nOn Sun, Jul 21, 2024 at 8:42 PM shveta malik <[email protected]> wrote:\n\n> > Ah that's a gap. 
Let me add some logging/warning in a similar fashion.\n> > Although I think I'd have the warning be relatively generic (e.g.\n> > changes are blocked because\n> > they're not synchronously committed)\n> >\n>\n> okay, sounds good.\n>\n> thanks\n> Shveta\n\nI took a look at having similar warnings the existing\n'synchronized_standby_slots' feature has\nand I don't think it's particularly feasible.\n\nThe checks/warnings for 'synchronized_standby_slots' are intended to\nprotect against misconfiguration.\nThey consist of slot validation (valid slot_name, not logical slot,\nslot has not been invalidated), and\nwhether or not the slot is active.\n\nI don't think there's a \"misconfiguration\" equivalent for waiting on\nsynchronous_commit.\nWith the current proposal, once you have (synchronous_commit enabled\n&& failover_slots), logical\ndecoding is dependent on whether or not the writes have been\nreplicated to a synchronous replica.\nIf there is no data being replicated out of the logical slot, it is\nbecause from the perspective of the\ndatabase no writes have been committed yet. I don't think it would\nmake sense to add logging/warning as to\nwhy a transaction is still not committed (e.g. which potential replica\nis the one lagging). There isn't a\nnice way to determine why synchronous commit is waiting without being\nparticularly invasive, and even then\nit could be particularly noisy (e.g. provide all the application_names\nthat we are potentially waiting on).\n\nThanks,\n\n\n-- \nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Mon, 26 Aug 2024 12:25:51 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi Shveta, Amit,\n\n> > > > ... We should try to\n> > > > find out if there is a performance benefit with the use of\n> > > > synchronous_standby_names in the normal configurations like the one\n> > > > you used in the above tests to prove the value of this patch.\n\nI don't expect there to be a performance benefit, if anything I would\nexpect it to perform\nslightly worse because of the contention on SyncRepLock. The main\nvalue of the patch\nfor me is it makes it easy for administrators to set the parameter and\navoid having to\nre-toggle configuration if they want very up-to-date logical clients\nwhen one of the\nreplicas they previously specified in 'synchronized_standby_slots ' starts being\nunavailable in a synchronous configuration setup.\n\n> > > I didn't fully understand the parameters mentioned above, specifically\n> > > what 'latency stddev' and 'latency average' represent\n\nIf I understand correctly, latency is just representing the average latency of\neach transaction from commit, while stddev is the standard deviation of these\ntransactions.\n\n> > Yes, I also expect the patch should perform better in such a scenario\n> > but it is better to test it. 
Also, irrespective of that, we should\n> > investigate why the reported case is slower for\n> > synchronous_standby_names and see if we can improve it.\n\nWe could test it but I'm not sure how interesting it is since depending\non how much the chosen slot in 'synchronized_standby_slots' lags behind\nwe can easily show that this patch will perform better.\n\nFor instance, in Shveta's suggestion of\n\n> > > We can perform this test with both of the below settings and say make\n> > > D and E slow in sending responses:\n> > > 1) synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n> > > 2) standby_slot_names = A_slot, B_slot, C_slot, D_slot, E_slot.\n\nif the server associated with E_slot is just down or undergoing\nsome sort of maintenance, then all logical consumers would start lagging until\nthe server is back up. I could also mimic a network lag of 20 seconds\nand it's guaranteed\nthat this patch will perform better.\n\nI re-ran the benchmarks with a longer run time of 3 hours, and testing\na new shared cache\nfor walsenders to check the value before obtaining the SyncRepLock.\n\nI also saw I was being throttled on storage in my previous benchmarks\nso I moved to a new setup.\nI benchmarked a new test case with an additional shared cache between\nall the walsenders to\nreduce potential contention on SyncRepLock, and have attached said patch.\n\nDatabase: Writer on it's own disk, 5 RRs on the other disk together\nClient: 10 logical clients, pgbench running from here as well\n\n'pgbench -c 32 -j 4 -T 10800 -U \"ec2-user\" -d postgres -r -P 1'\n\n# Test failover_slots with synchronized_standby_slots = 'rr_1, rr_2,\nrr_3, rr_4, rr_5'\nlatency average = 10.683 ms\nlatency stddev = 11.851 ms\ninitial connection time = 145.876 ms\ntps = 2994.595673 (without initial connection time)\n\n# Test failover_slots waiting on sync_rep no new shared cache\nlatency average = 10.684 ms\nlatency stddev = 12.247 ms\ninitial connection time = 142.561 ms\ntps = 2994.160136 (without initial connection time)\nstatement latencies in milliseconds and failures:\n\n# Test failover slots with additional shared cache\nlatency average = 10.674 ms\nlatency stddev = 11.917 ms\ninitial connection time = 142.486 ms\ntps = 2997.315874 (without initial connection time)\n\nThe tps improvement between no cache and shared_cache seems marginal, but we do\nsee the slight improvement in stddev which makes sense from a\ncontention perspective.\nI think the cache would demonstrate a lot more improvement if we had\nsay 1000 logical slots\nand all of them are trying to obtain SyncRepLock for updating its values.\n\nI've attached the patch but don't feel particularly strongly about the\nnew shared LSN values.\n\nThanks,\n\n\n\n\n--\nJohn Hsu - Amazon Web Services", "msg_date": "Mon, 26 Aug 2024 12:28:06 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi Bertrand,\n\nOn Sun, Jul 28, 2024 at 10:00 PM Bertrand Drouvot\n<[email protected]> wrote:\n\n> Yeah, at the same place as the static lsn[] declaration, something like:\n>\n> static XLogRecPtr lsn[NUM_SYNC_REP_WAIT_MODE]; /* cached LSNs */\n>\n> but that may just be a matter of taste.\n>\n\nI've updated the patch to reflect that.\n\n>\n> Right, we may want to cast it then but given that the for loop is \"small\" I think\n> that's also fine to keep the for loop.\n>\n\nAh I see what you mean. 
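(For illustration, a minimal sketch of the cast variant -- a sketch only, assuming 'walsndctl' is the usual pointer to the shared WalSndCtlData and that SyncRepLock protects its lsn[] array; the attached patch may differ in detail:)\n\nstatic XLogRecPtr lsn[NUM_SYNC_REP_WAIT_MODE]; /* cached LSNs */\n\nLWLockAcquire(SyncRepLock, LW_SHARED);\n/* one memcpy instead of a per-mode loop; the cast drops the volatile\n * qualifier, which is only safe while the lock is held */\nmemcpy(lsn, (XLogRecPtr *) walsndctl->lsn, sizeof(lsn));\nLWLockRelease(SyncRepLock);\n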
I've updated these changes and attached the\npatch to the other thread.\n\n\nThanks,\n-- \nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Mon, 26 Aug 2024 12:31:53 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Tue, Aug 27, 2024 at 12:58 AM John H <[email protected]> wrote:\n>\n> For instance, in Shveta's suggestion of\n>\n> > > > We can perform this test with both of the below settings and say make\n> > > > D and E slow in sending responses:\n> > > > 1) synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n> > > > 2) standby_slot_names = A_slot, B_slot, C_slot, D_slot, E_slot.\n>\n> if the server associated with E_slot is just down or undergoing\n> some sort of maintenance, then all logical consumers would start lagging until\n> the server is back up. I could also mimic a network lag of 20 seconds\n> and it's guaranteed\n> that this patch will perform better.\n>\n\nI wanted a simple test where in the first case you use\nsynchronous_standby_names = 'ANY 3 (A,B,C,D,E)' and in the second case\nuse standby_slot_names = A_slot, B_slot, C_slot, D_slot, E_slot. You\ncan try some variations of it as well. The idea is that even if the\nperformance is less for synchronous_standby_names configuration, we\nshould be able to document it. This will help users to decide what is\nbest for them.\n\n> I re-ran the benchmarks with a longer run time of 3 hours, and testing\n> a new shared cache\n> for walsenders to check the value before obtaining the SyncRepLock.\n>\n> I also saw I was being throttled on storage in my previous benchmarks\n> so I moved to a new setup.\n> I benchmarked a new test case with an additional shared cache between\n> all the walsenders to\n> reduce potential contention on SyncRepLock, and have attached said patch.\n>\n> Database: Writer on it's own disk, 5 RRs on the other disk together\n> Client: 10 logical clients, pgbench running from here as well\n>\n> 'pgbench -c 32 -j 4 -T 10800 -U \"ec2-user\" -d postgres -r -P 1'\n>\n> # Test failover_slots with synchronized_standby_slots = 'rr_1, rr_2,\n> rr_3, rr_4, rr_5'\n> latency average = 10.683 ms\n> latency stddev = 11.851 ms\n> initial connection time = 145.876 ms\n> tps = 2994.595673 (without initial connection time)\n>\n> # Test failover_slots waiting on sync_rep no new shared cache\n> latency average = 10.684 ms\n> latency stddev = 12.247 ms\n> initial connection time = 142.561 ms\n> tps = 2994.160136 (without initial connection time)\n> statement latencies in milliseconds and failures:\n>\n> # Test failover slots with additional shared cache\n> latency average = 10.674 ms\n> latency stddev = 11.917 ms\n> initial connection time = 142.486 ms\n> tps = 2997.315874 (without initial connection time)\n>\n> The tps improvement between no cache and shared_cache seems marginal, but we do\n> see the slight improvement in stddev which makes sense from a\n> contention perspective.\n>\n\nWhat is the difference between \"Test failover_slots with\nsynchronized_standby_slots = 'rr_1, rr_2,\n> rr_3, rr_4, rr_5'\" and \"Test failover_slots waiting on sync_rep no new shared cache\"? 
I want to know what configuration did you used for synchronous_standby_names in the latter case.\n\n> I think the cache would demonstrate a lot more improvement if we had\n> say 1000 logical slots\n> and all of them are trying to obtain SyncRepLock for updating its values.\n>\n> I've attached the patch but don't feel particularly strongly about the\n> new shared LSN values.\n>\n\nI am also not sure especially as the test results didn't shown much\nimprovement and the code also becomes bit complicated. BTW, in the\n0003 version in the below code:\n+ /* Cache values to reduce contention */\n+ LWLockAcquire(SyncRepLock, LW_SHARED);\n+ memcpy((XLogRecPtr *) walsndctl->cached_lsn, (XLogRecPtr *)\nwalsndctl->lsn, sizeof(lsn));\n+ LWLockRelease(SyncRepLock);\n\nWhich mode lsn is being copied? I am not sure if I understood this\npart of the code.\n\nIn the 0002 version, in the following code [1], you are referring to\nLSN mode which is enabled for logical walsender irrespective of the\nmode used by the physical walsender. It is possible that they are\nalways the same but that is not evident from the code or comments in\nthe patch.\n[1] :\n+ /* Cache values to reduce contention on lock */\n+ for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n+ {\n+ lsn[i] = walsndctl->lsn[i];\n+ }\n\n- ss_oldest_flush_lsn = min_restart_lsn;\n+ LWLockRelease(SyncRepLock);\n\n- return true;\n+ if (lsn[mode] >= wait_for_lsn)\n+ return true;\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Aug 2024 11:30:08 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Tue, Aug 27, 2024 at 12:56 AM John H <[email protected]> wrote:\n>\n> Hi Shveta,\n>\n> On Sun, Jul 21, 2024 at 8:42 PM shveta malik <[email protected]> wrote:\n>\n> > > Ah that's a gap. Let me add some logging/warning in a similar fashion.\n> > > Although I think I'd have the warning be relatively generic (e.g.\n> > > changes are blocked because\n> > > they're not synchronously committed)\n> > >\n> >\n> > okay, sounds good.\n> >\n> > thanks\n> > Shveta\n>\n> I took a look at having similar warnings the existing\n> 'synchronized_standby_slots' feature has\n> and I don't think it's particularly feasible.\n>\n> The checks/warnings for 'synchronized_standby_slots' are intended to\n> protect against misconfiguration.\n> They consist of slot validation (valid slot_name, not logical slot,\n> slot has not been invalidated), and\n> whether or not the slot is active.\n>\n> I don't think there's a \"misconfiguration\" equivalent for waiting on\n> synchronous_commit.\n> With the current proposal, once you have (synchronous_commit enabled\n> && failover_slots), logical\n> decoding is dependent on whether or not the writes have been\n> replicated to a synchronous replica.\n> If there is no data being replicated out of the logical slot, it is\n> because from the perspective of the\n> database no writes have been committed yet. I don't think it would\n> make sense to add logging/warning as to\n> why a transaction is still not committed (e.g. which potential replica\n> is the one lagging). There isn't a\n> nice way to determine why synchronous commit is waiting without being\n> particularly invasive, and even then\n> it could be particularly noisy (e.g. provide all the application_names\n> that we are potentially waiting on).\n>\n\nOkay. Thanks for the details. I see your point. 
I will review to see\nif anything comes to my mind for a simpler way to do this.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 28 Aug 2024 08:46:21 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi Amit,\n\nOn Mon, Aug 26, 2024 at 11:00 PM Amit Kapila <[email protected]> wrote:\n> I wanted a simple test where in the first case you use\n> synchronous_standby_names = 'ANY 3 (A,B,C,D,E)' and in the second case\n> use standby_slot_names = A_slot, B_slot, C_slot, D_slot, E_slot. You\n> can try some variations of it as well. The idea is that even if the\n> performance is less for synchronous_standby_names configuration, we\n> should be able to document it. This will help users to decide what is\n> ...\n> What is the difference between \"Test failover_slots with\n> synchronized_standby_slots = 'rr_1, rr_2,\n> > rr_3, rr_4, rr_5'\" and \"Test failover_slots waiting on sync_rep no new shared cache\"? I want to know what configuration did you used for synchronous_standby_names in the latter case.\n\nSorry for the confusion due to the bad-naming of the test cases, let\nme rephrase.\n All three tests had synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\nset with synchronous_commit = 'on', and failover_slots = 'on'\nfor the 10 logical slots.\n\n# Test failover_slots with synchronized_standby_slots = 'rr_1, rr_2,\nrr_3, rr_4, rr_5'\nThis is the test you wanted where the logical clients are waiting on\nall 5 slots to acknowledge the change since\n'synchronized_standby_slots' takes priority when set.\n\n# Test failover_slots sync rep no cache\nThis test has 'synchronized_standby_slots' commented out, and without\nrelying on the new cache introduced in 0003.\nLogical clients will wait on synchronous_standby_names in this case.\n\n# Test failover slots with additional shared cache\nThis test also has 'synchronized_standby_slots' commented out, and\nlogical clients will wait on the LSNs\nreported from synchronous_standby_names but it relies on a new cache\nto reduce contention on SyncRepLock.\n\n> The idea is that even if the\n> performance is less for synchronous_standby_names configuration, we\n> should be able to document it. This will help users to decide what is\n> best for them.\n\nMakes sense.\n\n> I am also not sure especially as the test results didn't shown much\n> improvement and the code also becomes bit complicated. BTW, in the\n> 0003 version in the below code:\n\nThat's fair, I've updated to be more in line with 0002.\n\n> + /* Cache values to reduce contention */\n> + LWLockAcquire(SyncRepLock, LW_SHARED);\n> + memcpy((XLogRecPtr *) walsndctl->cached_lsn, (XLogRecPtr *)\n> walsndctl->lsn, sizeof(lsn));\n> + LWLockRelease(SyncRepLock);\n>\n> Which mode lsn is being copied? I am not sure if I understood this\n> part of the code.\n\nAll of the mode LSNs are being copied in case SyncRepWaitMode changes in\nthe next iteration. 
I've removed that part but kept:\n\n> + memcpy(lsn, (XLogRecPtr *) walsndctl->lsn, sizeof(lsn));\n\nas suggested by Bertrand to avoid the for loop updating values one-by-one.\n\nHere's what's logged after the memcpy:\n\n2024-08-28 19:41:13.798 UTC [1160413] LOG: lsn[0] after memcpy is: 279/752C7FF0\n2024-08-28 19:41:13.798 UTC [1160413] LOG: lsn[1] after memcpy is: 279/752C7F20\n2024-08-28 19:41:13.798 UTC [1160413] LOG: lsn[2] after memcpy is: 279/752C7F20\n\n> In the 0002 version, in the following code [1], you are referring to\n> LSN mode which is enabled for logical walsender irrespective of the\n> mode used by the physical walsender. It is possible that they are\n> always the same but that is not evident from the code or comments in\n> the patch.\n\nThey are almost always the same, I tried to indicate that with the\nfollowing comment in the patch, but I could make it more explicit?\n> /* Initialize value in case SIGHUP changing to SYNC_REP_NO_WAIT */\n\nAt the beginning we set\n\n> int mode = SyncRepWaitMode;\n\nAt this time, the logical walsender mode it's checking against is the\nsame as what the physical walsenders are using.\nIt's possible that this mode is no longer the same when we execute the\nfollowing check:\n\n> if (lsn[mode] >= wait_for_lsn)\n\nbecause of a SIGHUP to synchronous_commit that changes SyncRepWaitMode\nto some other value\n\nWe cache the value instead of\n> if (lsn[SyncRepWaitMode] >= wait_for_lsn)\n\nbecause SYNC_REP_NO_WAIT is -1. If SyncRepWaitMode is set to this it\nleads to out of bounds access.\n\nI've attached a new patch that removes the shared cache introduced in 0003.\n\nThanks,\n-- \nJohn Hsu - Amazon Web Services", "msg_date": "Wed, 28 Aug 2024 14:01:01 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Thu, Aug 29, 2024 at 2:31 AM John H <[email protected]> wrote:\n>\n> Hi Amit,\n>\n> On Mon, Aug 26, 2024 at 11:00 PM Amit Kapila <[email protected]> wrote:\n> > I wanted a simple test where in the first case you use\n> > synchronous_standby_names = 'ANY 3 (A,B,C,D,E)' and in the second case\n> > use standby_slot_names = A_slot, B_slot, C_slot, D_slot, E_slot. You\n> > can try some variations of it as well. The idea is that even if the\n> > performance is less for synchronous_standby_names configuration, we\n> > should be able to document it. This will help users to decide what is\n> > ...\n> > What is the difference between \"Test failover_slots with\n> > synchronized_standby_slots = 'rr_1, rr_2,\n> > > rr_3, rr_4, rr_5'\" and \"Test failover_slots waiting on sync_rep no new shared cache\"? 
I want to know what configuration did you used for synchronous_standby_names in the latter case.\n>\n> Sorry for the confusion due to the bad-naming of the test cases, let\n> me rephrase.\n> All three tests had synchronous_standby_names = 'ANY 3 (A,B,C,D,E)'\n> set with synchronous_commit = 'on', and failover_slots = 'on'\n> for the 10 logical slots.\n>\n> # Test failover_slots with synchronized_standby_slots = 'rr_1, rr_2,\n> rr_3, rr_4, rr_5'\n> This is the test you wanted where the logical clients are waiting on\n> all 5 slots to acknowledge the change since\n> 'synchronized_standby_slots' takes priority when set.\n>\n> # Test failover_slots sync rep no cache\n> This test has 'synchronized_standby_slots' commented out, and without\n> relying on the new cache introduced in 0003.\n> Logical clients will wait on synchronous_standby_names in this case.\n>\n> # Test failover slots with additional shared cache\n> This test also has 'synchronized_standby_slots' commented out, and\n> logical clients will wait on the LSNs\n> reported from synchronous_standby_names but it relies on a new cache\n> to reduce contention on SyncRepLock.\n>\n> > The idea is that even if the\n> > performance is less for synchronous_standby_names configuration, we\n> > should be able to document it. This will help users to decide what is\n> > best for them.\n>\n> Makes sense.\n>\n> > I am also not sure especially as the test results didn't shown much\n> > improvement and the code also becomes bit complicated. BTW, in the\n> > 0003 version in the below code:\n>\n> That's fair, I've updated to be more in line with 0002.\n>\n> > + /* Cache values to reduce contention */\n> > + LWLockAcquire(SyncRepLock, LW_SHARED);\n> > + memcpy((XLogRecPtr *) walsndctl->cached_lsn, (XLogRecPtr *)\n> > walsndctl->lsn, sizeof(lsn));\n> > + LWLockRelease(SyncRepLock);\n> >\n> > Which mode lsn is being copied? I am not sure if I understood this\n> > part of the code.\n>\n> All of the mode LSNs are being copied in case SyncRepWaitMode changes in\n> the next iteration. I've removed that part but kept:\n>\n> > + memcpy(lsn, (XLogRecPtr *) walsndctl->lsn, sizeof(lsn));\n>\n> as suggested by Bertrand to avoid the for loop updating values one-by-one.\n>\n> Here's what's logged after the memcpy:\n>\n> 2024-08-28 19:41:13.798 UTC [1160413] LOG: lsn[0] after memcpy is: 279/752C7FF0\n> 2024-08-28 19:41:13.798 UTC [1160413] LOG: lsn[1] after memcpy is: 279/752C7F20\n> 2024-08-28 19:41:13.798 UTC [1160413] LOG: lsn[2] after memcpy is: 279/752C7F20\n>\n> > In the 0002 version, in the following code [1], you are referring to\n> > LSN mode which is enabled for logical walsender irrespective of the\n> > mode used by the physical walsender. 
It is possible that they are\n> > always the same but that is not evident from the code or comments in\n> > the patch.\n>\n> They are almost always the same, I tried to indicate that with the\n> following comment in the patch, but I could make it more explicit?\n> > /* Initialize value in case SIGHUP changing to SYNC_REP_NO_WAIT */\n>\n> At the beginning we set\n>\n> > int mode = SyncRepWaitMode;\n>\n> At this time, the logical walsender mode it's checking against is the\n> same as what the physical walsenders are using.\n> It's possible that this mode is no longer the same when we execute the\n> following check:\n>\n> > if (lsn[mode] >= wait_for_lsn)\n>\n> because of a SIGHUP to synchronous_commit that changes SyncRepWaitMode\n> to some other value\n>\n> We cache the value instead of\n> > if (lsn[SyncRepWaitMode] >= wait_for_lsn)\n>\n> because SYNC_REP_NO_WAIT is -1. If SyncRepWaitMode is set to this it\n> leads to out of bounds access.\n>\n> I've attached a new patch that removes the shared cache introduced in 0003.\n>\n\nThanks for the patch. Few comments and queries:\n\n1)\n+ static XLogRecPtr lsn[NUM_SYNC_REP_WAIT_MODE];\n\nWe shall name it as 'lsns' as there are multiple.\n\n2)\n\n+ for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n+ {\n+ lsn[i] = InvalidXLogRecPtr;\n+ }\n\nCan we do it like below similar to what you have done at another place:\nmemset(lsn, InvalidXLogRecPtr, sizeof(lsn));\n\n3)\n+ if (!initialized)\n+ {\n+ for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n+ {\n+ lsn[i] = InvalidXLogRecPtr;\n+ }\n+ }\n\nI do not see 'initialized' set to TRUE anywhere. Can you please\nelaborate the intent here?\n\n4)\n+ int mode = SyncRepWaitMode;\n+ int i;\n+\n+ if (!initialized)\n+ {\n+ for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n+ {\n+ lsn[i] = InvalidXLogRecPtr;\n+ }\n+ }\n+ if (mode == SYNC_REP_NO_WAIT)\n+ return true;\n\nI do not understand this code well. As Amit also pointed out, 'mode'\nmay change. When we initialize 'mode' lets say SyncRepWaitMode is\nSYNC_REP_NO_WAIT but by the time we check 'if (mode ==\nSYNC_REP_NO_WAIT)', SyncRepWaitMode has changed to say\nSYNC_REP_WAIT_FLUSH, if so, then we will wrongly return true from\nhere. Is that a possibility? ProcessConfigFile() is in the caller, and\nthus we may end up using the wrong mode.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 29 Aug 2024 15:04:57 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi Shveta,\n\nThanks for reviewing it so quickly.\n\nOn Thu, Aug 29, 2024 at 2:35 AM shveta malik <[email protected]> wrote:\n>\n> Thanks for the patch. Few comments and queries:\n>\n> 1)\n> + static XLogRecPtr lsn[NUM_SYNC_REP_WAIT_MODE];\n>\n> We shall name it as 'lsns' as there are multiple.\n>\n\nThis follows the same naming convention in walsender_private.h\n\n> 2)\n>\n> + for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n> + {\n> + lsn[i] = InvalidXLogRecPtr;\n> + }\n>\n> Can we do it like below similar to what you have done at another place:\n> memset(lsn, InvalidXLogRecPtr, sizeof(lsn));\n>\n\nI don't think memset works in this case? 
Well, I think *technically* works but\nnot sure if that's something worth optimizing.\nIf I understand correctly, memset takes in a char for the value and\nnot XLogRecPtr (uint64).\n\nSo something like: memset(lsn, 0, sizeof(lsn))\n\nInvalidXLogRecPtr is defined as 0 so I think it works but there's an\nimplicit dependency here\nfor correctness.\n\n> 3)\n> + if (!initialized)\n> + {\n> + for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n> + {\n> + lsn[i] = InvalidXLogRecPtr;\n> + }\n> + }\n>\n> I do not see 'initialized' set to TRUE anywhere. Can you please\n> elaborate the intent here?\n\nYou're right I thought I fixed this. WIll update.\n\n>\n> 4)\n> + int mode = SyncRepWaitMode;\n> + int i;\n> +\n> + if (!initialized)\n> + {\n> + for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n> + {\n> + lsn[i] = InvalidXLogRecPtr;\n> + }\n> + }\n> + if (mode == SYNC_REP_NO_WAIT)\n> + return true;\n>\n> I do not understand this code well. As Amit also pointed out, 'mode'\n> may change. When we initialize 'mode' lets say SyncRepWaitMode is\n> SYNC_REP_NO_WAIT but by the time we check 'if (mode ==\n> SYNC_REP_NO_WAIT)', SyncRepWaitMode has changed to say\n> SYNC_REP_WAIT_FLUSH, if so, then we will wrongly return true from\n> here. Is that a possibility? ProcessConfigFile() is in the caller, and\n> thus we may end up using the wrong mode.\n>\n\nYes it's possible for mode to change. In my comment to Amit in the other thread,\nI think we have to store mode and base our execution of this logic and ignore\nSyncRepWaitMode changing due to ProcesConfigFile/SIGHUP for one iteration.\n\nWe can store the value of mode later, so something like:\n\n> if (SyncRepWaitMode == SYNC_REP_NO_WAIT)\n> return true;\n> mode = SyncRepWaitMode\n> if (lsn[mode] >= wait_for_lsn)\n> return true;\n\nBut it's the same issue which is when you check lsn[mode],\nSyncRepWaitMode could have changed to\nsomething else, so you always have to initialize the value and will\nalways have this discrepancy.\n\nI'm skeptical end users are changing SyncRepWaitMode in their database\nclusters as\nit has implications for their durability and I would assume they run\nwith the same durability guarantees.\n\nThanks,\n-- \nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Thu, 29 Aug 2024 12:26:04 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Fri, Aug 30, 2024 at 12:56 AM John H <[email protected]> wrote:\n>\n> Hi Shveta,\n>\n> Thanks for reviewing it so quickly.\n>\n> On Thu, Aug 29, 2024 at 2:35 AM shveta malik <[email protected]> wrote:\n> >\n> > Thanks for the patch. Few comments and queries:\n> >\n> > 1)\n> > + static XLogRecPtr lsn[NUM_SYNC_REP_WAIT_MODE];\n> >\n> > We shall name it as 'lsns' as there are multiple.\n> >\n>\n> This follows the same naming convention in walsender_private.h\n>\n> > 2)\n> >\n> > + for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n> > + {\n> > + lsn[i] = InvalidXLogRecPtr;\n> > + }\n> >\n> > Can we do it like below similar to what you have done at another place:\n> > memset(lsn, InvalidXLogRecPtr, sizeof(lsn));\n> >\n>\n> I don't think memset works in this case? 
Well, I think *technically* works but\n> not sure if that's something worth optimizing.\n> If I understand correctly, memset takes in a char for the value and\n> not XLogRecPtr (uint64).\n>\n> So something like: memset(lsn, 0, sizeof(lsn))\n>\n> InvalidXLogRecPtr is defined as 0 so I think it works but there's an\n> implicit dependency here\n> for correctness.\n>\n> > 3)\n> > + if (!initialized)\n> > + {\n> > + for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n> > + {\n> > + lsn[i] = InvalidXLogRecPtr;\n> > + }\n> > + }\n> >\n> > I do not see 'initialized' set to TRUE anywhere. Can you please\n> > elaborate the intent here?\n>\n> You're right I thought I fixed this. WIll update.\n>\n> >\n> > 4)\n> > + int mode = SyncRepWaitMode;\n> > + int i;\n> > +\n> > + if (!initialized)\n> > + {\n> > + for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; i++)\n> > + {\n> > + lsn[i] = InvalidXLogRecPtr;\n> > + }\n> > + }\n> > + if (mode == SYNC_REP_NO_WAIT)\n> > + return true;\n> >\n> > I do not understand this code well. As Amit also pointed out, 'mode'\n> > may change. When we initialize 'mode' lets say SyncRepWaitMode is\n> > SYNC_REP_NO_WAIT but by the time we check 'if (mode ==\n> > SYNC_REP_NO_WAIT)', SyncRepWaitMode has changed to say\n> > SYNC_REP_WAIT_FLUSH, if so, then we will wrongly return true from\n> > here. Is that a possibility? ProcessConfigFile() is in the caller, and\n> > thus we may end up using the wrong mode.\n> >\n>\n> Yes it's possible for mode to change. In my comment to Amit in the other thread,\n> I think we have to store mode and base our execution of this logic and ignore\n> SyncRepWaitMode changing due to ProcesConfigFile/SIGHUP for one iteration.\n>\n> We can store the value of mode later, so something like:\n>\n> > if (SyncRepWaitMode == SYNC_REP_NO_WAIT)\n> > return true;\n> > mode = SyncRepWaitMode\n> > if (lsn[mode] >= wait_for_lsn)\n> > return true;\n>\n> But it's the same issue which is when you check lsn[mode],\n> SyncRepWaitMode could have changed to\n> something else, so you always have to initialize the value and will\n> always have this discrepancy.\n>\n> I'm skeptical end users are changing SyncRepWaitMode in their database\n\n> clusters as\n> it has implications for their durability and I would assume they run\n> with the same durability guarantees.\n>\n\nI was trying to have a look at the patch again, it doesn't apply on\nthe head, needs rebase.\n\nRegarding 'mode = SyncRepWaitMode', FWIW, SyncRepWaitForLSN() also\ndoes in a similar way. It gets mode in local var initially and uses it\nlater. See [1]. So isn't there a chance too that 'SyncRepWaitMode' can\nchange in between.\n\n[1]:\nmode = SyncRepWaitMode;\n.....\n....\n if (!WalSndCtl->sync_standbys_defined ||\n lsn <= WalSndCtl->lsn[mode])\n {\n LWLockRelease(SyncRepLock);\n return;\n }\n\n\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 9 Sep 2024 11:46:39 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi Shveta,\n\nOn Sun, Sep 8, 2024 at 11:16 PM shveta malik <[email protected]> wrote:\n\n>\n> I was trying to have a look at the patch again, it doesn't apply on\n> the head, needs rebase.\n>\n\nRebased with the latest changes.\n\n> Regarding 'mode = SyncRepWaitMode', FWIW, SyncRepWaitForLSN() also\n> does in a similar way. It gets mode in local var initially and uses it\n> later. See [1]. 
So isn't there a chance too that 'SyncRepWaitMode' can\n> change in between.\n>\n> [1]:\n> mode = SyncRepWaitMode;\n> .....\n> ....\n> if (!WalSndCtl->sync_standbys_defined ||\n> lsn <= WalSndCtl->lsn[mode])\n> {\n> LWLockRelease(SyncRepLock);\n> return;\n> }\n\nYou are right, thanks for the correction. I tried reproducing with GDB\nwhere SyncRepWaitMode\nchanges due to pg_ctl reload but was unable to do so. It seems like\nSIGHUP only sets ConfigReloadPending = true,\nwhich gets processed in the next loop in WalSndLoop() and that's\nprobably where I was getting confused.\n\nIn the latest patch, I've added:\n\nAssert(SyncRepWaitMode >= 0);\n\nwhich should be true since we call SyncRepConfigured() at the\nbeginning of StandbySlotsHaveCaughtup(),\nand used SyncRepWaitMode directly.\n\nThank you\n\n-- \nJohn Hsu - Amazon Web Services", "msg_date": "Tue, 10 Sep 2024 14:10:32 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Wed, Sep 11, 2024 at 2:40 AM John H <[email protected]> wrote:\n>\n> Hi Shveta,\n>\n> On Sun, Sep 8, 2024 at 11:16 PM shveta malik <[email protected]> wrote:\n>\n> >\n> > I was trying to have a look at the patch again, it doesn't apply on\n> > the head, needs rebase.\n> >\n>\n> Rebased with the latest changes.\n>\n> > Regarding 'mode = SyncRepWaitMode', FWIW, SyncRepWaitForLSN() also\n> > does in a similar way. It gets mode in local var initially and uses it\n> > later. See [1]. So isn't there a chance too that 'SyncRepWaitMode' can\n> > change in between.\n> >\n> > [1]:\n> > mode = SyncRepWaitMode;\n> > .....\n> > ....\n> > if (!WalSndCtl->sync_standbys_defined ||\n> > lsn <= WalSndCtl->lsn[mode])\n> > {\n> > LWLockRelease(SyncRepLock);\n> > return;\n> > }\n>\n> You are right, thanks for the correction. I tried reproducing with GDB\n> where SyncRepWaitMode\n> changes due to pg_ctl reload but was unable to do so. It seems like\n> SIGHUP only sets ConfigReloadPending = true,\n> which gets processed in the next loop in WalSndLoop() and that's\n> probably where I was getting confused.\n\nyes, SIGHUP will be processed in the caller of\nStandbySlotsHaveCaughtup() (see ProcessConfigFile() in\nWaitForStandbyConfirmation()). So we can use 'SyncRepWaitMode'\ndirectly as it is not going to change in StandbySlotsHaveCaughtup()\neven if user triggers the change. And thus it was okay to use it even\nin the local variable too similar to how SyncRepWaitForLSN() does it.\n\n> In the latest patch, I've added:\n>\n> Assert(SyncRepWaitMode >= 0);\n>\n> which should be true since we call SyncRepConfigured() at the\n> beginning of StandbySlotsHaveCaughtup(),\n> and used SyncRepWaitMode directly.\n\nYes, it should be okay I think. As SyncRepRequested() in the beginning\nmakes sure synchronous_commit > SYNCHRONOUS_COMMIT_LOCAL_FLUSH and\nthus SyncRepWaitMode should be mapped to either of\nWAIT_WRITE/FLUSH/APPLY etc. 
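(For reference, roughly how the assign hook for synchronous_commit picks the mode -- paraphrased from memory, the exact code in syncrep.c may differ:)\n\nswitch (newval)    /* new synchronous_commit value */\n{\n    case SYNCHRONOUS_COMMIT_REMOTE_WRITE:\n        SyncRepWaitMode = SYNC_REP_WAIT_WRITE;\n        break;\n    case SYNCHRONOUS_COMMIT_REMOTE_FLUSH:    /* i.e. 'on' */\n        SyncRepWaitMode = SYNC_REP_WAIT_FLUSH;\n        break;\n    case SYNCHRONOUS_COMMIT_REMOTE_APPLY:\n        SyncRepWaitMode = SYNC_REP_WAIT_APPLY;\n        break;\n    default:    /* 'off', 'local' */\n        SyncRepWaitMode = SYNC_REP_NO_WAIT;\n        break;\n}\n\ni.e. once SyncRepRequested() holds, only the three wait modes are\npossible and the Assert should hold.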
Will review further.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 12 Sep 2024 15:04:55 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Thu, Sep 12, 2024 at 3:04 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Sep 11, 2024 at 2:40 AM John H <[email protected]> wrote:\n> >\n> > Hi Shveta,\n> >\n> > On Sun, Sep 8, 2024 at 11:16 PM shveta malik <[email protected]> wrote:\n> >\n> > >\n> > > I was trying to have a look at the patch again, it doesn't apply on\n> > > the head, needs rebase.\n> > >\n> >\n> > Rebased with the latest changes.\n> >\n> > > Regarding 'mode = SyncRepWaitMode', FWIW, SyncRepWaitForLSN() also\n> > > does in a similar way. It gets mode in local var initially and uses it\n> > > later. See [1]. So isn't there a chance too that 'SyncRepWaitMode' can\n> > > change in between.\n> > >\n> > > [1]:\n> > > mode = SyncRepWaitMode;\n> > > .....\n> > > ....\n> > > if (!WalSndCtl->sync_standbys_defined ||\n> > > lsn <= WalSndCtl->lsn[mode])\n> > > {\n> > > LWLockRelease(SyncRepLock);\n> > > return;\n> > > }\n> >\n> > You are right, thanks for the correction. I tried reproducing with GDB\n> > where SyncRepWaitMode\n> > changes due to pg_ctl reload but was unable to do so. It seems like\n> > SIGHUP only sets ConfigReloadPending = true,\n> > which gets processed in the next loop in WalSndLoop() and that's\n> > probably where I was getting confused.\n>\n> yes, SIGHUP will be processed in the caller of\n> StandbySlotsHaveCaughtup() (see ProcessConfigFile() in\n> WaitForStandbyConfirmation()). So we can use 'SyncRepWaitMode'\n> directly as it is not going to change in StandbySlotsHaveCaughtup()\n> even if user triggers the change. And thus it was okay to use it even\n> in the local variable too similar to how SyncRepWaitForLSN() does it.\n>\n> > In the latest patch, I've added:\n> >\n> > Assert(SyncRepWaitMode >= 0);\n> >\n> > which should be true since we call SyncRepConfigured() at the\n> > beginning of StandbySlotsHaveCaughtup(),\n> > and used SyncRepWaitMode directly.\n>\n> Yes, it should be okay I think. As SyncRepRequested() in the beginning\n> makes sure synchronous_commit > SYNCHRONOUS_COMMIT_LOCAL_FLUSH and\n> thus SyncRepWaitMode should be mapped to either of\n> WAIT_WRITE/FLUSH/APPLY etc. Will review further.\n>\n\nI was wondering if we need somethign similar to SyncRepWaitForLSN() here:\n\n /* Cap the level for anything other than commit to remote flush only. */\n if (commit)\n mode = SyncRepWaitMode;\n else\n mode = Min(SyncRepWaitMode, SYNC_REP_WAIT_FLUSH);\n\nThe header comment says:\n * 'lsn' represents the LSN to wait for. 'commit' indicates whether this LSN\n * represents a commit record. If it doesn't, then we wait only for the WAL\n * to be flushed if synchronous_commit is set to the higher level of\n * remote_apply, because only commit records provide apply feedback.\n\nIf we don't do something similar, then aren't there chances that we\nkeep on waiting on the wrong lsn[mode] for the case when mode =\nSYNC_REP_WAIT_APPLY while sync-rep-wait infrastructure is updating\ndifferent mode's lsn. 
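(A sketch of the kind of capping I mean, applied to the patch's check --\nhypothetical, just to illustrate the question:)\n\n/* hypothetical: cap to remote flush, as SyncRepWaitForLSN() does */\nint mode = Min(SyncRepWaitMode, SYNC_REP_WAIT_FLUSH);\n\nif (lsn[mode] >= wait_for_lsn)\n    return true;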
Is my understanding correct?\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 13 Sep 2024 15:13:36 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Fri, Sep 13, 2024 at 3:13 PM shveta malik <[email protected]> wrote:\n>\n> On Thu, Sep 12, 2024 at 3:04 PM shveta malik <[email protected]> wrote:\n> >\n> > On Wed, Sep 11, 2024 at 2:40 AM John H <[email protected]> wrote:\n> > >\n> > > Hi Shveta,\n> > >\n> > > On Sun, Sep 8, 2024 at 11:16 PM shveta malik <[email protected]> wrote:\n> > >\n> > > >\n> > > > I was trying to have a look at the patch again, it doesn't apply on\n> > > > the head, needs rebase.\n> > > >\n> > >\n> > > Rebased with the latest changes.\n> > >\n> > > > Regarding 'mode = SyncRepWaitMode', FWIW, SyncRepWaitForLSN() also\n> > > > does in a similar way. It gets mode in local var initially and uses it\n> > > > later. See [1]. So isn't there a chance too that 'SyncRepWaitMode' can\n> > > > change in between.\n> > > >\n> > > > [1]:\n> > > > mode = SyncRepWaitMode;\n> > > > .....\n> > > > ....\n> > > > if (!WalSndCtl->sync_standbys_defined ||\n> > > > lsn <= WalSndCtl->lsn[mode])\n> > > > {\n> > > > LWLockRelease(SyncRepLock);\n> > > > return;\n> > > > }\n> > >\n> > > You are right, thanks for the correction. I tried reproducing with GDB\n> > > where SyncRepWaitMode\n> > > changes due to pg_ctl reload but was unable to do so. It seems like\n> > > SIGHUP only sets ConfigReloadPending = true,\n> > > which gets processed in the next loop in WalSndLoop() and that's\n> > > probably where I was getting confused.\n> >\n> > yes, SIGHUP will be processed in the caller of\n> > StandbySlotsHaveCaughtup() (see ProcessConfigFile() in\n> > WaitForStandbyConfirmation()). So we can use 'SyncRepWaitMode'\n> > directly as it is not going to change in StandbySlotsHaveCaughtup()\n> > even if user triggers the change. And thus it was okay to use it even\n> > in the local variable too similar to how SyncRepWaitForLSN() does it.\n> >\n> > > In the latest patch, I've added:\n> > >\n> > > Assert(SyncRepWaitMode >= 0);\n> > >\n> > > which should be true since we call SyncRepConfigured() at the\n> > > beginning of StandbySlotsHaveCaughtup(),\n> > > and used SyncRepWaitMode directly.\n> >\n> > Yes, it should be okay I think. As SyncRepRequested() in the beginning\n> > makes sure synchronous_commit > SYNCHRONOUS_COMMIT_LOCAL_FLUSH and\n> > thus SyncRepWaitMode should be mapped to either of\n> > WAIT_WRITE/FLUSH/APPLY etc. Will review further.\n> >\n>\n> I was wondering if we need somethign similar to SyncRepWaitForLSN() here:\n>\n> /* Cap the level for anything other than commit to remote flush only. */\n> if (commit)\n> mode = SyncRepWaitMode;\n> else\n> mode = Min(SyncRepWaitMode, SYNC_REP_WAIT_FLUSH);\n>\n> The header comment says:\n> * 'lsn' represents the LSN to wait for. 'commit' indicates whether this LSN\n> * represents a commit record. If it doesn't, then we wait only for the WAL\n> * to be flushed if synchronous_commit is set to the higher level of\n> * remote_apply, because only commit records provide apply feedback.\n>\n> If we don't do something similar, then aren't there chances that we\n> keep on waiting on the wrong lsn[mode] for the case when mode =\n> SYNC_REP_WAIT_APPLY while sync-rep-wait infrastructure is updating\n> different mode's lsn. 
Is my understanding correct?\n>\n\nI think here we always need the lsn values corresponding to\nSYNC_REP_WAIT_FLUSH as we want to ensure that the WAL has to be\nflushed on physical standby before sending it to the logical\nsubscriber. See ProcessStandbyReplyMessage() where we always use\nflushPtr to advance the physical_slot via\nPhysicalConfirmReceivedLocation().\n\nAnother question aside from the above point, what if someone has\nspecified logical subscribers in synchronous_standby_names? In the\ncase of synchronized_standby_slots, we won't proceed with such slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 16 Sep 2024 11:13:24 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Mon, Sep 16, 2024 at 11:13 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Sep 13, 2024 at 3:13 PM shveta malik <[email protected]> wrote:\n> >\n> > On Thu, Sep 12, 2024 at 3:04 PM shveta malik <[email protected]> wrote:\n> > >\n> > > On Wed, Sep 11, 2024 at 2:40 AM John H <[email protected]> wrote:\n> > > >\n> > > > Hi Shveta,\n> > > >\n> > > > On Sun, Sep 8, 2024 at 11:16 PM shveta malik <[email protected]> wrote:\n> > > >\n> > > > >\n> > > > > I was trying to have a look at the patch again, it doesn't apply on\n> > > > > the head, needs rebase.\n> > > > >\n> > > >\n> > > > Rebased with the latest changes.\n> > > >\n> > > > > Regarding 'mode = SyncRepWaitMode', FWIW, SyncRepWaitForLSN() also\n> > > > > does in a similar way. It gets mode in local var initially and uses it\n> > > > > later. See [1]. So isn't there a chance too that 'SyncRepWaitMode' can\n> > > > > change in between.\n> > > > >\n> > > > > [1]:\n> > > > > mode = SyncRepWaitMode;\n> > > > > .....\n> > > > > ....\n> > > > > if (!WalSndCtl->sync_standbys_defined ||\n> > > > > lsn <= WalSndCtl->lsn[mode])\n> > > > > {\n> > > > > LWLockRelease(SyncRepLock);\n> > > > > return;\n> > > > > }\n> > > >\n> > > > You are right, thanks for the correction. I tried reproducing with GDB\n> > > > where SyncRepWaitMode\n> > > > changes due to pg_ctl reload but was unable to do so. It seems like\n> > > > SIGHUP only sets ConfigReloadPending = true,\n> > > > which gets processed in the next loop in WalSndLoop() and that's\n> > > > probably where I was getting confused.\n> > >\n> > > yes, SIGHUP will be processed in the caller of\n> > > StandbySlotsHaveCaughtup() (see ProcessConfigFile() in\n> > > WaitForStandbyConfirmation()). So we can use 'SyncRepWaitMode'\n> > > directly as it is not going to change in StandbySlotsHaveCaughtup()\n> > > even if user triggers the change. And thus it was okay to use it even\n> > > in the local variable too similar to how SyncRepWaitForLSN() does it.\n> > >\n> > > > In the latest patch, I've added:\n> > > >\n> > > > Assert(SyncRepWaitMode >= 0);\n> > > >\n> > > > which should be true since we call SyncRepConfigured() at the\n> > > > beginning of StandbySlotsHaveCaughtup(),\n> > > > and used SyncRepWaitMode directly.\n> > >\n> > > Yes, it should be okay I think. As SyncRepRequested() in the beginning\n> > > makes sure synchronous_commit > SYNCHRONOUS_COMMIT_LOCAL_FLUSH and\n> > > thus SyncRepWaitMode should be mapped to either of\n> > > WAIT_WRITE/FLUSH/APPLY etc. Will review further.\n> > >\n> >\n> > I was wondering if we need somethign similar to SyncRepWaitForLSN() here:\n> >\n> > /* Cap the level for anything other than commit to remote flush only. 
*/\n> > if (commit)\n> > mode = SyncRepWaitMode;\n> > else\n> > mode = Min(SyncRepWaitMode, SYNC_REP_WAIT_FLUSH);\n> >\n> > The header comment says:\n> > * 'lsn' represents the LSN to wait for. 'commit' indicates whether this LSN\n> > * represents a commit record. If it doesn't, then we wait only for the WAL\n> > * to be flushed if synchronous_commit is set to the higher level of\n> > * remote_apply, because only commit records provide apply feedback.\n> >\n> > If we don't do something similar, then aren't there chances that we\n> > keep on waiting on the wrong lsn[mode] for the case when mode =\n> > SYNC_REP_WAIT_APPLY while sync-rep-wait infrastructure is updating\n> > different mode's lsn. Is my understanding correct?\n> >\n>\n> I think here we always need the lsn values corresponding to\n> SYNC_REP_WAIT_FLUSH as we want to ensure that the WAL has to be\n> flushed on physical standby before sending it to the logical\n> subscriber. See ProcessStandbyReplyMessage() where we always use\n> flushPtr to advance the physical_slot via\n> PhysicalConfirmReceivedLocation().\n\nI agree. So even if the mode is SYNC_REP_WAIT_WRITE (lower one) or\nSYNC_REP_WAIT_APPLY (higher one), we need to wait for\nlsn[SYNC_REP_WAIT_FLUSH].\n\n> Another question aside from the above point, what if someone has\n> specified logical subscribers in synchronous_standby_names? In the\n> case of synchronized_standby_slots, we won't proceed with such slots.\n>\n\nYes, it is a possibility. I have missed this point earlier. Now I\ntried a case where I give a mix of logical subscriber and physical\nstandby in 'synchronous_standby_names' on pgHead, it even takes that\n'mix' configuration and starts waiting accordingly.\n\nsynchronous_standby_names = 'FIRST 2(logicalsub_1, phy_standby_1,\nphy_standby_2)';\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 16 Sep 2024 14:55:42 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Mon, Sep 16, 2024 at 2:55 PM shveta malik <[email protected]> wrote:\n>\n> On Mon, Sep 16, 2024 at 11:13 AM Amit Kapila <[email protected]> wrote:\n> >\n>\n> > Another question aside from the above point, what if someone has\n> > specified logical subscribers in synchronous_standby_names? In the\n> > case of synchronized_standby_slots, we won't proceed with such slots.\n> >\n>\n> Yes, it is a possibility. I have missed this point earlier. Now I\n> tried a case where I give a mix of logical subscriber and physical\n> standby in 'synchronous_standby_names' on pgHead, it even takes that\n> 'mix' configuration and starts waiting accordingly.\n>\n> synchronous_standby_names = 'FIRST 2(logicalsub_1, phy_standby_1,\n> phy_standby_2)';\n>\n\nThis should not happen as we don't support syncing failover slots on\nlogical subscribers. The other point to consider here is that the user\nmay not have set 'sync_replication_slots' on all the physical standbys\nmentioned in 'synchronous_standby_names' and in that case, it doesn't\nmake sense to wait for WAL to get flushed on those standbys. 
What do\nyou think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 16 Sep 2024 16:04:16 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Mon, Sep 16, 2024 at 4:04 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Sep 16, 2024 at 2:55 PM shveta malik <[email protected]> wrote:\n> >\n> > On Mon, Sep 16, 2024 at 11:13 AM Amit Kapila <[email protected]> wrote:\n> > >\n> >\n> > > Another question aside from the above point, what if someone has\n> > > specified logical subscribers in synchronous_standby_names? In the\n> > > case of synchronized_standby_slots, we won't proceed with such slots.\n> > >\n> >\n> > Yes, it is a possibility. I have missed this point earlier. Now I\n> > tried a case where I give a mix of logical subscriber and physical\n> > standby in 'synchronous_standby_names' on pgHead, it even takes that\n> > 'mix' configuration and starts waiting accordingly.\n> >\n> > synchronous_standby_names = 'FIRST 2(logicalsub_1, phy_standby_1,\n> > phy_standby_2)';\n> >\n>\n> This should not happen as we don't support syncing failover slots on\n> logical subscribers.\n\n+1\n\n> The other point to consider here is that the user\n> may not have set 'sync_replication_slots' on all the physical standbys\n> mentioned in 'synchronous_standby_names' and in that case, it doesn't\n> make sense to wait for WAL to get flushed on those standbys. What do\n> you think?\n>\n\nYes, it is a possibility. But then it is a possibility in case of\n'synchronized_standby_slots' as well. User may always configure one of\nthe standbys in 'synchronized_standby_slots' while may not configure\nslot-sync GUCs on that standby (hot_standby_feedback,\nsync_replication_slots etc). In such a case, logical replication is\ndependent upon the concerned physical standby even though latter is\nnot syncing failover slots.\nBut there is no reliable way to detect this at the publisher side to\nstop the 'wait' for the concerned physical standby. We tried in the\npast but it was not that simple as the sync related GUCs may change\nanytime on the physical standby and thus need consistent feedback\nmechanism to detect this. IMO, we can explain the recommendations and\nrisks for 'synchronous_standby_names' in docs similar to what we do\nfor 'sync_replication_slots'. Or do you have something else in mind?\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 17 Sep 2024 09:08:17 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Tue, Sep 17, 2024 at 9:08 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Sep 16, 2024 at 4:04 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Sep 16, 2024 at 2:55 PM shveta malik <[email protected]> wrote:\n> > >\n> > > On Mon, Sep 16, 2024 at 11:13 AM Amit Kapila <[email protected]> wrote:\n> > > >\n> > >\n> > > > Another question aside from the above point, what if someone has\n> > > > specified logical subscribers in synchronous_standby_names? In the\n> > > > case of synchronized_standby_slots, we won't proceed with such slots.\n> > > >\n> > >\n> > > Yes, it is a possibility. I have missed this point earlier. 
Now I\n> > > tried a case where I give a mix of logical subscriber and physical\n> > > standby in 'synchronous_standby_names' on pgHead, it even takes that\n> > > 'mix' configuration and starts waiting accordingly.\n> > >\n> > > synchronous_standby_names = 'FIRST 2(logicalsub_1, phy_standby_1,\n> > > phy_standby_2)';\n> > >\n> >\n> > This should not happen as we don't support syncing failover slots on\n> > logical subscribers.\n>\n> +1\n>\n> > The other point to consider here is that the user\n> > may not have set 'sync_replication_slots' on all the physical standbys\n> > mentioned in 'synchronous_standby_names' and in that case, it doesn't\n> > make sense to wait for WAL to get flushed on those standbys. What do\n> > you think?\n> >\n>\n> Yes, it is a possibility. But then it is a possibility in case of\n> 'synchronized_standby_slots' as well. User may always configure one of\n> the standbys in 'synchronized_standby_slots' while may not configure\n> slot-sync GUCs on that standby (hot_standby_feedback,\n> sync_replication_slots etc). In such a case, logical replication is\n> dependent upon the concerned physical standby even though latter is\n> not syncing failover slots.\n>\n\nThe difference is that the purpose of 'synchronized_standby_slots' is\nto mention slot names for which the user expects logical walsenders to\nwait before sending the logical changes to subscribers. OTOH,\n'synchronous_standby_names' has a different purpose as well. It is not\nclear to me if the users would be interested in syncing failover slots\nto all the standbys mentioned in 'synchronous_standby_names'.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 19 Sep 2024 12:02:37 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Thu, Sep 19, 2024 at 12:02 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Sep 17, 2024 at 9:08 AM shveta malik <[email protected]> wrote:\n> >\n> > On Mon, Sep 16, 2024 at 4:04 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Sep 16, 2024 at 2:55 PM shveta malik <[email protected]> wrote:\n> > > >\n> > > > On Mon, Sep 16, 2024 at 11:13 AM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > >\n> > > > > Another question aside from the above point, what if someone has\n> > > > > specified logical subscribers in synchronous_standby_names? In the\n> > > > > case of synchronized_standby_slots, we won't proceed with such slots.\n> > > > >\n> > > >\n> > > > Yes, it is a possibility. I have missed this point earlier. Now I\n> > > > tried a case where I give a mix of logical subscriber and physical\n> > > > standby in 'synchronous_standby_names' on pgHead, it even takes that\n> > > > 'mix' configuration and starts waiting accordingly.\n> > > >\n> > > > synchronous_standby_names = 'FIRST 2(logicalsub_1, phy_standby_1,\n> > > > phy_standby_2)';\n> > > >\n> > >\n> > > This should not happen as we don't support syncing failover slots on\n> > > logical subscribers.\n> >\n> > +1\n> >\n> > > The other point to consider here is that the user\n> > > may not have set 'sync_replication_slots' on all the physical standbys\n> > > mentioned in 'synchronous_standby_names' and in that case, it doesn't\n> > > make sense to wait for WAL to get flushed on those standbys. What do\n> > > you think?\n> > >\n> >\n> > Yes, it is a possibility. But then it is a possibility in case of\n> > 'synchronized_standby_slots' as well. 
User may always configure one of\n> > the standbys in 'synchronized_standby_slots' while may not configure\n> > slot-sync GUCs on that standby (hot_standby_feedback,\n> > sync_replication_slots etc). In such a case, logical replication is\n> > dependent upon the concerned physical standby even though latter is\n> > not syncing failover slots.\n> >\n>\n> The difference is that the purpose of 'synchronized_standby_slots' is\n> to mention slot names for which the user expects logical walsenders to\n> wait before sending the logical changes to subscribers. OTOH,\n> 'synchronous_standby_names' has a different purpose as well. It is not\n> clear to me if the users would be interested in syncing failover slots\n> to all the standbys mentioned in 'synchronous_standby_names'.\n>\n\nOkay, I see your point. But not sure what could be the solution here\nexcept documenting. But let me think more.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 20 Sep 2024 15:14:14 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 16, 2024 at 2:25 AM shveta malik <[email protected]> wrote:\n\n> > >\n> > > If we don't do something similar, then aren't there chances that we\n> > > keep on waiting on the wrong lsn[mode] for the case when mode =\n> > > SYNC_REP_WAIT_APPLY while sync-rep-wait infrastructure is updating\n> > > different mode's lsn. Is my understanding correct?\n> > >\n\nLet me take a deeper look at this, I think you're right though.\n\n>\n> I agree. So even if the mode is SYNC_REP_WAIT_WRITE (lower one) or\n> SYNC_REP_WAIT_APPLY (higher one), we need to wait for\n> lsn[SYNC_REP_WAIT_FLUSH].\n>\n\nI'm not sure if I agree with that. I think the synchronous_commit mode should be\na good enough proxy for what the user wants from a durability\nperspective for their\napplication.\n\nFor an application writing to the database, if they've set mode as\nSYNC_REP_WAIT_WRITE\nas the point at which a commit is treated as durable, why do we need to\nbe concerned\nwith overriding that to SYNC_REP_WAIT_FLUSH?\n\nSimilarly, if a user has mode set to SYNC_REP_WAIT_APPLY, to me it's even more\nconfusing that there can be scenarios where the application wouldn't\nsee the data as committed\nnor would subsequent reads but a logical consumer would be able to.\nThe database should be\ntreated as the source of truth and I don't think logical consumers\nshould ever be ahead of\nwhat the database is treating as committed.\n\n\nThanks,\n\n-- \nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Fri, 20 Sep 2024 17:53:39 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "Hi,\n\nOn Fri, Sep 20, 2024 at 2:44 AM shveta malik <[email protected]> wrote:\n> > >\n> >\n> > The difference is that the purpose of 'synchronized_standby_slots' is\n> > to mention slot names for which the user expects logical walsenders to\n> > wait before sending the logical changes to subscribers. OTOH,\n> > 'synchronous_standby_names' has a different purpose as well. It is not\n> > clear to me if the users would be interested in syncing failover slots\n> > to all the standbys mentioned in 'synchronous_standby_names'.\n> >\n>\n> Okay, I see your point. But not sure what could be the solution here\n> except documenting. But let me think more.\n>\n\nThat's a great find. 
I didn't consider mixed physical and logical\nreplicas in synchronous_standby_names.\nI wonder if there are users running synchronous_standby_names with a\nmix of logical and\nphysical replicas and what the use case would be.\n\nNot sure if there's anything straight forward we could do in general\nfor slot syncing if synchronous_standby_names\nrefers to application_names of logical replicas, the feature can't be supported.\n\n-- \nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Fri, 20 Sep 2024 18:04:24 -0700", "msg_from": "John H <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" }, { "msg_contents": "On Sat, Sep 21, 2024 at 6:34 AM John H <[email protected]> wrote:\n>\n> On Fri, Sep 20, 2024 at 2:44 AM shveta malik <[email protected]> wrote:\n> > > >\n> > >\n> > > The difference is that the purpose of 'synchronized_standby_slots' is\n> > > to mention slot names for which the user expects logical walsenders to\n> > > wait before sending the logical changes to subscribers. OTOH,\n> > > 'synchronous_standby_names' has a different purpose as well. It is not\n> > > clear to me if the users would be interested in syncing failover slots\n> > > to all the standbys mentioned in 'synchronous_standby_names'.\n> > >\n> >\n> > Okay, I see your point. But not sure what could be the solution here\n> > except documenting. But let me think more.\n> >\n>\n> That's a great find. I didn't consider mixed physical and logical\n> replicas in synchronous_standby_names.\n> I wonder if there are users running synchronous_standby_names with a\n> mix of logical and\n> physical replicas and what the use case would be.\n>\n\nI am also not aware of the actual use cases of mixing physical and\nlogical synchronous standbys but as we provide that functionality, we\ncan't ignore it. BTW, I am also not sure if users would like the slots\nto be synced on all the standbys mentioned in\nsynchronous_standby_names. and even, if they are, it is better to have\nan explicit way of letting users specify it.\n\nOne possible approach is to extend the syntax of\n\"synchronized_standby_slots\" similar to \"synchronous_standby_names\" so\nthat users can directly specify slots similarly and avoid waiting for\nmore than required standbys.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 23 Sep 2024 16:19:34 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow logical failover slots to wait on synchronous replication" } ]
[ { "msg_contents": "Hi,\n\nI was reading the walsender.c fileheader comment while studying\nanother thread. I think if there is logical replication in progress\nthen the PROCSIG_WALSND_INIT_STOPPING handler will *always* switch to\na \"stopping\" state: e.g.,\n\n/*\n * Handle PROCSIG_WALSND_INIT_STOPPING signal.\n */\nvoid\nHandleWalSndInitStopping(void)\n{\nAssert(am_walsender);\n\n/*\n* If replication has not yet started, die like with SIGTERM. If\n* replication is active, only set a flag and wake up the main loop. It\n* will send any outstanding WAL, wait for it to be replicated to the\n* standby, and then exit gracefully.\n*/\nif (!replication_active)\nkill(MyProcPid, SIGTERM);\nelse\ngot_STOPPING = true;\n}\n\n~~~\n\nBut the walsender.c fileheader comment seems to be saying something\nslightly different. IIUC, some minor rewording of the comment is\nneeded so it describes the code better.\n\nHEAD\n...\n * shutdown, if logical replication is in progress all existing WAL records\n * are processed followed by a shutdown. Otherwise this causes the walsender\n * to switch to the \"stopping\" state. In this state, the walsender will reject\n * any further replication commands. The checkpointer begins the shutdown\n ...\n\nSUGGESTION\n.. shutdown. If logical replication is in progress, the walsender\nswitches to a \"stopping\" state. In this state, the walsender will\nreject any further replication commands - but all existing WAL records\nare processed - followed by a shutdown.\n\n~~~\n\nI attached a patch for the above-suggested change.\n\nThoughts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 11 Jun 2024 12:35:42 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "walsender.c fileheader comment" }, { "msg_contents": "\n\nOn 6/11/24 04:35, Peter Smith wrote:\n> Hi,\n> \n> I was reading the walsender.c fileheader comment while studying\n> another thread. I think if there is logical replication in progress\n> then the PROCSIG_WALSND_INIT_STOPPING handler will *always* switch to\n> a \"stopping\" state: e.g.,\n> \n> /*\n> * Handle PROCSIG_WALSND_INIT_STOPPING signal.\n> */\n> void\n> HandleWalSndInitStopping(void)\n> {\n> Assert(am_walsender);\n> \n> /*\n> * If replication has not yet started, die like with SIGTERM. If\n> * replication is active, only set a flag and wake up the main loop. It\n> * will send any outstanding WAL, wait for it to be replicated to the\n> * standby, and then exit gracefully.\n> */\n> if (!replication_active)\n> kill(MyProcPid, SIGTERM);\n> else\n> got_STOPPING = true;\n> }\n> \n> ~~~\n> \n> But the walsender.c fileheader comment seems to be saying something\n> slightly different. IIUC, some minor rewording of the comment is\n> needed so it describes the code better.\n> \n> HEAD\n> ...\n> * shutdown, if logical replication is in progress all existing WAL records\n> * are processed followed by a shutdown. Otherwise this causes the walsender\n> * to switch to the \"stopping\" state. In this state, the walsender will reject\n> * any further replication commands. The checkpointer begins the shutdown\n> ...\n> \n> SUGGESTION\n> .. shutdown. If logical replication is in progress, the walsender\n> switches to a \"stopping\" state. 
In this state, the walsender will\n> reject any further replication commands - but all existing WAL records\n> are processed - followed by a shutdown.\n> \n> ~~~\n>\n> I attached a patch for the above-suggested change.\n>\n> Thoughts?\n\nI did look at this, and while the explanation in the current comment may\nseem a bit confusing, I'm not sure the suggested changes improve the\nsituation very much.\n\nThis suggests the two comments somehow disagree, but it does not say in\nwhat exactly, so perhaps I just missed it :-(\n\nISTM there's a bit of confusion what is meant by \"stopping\" state - you\nseem to be interpreting it as a general concept, where the walsender is\nrequested to stop (through the signal), and starts doing stuff to exit.\nBut the comments actually talk about WalSnd->state, where \"stopping\"\nmeans it needs to be set to WALSNDSTATE_STOPPING.\n\nAnd we only ever switch to that state in two places - in WalSndPhysical\nand exec_replication_command. And that does not happen in regular\nlogical replication (which is what \"logical replication is in progress\"\nrefers to) - if you have a walsender just replicating DML, it will never\nsee the WALSNDSTATE_STOPPING state. It will simply do the cleanup while\nstill in WALSNDSTATE_STREAMING state, and then just exit.\n\nSo from this point of view, the suggestion is actually wrong.\n\nTo conclude, I think this probably makes the comments more confusing. If\nwe want to make it clearer, I'd probably start by clarifying what the\n\"stopping\" state means. Also, it's a bit surprising we may not actually\ngo through the \"stopping\" state during shutdown.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 16 Jul 2024 22:56:11 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c fileheader comment" }, { "msg_contents": "Hi, Thank you for taking the time to look at this and reply.\n\n>\n> I did look at this, and while the explanation in the current comment may\n> seem a bit confusing, I'm not sure the suggested changes improve the\n> situation very much.\n>\n> This suggests the two comments somehow disagree, but it does not say in\n> what exactly, so perhaps I just missed it :-(\n>\n> ISTM there's a bit of confusion what is meant by \"stopping\" state - you\n> seem to be interpreting it as a general concept, where the walsender is\n> requested to stop (through the signal), and starts doing stuff to exit.\n> But the comments actually talk about WalSnd->state, where \"stopping\"\n> means it needs to be set to WALSNDSTATE_STOPPING.\n\nYes, I interpreted the \"stopping\" state meaning as when the boolean\nflag 'got_STOPPING' is assigned true.\n\n>\n> And we only ever switch to that state in two places - in WalSndPhysical\n> and exec_replication_command. And that does not happen in regular\n> logical replication (which is what \"logical replication is in progress\"\n> refers to) - if you have a walsender just replicating DML, it will never\n> see the WALSNDSTATE_STOPPING state. It will simply do the cleanup while\n> still in WALSNDSTATE_STREAMING state, and then just exit.\n>\n> So from this point of view, the suggestion is actually wrong.\n\nOK.\n\n>\n> To conclude, I think this probably makes the comments more confusing. If\n> we want to make it clearer, I'd probably start by clarifying what the\n> \"stopping\" state means. 
Also, it's a bit surprising we may not actually\n> go through the \"stopping\" state during shutdown.\n>\n\nI agree. My interpretation of the (ambiguous) \"stopping\" state led me\nto believe the comment was quite wrong. So, this thread was only\nintended as a trivial comment fix in passing but clearly there is more\nto this than I anticipated. I would be happy if someone with more\nknowledge about the WALSNDSTATE_STOPPING versus got_STOPPING could\ndisambiguate the file header comment, but that's not me, so I have\nwithdrawn this from the Commitfest.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 19 Jul 2024 15:02:43 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: walsender.c fileheader comment" }, { "msg_contents": "On 7/19/24 07:02, Peter Smith wrote:\n> ...\n>>\n>> To conclude, I think this probably makes the comments more confusing. If\n>> we want to make it clearer, I'd probably start by clarifying what the\n>> \"stopping\" state means. Also, it's a bit surprising we may not actually\n>> go through the \"stopping\" state during shutdown.\n>>\n> \n> I agree. My interpretation of the (ambiguous) \"stopping\" state led me\n> to believe the comment was quite wrong. So, this thread was only\n> intended as a trivial comment fix in passing but clearly there is more\n> to this than I anticipated. I would be happy if someone with more\n> knowledge about the WALSNDSTATE_STOPPING versus got_STOPPING could\n> disambiguate the file header comment, but that's not me, so I have\n> withdrawn this from the Commitfest.\n> \n\nUnderstood. Thanks for the patch anyway, I appreciate you took the time\nto try to improve the comments!\n\nI agree the state transitions in walsender are not very clear, and the\nfact that it may shutdown without ever going through STOPPING state is\nquite confusing. That being said, I personally don't have ambition to\nimprove this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 19 Jul 2024 15:25:18 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walsender.c fileheader comment" } ]
[ { "msg_contents": "Hi,\n\nI was looking around for an exotic index type to try the experience of\nstreamifying an extension, ie out-of-core code. I am totally new to\npgvector, but since everyone keeps talking about it, I could not avoid\npicking up some basic facts in the pgconf.dev hallway track, and\nunderstood that its scans have some degree of known-order access\npredictability, and then also some degree of fuzzy-predictable\norder-not-yet-determined access too. It's also quite random in the\nI/O sense.\n\nHere's a toy to streamify the known-order part. I think for the fuzzy\npart that links those parts together, maybe there is some way to guess\nwhen it's a reasonable time to speculatively prefetch the lowest order\nstuff in the pairing heap, and then deal with it if you're wrong, but\nI didn't try that...\n\nSomeone involved in that project mentioned that it's probably not a\ngreat topic to research in practice, because real world users of HNSW\nuse fully cached ie prewarmed indexes, because the performance is so\nbad otherwise. (Though maybe that argument is a little circular...).\nSo although this patch clearly speeds up cold HSNW searches to a\ndegree controlled by effective_io_concurrency, I'll probably look for\nsomething else. Suggestions for interesting index types to look at\nstreamifying are very welcome!\n\nHmm. If that's really true about HNSW though, then there may still be\nan opportunity to do automatic memory prefetching[1]. But then in the\ncase of index building, \"stream\" is NULL in this patch anyway. It\nsurely must also be possible to find some good places to put\nprofitable explicit pg_mem_prefetch() calls given the predictability\nand the need to get only ~60ns ahead for that usage. I didn't look\ninto that because I was trying to prove things about read_stream.c,\nnot get involved in another project :-D\n\nHere ends my science experiment report, which I'm dropping here just\nin case others see useful ideas here. The main thing I learned about\nthe read stream API is that it'd be nice to be able to reset the\nstream but preserve the distance (something that came up on the\nstreaming sequential scan thread for a different reason), to deal with\ncases where look-ahead opportunities come in bursts but you want a\nlonger lived stream than I used here. That is the reason the patch\ncreates and destroys temporary streams in a loop; doh. 
It also\nprovides an interesting case study for what speculative random\nlook-ahead support might need to look like.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGKNUMnqubrrz8pRBdEM8vHeSCZcNq7iqERmkt6zPtpA3g%40mail.gmail.com\n\n=== setup ====\n\ncreate extension vector;\n\ncreate or replace function random_vector(dimensions int)\nreturns vector language sql\nbegin atomic;\n select array_agg(random())::vector\n from generate_series(1, dimensions);\nend;\n\ncreate table t (id serial, embedding vector(6));\n\ninsert into t (embedding)\nselect random_vector(6)\n from generate_series(1, 1000000);\n\nset maintenance_work_mem = '2GB';\n\ncreate index on t using hnsw(embedding vector_l2_ops);\n\n=== test of a hot search, assuming repeated ===\n\nselect embedding <-> '[0.5,0.5,0.5,0.5,0.5,0.5]'::vector\n from t\n where embedding <-> '[0.5,0.5,0.5,0.5,0.5,0.5]'::vector < 0.2\n order by 1 limit 20;\n\n=== test of a cold search, assuming empty caches ===\n\ncreate or replace function test()\nreturns void\nlanguage plpgsql as\n$$\ndeclare\n my_vec vector(6) := random_vector(6);\nbegin\n perform embedding <-> my_vec\n from t\n where embedding <-> my_vec < 0.2\n order by 1 limit 20;\nend;\n$$;\n\nselect test();\n\n(Make sure you remember to set effective_io_concurrency to an\ninteresting number if you want to generate a lot of overlapping\nfadvise calls.)", "msg_date": "Tue, 11 Jun 2024 16:53:41 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Trying out read streams in pgvector (an extension)" }, { "msg_contents": "On 11/06/2024 07:53, Thomas Munro wrote:\n> Someone involved in that project mentioned that it's probably not a\n> great topic to research in practice, because real world users of HNSW\n> use fully cached ie prewarmed indexes, because the performance is so\n> bad otherwise. (Though maybe that argument is a little circular...).\n\nI think that's true in practice for *building* an HNSW index, but faster \n*searching* when the index is not in memory seems quite useful. And of \ncourse, faster is always better, even if it's only in a non-optimal \nscenario.\n\n> So although this patch clearly speeds up cold HSNW searches to a\n> degree controlled by effective_io_concurrency, I'll probably look for\n> something else. Suggestions for interesting index types to look at\n> streamifying are very welcome!\n\nGiST and GIN?\n\n> Hmm. If that's really true about HNSW though, then there may still be\n> an opportunity to do automatic memory prefetching[1]. But then in the\n> case of index building, \"stream\" is NULL in this patch anyway. It\n> surely must also be possible to find some good places to put\n> profitable explicit pg_mem_prefetch() calls given the predictability\n> and the need to get only ~60ns ahead for that usage. I didn't look\n> into that because I was trying to prove things about read_stream.c,\n> not get involved in another project :-D\n> \n> Here ends my science experiment report, which I'm dropping here just\n> in case others see useful ideas here. The main thing I learned about\n> the read stream API is that it'd be nice to be able to reset the\n> stream but preserve the distance (something that came up on the\n> streaming sequential scan thread for a different reason), to deal with\n> cases where look-ahead opportunities come in bursts but you want a\n> longer lived stream than I used here. That is the reason the patch\n> creates and destroys temporary streams in a loop; doh. 
It also\n> provides an interesting case study for what speculative random\n> look-ahead support might need to look like.\n\nThis reminds me of a prototype I wrote earlier, see \nhttps://github.com/pgvector/pgvector/pull/386, 1st commit. It \nreorganizes HnswSearchLayer() so that it in iteration, it first collects \nall the neighbors to visit, and then visits them, somewhat similar to \nyour patch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 11 Jun 2024 10:47:31 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying out read streams in pgvector (an extension)" }, { "msg_contents": "On 6/11/24 12:53 AM, Thomas Munro wrote:\r\n> Hi,\r\n> \r\n> I was looking around for an exotic index type to try the experience of\r\n> streamifying an extension, ie out-of-core code. I am totally new to\r\n> pgvector, but since everyone keeps talking about it, I could not avoid\r\n> picking up some basic facts in the pgconf.dev hallway track, and\r\n> understood that its scans have some degree of known-order access\r\n> predictability, and then also some degree of fuzzy-predictable\r\n> order-not-yet-determined access too. It's also quite random in the\r\n> I/O sense.\r\n\r\nCool! I happened to be chatting w/Andrew about this yesterday to see if \r\nthere could be some benefits for folks who are running pgvector on PG17.\r\n\r\n> Here's a toy to streamify the known-order part. I think for the fuzzy\r\n> part that links those parts together, maybe there is some way to guess\r\n> when it's a reasonable time to speculatively prefetch the lowest order\r\n> stuff in the pairing heap, and then deal with it if you're wrong, but\r\n> I didn't try that...\r\n\r\nI would suggest submitting this at least as a draft PR to the pgvector \r\nproject[1]:\r\n\r\nhttps://github.com/pgvector/pgvector\r\n\r\n> Someone involved in that project mentioned that it's probably not a\r\n> great topic to research in practice, because real world users of HNSW\r\n> use fully cached ie prewarmed indexes, because the performance is so\r\n> bad otherwise.\r\n\r\nI don't think that was me, at least in those words (and I had noted I'd \r\nlove to chat w/you about this, but we didn't find time). Stating it \r\ndifferently, the \"ideal\" is to keep the indexes in memory, as that leads \r\nto the best performance, but reality is more complicated. These datasets \r\nare quite large (e.g. the 1536-dim vector is a 6KB payload, excluding \r\nwhat's in the index) and if you're storing the full vector in the index \r\n(there are now some quantization methods available[4]), you can easily \r\ndouble your dataset size, and quickly exceed available memory. So I \r\nthink in the real world, you're more likely to see swapping pages \r\nbetween disk and memory. Some of this was addressed in the talk @ \r\nPGConf.dev[3] (slides here[2]).\r\n\r\n> (Though maybe that argument is a little circular...).\r\n> So although this patch clearly speeds up cold HSNW searches to a\r\n> degree controlled by effective_io_concurrency, I'll probably look for\r\n> something else. Suggestions for interesting index types to look at\r\n> streamifying are very welcome!\r\n\r\nYup, so this makes sense for HNSW particularly at the higher-level \r\npages. But it may make more sense for IVFFlat, given how it clusters \r\ndata. With IVFFlat, you first find your lists/centers, and then you \r\ndetermine how you index each vector around the lists. 
When those lists \r\nare stored to disk, they're basically sequential. A lot of the struggles \r\nwith IVFFlat is both the long load from disk and ultimately some \r\ncomptuational issues for a larger set of vector comparisons (though if \r\nyou're able to build small, efficient clusters, it can be much faster \r\nthan HNSW!). HNSW behaves more like a (bear with me) typically \r\n\"tree-based\" index, where you'll have hot spots at the top, but because \r\nof the nature of vector search, the lower levels tend to be more random \r\nin access.\r\n\r\nRegardless, the part where this is interesting (at least to me) is that \r\na lot of these vectors tend to take up a full page anyway, so anything \r\nwe can do to read them faster from disk will generally get a thumbs up \r\nfrom me.\r\n\r\n> Hmm. If that's really true about HNSW though, then there may still be\r\n> an opportunity to do automatic memory prefetching[1]. But then in the\r\n> case of index building, \"stream\" is NULL in this patch anyway. It\r\n> surely must also be possible to find some good places to put\r\n> profitable explicit pg_mem_prefetch() calls given the predictability\r\n> and the need to get only ~60ns ahead for that usage. I didn't look\r\n> into that because I was trying to prove things about read_stream.c,\r\n> not get involved in another project :-D\r\n\r\nWell, as alluded to in[2], thinking about how another project uses this \r\nwill certainly help, and anything we can do to continue to speed up \r\nvector queries helps PostgreSQL ;) Some of the contributions from folks \r\nwho have focused on core have significantly helped pgvector.\r\n\r\n> Here ends my science experiment report, which I'm dropping here just\r\n> in case others see useful ideas here. The main thing I learned about\r\n> the read stream API is that it'd be nice to be able to reset the\r\n> stream but preserve the distance (something that came up on the\r\n> streaming sequential scan thread for a different reason), to deal with\r\n> cases where look-ahead opportunities come in bursts but you want a\r\n> longer lived stream than I used here. That is the reason the patch\r\n> creates and destroys temporary streams in a loop; doh. It also\r\n> provides an interesting case study for what speculative random\r\n> look-ahead support might need to look like.\r\n\r\nIf you're curious, I can fire up some of my more serious benchmarks on \r\nthis to do a before/after to see if there's anything interesting. I have \r\na few large datasets (10s of millions) of larger vectors (1536dim => 6KB \r\npayloads) that could see the net effect here.\r\n\r\n> (Make sure you remember to set effective_io_concurrency to an\r\n> interesting number if you want to generate a lot of overlapping\r\n> fadvise calls.)\r\n\r\nWhat would you recommend as an \"interesting number?\" - particularly \r\nusing the data parameters above.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://github.com/pgvector/pgvector\r\n[2] \r\nhttps://www.pgevents.ca/events/pgconfdev2024/sessions/session/1/slides/42/pgconfdev-2024-vectors.pdf\r\n[3] \r\nhttps://www.pgevents.ca/events/pgconfdev2024/schedule/session/1-vectors-how-to-better-support-a-nasty-data-type-in-postgresql/\r\n[4] https://jkatz05.com/post/postgres/pgvector-scalar-binary-quantization/", "msg_date": "Tue, 11 Jun 2024 11:37:09 -0400", "msg_from": "\"Jonathan S. 
Katz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying out read streams in pgvector (an extension)" }, { "msg_contents": "On Wed, Jun 12, 2024 at 3:37 AM Jonathan S. Katz <[email protected]> wrote:\n> If you're curious, I can fire up some of my more serious benchmarks on\n> this to do a before/after to see if there's anything interesting. I have\n> a few large datasets (10s of millions) of larger vectors (1536dim => 6KB\n> payloads) that could see the net effect here.\n>\n> > (Make sure you remember to set effective_io_concurrency to an\n> > interesting number if you want to generate a lot of overlapping\n> > fadvise calls.)\n>\n> What would you recommend as an \"interesting number?\" - particularly\n> using the data parameters above.\n\nHi Jonathan,\n\nSorry for not replying sooner (ETOOMANYPROJECTS). For HNSW, I think\nthe maximum useful effective_io_concurrency is bound by the number of\nconnections per HNSW layer (\"m\"). Here are some times I measured\nusing m=16 on two laptops:\n\n | linux (xfs) | macos (apfs)\n branch | eic | avg | speedup | stdev | avg | speedup | stdev\n--------+-----+--------+---------+--------+--------+---------+--------\n master | | 73.959 | 1.0 | 24.168 | 72.290 | 1.0 | 11.851\n stream | 0 | 70.117 | 1.1 | 36.699 | 76.289 | 1.0 | 12.742\n stream | 1 | 57.983 | 1.3 | 5.845 | 79.969 | 1.2 | 8.308\n stream | 2 | 35.629 | 2.1 | 4.088 | 49.198 | 2.0 | 7.686\n stream | 3 | 28.477 | 2.6 | 2.607 | 37.540 | 2.5 | 5.272\n stream | 4 | 26.493 | 2.8 | 3.691 | 33.014 | 2.7 | 4.444\n stream | 5 | 23.711 | 3.1 | 2.435 | 32.622 | 3.0 | 2.270\n stream | 6 | 22.885 | 3.2 | 1.908 | 31.254 | 3.2 | 4.170\n stream | 7 | 21.910 | 3.4 | 2.153 | 33.669 | 3.3 | 4.616\n stream | 8 | 20.741 | 3.6 | 1.594 | 34.182 | 3.5 | 3.819\n stream | 9 | 22.471 | 3.3 | 3.094 | 30.690 | 3.2 | 2.677\n stream | 10 | 19.895 | 3.7 | 1.695 | 32.631 | 3.6 | 4.976\n stream | 11 | 19.447 | 3.8 | 1.647 | 31.163 | 3.7 | 3.351\n stream | 12 | 18.658 | 4.0 | 1.503 | 30.817 | 3.9 | 3.538\n stream | 13 | 18.886 | 3.9 | 0.874 | 29.184 | 3.8 | 4.832\n stream | 14 | 18.667 | 4.0 | 1.692 | 28.783 | 3.9 | 3.459\n stream | 15 | 19.080 | 3.9 | 1.429 | 28.928 | 3.8 | 3.396\n stream | 16 | 18.929 | 3.9 | 3.469 | 29.282 | 3.8 | 2.868\n\nThose are millisecond times to run the test() function shown earlier,\nwith empty kernel cache and PostgreSQL cache (see below) for maximum\nphysical I/O. I ran the master test 30 times, and each\neffective_io_concurrency level 10 times, to show that the variance\ndecreases even at the default effective_io_concurency = 1, so we're\nnot only talking about the avg speed improving.\n\nThe all-cached performance also seems to improve, ~8.9ms -> ~6.9ms on\nLinux, but I can't fully explain why that is, maybe just some random\nstuff about memory layout run-to-run in my quick and dirty test or\nsomething like that, so I'm not claiming that is significant. It\ncertainly didn't get slower, anyway.\n\nI think you would get very different numbers on a high latency storage\nsystem (say, non-local cloud storage) and potentially much more\nspeedup with your large test indexes. Also my 6d random number test\nmay not be very representative and you may be able to come up with\nmuch better tests.\n\nHere's a new version with a TODO tidied up. 
I also understood that we\nneed to tweak the read_stream_reset() function, so that it doesn't\nforget its current readhead distance when it hops between HNSW nodes\n(which is something that comes up in several other potential uses\ncases including another one I am working in in core). Without this\npatch for PostgreSQL, it reads 1, 2, 4, 7 blocks (= 16 in total)\nbefore it has to take a break to hop to a new page, and then it start\nagain at 1. Oops. With this patch, it is less forgetful, and reaches\nthe full possible I/O concurrency of 16 (or whatever the minimum of\nHNSW's m parameter and effective_io_concurrency is for you).\n\nPSA two patches, one for PostgreSQL and one for pgvector.\n\nI am not actively working on this right now. If someone wants to try\nto develop it further, please feel free! I haven't looked at IVFFlat\nat all.\n\n--- function to let you do SELECT uncache('t_embedding_idx'),\n--- which is the opposite of SELECT pg_prewarm('t_embedding_idx')\n--- see also \"echo 1 | sudo tee /proc/sys/vm/drop_caches\" (Linux)\n--- \"sudo purge\" (macOS)\ncreate extension pg_buffercache;\ncreate or replace function uncache(name text) returns bool\nbegin atomic;\n select bool_and(pg_buffercache_evict(bufferid))\n from pg_buffercache where relfilenode = name::regclass;\nend;", "msg_date": "Fri, 6 Sep 2024 16:28:52 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trying out read streams in pgvector (an extension)" }, { "msg_contents": "On Fri, Sep 6, 2024 at 4:28 PM Thomas Munro <[email protected]> wrote:\n> Without this\n> patch for PostgreSQL, it reads 1, 2, 4, 7 blocks (= 16 in total)\n> before it has to take a break to hop to a new page, and then it start\n> again at 1. Oops.\n\nErm, correction: 1, 2, 4, 8, 1 (because it runs out due to m == 16 and resets).\n\n\n", "msg_date": "Fri, 6 Sep 2024 16:49:49 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trying out read streams in pgvector (an extension)" }, { "msg_contents": "There was a mistake in my query, so the macOS speedup column was wrong\n(was accidentally comparing Linux number with macOS master, sorry for\nthe noise). I also forgot to mention that you don't actually get the\nspeedup on PostgreSQL 17 on a Mac, because Peter only recently\nimplemented the needed read-ahead support for macOS in master/18, but\nit doesn't get slower. 
Here's the corrected table:\n\n | linux (xfs) | macos (apfs)\n branch | eic | avg | speedup | stdev | avg | speedup | stdev\n--------+-----+--------+---------+--------+--------+---------+--------\n master | | 73.959 | 1.0 | 24.168 | 72.290 | 1.0 | 11.851\n stream | 0 | 70.117 | 1.1 | 36.699 | 76.289 | 0.9 | 12.742\n stream | 1 | 57.983 | 1.3 | 5.845 | 79.969 | 0.9 | 8.308\n stream | 2 | 35.629 | 2.1 | 4.088 | 49.198 | 1.5 | 7.686\n stream | 3 | 28.477 | 2.6 | 2.607 | 37.540 | 1.9 | 5.272\n stream | 4 | 26.493 | 2.8 | 3.691 | 33.014 | 2.2 | 4.444\n stream | 5 | 23.711 | 3.1 | 2.435 | 32.622 | 2.2 | 2.270\n stream | 6 | 22.885 | 3.2 | 1.908 | 31.254 | 2.3 | 4.170\n stream | 7 | 21.910 | 3.4 | 2.153 | 33.669 | 2.1 | 4.616\n stream | 8 | 20.741 | 3.6 | 1.594 | 34.182 | 2.1 | 3.819\n stream | 9 | 22.471 | 3.3 | 3.094 | 30.690 | 2.4 | 2.677\n stream | 10 | 19.895 | 3.7 | 1.695 | 32.631 | 2.2 | 4.976\n stream | 11 | 19.447 | 3.8 | 1.647 | 31.163 | 2.3 | 3.351\n stream | 12 | 18.658 | 4.0 | 1.503 | 30.817 | 2.3 | 3.538\n stream | 13 | 18.886 | 3.9 | 0.874 | 29.184 | 2.5 | 4.832\n stream | 14 | 18.667 | 4.0 | 1.692 | 28.783 | 2.5 | 3.459\n stream | 15 | 19.080 | 3.9 | 1.429 | 28.928 | 2.5 | 3.396\n stream | 16 | 18.929 | 3.9 | 3.469 | 29.282 | 2.5 | 2.868\n\n\n", "msg_date": "Sat, 7 Sep 2024 10:27:27 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trying out read streams in pgvector (an extension)" } ]
[ { "msg_contents": "Hi,\n\nWhile searching the definition of auxiliary processes, I noticed that \nthe\ndescription of the WAL summarizer is incorrect. Additionally, I think \nit's\nbetter to add a description for the WAL writer similar to other \nAuxiliary\nprocesses. What do you think?\n\n# patch\n\n```\ndiff --git a/doc/src/sgml/glossary.sgml b/doc/src/sgml/glossary.sgml\nindex a81c17a869..405fe6dc8b 100644\n--- a/doc/src/sgml/glossary.sgml\n+++ b/doc/src/sgml/glossary.sgml\n@@ -164,6 +164,7 @@\n the <glossterm linkend=\"glossary-wal-archiver\">WAL \narchiver</glossterm>,\n the <glossterm linkend=\"glossary-wal-receiver\">WAL \nreceiver</glossterm>\n (but not the <glossterm linkend=\"glossary-wal-sender\">WAL \nsenders</glossterm>),\n+ the <glossterm linkend=\"glossary-wal-summarizer\">WAL \nsummarizer</glossterm>,\n and the <glossterm linkend=\"glossary-wal-writer\">WAL \nwriter</glossterm>.\n </para>\n </glossdef>\n@@ -2199,7 +2200,7 @@\n <glossterm>WAL summarizer (process)</glossterm>\n <glossdef>\n <para>\n- A special <glossterm linkend=\"glossary-backend\">backend \nprocess</glossterm>\n+ An <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary \nprocess</glossterm>\n that summarizes WAL data for\n <glossterm linkend=\"glossary-incremental-backup\">incremental \nbackups</glossterm>.\n </para>\n@@ -2213,7 +2214,8 @@\n <glossterm>WAL writer (process)</glossterm>\n <glossdef>\n <para>\n- A process that writes <glossterm linkend=\"glossary-wal-record\">WAL \nrecords</glossterm>\n+ An <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary \nprocess</glossterm>\n+ that writes <glossterm linkend=\"glossary-wal-record\">WAL \nrecords</glossterm>\n from <glossterm linkend=\"glossary-shared-memory\">shared \nmemory</glossterm> to\n <glossterm linkend=\"glossary-wal-file\">WAL files</glossterm>.\n </para>\n```\n\n# additional information\n\nAs mentioned in miscadmin.h, WAL summarizer is one of auxiliary \nprocesses.\n(postgres/src/include/miscadmin.h)\n```\n\t/*\n\t * Auxiliary processes. These have PGPROC entries, but they are not\n\t * attached to any particular database. There can be only one of each \nof\n\t * these running at a time.\n\t *\n\t * If you modify these, make sure to update NUM_AUXILIARY_PROCS and the\n\t * glossary in the docs.\n\t */\n\tB_ARCHIVER,\n\tB_BG_WRITER,\n\tB_CHECKPOINTER,\n\tB_STARTUP,\n\tB_WAL_RECEIVER,\n\tB_WAL_SUMMARIZER,\n\tB_WAL_WRITER,\n```\nIndeed, it calls InitAuxiliaryProcess() instead of InitProcess().\n\nBut, the description of WAL summarizer says that it's one of backend \nprocesses.\n> WAL summarizer (process)\n> A special backend process that summarizes WAL data for incremental \n> backups.\nhttps://www.postgresql.org/docs/devel/glossary.html\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 11 Jun 2024 18:06:11 +0900", "msg_from": "Masahiro Ikeda <[email protected]>", "msg_from_op": true, "msg_subject": "Doc: fix a description regarding WAL summarizer on glossary page" }, { "msg_contents": "On Tue, Jun 11, 2024 at 06:06:11PM +0900, Masahiro Ikeda wrote:\n> While searching the definition of auxiliary processes, I noticed that the\n> description of the WAL summarizer is incorrect. Additionally, I think it's\n> better to add a description for the WAL writer similar to other Auxiliary\n> processes. What do you think?\n\nGood catch. 
Would you like to attach a patch?\n--\nMichael", "msg_date": "Wed, 12 Jun 2024 07:52:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: fix a description regarding WAL summarizer on glossary page" }, { "msg_contents": ">> While searching the definition of auxiliary processes, I noticed that\n>> the description of the WAL summarizer is incorrect. Additionally, I\n>> think it's better to add a description for the WAL writer similar to\n>> other Auxiliary processes. What do you think?\n>\n> Good catch. Would you like to attach a patch?\n\nThanks for your response. Please take a look at the attached patch.\nI've confirmed that it passes all tests.\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Thu, 13 Jun 2024 07:24:10 +0000", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "RE: Doc: fix a description regarding WAL summarizer on glossary page" }, { "msg_contents": "On Thu, Jun 13, 2024 at 07:24:10AM +0000, [email protected] wrote:\n> Thanks for your response. Please take a look at the attached patch.\n> I've confirmed that it passes all tests.\n\nThanks for the patch. Will check and apply.\n--\nMichael", "msg_date": "Thu, 13 Jun 2024 17:01:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: fix a description regarding WAL summarizer on glossary page" }, { "msg_contents": "On 2024-06-13 17:01, Michael Paquier wrote:\n> On Thu, Jun 13, 2024 at 07:24:10AM +0000, [email protected] \n> wrote:\n>> Thanks for your response. Please take a look at the attached patch.\n>> I've confirmed that it passes all tests.\n> \n> Thanks for the patch. Will check and apply.\n\nThanks for applying.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 14 Jun 2024 12:52:59 +0900", "msg_from": "Masahiro Ikeda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Doc: fix a description regarding WAL summarizer on glossary page" } ]
[ { "msg_contents": "Hello, hackers.\n\nWhile testing my work on (1) I was struggling with addressing a strange\nissue with ON CONFLICT UPDATE and REINDEX CONCURRENTLY.\n\nAfter some time, I have realized the same issue persists on the master\nbranch as well :)\n\nI have prepared two TAP tests to reproduce the issues (2), also in\nattachment.\n\nFirst one, does the next thing:\n\n CREATE UNLOGGED TABLE tbl(i int primary key, updated_at timestamp);\n CREATE INDEX idx ON tbl(i, updated_at); -- it is not required to\nreproduce but make it to happen faster\n\nThen it runs next scripts with pgbench concurrently:\n\n 1) INSERT INTO tbl VALUES(13,now()) on conflict(i) do update set\nupdated_at = now();\n 2) INSERT INTO tbl VALUES(42,now()) on conflict(i) do update set\nupdated_at = now();\n 3) INSERT INTO tbl VALUES(69,now()) on conflict(i) do update set\nupdated_at = now();\n\nAlso, during pgbench the next command is run in the loop:\n\n REINDEX INDEX CONCURRENTLY tbl_pkey;\n\nFor some time, everything looks more-less fine (except live locks, but this\nis the issue for the next test).\nBut after some time, about a minute or so (on ~3000th REINDEX) it just\nfails like this:\n\n make -C src/test/modules/test_misc/ check\nPROVE_TESTS='t/006_*'\n\n # waiting for an about 3000, now is 2174, seconds passed :\n84\n # waiting for an about 3000, now is 2175, seconds passed :\n84\n # waiting for an about 3000, now is 2176, seconds passed :\n84\n # waiting for an about 3000, now is 2177, seconds passed :\n84\n # waiting for an about 3000, now is 2178, seconds passed :\n84\n # waiting for an about 3000, now is 2179, seconds passed :\n84\n # waiting for an about 3000, now is 2180, seconds passed :\n84\n # waiting for an about 3000, now is 2181, seconds passed :\n84\n # waiting for an about 3000, now is 2182, seconds passed :\n84\n # waiting for an about 3000, now is 2183, seconds passed :\n84\n # waiting for an about 3000, now is 2184, seconds passed :\n84\n\n # Failed test 'concurrent INSERTs, UPDATES and RC status\n(got 2 vs expected 0)'\n # at t/006_concurrently_unique_fail.pl line 69.\n\n # Failed test 'concurrent INSERTs, UPDATES and RC stderr\n/(?^:^$)/'\n # at t/006_concurrently_unique_fail.pl line 69.\n # 'pgbench: error: pgbench: error: client\n4 script 0 aborted in command 1 query 0: ERROR: duplicate key value\nviolates unique constraint \"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(13) already exists.\n # client 15 script 0 aborted in command 1 query 0: ERROR:\n duplicate key value violates unique constraint \"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(13) already exists.\n # pgbench: error: client 9 script 0 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(13) already exists.\n # pgbench: error: client 11 script 0 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(13) already exists.\n # pgbench: error: client 8 script 0 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(13) already exists.\n # pgbench: error: client 3 script 2 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(69) already exists.\n # pgbench: error: client 2 script 2 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(69) already exists.\n # pgbench: error: client 12 
script 0 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccold\"\n # DETAIL: Key (i)=(13) already exists.\n # pgbench: error: client 10 script 0 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccold\"\n # DETAIL: Key (i)=(13) already exists.\n # pgbench: error: client 18 script 2 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(69) already exists.\n # pgbench: error: pgbench:client 14 script 0 aborted in\ncommand 1 query 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey\"\n # DETAIL: Key (i)=(13) already exists.\n # error: client 1 script 0 aborted in command 1 query 0:\nERROR: duplicate key value violates unique constraint \"tbl_pkey\"\n # DETAIL: Key (i)=(13) already exists.\n # pgbench: error: client 0 script 2 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint \"tbl_pkey\"\n # DETAIL: Key (i)=(69) already exists.\n # pgbench: error: client 13 script 1 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(42) already exists.\n # pgbench: error: client 16 script 1 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(42) already exists.\n # pgbench: error: client 5 script 1 aborted in command 1\nquery 0: ERROR: duplicate key value violates unique constraint\n\"tbl_pkey_ccnew\"\n # DETAIL: Key (i)=(42) already exists.\n # pgbench: error: Run was aborted; the above results are\nincomplete.\n # '\n\nProbably something wrong with arbiter index selection for different\nbackends. I am afraid it could be a symptom of a more serious issue.\n\n-------------------------------------\n\nThe second test shows an interesting live lock state in the similar\nsituation.\n\n CREATE UNLOGGED TABLE tbl(i int primary key, n int);\n CREATE INDEX idx ON tbl(i, n);\n INSERT INTO tbl VALUES(13,1);\n\npgbench concurrently runs single command\n\n INSERT INTO tbl VALUES(13,1) on conflict(i) do update set n = tbl.n +\nEXCLUDED.n;\n\nAnd also reindexing in the loop\n\n REINDEX INDEX CONCURRENTLY tbl_pkey;\n\nAfter the start, a little bit strange issue happens\n\n make -C src/test/modules/test_misc/ check PROVE_TESTS='t/007_*'\n\n # going to start reindex, num tuples in table is 1\n # reindex 0 done in 0.00704598426818848 seconds, num inserted\nduring reindex tuples is 0 speed is 0 per second\n # going to start reindex, num tuples in table is 7\n # reindex 1 done in 0.453176021575928 seconds, num inserted during\nreindex tuples is 632 speed is 1394.60158947115 per second\n # going to start reindex, num tuples in table is 647\n # current n is 808, 808 per one second\n # current n is 808, 0 per one second\n # current n is 808, 0 per one second\n # current n is 808, 0 per one second\n # current n is 808, 0 per one second\n # current n is 811, 3 per one second\n # current n is 917, 106 per one second\n # current n is 1024, 107 per one second\n # reindex 2 done in 8.4104950428009 seconds, num inserted during\nreindex tuples is 467 speed is 55.5258635340064 per second\n # going to start reindex, num tuples in table is 1136\n # current n is 1257, 233 per one second\n # current n is 1257, 0 per one second\n # current n is 1257, 0 per one second\n # current n is 1257, 0 per one second\n # current n is 1257, 0 per one second\n # current n is 1490, 233 per one 
second\n # reindex 3 done in 5.21368479728699 seconds, num inserted during\nreindex tuples is 411 speed is 78.8310026363446 per second\n # going to start reindex, num tuples in table is 1566\n\nIn some moments, all CPUs all hot but 30 connections are unable to do any\nupsert. I think it may be somehow caused by two arbiter indexes (old and\nnew reindexed one).\n\nBest regards,\nMikhail.\n\n[1]:\nhttps://www.postgresql.org/message-id/flat/CANtu0ogBOtd9ravu1CUbuZWgq6qvn1rny38PGKDPk9zzQPH8_A%40mail.gmail.com#d4be02ff70f3002522f9fadbd165d631\n[2]:\nhttps://github.com/michail-nikolaev/postgres/commit/9446f944b415306d9e5d5ab98f69938d8f5ee87f", "msg_date": "Tue, 11 Jun 2024 13:00:00 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "On Tue, Jun 11, 2024 at 01:00:00PM +0200, Michail Nikolaev wrote:\n> Probably something wrong with arbiter index selection for different\n> backends. I am afraid it could be a symptom of a more serious issue.\n\nON CONFLICT selects an index that may be rebuilt in parallel of the\nREINDEX happening, and its contents may be incomplete. Isn't the\nissue that we may select as arbiter indexes stuff that's !indisvalid?\nUsing the ccnew or ccold indexes would not be correct for the conflict\nresolutions.\n--\nMichael", "msg_date": "Fri, 14 Jun 2024 08:18:15 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "Hello, Michael.\n\n> Isn't the issue that we may select as arbiter indexes stuff that's\n!indisvalid?\nAs far as I can see (1) !indisvalid indexes are filtered out.\n\nBut... It looks like this choice is not locked in any way (2), so\nindex_concurrently_swap or index_concurrently_set_dead can change this\nindex after the decision is made, even despite WaitForLockersMultiple (3).\nIn some cases, it may cause a data loss...\nBut I was unable to reproduce that using some random usleep(), however -\nmaybe it is a wrong assumption.\n\n[1]:\nhttps://github.com/postgres/postgres/blob/915de706d28c433283e9dc63701e8f978488a2b9/src/backend/optimizer/util/plancat.c#L804\n\n[2]:\nhttps://github.com/postgres/postgres/blob/915de706d28c433283e9dc63701e8f978488a2b9/src/backend/optimizer/util/plancat.c#L924-L928\n[3]:\nhttps://github.com/postgres/postgres/blob/8aee330af55d8a759b2b73f5a771d9d34a7b887f/src/backend/commands/indexcmds.c#L4153\n\nHello, Michael.> Isn't the issue that we may select as arbiter indexes stuff that's !indisvalid?As far as I can see (1) !indisvalid indexes are filtered out.But... It looks like this choice is not locked in any way (2), so index_concurrently_swap or index_concurrently_set_dead can change this index after the decision is made, even despite WaitForLockersMultiple (3). 
In some cases, it may cause a data loss...But I was unable to reproduce that using some random usleep(), however - maybe it is a wrong assumption.[1]: https://github.com/postgres/postgres/blob/915de706d28c433283e9dc63701e8f978488a2b9/src/backend/optimizer/util/plancat.c#L804 [2]: https://github.com/postgres/postgres/blob/915de706d28c433283e9dc63701e8f978488a2b9/src/backend/optimizer/util/plancat.c#L924-L928[3]: https://github.com/postgres/postgres/blob/8aee330af55d8a759b2b73f5a771d9d34a7b887f/src/backend/commands/indexcmds.c#L4153", "msg_date": "Fri, 14 Jun 2024 12:06:13 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "Hello.\n\n> But I was unable to reproduce that using some random usleep(), however -\nmaybe it is a wrong assumption.\nIt seems like the assumption is correct - we may use an invalid index as\narbiter due to race condition.\n\nThe attached patch adds a check for that case, and now the test fails like\nthis:\n\n # pgbench: error: client 16 script 1 aborted in command 1 query 0:\nERROR: duplicate key value violates unique constraint \"tbl_pkey_ccold\"\n # DETAIL: Key (i)=(42) already exists.\n # pgbench: error: client 9 script 1 aborted in command 1 query 0:\nERROR: ON CONFLICT does not support invalid indexes as arbiters\n # pgbench: error: client 0 script 2 aborted in command 1 query 0:\nERROR: duplicate key value violates unique constraint \"tbl_pkey\"\n # DETAIL: Key (i)=(69) already exists.\n # pgbench: error: client 7 script 0 aborted in command 1 query 0:\nERROR: ON CONFLICT does not support invalid indexes as arbiters\n # pgbench: error: client 10 script 0 aborted in command 1 query 0:\nERROR: ON CONFLICT does not support invalid indexes as arbiters\n # pgbench: error: client 11 script 0 aborted in command 1 query 0:\nERROR: ON CONFLICT does not support invalid indexes as arbiters\n\nI think It is even possible to see !alive index in the same situation (it\nis worse case), but I was unable to reproduce it so far.\n\nBest regards,\nMikhail.", "msg_date": "Fri, 14 Jun 2024 15:30:55 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "Hello, everyone.\n\n> I think It is even possible to see !alive index in the same situation (it\nis worse case), but I was unable to reproduce it so far.\nFortunately, it is not possible.\n\nSo, seems like I have found the source of the problem:\n\n1) infer_arbiter_indexes calls RelationGetIndexList to get the list of\ncandidates.\nIt does no lock selected indexes in any additional way which\nprevents index_concurrently_swap changing them (set and clear validity).\n\n RelationGetIndexList relcache.c:4857\n infer_arbiter_indexes plancat.c:780\n make_modifytable createplan.c:7097 ----------\nnode->arbiterIndexes = infer_arbiter_indexes(root);\n create_modifytable_plan createplan.c:2826\n create_plan_recurse createplan.c:532\n create_plan createplan.c:349\n standard_planner planner.c:421\n planner planner.c:282\n pg_plan_query postgres.c:904\n pg_plan_queries postgres.c:996\n exec_simple_query postgres.c:1193\n\n2) other backend marks some index as invalid and commits\n\n index_concurrently_swap index.c:1600\n ReindexRelationConcurrently indexcmds.c:4115\n ReindexIndex indexcmds.c:2814\n ExecReindex indexcmds.c:2743\n ProcessUtilitySlow utility.c:1567\n standard_ProcessUtility 
utility.c:1067\n ProcessUtility utility.c:523\n PortalRunUtility pquery.c:1158\n PortalRunMulti pquery.c:1315\n PortalRun pquery.c:791\n exec_simple_query postgres.c:1274\n\n3) first backend invalidates catalog snapshot because transactional snapshot\n\n InvalidateCatalogSnapshot snapmgr.c:426\n GetTransactionSnapshot snapmgr.c:278\n PortalRunMulti pquery.c:1244\n PortalRun pquery.c:791\n exec_simple_query postgres.c:1274\n\n4) first backend copies indexes selected using previous catalog snapshot\n\n ExecInitModifyTable nodeModifyTable.c:4499 --------\nresultRelInfo->ri_onConflictArbiterIndexes = node->arbiterIndexes;\n ExecInitNode execProcnode.c:177\n InitPlan execMain.c:966\n standard_ExecutorStart execMain.c:261\n ExecutorStart execMain.c:137\n ProcessQuery pquery.c:155\n PortalRunMulti pquery.c:1277\n PortalRun pquery.c:791\n exec_simple_query postgres.c:1274\n\n5) then reads indexes using new fresh snapshot\n\n RelationGetIndexList relcache.c:4816\n ExecOpenIndices execIndexing.c:175\n ExecInsert nodeModifyTable.c:792 -------------\nExecOpenIndices(resultRelInfo, onconflict != ONCONFLICT_NONE);\n ExecModifyTable nodeModifyTable.c:4059\n ExecProcNodeFirst execProcnode.c:464\n ExecProcNode executor.h:274\n ExecutePlan execMain.c:1646\n standard_ExecutorRun execMain.c:363\n ExecutorRun execMain.c:304\n ProcessQuery pquery.c:160\n PortalRunMulti pquery.c:1277\n PortalRun pquery.c:791\n exec_simple_query postgres.c:1274\n\n5) and uses arbiter selected with stale snapshot with new index view\n(marked as invalid)\n\n ExecInsert nodeModifyTable.c:1016 -------------- arbiterIndexes\n= resultRelInfo->ri_onConflictArbiterIndexes;\n ............\n\n ExecInsert nodeModifyTable.c:1048 ---------------if\n(!ExecCheckIndexConstraints(resultRelInfo, slot, estate, conflictTid,\narbiterIndexes))\n ExecModifyTable nodeModifyTable.c:4059\n ExecProcNodeFirst execProcnode.c:464\n ExecProcNode executor.h:274\n ExecutePlan execMain.c:1646\n standard_ExecutorRun execMain.c:363\n ExecutorRun execMain.c:304\n ProcessQuery pquery.c:160\n PortalRunMulti pquery.c:1277\n PortalRun pquery.c:791\n exec_simple_query postgres.c:1274\n\n\nI have attached an updated test for the issue (it fails on assert quickly\nand uses only 2 backends).\nThe same issue may happen in case of CREATE/DROP INDEX CONCURRENTLY as well.\n\nThe simplest possible fix is to use ShareLock\ninstead ShareUpdateExclusiveLock in the index_concurrently_swap\n\n oldClassRel = relation_open(oldIndexId, ShareLock);\n newClassRel = relation_open(newIndexId, ShareLock);\n\nBut this is not a \"concurrent\" way. But such update should be fast enough\nas far as I understand.\n\nBest regards,\nMikhail.", "msg_date": "Mon, 17 Jun 2024 19:00:51 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "On Mon, Jun 17, 2024 at 07:00:51PM +0200, Michail Nikolaev wrote:\n> The simplest possible fix is to use ShareLock\n> instead ShareUpdateExclusiveLock in the index_concurrently_swap\n> \n> oldClassRel = relation_open(oldIndexId, ShareLock);\n> newClassRel = relation_open(newIndexId, ShareLock);\n> \n> But this is not a \"concurrent\" way. But such update should be fast enough\n> as far as I understand.\n\nNope, that won't fly far. 
We should not use a ShareLock in this step\nor we are going to conflict with row exclusive locks, impacting all\nworkloads when doing a REINDEX CONCURRENTLY.\n\nThat may be a long shot, but the issue is that we do the swap of all\nthe indexes in a single transaction, but do not wait for them to\ncomplete when committing the swap's transaction in phase 4. Your\nreport is telling us that we really have a good reason to wait for all\nthe transactions that may use these indexes to finish. One thing\ncoming on top of my mind to keep things concurrent-safe while allowing\na clean use of the arbiter indexes would be to stick a\nWaitForLockersMultiple() on AccessExclusiveLock just *before* the\ntransaction commit of phase 4, say, lacking the progress report part:\n--- a/src/backend/commands/indexcmds.c\n+++ b/src/backend/commands/indexcmds.c\n@@ -4131,6 +4131,8 @@ ReindexRelationConcurrently(const ReindexStmt *stmt, Oid relationOid, const Rein\n \t\tCommandCounterIncrement();\n \t}\n \n+\tWaitForLockersMultiple(lockTags, AccessExclusiveLock, true);\n+\n \t/* Commit this transaction and make index swaps visible */\n \tCommitTransactionCommand();\n \tStartTransactionCommand();\n\nThis is a non-fresh Friday-afternoon idea, but it would make sure that\nwe don't have any transactions using the indexes switched to _ccold\nwith indisvalid that are waiting for a drop in phase 5. Your tests\nseem to pass with that, and that keeps the operation intact\nconcurrent-wise (I'm really wishing for isolation tests with injection\npoints just now, because I could use them here).\n\n> +\t\tAssert(indexRelation->rd_index->indislive);\n> +\t\tAssert(indexRelation->rd_index->indisvalid);\n> +\n> \t\tif (!indexRelation->rd_index->indimmediate)\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n\nThis kind of validation check may be a good idea in the long term.\nThat seems incredibly useful to me if we were to add more code paths\nthat do concurrent index rebuilds, to make sure that we don't rely on\nan index we should not use at all. That's a HEAD-only thing IMO,\nthough.\n--\nMichael", "msg_date": "Fri, 21 Jun 2024 15:53:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "Hello, Michael!\n\n> This is a non-fresh Friday-afternoon idea, but it would make sure that\n> we don't have any transactions using the indexes switched to _ccold\n> with indisvalid that are waiting for a drop in phase 5. 
Your tests\n> seem to pass with that, and that keeps the operation intact\n> concurrent-wise (I'm really wishing for isolation tests with injection\n> points just now, because I could use them here).\n\nYes, I also have tried that approach, but it doesn't work, unfortunately.\nYou may fail test increasing number of connections:\n\n'--no-vacuum --client=10 -j 2 --transactions=1000',\n\nThe source of the issue is not the swap of the indexes (and not related to\nREINDEX CONCURRENTLY only), but the fact that indexes are fetched once\nduring planning (to find the arbiter), but then later reread with a new\ncatalog snapshot for the the actual execution.\n\nSo, other possible fixes I see:\n* fallback to replanning in case we see something changed during the\nexecution\n* select arbiter indexes during actual execution\n\n> That's a HEAD-only thing IMO,\n> though.\nDo you mean that it needs to be moved to a separate patch?\n\nBest regards,\nMikhail.\n\nHello, Michael!> This is a non-fresh Friday-afternoon idea, but it would make sure that> we don't have any transactions using the indexes switched to _ccold> with indisvalid that are waiting for a drop in phase 5.  Your tests> seem to pass with that, and that keeps the operation intact> concurrent-wise (I'm really wishing for isolation tests with injection> points just now, because I could use them here).Yes, I also have tried that approach, but it doesn't work, unfortunately.You may fail test increasing number of connections:'--no-vacuum --client=10 -j 2 --transactions=1000',The source of the issue is not the swap of the indexes (and not related to REINDEX CONCURRENTLY only), but the fact that indexes are fetched once during planning (to find the arbiter), but then later reread with a new catalog snapshot for the the actual execution.So, other possible fixes I see:* fallback to replanning in case we see something changed during the execution* select arbiter indexes during actual execution> That's a HEAD-only thing IMO,> though.Do you mean that it needs to be moved to a separate patch?Best regards,Mikhail.", "msg_date": "Fri, 21 Jun 2024 11:31:21 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "On Mon, Jun 17, 2024 at 07:00:51PM +0200, Michail Nikolaev wrote:\n> The same issue may happen in case of CREATE/DROP INDEX CONCURRENTLY as well.\n\nWhile looking at all that, I've been also curious about this specific\npoint, and it is indeed possible to finish in a state where a\nduplicate key would be found in one of indexes selected by the\nexecutor during an INSERT ON CONFLICT while a concurrent set of CICs\nand DICs are run, so you don't really need a REINDEX. 
See for example\nthe attached test.\n--\nMichael", "msg_date": "Mon, 24 Jun 2024 16:06:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "On Fri, Jun 21, 2024 at 11:31:21AM +0200, Michail Nikolaev wrote:\n> Yes, I also have tried that approach, but it doesn't work, unfortunately.\n> You may fail test increasing number of connections:\n> \n> '--no-vacuum --client=10 -j 2 --transactions=1000',\n> \n> The source of the issue is not the swap of the indexes (and not related to\n> REINDEX CONCURRENTLY only), but the fact that indexes are fetched once\n> during planning (to find the arbiter), but then later reread with a new\n> catalog snapshot for the the actual execution.\n\nWhen I first saw this report, my main worry was that I have somewhat\nmanaged to break the state of the indexes leading to data corruption\nbecause of an incorrect step in the concurrent operations. However,\nas far as I can see this is not the case, as an effect of two\nproperties we rely on for concurrent index operations, that hold in\nthe executor and the planner. Simply put:\n- The planner ignores indexes with !indisvalid.\n- The executor ignores indexes with !indislive.\n\nThe critical point is that we wait in DROP INDEX CONC and REINDEX CONC\nfor any transactions still using an index that's waiting to be marked\nas !indislive, because such indexes *must* not be used in the\nexecutor.\n\n> So, other possible fixes I see:\n> * fallback to replanning in case we see something changed during the\n> execution\n> * select arbiter indexes during actual execution\n\nThese two properties make ON CONFLICT react the way it should\ndepending on the state of the indexes selected by the planner based on\nthe query clauses, with changes reflecting when executing, with two\npatterns involved:\n- An index may be created in a concurrent operation after the planner\nhas selected the arbiter indexes (the index may be defined, still not\nvalid yet, or just created after), then the query execution would need\nto handle the extra index created available at execution, with a\nfailure on a ccnew index.\n- An index may be selected at planning phase, then a different index\ncould be used by a constraint once both indexes swap, with a failure\non a ccold index.\n\nAs far as I can see, it depends on what kind of query semantics and\nthe amount of transparency you are looking for here in your\napplication. An error in the query itself can also be defined as\nuseful so as your application is aware of what happens as an effect of\nthe concurrent index build (reindex or CIC/DIC), and it is not really\nclear to me why silently falling back to a re-selection of the arbiter\nindexes would be always better. 
Replanning could be actually\ndangerous if a workload is heavily concurrently REINDEX'd, as we could\nfall into a trap where a query can never decide which index to use.\nI'm not saying that it cannot be improved, but it's not completely\nclear to me what query semantics are the best for all users because\nthe behavior of HEAD and your suggestions have merits and demerits.\nAnything we could come up with would be an improvement anyway, AFAIU.\n\n>> That's a HEAD-only thing IMO,\n>> though.\n>\n> Do you mean that it needs to be moved to a separate patch?\n\nIt should, but I'm wondering if that's necessary for two reasons.\n\nFirst, a check on indisvalid would be incorrect, because indexes\nmarked as !indisvalid && indislive mean that there is a concurrent\noperation happening, and that this concurrent operation is waiting for\nall transactions working with a lock on this index to finish before\nflipping the live flag and make this index invalid for decisions taken\nin the executor, like HOT updates, etc.\n\nA check on indislive may be an idea, still I'm slightly biased\nregarding its additional value because any indexes opened for a\nrelation are fetched from the relcache with RelationGetIndexList()\nexplaining why indislive indexes cannot be fetched, and we rely on\nthat in the executor for the indexes opened by a relation.\n--\nMichael", "msg_date": "Tue, 25 Jun 2024 10:14:16 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "Hello, Michael!\n\n> As far as I can see, it depends on what kind of query semantics and\n> the amount of transparency you are looking for here in your\n> application. An error in the query itself can also be defined as\n> useful so as your application is aware of what happens as an effect of\n> the concurrent index build (reindex or CIC/DIC), and it is not really\n> clear to me why silently falling back to a re-selection of the arbiter\n> indexes would be always better.\n\n From my point of view, INSERT ON CONFLICT UPDATE should never fail with\n\"ERROR: duplicate key value violates unique constraint\" because the main\nidea of upsert is to avoid such situations.\nSo, it is expected by majority and, probably, is even documented.\n\nOn the other side, REINDEX CONCURRENTLY should not cause any queries to\nfail accidentally without any clear reason.\n\nAlso, as you can see from the topic starter letter, we could see errors\nlike this:\n\n* ERROR: duplicate key value violates unique constraint \"tbl_pkey\"\n* ERROR: duplicate key value violates unique constraint \"tbl_pkey_ccnew\"\n* ERROR: duplicate key value violates unique constraint \"tbl_pkey_ccold\"\n\nSo, the first error message does not provide any clue for the developer to\nunderstand what happened.\n\n> - The planner ignores indexes with !indisvalid.\n> - The executor ignores indexes with !indislive.\n\nYes, and it feels like we need one more flag here to distinguish\n!indisvalid indexes which are going to become valid and which are going to\nbecome !indislive.\n\nFor example, let name it as indiscorrect (it means it contains all the\ndata). In such case, we may use the following logic:\n\n1) !indisvalid && !indiscorrect - index in validation phase probably, do\nnot use it as arbiter because it does not contain all the data yet\n2) !indisvalid && indiscorrect - index will be dropped most likely. 
Do not\nplan new queries with it, but it still may be used by other queries\n(including upserts). So, we still need to include it to the arbiters.\n\nAnd, during the reindex concurrently:\n\n1) begin; mark new index as indisvalid and indiscorrect; mark old one as\n!indisvalid but still indiscorrect. invalidate relcache; commit;\n\nCurrently, some queries are still using the old one as arbiter, some\nqueries use both.\n\n2) WaitForLockersMultiple\n\nNow all queries use both indexes as arbiter.\n\n3) begin; mark old index as !indiscorrect, additionally to !indisvalid;\ninvalidate cache; commit;\n\nNow, some queries use only the new index, both some still use both.\n\n4) WaitForLockersMultiple;\n\nNow, all queries use only the new index - we are safe to mark the old\none it as !indislive.\n\n> It should, but I'm wondering if that's necessary for two reasons.\nIn that case, it becomes:\n\n Assert(indexRelation->rd_index->indiscorrect);\n Assert(indexRelation->rd_index->indislive);\n\nand it is always the valid check.\n\nBest regards,\nMikhail.\n\nHello, Michael!> As far as I can see, it depends on what kind of query semantics and> the amount of transparency you are looking for here in your> application.  An error in the query itself can also be defined as> useful so as your application is aware of what happens as an effect of> the concurrent index build (reindex or CIC/DIC), and it is not really> clear to me why silently falling back to a re-selection of the arbiter> indexes would be always better.From my point of view, INSERT ON CONFLICT UPDATE should never fail with \"ERROR:  duplicate key value violates unique constraint\" because the main idea of upsert is to avoid such situations.So, it is expected by majority and, probably, is even documented.On the other side, REINDEX CONCURRENTLY should not cause any queries to fail accidentally without any clear reason.Also, as you can see from the topic starter letter, we could see errors like this:* ERROR:  duplicate key value violates unique constraint \"tbl_pkey\"* ERROR:  duplicate key value violates unique constraint \"tbl_pkey_ccnew\"* ERROR:  duplicate key value violates unique constraint \"tbl_pkey_ccold\"So, the first error message does not provide any clue for the developer to understand what happened.> - The planner ignores indexes with !indisvalid.> - The executor ignores indexes with !indislive.Yes, and it feels like we need one more flag here to distinguish !indisvalid indexes which are going to become valid and which are going to become !indislive.For example, let name it as indiscorrect (it means it contains all the data). In such case, we may use the following logic:1) !indisvalid && !indiscorrect - index in validation phase probably, do not use it as arbiter because it does not contain all the data yet2) !indisvalid && indiscorrect - index will be dropped most likely. Do not plan new queries with it, but it still may be used by other queries (including upserts). So, we still need to include it to the arbiters.And, during the reindex concurrently:1) begin; mark new index as indisvalid and indiscorrect; mark old one as !indisvalid but still indiscorrect. 
invalidate relcache; commit;Currently, some queries are still using the old one as arbiter, some queries use both.2) WaitForLockersMultipleNow all queries use both indexes as arbiter.3) begin; mark old index as !indiscorrect, additionally to !indisvalid; invalidate cache; commit;Now, some queries use only the new index, both some still use both.4)  WaitForLockersMultiple;Now, all queries use only the new index - we are safe to mark the old one it as !indislive.> It should, but I'm wondering if that's necessary for two reasons.In that case, it becomes:    Assert(indexRelation->rd_index->indiscorrect);    Assert(indexRelation->rd_index->indislive);and it is always the valid check.Best regards,Mikhail.", "msg_date": "Tue, 25 Jun 2024 13:47:00 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "Hello, Noah!\n\nAnswering\nhttps://www.postgresql.org/message-id/flat/20240612194857.1c.nmisch%40google.com#684361ba86bad11f4e9fd84dfa8e0084\n\n> On your other thread, it would be useful to see stack traces from the\nhigh-CPU\n> processes once the live lock has ended all query completion.\n\nI was wrong, it is not a livelock, it is a deadlock, actually. I missed it\nbecause pgbench retries deadlocks automatically.\n\nIt looks like this:\n\n2024-06-25 17:16:17.447 CEST [711743] 007_concurrently_unique_stuck.pl\nERROR: deadlock detected\n2024-06-25 17:16:17.447 CEST [711743] 007_concurrently_unique_stuck.pl\nDETAIL: Process 711743 waits for ShareLock on transaction 3633; blocked by\nprocess 711749.\nProcess 711749 waits for ShareLock on speculative token 2 of transaction\n3622; blocked by process 711743.\nProcess 711743: INSERT INTO tbl VALUES(13,89318) on conflict(i) do update\nset n = tbl.n + 1 RETURNING n\nProcess 711749: INSERT INTO tbl VALUES(13,41011) on conflict(i) do update\nset n = tbl.n + 1 RETURNING n\n2024-06-25 17:16:17.447 CEST [711743] 007_concurrently_unique_stuck.pl\nHINT: See server log for query details.\n2024-06-25 17:16:17.447 CEST [711743] 007_concurrently_unique_stuck.pl\nCONTEXT: while inserting index tuple (15,145) in relation \"tbl_pkey_ccnew\"\n2024-06-25 17:16:17.447 CEST [711743] 007_concurrently_unique_stuck.pl\nSTATEMENT: INSERT INTO tbl VALUES(13,89318) on conflict(i) do update set n\n= tbl.n + 1 RETURNING n\n\nStacktraces:\n\n-------------------------\n\nINSERT INTO tbl VALUES(13,41011) on conflict(i) do update set n = tbl.n + 1\nRETURNING n\n\n#0 in epoll_wait (epfd=5, events=0x1203328, maxevents=1, timeout=-1) at\n../sysdeps/unix/sysv/linux/epoll_wait.c:30\n#1 in WaitEventSetWaitBlock (set=0x12032c0, cur_timeout=-1,\noccurred_events=0x7ffcc4e38e30, nevents=1) at latch.c:1570\n#2 in WaitEventSetWait (set=0x12032c0, timeout=-1,\noccurred_events=0x7ffcc4e38e30, nevents=1, wait_event_info=50331655) at\nlatch.c:1516\n#3 in WaitLatch (latch=0x7acb2a2f5f14, wakeEvents=33, timeout=0,\nwait_event_info=50331655) at latch.c:538\n#4 in ProcSleep (locallock=0x122f778, lockMethodTable=0x1037340\n<default_lockmethod>, dontWait=false) at proc.c:1355\n#5 in WaitOnLock (locallock=0x122f778, owner=0x1247408, dontWait=false) at\nlock.c:1833\n#6 in LockAcquireExtended (locktag=0x7ffcc4e39220, lockmode=5,\nsessionLock=false, dontWait=false, reportMemoryError=true, locallockp=0x0)\nat lock.c:1046\n#7 in LockAcquire (locktag=0x7ffcc4e39220, lockmode=5, sessionLock=false,\ndontWait=false) at lock.c:739\n#8 in SpeculativeInsertionWait (xid=3622, token=2) at 
lmgr.c:833\n#9 in _bt_doinsert (rel=0x7acb2dbb12e8, itup=0x12f1308,\ncheckUnique=UNIQUE_CHECK_YES, indexUnchanged=true, heapRel=0x7acb2dbb0f08)\nat nbtinsert.c:225\n#10 in btinsert (rel=0x7acb2dbb12e8, values=0x7ffcc4e39440,\nisnull=0x7ffcc4e39420, ht_ctid=0x12ebe20, heapRel=0x7acb2dbb0f08,\ncheckUnique=UNIQUE_CHECK_YES, indexUnchanged=true, indexInfo=0x12f08a8) at\nnbtree.c:195\n#11 in index_insert (indexRelation=0x7acb2dbb12e8, values=0x7ffcc4e39440,\nisnull=0x7ffcc4e39420, heap_t_ctid=0x12ebe20, heapRelation=0x7acb2dbb0f08,\ncheckUnique=UNIQUE_CHECK_YES, indexUnchanged=true, indexInfo=0x12f08a8) at\nindexam.c:230\n#12 in ExecInsertIndexTuples (resultRelInfo=0x12eaa00, slot=0x12ebdf0,\nestate=0x12ea560, update=true, noDupErr=false, specConflict=0x0,\narbiterIndexes=0x0, onlySummarizing=false) at execIndexing.c:438\n#13 in ExecUpdateEpilogue (context=0x7ffcc4e39870,\nupdateCxt=0x7ffcc4e3962c, resultRelInfo=0x12eaa00, tupleid=0x7ffcc4e39732,\noldtuple=0x0, slot=0x12ebdf0) at nodeModifyTable.c:2130\n#14 in ExecUpdate (context=0x7ffcc4e39870, resultRelInfo=0x12eaa00,\ntupleid=0x7ffcc4e39732, oldtuple=0x0, slot=0x12ebdf0, canSetTag=true) at\nnodeModifyTable.c:2478\n#15 in ExecOnConflictUpdate (context=0x7ffcc4e39870,\nresultRelInfo=0x12eaa00, conflictTid=0x7ffcc4e39732,\nexcludedSlot=0x12f05b8, canSetTag=true, returning=0x7ffcc4e39738) at\nnodeModifyTable.c:2694\n#16 in ExecInsert (context=0x7ffcc4e39870, resultRelInfo=0x12eaa00,\nslot=0x12f05b8, canSetTag=true, inserted_tuple=0x0, insert_destrel=0x0) at\nnodeModifyTable.c:1048\n#17 in ExecModifyTable (pstate=0x12ea7f0) at nodeModifyTable.c:4059\n#18 in ExecProcNodeFirst (node=0x12ea7f0) at execProcnode.c:464\n#19 in ExecProcNode (node=0x12ea7f0) at\n../../../src/include/executor/executor.h:274\n#20 in ExecutePlan (estate=0x12ea560, planstate=0x12ea7f0,\nuse_parallel_mode=false, operation=CMD_INSERT, sendTuples=true,\nnumberTuples=0, direction=ForwardScanDirection, dest=0x12daac8,\nexecute_once=true) at execMain.c:1646\n#21 in standard_ExecutorRun (queryDesc=0x12dab58,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:363\n#22 in ExecutorRun (queryDesc=0x12dab58, direction=ForwardScanDirection,\ncount=0, execute_once=true) at execMain.c:304\n#23 in ProcessQuery (plan=0x12e1360, sourceText=0x12083b0 \"INSERT INTO tbl\nVALUES(13,41011) on conflict(i) do update set n = tbl.n + 1 RETURNING n \",\nparams=0x0, queryEnv=0x0, dest=0x12daac8, qc=0x7ffcc4e39ae0) at pquery.c:160\n#24 in PortalRunMulti (portal=0x1289c90, isTopLevel=true,\nsetHoldSnapshot=true, dest=0x12daac8, altdest=0x10382a0 <donothingDR>,\nqc=0x7ffcc4e39ae0) at pquery.c:1277\n#25 in FillPortalStore (portal=0x1289c90, isTopLevel=true) at pquery.c:1026\n#26 in PortalRun (portal=0x1289c90, count=9223372036854775807,\nisTopLevel=true, run_once=true, dest=0x12e14c0, altdest=0x12e14c0,\nqc=0x7ffcc4e39d30) at pquery.c:763\n#27 in exec_simple_query (query_string=0x12083b0 \"INSERT INTO tbl\nVALUES(13,41011) on conflict(i) do update set n = tbl.n + 1 RETURNING n \")\nat postgres.c:1274\n\n\n-------------------------\n\nINSERT INTO tbl VALUES(13,89318) on conflict(i) do update set n = tbl.n + 1\nRETURNING n\n\n#0 in epoll_wait (epfd=5, events=0x1203328, maxevents=1, timeout=-1) at\n../sysdeps/unix/sysv/linux/epoll_wait.c:30\n#1 in WaitEventSetWaitBlock (set=0x12032c0, cur_timeout=-1,\noccurred_events=0x7ffcc4e38f60, nevents=1) at latch.c:1570\n#2 in WaitEventSetWait (set=0x12032c0, timeout=-1,\noccurred_events=0x7ffcc4e38f60, nevents=1, wait_event_info=50331653) 
at\nlatch.c:1516\n#3 in WaitLatch (latch=0x7acb2a2f4dbc, wakeEvents=33, timeout=0,\nwait_event_info=50331653) at latch.c:538\n#4 in ProcSleep (locallock=0x122f670, lockMethodTable=0x1037340\n<default_lockmethod>, dontWait=false) at proc.c:1355\n#5 in WaitOnLock (locallock=0x122f670, owner=0x1247408, dontWait=false) at\nlock.c:1833\n#6 in LockAcquireExtended (locktag=0x7ffcc4e39370, lockmode=5,\nsessionLock=false, dontWait=false, reportMemoryError=true, locallockp=0x0)\nat lock.c:1046\n#7 in LockAcquire (locktag=0x7ffcc4e39370, lockmode=5, sessionLock=false,\ndontWait=false) at lock.c:739\n#8 in XactLockTableWait (xid=3633, rel=0x7acb2dba66d8, ctid=0x1240a68,\noper=XLTW_InsertIndex) at lmgr.c:701\n#9 in _bt_doinsert (rel=0x7acb2dba66d8, itup=0x1240a68,\ncheckUnique=UNIQUE_CHECK_YES, indexUnchanged=false, heapRel=0x7acb2dbb0f08)\nat nbtinsert.c:227\n#10 in btinsert (rel=0x7acb2dba66d8, values=0x7ffcc4e395c0,\nisnull=0x7ffcc4e395a0, ht_ctid=0x12400e8, heapRel=0x7acb2dbb0f08,\ncheckUnique=UNIQUE_CHECK_YES, indexUnchanged=false, indexInfo=0x1240500) at\nnbtree.c:195\n#11 in index_insert (indexRelation=0x7acb2dba66d8, values=0x7ffcc4e395c0,\nisnull=0x7ffcc4e395a0, heap_t_ctid=0x12400e8, heapRelation=0x7acb2dbb0f08,\ncheckUnique=UNIQUE_CHECK_YES, indexUnchanged=false, indexInfo=0x1240500) at\nindexam.c:230\n#12 in ExecInsertIndexTuples (resultRelInfo=0x12eaa00, slot=0x12400b8,\nestate=0x12ea560, update=false, noDupErr=true, specConflict=0x7ffcc4e39722,\narbiterIndexes=0x12e0998, onlySummarizing=false) at execIndexing.c:438\n#13 in ExecInsert (context=0x7ffcc4e39870, resultRelInfo=0x12eaa00,\nslot=0x12400b8, canSetTag=true, inserted_tuple=0x0, insert_destrel=0x0) at\nnodeModifyTable.c:1095\n#14 in ExecModifyTable (pstate=0x12ea7f0) at nodeModifyTable.c:4059\n#15 in ExecProcNodeFirst (node=0x12ea7f0) at execProcnode.c:464\n#16 in ExecProcNode (node=0x12ea7f0) at\n../../../src/include/executor/executor.h:274\n#17 in ExecutePlan (estate=0x12ea560, planstate=0x12ea7f0,\nuse_parallel_mode=false, operation=CMD_INSERT, sendTuples=true,\nnumberTuples=0, direction=ForwardScanDirection, dest=0x12daac8,\nexecute_once=true) at execMain.c:1646\n#18 in standard_ExecutorRun (queryDesc=0x12dab58,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:363\n#19 in ExecutorRun (queryDesc=0x12dab58, direction=ForwardScanDirection,\ncount=0, execute_once=true) at execMain.c:304\n#20 in ProcessQuery (plan=0x12e1360, sourceText=0x12083b0 \"INSERT INTO tbl\nVALUES(13,89318) on conflict(i) do update set n = tbl.n + 1 RETURNING n \",\nparams=0x0, queryEnv=0x0, dest=0x12daac8, qc=0x7ffcc4e39ae0) at pquery.c:160\n#21 in PortalRunMulti (portal=0x1289c90, isTopLevel=true,\nsetHoldSnapshot=true, dest=0x12daac8, altdest=0x10382a0 <donothingDR>,\nqc=0x7ffcc4e39ae0) at pquery.c:1277\n#22 in FillPortalStore (portal=0x1289c90, isTopLevel=true) at pquery.c:1026\n#23 in PortalRun (portal=0x1289c90, count=9223372036854775807,\nisTopLevel=true, run_once=true, dest=0x12e14c0, altdest=0x12e14c0,\nqc=0x7ffcc4e39d30) at pquery.c:763\n#24 in exec_simple_query (query_string=0x12083b0 \"INSERT INTO tbl\nVALUES(13,89318) on conflict(i) do update set n = tbl.n + 1 RETURNING n \")\nat postgres.c:1274\n\n-------------------------\n\nAlso, at that time (but not reported in deadlock) reindex is happening.\nWithout reindex I am unable to reproduce deadlock.\n\n#0 in epoll_wait (epfd=5, events=0x1203328, maxevents=1, timeout=-1) at\n../sysdeps/unix/sysv/linux/epoll_wait.c:30\n#1 in WaitEventSetWaitBlock (set=0x12032c0, 
cur_timeout=-1,\noccurred_events=0x7ffcc4e38cd0, nevents=1) at latch.c:1570\n#2 in WaitEventSetWait (set=0x12032c0, timeout=-1,\noccurred_events=0x7ffcc4e38cd0, nevents=1, wait_event_info=50331654) at\nlatch.c:1516\n#3 in WaitLatch (latch=0x7acb2a2ff0c4, wakeEvents=33, timeout=0,\nwait_event_info=50331654) at latch.c:538\n#4 in ProcSleep (locallock=0x122f358, lockMethodTable=0x1037340\n<default_lockmethod>, dontWait=false) at proc.c:1355\n#5 in WaitOnLock (locallock=0x122f358, owner=0x12459f0, dontWait=false) at\nlock.c:1833\n#6 in LockAcquireExtended (locktag=0x7ffcc4e390e0, lockmode=5,\nsessionLock=false, dontWait=false, reportMemoryError=true, locallockp=0x0)\nat lock.c:1046\n#7 in LockAcquire (locktag=0x7ffcc4e390e0, lockmode=5, sessionLock=false,\ndontWait=false) at lock.c:739\n#8 in VirtualXactLock (vxid=..., wait=true) at lock.c:4627\n#9 in WaitForLockersMultiple (locktags=0x12327a8, lockmode=8,\nprogress=true) at lmgr.c:955\n#10 in ReindexRelationConcurrently (stmt=0x1208e08, relationOid=16401,\nparams=0x7ffcc4e39528) at indexcmds.c:4154\n#11 in ReindexIndex (stmt=0x1208e08, params=0x7ffcc4e39528,\nisTopLevel=true) at indexcmds.c:2814\n#12 in ExecReindex (pstate=0x12329f0, stmt=0x1208e08, isTopLevel=true) at\nindexcmds.c:2743\n#13 in ProcessUtilitySlow (pstate=0x12329f0, pstmt=0x1208f58,\nqueryString=0x12083b0 \"REINDEX INDEX CONCURRENTLY tbl_pkey;\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1209318,\nqc=0x7ffcc4e39d30) at utility.c:1567\n#14 in standard_ProcessUtility (pstmt=0x1208f58, queryString=0x12083b0\n\"REINDEX INDEX CONCURRENTLY tbl_pkey;\", readOnlyTree=false,\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1209318,\nqc=0x7ffcc4e39d30) at utility.c:1067\n#15 in ProcessUtility (pstmt=0x1208f58, queryString=0x12083b0 \"REINDEX\nINDEX CONCURRENTLY tbl_pkey;\", readOnlyTree=false,\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1209318,\nqc=0x7ffcc4e39d30) at utility.c:523\n#16 in PortalRunUtility (portal=0x1289c90, pstmt=0x1208f58,\nisTopLevel=true, setHoldSnapshot=false, dest=0x1209318, qc=0x7ffcc4e39d30)\nat pquery.c:1158\n#17 in PortalRunMulti (portal=0x1289c90, isTopLevel=true,\nsetHoldSnapshot=false, dest=0x1209318, altdest=0x1209318,\nqc=0x7ffcc4e39d30) at pquery.c:1315\n#18 in PortalRun (portal=0x1289c90, count=9223372036854775807,\nisTopLevel=true, run_once=true, dest=0x1209318, altdest=0x1209318,\nqc=0x7ffcc4e39d30) at pquery.c:791\n#19 in exec_simple_query (query_string=0x12083b0 \"REINDEX INDEX\nCONCURRENTLY tbl_pkey;\") at postgres.c:1274\n\n\nIt looks like a deadlock caused by different set of indexes being used as\narbiter indexes (or by the different order).\n\nBest regards,\nMikhail.\n\nHello, Noah!Answering https://www.postgresql.org/message-id/flat/20240612194857.1c.nmisch%40google.com#684361ba86bad11f4e9fd84dfa8e0084> On your other thread, it would be useful to see stack traces from the high-CPU> processes once the live lock has ended all query completion.I was wrong, it is not a livelock, it is a deadlock, actually. 
I missed it because pgbench retries deadlocks automatically.It looks like this:2024-06-25 17:16:17.447 CEST [711743] 007_concurrently_unique_stuck.pl ERROR:  deadlock detected2024-06-25 17:16:17.447 CEST [711743] 007_concurrently_unique_stuck.pl DETAIL:  Process 711743 waits for ShareLock on transaction 3633; blocked by process 711749.\tProcess 711749 waits for ShareLock on speculative token 2 of transaction 3622; blocked by process 711743.\tProcess 711743: INSERT INTO tbl VALUES(13,89318) on conflict(i) do update set n = tbl.n + 1 RETURNING n \tProcess 711749: INSERT INTO tbl VALUES(13,41011) on conflict(i) do update set n = tbl.n + 1 RETURNING n 2024-06-25 17:16:17.447 CEST [711743] 007_concurrently_unique_stuck.pl HINT:  See server log for query details.2024-06-25 17:16:17.447 CEST [711743] 007_concurrently_unique_stuck.pl CONTEXT:  while inserting index tuple (15,145) in relation \"tbl_pkey_ccnew\"2024-06-25 17:16:17.447 CEST [711743] 007_concurrently_unique_stuck.pl STATEMENT:  INSERT INTO tbl VALUES(13,89318) on conflict(i) do update set n = tbl.n + 1 RETURNING nStacktraces:-------------------------INSERT INTO tbl VALUES(13,41011) on conflict(i) do update set n = tbl.n + 1 RETURNING n#0  in epoll_wait (epfd=5, events=0x1203328, maxevents=1, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30#1  in WaitEventSetWaitBlock (set=0x12032c0, cur_timeout=-1, occurred_events=0x7ffcc4e38e30, nevents=1) at latch.c:1570#2  in WaitEventSetWait (set=0x12032c0, timeout=-1, occurred_events=0x7ffcc4e38e30, nevents=1, wait_event_info=50331655) at latch.c:1516#3  in WaitLatch (latch=0x7acb2a2f5f14, wakeEvents=33, timeout=0, wait_event_info=50331655) at latch.c:538#4  in ProcSleep (locallock=0x122f778, lockMethodTable=0x1037340 <default_lockmethod>, dontWait=false) at proc.c:1355#5  in WaitOnLock (locallock=0x122f778, owner=0x1247408, dontWait=false) at lock.c:1833#6  in LockAcquireExtended (locktag=0x7ffcc4e39220, lockmode=5, sessionLock=false, dontWait=false, reportMemoryError=true, locallockp=0x0) at lock.c:1046#7  in LockAcquire (locktag=0x7ffcc4e39220, lockmode=5, sessionLock=false, dontWait=false) at lock.c:739#8  in SpeculativeInsertionWait (xid=3622, token=2) at lmgr.c:833#9  in _bt_doinsert (rel=0x7acb2dbb12e8, itup=0x12f1308, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=true, heapRel=0x7acb2dbb0f08) at nbtinsert.c:225#10 in btinsert (rel=0x7acb2dbb12e8, values=0x7ffcc4e39440, isnull=0x7ffcc4e39420, ht_ctid=0x12ebe20, heapRel=0x7acb2dbb0f08, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=true, indexInfo=0x12f08a8) at nbtree.c:195#11 in index_insert (indexRelation=0x7acb2dbb12e8, values=0x7ffcc4e39440, isnull=0x7ffcc4e39420, heap_t_ctid=0x12ebe20, heapRelation=0x7acb2dbb0f08, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=true, indexInfo=0x12f08a8) at indexam.c:230#12 in ExecInsertIndexTuples (resultRelInfo=0x12eaa00, slot=0x12ebdf0, estate=0x12ea560, update=true, noDupErr=false, specConflict=0x0, arbiterIndexes=0x0, onlySummarizing=false) at execIndexing.c:438#13 in ExecUpdateEpilogue (context=0x7ffcc4e39870, updateCxt=0x7ffcc4e3962c, resultRelInfo=0x12eaa00, tupleid=0x7ffcc4e39732, oldtuple=0x0, slot=0x12ebdf0) at nodeModifyTable.c:2130#14 in ExecUpdate (context=0x7ffcc4e39870, resultRelInfo=0x12eaa00, tupleid=0x7ffcc4e39732, oldtuple=0x0, slot=0x12ebdf0, canSetTag=true) at nodeModifyTable.c:2478#15 in ExecOnConflictUpdate (context=0x7ffcc4e39870, resultRelInfo=0x12eaa00, conflictTid=0x7ffcc4e39732, excludedSlot=0x12f05b8, canSetTag=true, returning=0x7ffcc4e39738) at 
nodeModifyTable.c:2694#16 in ExecInsert (context=0x7ffcc4e39870, resultRelInfo=0x12eaa00, slot=0x12f05b8, canSetTag=true, inserted_tuple=0x0, insert_destrel=0x0) at nodeModifyTable.c:1048#17 in ExecModifyTable (pstate=0x12ea7f0) at nodeModifyTable.c:4059#18 in ExecProcNodeFirst (node=0x12ea7f0) at execProcnode.c:464#19 in ExecProcNode (node=0x12ea7f0) at ../../../src/include/executor/executor.h:274#20 in ExecutePlan (estate=0x12ea560, planstate=0x12ea7f0, use_parallel_mode=false, operation=CMD_INSERT, sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0x12daac8, execute_once=true) at execMain.c:1646#21 in standard_ExecutorRun (queryDesc=0x12dab58, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:363#22 in ExecutorRun (queryDesc=0x12dab58, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:304#23 in ProcessQuery (plan=0x12e1360, sourceText=0x12083b0 \"INSERT INTO tbl VALUES(13,41011) on conflict(i) do update set n = tbl.n + 1 RETURNING n \", params=0x0, queryEnv=0x0, dest=0x12daac8, qc=0x7ffcc4e39ae0) at pquery.c:160#24 in PortalRunMulti (portal=0x1289c90, isTopLevel=true, setHoldSnapshot=true, dest=0x12daac8, altdest=0x10382a0 <donothingDR>, qc=0x7ffcc4e39ae0) at pquery.c:1277#25 in FillPortalStore (portal=0x1289c90, isTopLevel=true) at pquery.c:1026#26 in PortalRun (portal=0x1289c90, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x12e14c0, altdest=0x12e14c0, qc=0x7ffcc4e39d30) at pquery.c:763#27 in exec_simple_query (query_string=0x12083b0 \"INSERT INTO tbl VALUES(13,41011) on conflict(i) do update set n = tbl.n + 1 RETURNING n \") at postgres.c:1274-------------------------INSERT INTO tbl VALUES(13,89318) on conflict(i) do update set n = tbl.n + 1 RETURNING n#0  in epoll_wait (epfd=5, events=0x1203328, maxevents=1, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30#1  in WaitEventSetWaitBlock (set=0x12032c0, cur_timeout=-1, occurred_events=0x7ffcc4e38f60, nevents=1) at latch.c:1570#2  in WaitEventSetWait (set=0x12032c0, timeout=-1, occurred_events=0x7ffcc4e38f60, nevents=1, wait_event_info=50331653) at latch.c:1516#3  in WaitLatch (latch=0x7acb2a2f4dbc, wakeEvents=33, timeout=0, wait_event_info=50331653) at latch.c:538#4  in ProcSleep (locallock=0x122f670, lockMethodTable=0x1037340 <default_lockmethod>, dontWait=false) at proc.c:1355#5  in WaitOnLock (locallock=0x122f670, owner=0x1247408, dontWait=false) at lock.c:1833#6  in LockAcquireExtended (locktag=0x7ffcc4e39370, lockmode=5, sessionLock=false, dontWait=false, reportMemoryError=true, locallockp=0x0) at lock.c:1046#7  in LockAcquire (locktag=0x7ffcc4e39370, lockmode=5, sessionLock=false, dontWait=false) at lock.c:739#8  in XactLockTableWait (xid=3633, rel=0x7acb2dba66d8, ctid=0x1240a68, oper=XLTW_InsertIndex) at lmgr.c:701#9  in _bt_doinsert (rel=0x7acb2dba66d8, itup=0x1240a68, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=false, heapRel=0x7acb2dbb0f08) at nbtinsert.c:227#10 in btinsert (rel=0x7acb2dba66d8, values=0x7ffcc4e395c0, isnull=0x7ffcc4e395a0, ht_ctid=0x12400e8, heapRel=0x7acb2dbb0f08, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=false, indexInfo=0x1240500) at nbtree.c:195#11 in index_insert (indexRelation=0x7acb2dba66d8, values=0x7ffcc4e395c0, isnull=0x7ffcc4e395a0, heap_t_ctid=0x12400e8, heapRelation=0x7acb2dbb0f08, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=false, indexInfo=0x1240500) at indexam.c:230#12 in ExecInsertIndexTuples (resultRelInfo=0x12eaa00, slot=0x12400b8, estate=0x12ea560, update=false, noDupErr=true, 
specConflict=0x7ffcc4e39722, arbiterIndexes=0x12e0998, onlySummarizing=false) at execIndexing.c:438#13 in ExecInsert (context=0x7ffcc4e39870, resultRelInfo=0x12eaa00, slot=0x12400b8, canSetTag=true, inserted_tuple=0x0, insert_destrel=0x0) at nodeModifyTable.c:1095#14 in ExecModifyTable (pstate=0x12ea7f0) at nodeModifyTable.c:4059#15 in ExecProcNodeFirst (node=0x12ea7f0) at execProcnode.c:464#16 in ExecProcNode (node=0x12ea7f0) at ../../../src/include/executor/executor.h:274#17 in ExecutePlan (estate=0x12ea560, planstate=0x12ea7f0, use_parallel_mode=false, operation=CMD_INSERT, sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0x12daac8, execute_once=true) at execMain.c:1646#18 in standard_ExecutorRun (queryDesc=0x12dab58, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:363#19 in ExecutorRun (queryDesc=0x12dab58, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:304#20 in ProcessQuery (plan=0x12e1360, sourceText=0x12083b0 \"INSERT INTO tbl VALUES(13,89318) on conflict(i) do update set n = tbl.n + 1 RETURNING n \", params=0x0, queryEnv=0x0, dest=0x12daac8, qc=0x7ffcc4e39ae0) at pquery.c:160#21 in PortalRunMulti (portal=0x1289c90, isTopLevel=true, setHoldSnapshot=true, dest=0x12daac8, altdest=0x10382a0 <donothingDR>, qc=0x7ffcc4e39ae0) at pquery.c:1277#22 in FillPortalStore (portal=0x1289c90, isTopLevel=true) at pquery.c:1026#23 in PortalRun (portal=0x1289c90, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x12e14c0, altdest=0x12e14c0, qc=0x7ffcc4e39d30) at pquery.c:763#24 in exec_simple_query (query_string=0x12083b0 \"INSERT INTO tbl VALUES(13,89318) on conflict(i) do update set n = tbl.n + 1 RETURNING n \") at postgres.c:1274-------------------------Also, at that time (but not reported in deadlock) reindex is happening. 
Without reindex I am unable to reproduce deadlock.#0  in epoll_wait (epfd=5, events=0x1203328, maxevents=1, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30#1  in WaitEventSetWaitBlock (set=0x12032c0, cur_timeout=-1, occurred_events=0x7ffcc4e38cd0, nevents=1) at latch.c:1570#2  in WaitEventSetWait (set=0x12032c0, timeout=-1, occurred_events=0x7ffcc4e38cd0, nevents=1, wait_event_info=50331654) at latch.c:1516#3  in WaitLatch (latch=0x7acb2a2ff0c4, wakeEvents=33, timeout=0, wait_event_info=50331654) at latch.c:538#4  in ProcSleep (locallock=0x122f358, lockMethodTable=0x1037340 <default_lockmethod>, dontWait=false) at proc.c:1355#5  in WaitOnLock (locallock=0x122f358, owner=0x12459f0, dontWait=false) at lock.c:1833#6  in LockAcquireExtended (locktag=0x7ffcc4e390e0, lockmode=5, sessionLock=false, dontWait=false, reportMemoryError=true, locallockp=0x0) at lock.c:1046#7  in LockAcquire (locktag=0x7ffcc4e390e0, lockmode=5, sessionLock=false, dontWait=false) at lock.c:739#8  in VirtualXactLock (vxid=..., wait=true) at lock.c:4627#9  in WaitForLockersMultiple (locktags=0x12327a8, lockmode=8, progress=true) at lmgr.c:955#10 in ReindexRelationConcurrently (stmt=0x1208e08, relationOid=16401, params=0x7ffcc4e39528) at indexcmds.c:4154#11 in ReindexIndex (stmt=0x1208e08, params=0x7ffcc4e39528, isTopLevel=true) at indexcmds.c:2814#12 in ExecReindex (pstate=0x12329f0, stmt=0x1208e08, isTopLevel=true) at indexcmds.c:2743#13 in ProcessUtilitySlow (pstate=0x12329f0, pstmt=0x1208f58, queryString=0x12083b0 \"REINDEX INDEX CONCURRENTLY tbl_pkey;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1209318, qc=0x7ffcc4e39d30) at utility.c:1567#14 in standard_ProcessUtility (pstmt=0x1208f58, queryString=0x12083b0 \"REINDEX INDEX CONCURRENTLY tbl_pkey;\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1209318, qc=0x7ffcc4e39d30) at utility.c:1067#15 in ProcessUtility (pstmt=0x1208f58, queryString=0x12083b0 \"REINDEX INDEX CONCURRENTLY tbl_pkey;\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1209318, qc=0x7ffcc4e39d30) at utility.c:523#16 in PortalRunUtility (portal=0x1289c90, pstmt=0x1208f58, isTopLevel=true, setHoldSnapshot=false, dest=0x1209318, qc=0x7ffcc4e39d30) at pquery.c:1158#17 in PortalRunMulti (portal=0x1289c90, isTopLevel=true, setHoldSnapshot=false, dest=0x1209318, altdest=0x1209318, qc=0x7ffcc4e39d30) at pquery.c:1315#18 in PortalRun (portal=0x1289c90, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1209318, altdest=0x1209318, qc=0x7ffcc4e39d30) at pquery.c:791#19 in exec_simple_query (query_string=0x12083b0 \"REINDEX INDEX CONCURRENTLY tbl_pkey;\") at postgres.c:1274It looks like a deadlock caused by different set of indexes being used as arbiter indexes (or by the different order).Best regards,Mikhail.", "msg_date": "Tue, 25 Jun 2024 17:57:22 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "Hell, everyone!\n\nUsing the brand-new injection points support in specs, I created a spec to\nreproduce the issue.\n\nIt fails like this currently:\n\nmake -C src/test/modules/injection_points/ check\n\n@@ -64,6 +64,7 @@\n\n step s3_s1: <... completed>\n step s2_s1: <... 
completed>\n+ERROR: duplicate key value violates unique constraint \"tbl_pkey_ccold\"\n\n starting permutation: s3_s1 s2_s1 s4_s1 s1_s1 s4_s2 s4_s3\n injection_points_attach\n@@ -129,3 +130,4 @@\n\n step s3_s1: <... completed>\n step s2_s1: <... completed>\n+ERROR: duplicate key value violates unique constraint \"tbl_pkey\"\n\nBest regards,\nMikhail.", "msg_date": "Wed, 7 Aug 2024 11:30:44 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "Hello, everyone.\n\nI have updated the spec to reproduce the issue, now it includes cases with\nboth CREATE INDEX and REINDEX.\n\nTo run:\n make -C src/test/modules/injection_points/ check\n\nIssue reproduced on empty index, but it may happen on index of any with the\nsame probability.\nIt is not critical, of course, but in production system indexes are\nregularly rebuilt using REINDEX CONCURRENTLY as recommended in\ndocumentation [1].\nIn most of the cases it is done using pg_repack as far as I know.\n\nSo, in these production systems, there is no guarantee what INSERT ON\nCONFLICT DO NOTHING/UPDATE will not fail with a \"duplicate key value\nviolates unique constraint\" error.\n\nBest regards,\nMikhail.\n\n[1]: https://www.postgresql.org/docs/current/routine-reindex.html\n\n>", "msg_date": "Fri, 16 Aug 2024 12:22:00 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" }, { "msg_contents": "Hello, everyone!\n\nThis patch set addresses the issues discussed in this thread.\n\nThe main idea behind this fix is that it is safe to consider indisready\nindexes alongside indisvalid indexes as arbiter indexes. However, it's\ncrucial that at least one fully valid index is present.\n\nWhy is it necessary to consider indisready during the planning phase?\n\nThe reason is that these indexes are required for correct processing during\nthe execution phase.\nIf \"ready\" indexes are skipped as arbiters by one transaction, they may\nalready have become \"valid\" for another concurrent transaction during its\nplanning phase.\nAs a result, both transactions could concurrently process the UPSERT\ncommand with different sets of arbiters (while using the same set of\nindexes for tuple insertion later).\nThis can lead to unexpected \"duplicate key value violates unique\nconstraint\" errors and deadlocks.\n\nIs it safe to use a \"ready\" but not yet \"valid\" index as an arbiter?\nYes, as long as at least one \"valid\" index is also used as an arbiter.\nThe valid index ensures the correctness of the UPSERT logic, while the\n\"ready\" index contains an equal or lesser number of tuples, making it safe\nfor speculative insertion.\nIn any case, the insert to that index will be processed during\nExecInsertIndexTuples one way or another (with applyNoDupErr or without).\n\nFix is divided into a few patches, each following this logic:\n\n1) The first patch provides specs (and injection points) for the various\nscenarios related to the issue.\n2) The second patch introduces a straightforward change—adding indisready\nindexes to arbiters alongside indisvalid. However, at least one indisvalid\nis still required. This resolves simple cases involving REINDEX\nCONCURRENTLY and CREATE INDEX CONCURRENTLY.\n3) The third patch deals with named constraints. 
Instead of relying solely\non the index with the specified name, we attempt to find other indexes that\nare equivalent in terms of being used as an arbiter.\n4) This patch fixes a scenario involving partitioned tables. Special checks\nare required for partitioned indexes, which may be processed by REINDEX\nCONCURRENTLY.\n\nAdditionally, a patch with three extra TAP specifications for stress\ntesting is attached. This patch is not intended to be committed, so I\nrenamed the extension to prevent accidental application in some CI/CD jobs.\n\nAlso, it is possible to look at the patches on GitHub:\nhttps://github.com/postgres/postgres/compare/master...michail-nikolaev:postgres:reindex_concurrently_with_upsert\n\nBest regards,\nMikhail.", "msg_date": "Sat, 24 Aug 2024 17:52:00 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with ON CONFLICT UPDATE and REINDEX CONCURRENTLY" } ]
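(For reference while reading the thread above: the pg_index flags the discussion keeps returning to can be watched from a second session while REINDEX INDEX CONCURRENTLY tbl_pkey is running. The query below is only an illustration and is not part of any patch in the thread; "tbl" and "tbl_pkey" are the example names used in the reproduction scripts.)

    -- Illustrative only: inspect the index flags discussed in the thread above.
    SELECT c.relname,
           i.indisvalid,   -- planner skips the index (incl. as arbiter) when false
           i.indisready,   -- when true, the index already receives inserts
           i.indislive     -- executor ignores the index when false
    FROM pg_index i
    JOIN pg_class c ON c.oid = i.indexrelid
    WHERE i.indrelid = 'tbl'::regclass
    ORDER BY c.relname;

While the rebuild is in flight this typically also lists a transient "tbl_pkey_ccnew" (and later "tbl_pkey_ccold") entry, which is exactly the window in which the arbiter-selection problems described above can occur.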
[ { "msg_contents": "Way back in 2004 there was discussion of how we wanted to differentiate \nbuildfarm animals. We were flying blind to a large extent, as we didn't \nhave many similar things to compare to. We settled on {OS-name, \nOS-version, Compiler-name, Compiler-version}. The idea was that the \nnames would be invariant but we would allow for OS and compiler upgrades.\n\nThere are two problems with this system, and have been since the get-go. \nFirst, owners forget to run the upgrade-personalty.pl script when the \nupgrade the OS and./or compiler. Second, there is no standardization of \nthe names. Complaints about these surfaced (again) at the Vancouver \nunconference session about the buildfarm's future.\n\nIt was suggested that we should have the buildfarm client collect \ninformation and report it in a piece of JSON which the server would then \nuse to update the personality table automatically. However, there are \nsome issues.\n\nFirst there are issues of the source of information. And second there is \nwhat information to extract from the source.\n\nLet's take the simple case. On Linux this is fairly straightforward in \nmost cases. We can extract the \"ID\" and \"VERSION_ID\" attributes from \n/etc/os-release, and we can get the compiler version with  gcc--version \nor clang --version, and get the version number out of the first line. I \nthink I'd strip out the vendor string, so it would just be something \nlike \"13.2.1\".\n\nOn windows, systeminfo gives you this sort of information:\n\nOS Name: Microsoft Windows Server 2019 Datacenter\n\nI guess by analogy with Linux we'd say that the OS Name is \"Windows \nServer\" and the version just \"2019\".\n\ncl /? shows you a version line we could similarly parse to get something \nlike 19.40.33808. We might want a way to tie that back to a particular \nversion of VS, but it would do for now.\n\nFor msys2 and cygwin we can get the version by calling \"cygcheck \n--version\". These two are complicated by the fact that they are virtual \nenvironments.\n\nI don't have information about other OSs or compilers.\n\nFor now I'm intending just to collect the information and see what gaps \nwe have, before I start automating personality update.\n\nIf you have information about an OS or Compiler that's used in the \nbuildfarm that I haven't listed, please let me know.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 11 Jun 2024 12:13:30 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Keeping track of buildfarm animals' personality" }, { "msg_contents": "On Wed, Jun 12, 2024 at 4:13 AM Andrew Dunstan <[email protected]> wrote:\n> Way back in 2004 there was discussion of how we wanted to differentiate\n> buildfarm animals. We were flying blind to a large extent, as we didn't\n> have many similar things to compare to. We settled on {OS-name,\n> OS-version, Compiler-name, Compiler-version}. The idea was that the\n> names would be invariant but we would allow for OS and compiler upgrades.\n>\n> There are two problems with this system, and have been since the get-go.\n> First, owners forget to run the upgrade-personalty.pl script when the\n> upgrade the OS and./or compiler. Second, there is no standardization of\n> the names. Complaints about these surfaced (again) at the Vancouver\n> unconference session about the buildfarm's future.\n\nI was sorry to miss that one! 
Basically every unconference was\nunmissable, but you had to miss 3/4 of them due to physics...\n\n> I don't have information about other OSs or compilers.\n\nI wonder if most non-Linux, Unixoid systems would give a useful enough\nanswer with something like \"uname -rs\". It's a standard[1] after all.\nThe reason it's a bit useless on Linux is that Linux is [clears\nthroat, channels Richard Stallman] only a kernel. The interesting\ninformation for Linux is the distro, but that doesn't apply for stuff\nlike FreeBSD, AIX, yada yada. Some results from systems near me right\nnow:\n\nDarwin 23.5.0\nFreeBSD 14.1-RELEASE\n\nThe Mac is a bit annoying because that's a kernel or Darwin version\n(?), so you'd have to do some work to map it back to a macOS version\nlike 14.5... For eg FreeBSD you can also have the kernel out of sync\nwith the user space but normally you wouldn't so I think that's good\nenough. Macs do have a way to get the OS version people are more\nfamiliar with:\n\n% sw_vers\nProductName: macOS\nProductVersion: 14.5\nBuildVersion: 23F79\n\nMaybe a uname-based default would be good enough for *BSD, Solaris\netc, and then we just have a small list of specialisations where we\nknow how to do better?\n\n[1] https://pubs.opengroup.org/onlinepubs/009695299/utilities/uname.html\n\n\n", "msg_date": "Wed, 12 Jun 2024 08:29:29 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Keeping track of buildfarm animals' personality" }, { "msg_contents": "On Wed, Jun 12, 2024 at 8:29 AM Thomas Munro <[email protected]> wrote:\n> For eg FreeBSD you can also have the kernel out of sync\n> with the user space but normally you wouldn't so I think that's good\n> enough.\n\nOn second thoughts, perhaps we always want to capture the uname\nversion, which is most likely about the kernel, and optionally also\nthe userspace/distro version if we know how to with system-specific\nmethods. That could be useful for diagnosing animals running in\ncontainers, which could come up on almost any OS as they all have\ncontainer tech, and it's maybe even quite likely to be in use on\nsystems on AIX, Solaris etc that people add to the farm from shared\nrare hardware.\n\n\n", "msg_date": "Wed, 12 Jun 2024 08:46:06 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Keeping track of buildfarm animals' personality" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On second thoughts, perhaps we always want to capture the uname\n> version,\n\n+1. Is there any reason not to capture \"uname -a\"? The\nconfigure-based animals are effectively doing that already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Jun 2024 16:59:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Keeping track of buildfarm animals' personality" }, { "msg_contents": "On 2024-Jun-12, Thomas Munro wrote:\n\n> On Wed, Jun 12, 2024 at 4:13 AM Andrew Dunstan <[email protected]> wrote:\n\n> > There are two problems with this system, and have been since the get-go.\n> > First, owners forget to run the upgrade-personalty.pl script when the\n> > upgrade the OS and./or compiler. Second, there is no standardization of\n> > the names. Complaints about these surfaced (again) at the Vancouver\n> > unconference session about the buildfarm's future.\n> \n> I was sorry to miss that one! Basically every unconference was\n> unmissable, but you had to miss 3/4 of them due to physics...\n\n2/3. 
Yeah, but I think the room selection algorithm we used is flawed.\nOne that works better schedules all the most popular talks in the\nbiggest room (matches what we did), but for the second room, we should\nsort in the opposite direction: put the fifth most popular topic in the\nlast slot of the second room, and the sixth most popular in the last\nslot of the third room (instead, we filled the second and third room the\nsame way as the first one). So for twelve topics it'd be like this,\nnumbers represent topics ordered by popularity:\n\nroom 1 room 2 room 3\n 1. 11. 12.\n 2. 9. 10.\n 3. 7. 8.\n 4. 5. 6.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 13 Jun 2024 15:46:23 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Keeping track of buildfarm animals' personality" } ]
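The collection step described in the thread above (os-release ID/VERSION_ID on Linux, sw_vers on macOS, uname elsewhere, plus the first line of `gcc --version` or `clang --version`) is straightforward to prototype. The sketch below is illustrative only: it is written in Python rather than the buildfarm client's Perl, and the JSON field names are placeholders, not the actual report format the server would expect.

```python
# Illustrative sketch only: gather OS and compiler identity in the spirit of
# the thread above. Field names, fallbacks and the output shape are
# assumptions, not the real buildfarm report format.
import json
import platform
import re
import shutil
import subprocess


def os_identity():
    # Prefer /etc/os-release on Linux; fall back to sw_vers on macOS,
    # then to plain uname for FreeBSD, AIX, Solaris and friends.
    try:
        with open("/etc/os-release") as f:
            kv = dict(line.rstrip().split("=", 1) for line in f if "=" in line)
        return kv["ID"].strip('"'), kv.get("VERSION_ID", "").strip('"')
    except OSError:
        pass
    if platform.system() == "Darwin" and shutil.which("sw_vers"):
        name = subprocess.run(["sw_vers", "-productName"],
                              capture_output=True, text=True).stdout.strip()
        ver = subprocess.run(["sw_vers", "-productVersion"],
                             capture_output=True, text=True).stdout.strip()
        return name, ver
    u = platform.uname()
    return u.system, u.release


def compiler_identity(cc="gcc"):
    # First line of "<cc> --version", with the vendor string stripped down
    # to a bare version number like "13.2.1".
    first = subprocess.run([cc, "--version"], capture_output=True,
                           text=True).stdout.splitlines()[0]
    m = re.search(r"(\d+\.\d+(?:\.\d+)?)", first)
    return cc, m.group(1) if m else first


os_name, os_version = os_identity()
cc_name, cc_version = compiler_identity()
print(json.dumps({
    "os": os_name,
    "os_version": os_version,
    "compiler": cc_name,
    "compiler_version": cc_version,
    # Roughly what "uname -a" would report, as suggested downthread.
    "uname": " ".join(platform.uname()),
}, indent=2))
```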
[ { "msg_contents": "Hi Michael and Peter,\r\nThanks a lot for the elaboration of the patch process for PG17. It's really unfortunate missing the development cycle of PG17.\r\nJust some context on why we are hurrying to try to catch up with PG17. \r\n\r\n\r\n\r\nThere are certain government, financial and other enterprise organizations that have very strict requirements about the encrypted communication and more specifically about fine grained params like the TLS ciphers and curves that they use. The default ones for those customers are not acceptable. Any products that integrate Postgres and require encrypted communication with Postgres would have to fulfil those requirements.\r\n\r\n\r\nSo if we can have this patch in the upcoming new major version, that means Postgres users who have similar requirements can upgrade to PG17.\r\n\r\n\r\nThanks!\r\n\r\n\r\n\r\n \r\nOriginal Email\r\n \r\n \r\n\r\nSender:\"Michael Paquier\"< [email protected] >;\r\n\r\nSent Time:2024/6/7 18:46\r\n\r\nTo:\"Erica Zhang\"< [email protected] >;\r\n\r\nCc recipient:\"Peter Eisentraut\"< [email protected] >;\"pgsql-hackers\"< [email protected] >;\r\n\r\nSubject:Re: Re: Add support to TLS 1.3 cipher suites and curves lists\r\n\r\n\r\nOn Fri, Jun 07, 2024 at 06:02:37PM +0800, Erica Zhang wrote:\r\n> I see the https://commitfest.postgresql.org/48/ is still open, could\r\n> it be possible to target for PG17? As I know PG17 is going to be\r\n> released this year so that we can upgrade our instances to this new\r\n> version accordingly.\r\n\r\nEchoing with Peter, https://commitfest.postgresql.org/48/ is planned\r\nto be the first commit fest of the development cycle for Postgres 18.\r\nv17 is in feature freeze state and beta, where only bug fixes are\r\naccepted, and not new features.\r\n--\r\nMichael", "msg_date": "Wed, 12 Jun 2024 10:25:57 +0800", "msg_from": "\"=?utf-8?B?RXJpY2EgWmhhbmc=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:Re: Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On Wed, 12 Jun 2024 at 04:32, Erica Zhang <[email protected]> wrote:\n> There are certain government, financial and other enterprise organizations that have very strict requirements about the encrypted communication and more specifically about fine grained params like the TLS ciphers and curves that they use. The default ones for those customers are not acceptable. Any products that integrate Postgres and require encrypted communication with Postgres would have to fulfil those requirements.\n\nYeah, I ran into such requirements before too. So I do think it makes\nsense to have such a feature in Postgres.\n\n> So if we can have this patch in the upcoming new major version, that means Postgres users who have similar requirements can upgrade to PG17.\n\nAs Daniel mentioned, you can already achieve the same using the\n\"Ciphersuites\" directive in openssl.conf. Also you could of course\nalways disable TLSv1.3 support.\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:51:45 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Re: Add support to TLS 1.3 cipher suites and curves lists" } ]
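For reference, the workaround Jelte points at lives entirely in OpenSSL's own configuration rather than in Postgres. A minimal sketch is below; the file is typically openssl.cnf, the section names are arbitrary, and the cipher-suite and group lists are examples only (the exact directives available depend on the OpenSSL version in use).

```
openssl_conf = my_openssl_init

[my_openssl_init]
ssl_conf = my_ssl_sect

[my_ssl_sect]
system_default = my_tls_defaults

[my_tls_defaults]
# TLS 1.3 suites (the "Ciphersuites" directive mentioned above)
Ciphersuites = TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
# Key exchange groups/curves
Groups = X25519:P-384
```

The obvious drawback is that this applies to every application on the host that uses the default OpenSSL configuration, which is part of why a per-server setting inside Postgres is attractive.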
[ { "msg_contents": "Hello All,\n\nI am working on a project that aims to produce test cases that\nimprove mutation coverage of a DBMS's test suite.\n\nThe rough workflow of the project goes as follows:\n(a) apply a mutation at the source code level\n(b) compile and check if the mutated installation passes the existing test suite\n(c) If not, fuzz the mutated installation with an SQL fuzzer\n(d) if the fuzzer successfully produces a test case that crashes or triggers bugs\nin the mutated installation, use a reduction tool to reduce the test case\n(e) add the reduced test case to the existing test suite\n\nFor postgres, I am looking at adding test cases to the test suite in\nsrc/test/regress/. I have gone through (a)-(e), and managed to produce\nsome test cases. As an example, I claim the test case\n```\nCREATE RECURSIVE VIEW a(b) AS SELECT'' ;\nSELECT FROM a WHERE NULL;\n```\ncould kill the following mutation at optimizer/plan/setrefs.c, 502:5--502:33\nOriginal binary operator expression:\n```\nrte->rtekind == RTE_SUBQUERY\n````\nReplacement expression:\n```\n(rte->rtekind) >= RTE_SUBQUERY\n```\n\nI have a few questions about adding these test cases:\n\n(a) The regression test suite is run by a parallel scheduler, with some\ntest cases dependent on previous test cases. If I just add my test case as\npart of the parallel scheduler’s tests, it might not work, since previous\ntest cases in the scheduler might already create the same table, for\ninstance.\n\n(b) How do I get my test cases reviewed and ultimately included in a future\nrelease of PostgreSQL?\n\nThank you for your time.\n\nRegards,\nJon\n\n", "msg_date": "Wed, 12 Jun 2024 17:44:29 +0100", "msg_from": "J F <[email protected]>", "msg_from_op": true, "msg_subject": "Contributing test cases to improve coverage" }, { "msg_contents": "J F <[email protected]> writes:\n> For postgres, I am looking at adding test cases to the test suite in\n> src/test/regress/. I have gone through (a)-(e), and managed to produce\n> some test cases. 
As an example, I claim the test case\n> ```\n> CREATE RECURSIVE VIEW a(b) AS SELECT'' ;\n> SELECT FROM a WHERE NULL;\n> ```\n> could kill the following mutation at optimizer/plan/setrefs.c, 502:5--502:33\n> Original binary operator expression:\n> ```\n> rte->rtekind == RTE_SUBQUERY\n> ````\n> Replacement expression:\n> ```\n> (rte->rtekind) >= RTE_SUBQUERY\n> ```\n\nI am quite confused about what is the point of this. You have not\nfound any actual bug, nor have you demonstrated that this test case\ncould discover a likely future bug that wouldn't be detected another\nway. Moreover, it seems like the process would lead to some very\nlarge number of equally marginal test cases. We aren't likely to\naccept such a patch, because we are concerned about keeping down the\nruntime of the test suite.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:22:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Contributing test cases to improve coverage" }, { "msg_contents": "> I am quite confused about what is the point of this. You have not\n> found any actual bug, nor have you demonstrated that this test case\n> could discover a likely future bug that wouldn't be detected another\n> way. Moreover, it seems like the process would lead to some very\n> large number of equally marginal test cases. We aren't likely to\n> accept such a patch, because we are concerned about keeping down the\n> runtime of the test suite.\n>\n> regards, tom lane\n\n\nThe point of this project is to improve the coverage of PostgreSQL’s\npreexisting test suite. Writing a test suite to achieve close to 100%\ncoverage is challenging, but I have proposed a workflow to automate this\nprocess.\n\nI assert that no test case in the regression test suite currently covers\nthe comparator in the expression rte->rtekind == RTE_SUBQUERY. I propose\nadding a new test case that addresses exactly this. In the future, if\nsomeone accidentally modifies the operator to become >=, it will trigger\nincorrect behavior when certain queries are executed. This test case will\ncatch that issue.\n\nI get that the test cases in /regress are likely reserved for actual bugs\nfound and are designed to run quickly. Would it be a good idea to have a\nseparate, more rigorous test suite that runs longer but provides better\ncode coverage?\n\nRegards,\nJon\n\n> I am quite confused about what is the point of this.  You have not> found any actual bug, nor have you demonstrated that this test case> could discover a likely future bug that wouldn't be detected another> way.  Moreover, it seems like the process would lead to some very> large number of equally marginal test cases.  We aren't likely to> accept such a patch, because we are concerned about keeping down the> runtime of the test suite.>>                        regards, tom laneThe point of this project is to improve the coverage of PostgreSQL’s preexisting test suite. Writing a test suite to achieve close to 100% coverage is challenging, but I have proposed a workflow to automate this process.I assert that no test case in the regression test suite currently covers the comparator in the expression rte->rtekind == RTE_SUBQUERY. I propose adding a new test case that addresses exactly this. In the future, if someone accidentally modifies the operator to become >=, it will trigger incorrect behavior when certain queries are executed. 
This test case will catch that issue.I get that the test cases in /regress are likely reserved for actual bugs found and are designed to run quickly. Would it be a good idea to have a separate, more rigorous test suite that runs longer but provides better code coverage?Regards,Jon", "msg_date": "Wed, 12 Jun 2024 20:51:48 +0100", "msg_from": "J F <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Contributing test cases to improve coverage" }, { "msg_contents": "On Thu, Jun 13, 2024 at 1:22 AM J F <[email protected]> wrote:\n\n> > I am quite confused about what is the point of this. You have not\n> > found any actual bug, nor have you demonstrated that this test case\n> > could discover a likely future bug that wouldn't be detected another\n> > way. Moreover, it seems like the process would lead to some very\n> > large number of equally marginal test cases. We aren't likely to\n> > accept such a patch, because we are concerned about keeping down the\n> > runtime of the test suite.\n> >\n> > regards, tom lane\n>\n>\n> The point of this project is to improve the coverage of PostgreSQL’s\n> preexisting test suite. Writing a test suite to achieve close to 100%\n> coverage is challenging, but I have proposed a workflow to automate this\n> process.\n>\n\nWe monitor code coverage. But this is doing more than that. It will find\nout the places in code, which if changed, will cause bugs. That seems\nuseful to avoid refactoring mistakes, esp. when people rely on regression\ntests to tell whether their code changes are sane. But in PostgreSQL, we\nrely heavily on reviewers and committers to do that instead of tests.\nStill, the tests produced by this tool will help catch bugs that human eyes\ncan not. As Tom said, managing that battery of tests may not be worth it.\nBasically, I am flip-flopping on the usefulness of this effort.\n\n\n>\n> I assert that no test case in the regression test suite currently covers\n> the comparator in the expression rte->rtekind == RTE_SUBQUERY. I propose\n> adding a new test case that addresses exactly this. In the future, if\n> someone accidentally modifies the operator to become >=, it will trigger\n> incorrect behavior when certain queries are executed. This test case will\n> catch that issue.\n>\n\nUsually PostgreSQL developers know that rtekind is an enum so they are very\nvery unlikely to use anything other than == and !=. Such a change will be\ncaught by a reviewer. I think many of the tests that this tool will produce\nwill fall in this category.\n\n\n>\n> I get that the test cases in /regress are likely reserved for actual bugs\n> found and are designed to run quickly. Would it be a good idea to have a\n> separate, more rigorous test suite that runs longer but provides better\n> code coverage?\n>\n> There are practical difficulties like maintaining the expected outputs for\nsuch a large battery of tests. But maybe some external project could.\n\nBTW, have you considered perl tests, isolation tests etc. Tests in regress/\ndo not cover many subsystems e.g. replication.\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Jun 13, 2024 at 1:22 AM J F <[email protected]> wrote:> I am quite confused about what is the point of this.  You have not> found any actual bug, nor have you demonstrated that this test case> could discover a likely future bug that wouldn't be detected another> way.  Moreover, it seems like the process would lead to some very> large number of equally marginal test cases.  
We aren't likely to> accept such a patch, because we are concerned about keeping down the> runtime of the test suite.>>                        regards, tom laneThe point of this project is to improve the coverage of PostgreSQL’s preexisting test suite. Writing a test suite to achieve close to 100% coverage is challenging, but I have proposed a workflow to automate this process.We monitor code coverage. But this is doing more than that. It will find out the places in code, which if changed, will cause bugs. That seems useful to avoid refactoring mistakes, esp. when people rely on regression tests to tell whether their code changes are sane. But in PostgreSQL, we rely heavily on reviewers and committers to do that instead of tests. Still, the tests produced by this tool will help catch bugs that human eyes can not. As Tom said, managing that battery of tests may not be worth it. Basically, I am flip-flopping on the usefulness of this effort. I assert that no test case in the regression test suite currently covers the comparator in the expression rte->rtekind == RTE_SUBQUERY. I propose adding a new test case that addresses exactly this. In the future, if someone accidentally modifies the operator to become >=, it will trigger incorrect behavior when certain queries are executed. This test case will catch that issue.Usually PostgreSQL developers know that rtekind is an enum so they are very very unlikely to use anything other than == and !=. Such a change will be caught by a reviewer. I think many of the tests that this tool will produce will fall in this category. I get that the test cases in /regress are likely reserved for actual bugs found and are designed to run quickly. Would it be a good idea to have a separate, more rigorous test suite that runs longer but provides better code coverage?There are practical difficulties like maintaining the expected outputs for such a large battery of tests. But maybe some external project could.BTW, have you considered perl tests, isolation tests etc. Tests in regress/ do not cover many subsystems e.g. replication.-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 13 Jun 2024 19:29:44 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Contributing test cases to improve coverage" }, { "msg_contents": "On 12.06.24 18:44, J F wrote:\n> (a) The regression test suite is run by a parallel scheduler, with some \n> test cases dependent on previous test cases. If I just add my test case \n> as part of the parallel scheduler’s tests, it might not work, since \n> previous test cases in the scheduler might already create the same \n> table, for instance.\n\nYes, you need to take care of that somehow. Some test files put all \ntheir test objects in a schema. Others are careful to drop all test \nobjects at the end. Or you just have to pick non-conflicting names.\n\n> (b) How do I get my test cases reviewed and ultimately included in a \n> future release of PostgreSQL?\n\nPerhaps start with\n\nhttps://wiki.postgresql.org/wiki/Development_information\n\nand in particular\n\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:01:23 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Contributing test cases to improve coverage" } ]
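Stepping back, the workflow (a)-(e) from the start of this thread is a loop of mutate, build, run tests, fuzz the survivors, then reduce. The driver below is a bare-bones sketch of that loop, interpreting step (c) as "fuzz the mutants that the existing suite does not kill", which is the usual mutation-coverage reading. The `apply-mutation`, `sql-fuzzer` and `reduce-case` commands are hypothetical placeholders, not real tools; only the make / make check steps correspond to a standard PostgreSQL build.

```python
# Bare-bones sketch of the mutate -> build -> test -> fuzz -> reduce loop
# described at the top of this thread. "apply-mutation", "sql-fuzzer" and
# "reduce-case" are hypothetical placeholder commands, not real tools.
import pathlib
import subprocess


def run(cmd):
    return subprocess.run(cmd, shell=True).returncode == 0


def process_mutation(mutation_id, src_dir="postgres"):
    # (a) apply one mutation at the source code level
    if not run(f"apply-mutation --id {mutation_id} --src {src_dir}"):
        return None
    # (b) build the mutant and run the existing test suite against it;
    #     if the build fails or the suite already catches the mutant,
    #     there is no coverage gap to close here
    if not run(f"make -C {src_dir} -s && make -C {src_dir} check"):
        return None
    # (c) the mutant survived, so fuzz it with an SQL fuzzer
    if not run(f"sql-fuzzer --target {src_dir} --out crash.sql"):
        return None
    # (d) shrink the crashing or misbehaving query to something reviewable
    run("reduce-case crash.sql --out reduced.sql")
    # (e) the reduced case is the candidate regression test
    return pathlib.Path("reduced.sql").read_text()


if __name__ == "__main__":
    case = process_mutation("setrefs-502-5")
    if case:
        print("candidate regression test:\n" + case)
```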
[ { "msg_contents": "Hey Postgres team,\n\nI have been working on adding support for columnar format export to\nPostgres to speed up analytics queries.\nI've created an extension that achieves this functionality here\n<https://github.com/sushrut141/pg_analytica>.\n\nI\"m looking to improve the performance of this extension to enable drop-in\nanalytics support for Postgres. Some immediate improvements I have in mind\nare:\n - Reduce memory consumption when exporting table data to columnar format\n - Create a native planner / execution hook that can read columnar data\nwith vectorised operations.\n\nIt would be very helpful if you could take a look and suggest improvements\nto the extension.\nHopefully, this extension can be shipped by default with postgres at some\npoint in the future.\n\nThanks,\nSushrut\n\nHey Postgres team,I have been working on adding support for columnar format export to Postgres to speed up analytics queries.I've created an extension that achieves this functionality here.I\"m looking to improve the performance of this extension to enable drop-in analytics support for Postgres. Some immediate improvements I have in mind are: - Reduce memory consumption when exporting table data to columnar format - Create a native planner / execution hook that can read columnar data with vectorised operations.It would be very helpful if you could take a look and suggest improvements to the extension.Hopefully, this extension can be shipped by default with postgres at some point in the future.Thanks,Sushrut", "msg_date": "Wed, 12 Jun 2024 22:26:30 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Columnar format export in Postgres" }, { "msg_contents": "Em qua., 12 de jun. de 2024 às 13:56, Sushrut Shivaswamy <\[email protected]> escreveu:\n\n> Hey Postgres team,\n>\n> I have been working on adding support for columnar format export to\n> Postgres to speed up analytics queries.\n> I've created an extension that achieves this functionality here\n> <https://github.com/sushrut141/pg_analytica>.\n>\n> I\"m looking to improve the performance of this extension to enable drop-in\n> analytics support for Postgres. Some immediate improvements I have in mind\n> are:\n> - Reduce memory consumption when exporting table data to columnar format\n> - Create a native planner / execution hook that can read columnar data\n> with vectorised operations.\n>\n> It would be very helpful if you could take a look and suggest improvements\n> to the extension.\n> Hopefully, this extension can be shipped by default with postgres at some\n> point in the future.\n>\nIf you want to have any hope, the license must be BSD.\nGPL is incompatible.\n\nbest regards,\nRanier Vilela\n\nEm qua., 12 de jun. de 2024 às 13:56, Sushrut Shivaswamy <[email protected]> escreveu:Hey Postgres team,I have been working on adding support for columnar format export to Postgres to speed up analytics queries.I've created an extension that achieves this functionality here.I\"m looking to improve the performance of this extension to enable drop-in analytics support for Postgres. 
Some immediate improvements I have in mind are: - Reduce memory consumption when exporting table data to columnar format - Create a native planner / execution hook that can read columnar data with vectorised operations.It would be very helpful if you could take a look and suggest improvements to the extension.Hopefully, this extension can be shipped by default with postgres at some point in the future.If you want to have any hope, the license must be BSD.GPL is incompatible.best regards,Ranier Vilela", "msg_date": "Wed, 12 Jun 2024 14:18:57 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Columnar format export in Postgres" }, { "msg_contents": "Hi,\n\nIn <CAH5mb9_YW76_0xBU2T4C7HF33b+b2w3QBtV50_ZZMy8SA8ChjA@mail.gmail.com>\n \"Columnar format export in Postgres\" on Wed, 12 Jun 2024 22:26:30 +0530,\n Sushrut Shivaswamy <[email protected]> wrote:\n\n> I have been working on adding support for columnar format export to\n> Postgres to speed up analytics queries.\n\nFYI: I'm proposing making COPY format extendable:\n\n* https://www.postgresql.org/message-id/flat/[email protected]\n* https://commitfest.postgresql.org/48/4681/\n\nIf it's accepted, we can implement extensions for COPY\nFORMAT arrow and COPY FORMAT parquet. With these extensions,\nwe can use file_fdw to read Apache Arrow and Apache Parquet\nfile because file_fdw is based on COPY FROM:\nhttps://www.postgresql.org/docs/current/file-fdw.html\n\nIf you're interested in this proposal, you can review the\nlatest proposed patch set to proceed this proposal.\n\n\n> - Reduce memory consumption when exporting table data to columnar format\n\nThe above COPY support will help this.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 13 Jun 2024 09:21:34 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Columnar format export in Postgres" }, { "msg_contents": "Thanks for the response.\n\nI had considered using COPY TO to export columnar data but gave up on it\nsince the formats weren't extensible.\nIt's great to see that you are making it extensible.\n\nI'm still going through the thread of comments on your patch but I have\nsome early thoughts about using it for columnar data export.\n\n - To maintain data freshness there would need to be a way to schedule\nexports using `COPY TO 'parquet`` periodically\n - pg_analytica has the scheduling logic, once available COPY TO can\nbe used to export the data instead of reading table in chunks being used\ncurrently.\n\n - To facilitate efficient querying it would help to export multiple\nparquet files for the table instead of a single file.\n Having multiple files allows queries to skip chunks if the key range in\nthe chunk does not match query filter criteria.\n Even within a chunk it would help to be able to configure the size of a\nrow group.\n - I'm not sure how these parameters will be exposed within `COPY TO`.\n Or maybe the extension implementing the `COPY TO` handler will\nallow this configuration?\n\n - Regarding using file_fdw to read Apache Arrow and Apache Parquet file\nbecause file_fdw is based on COPY FROM:\n - I'm not too clear on this. 
file_fdw seems to allow creating a table\nfrom data on disk exported using COPY TO.\n But is the newly created table still using the data on disk(maybe in\ncolumnar format or csv) or is it just reading that data to create a row\nbased table.\n I'm not aware of any capability in the postgres planner to read\ncolumnar files currently without using an extension like parquet_fdw.\n - For your usecase how do you plan to query the arrow / parquet\ndata?\n\nThanks for the response.I had considered using COPY TO to export columnar data but gave up on it since the formats weren't extensible.It's great to see that you are making it extensible.I'm still going through the thread of comments on your patch but I have some early thoughts about using it for columnar data export. - To maintain data freshness there would need to be a way to schedule exports using `COPY TO 'parquet`` periodically      - pg_analytica has the scheduling logic, once available COPY TO can be used to export the data instead of reading table in chunks being used currently. - To facilitate efficient querying it would help to export multiple parquet files for the table instead of a single file.   Having multiple files allows queries to skip chunks if the key range in the chunk does not match query filter criteria.   Even within a chunk it would help to be able to configure the size of a row group.      - I'm not sure how these parameters will be exposed within `COPY TO`.         Or maybe the extension implementing the `COPY TO` handler will allow this configuration? - Regarding using file_fdw to read Apache Arrow and Apache Parquet file because file_fdw is based on COPY FROM:     - I'm not too clear on this. file_fdw seems to allow creating a table from  data on disk exported using COPY TO.       But is the newly created table still using the data on disk(maybe in columnar format or csv) or is it just reading that data to create a row based table.       I'm not aware of any capability in the postgres planner to read columnar files currently without using an extension like parquet_fdw.        - For your usecase how do you plan to query the arrow / parquet data?", "msg_date": "Thu, 13 Jun 2024 22:30:24 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Columnar format export in Postgres" }, { "msg_contents": ">\n> If you want to have any hope, the license must be BSD.\n> GPL is incompatible.\n\n\nAck, will update the license to BSD. Thanks\n\nOn Wed, Jun 12, 2024 at 10:49 PM Ranier Vilela <[email protected]> wrote:\n\n> Em qua., 12 de jun. de 2024 às 13:56, Sushrut Shivaswamy <\n> [email protected]> escreveu:\n>\n>> Hey Postgres team,\n>>\n>> I have been working on adding support for columnar format export to\n>> Postgres to speed up analytics queries.\n>> I've created an extension that achieves this functionality here\n>> <https://github.com/sushrut141/pg_analytica>.\n>>\n>> I\"m looking to improve the performance of this extension to enable\n>> drop-in analytics support for Postgres. 
Some immediate improvements I have\n>> in mind are:\n>> - Reduce memory consumption when exporting table data to columnar format\n>> - Create a native planner / execution hook that can read columnar data\n>> with vectorised operations.\n>>\n>> It would be very helpful if you could take a look and suggest\n>> improvements to the extension.\n>> Hopefully, this extension can be shipped by default with postgres at some\n>> point in the future.\n>>\n> If you want to have any hope, the license must be BSD.\n> GPL is incompatible.\n>\n> best regards,\n> Ranier Vilela\n>\n\nIf you want to have any hope, the license must be BSD.GPL is incompatible.Ack, will update the license to BSD. Thanks On Wed, Jun 12, 2024 at 10:49 PM Ranier Vilela <[email protected]> wrote:Em qua., 12 de jun. de 2024 às 13:56, Sushrut Shivaswamy <[email protected]> escreveu:Hey Postgres team,I have been working on adding support for columnar format export to Postgres to speed up analytics queries.I've created an extension that achieves this functionality here.I\"m looking to improve the performance of this extension to enable drop-in analytics support for Postgres. Some immediate improvements I have in mind are: - Reduce memory consumption when exporting table data to columnar format - Create a native planner / execution hook that can read columnar data with vectorised operations.It would be very helpful if you could take a look and suggest improvements to the extension.Hopefully, this extension can be shipped by default with postgres at some point in the future.If you want to have any hope, the license must be BSD.GPL is incompatible.best regards,Ranier Vilela", "msg_date": "Thu, 13 Jun 2024 22:31:43 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Columnar format export in Postgres" }, { "msg_contents": "Hi,\n\nIn <CAH5mb98Dq7ssrQq9n5yW3G1YznH=Q7VvOZ20uhG7Vxg33ZBLDg@mail.gmail.com>\n \"Re: Columnar format export in Postgres\" on Thu, 13 Jun 2024 22:30:24 +0530,\n Sushrut Shivaswamy <[email protected]> wrote:\n\n> - To facilitate efficient querying it would help to export multiple\n> parquet files for the table instead of a single file.\n> Having multiple files allows queries to skip chunks if the key range in\n> the chunk does not match query filter criteria.\n> Even within a chunk it would help to be able to configure the size of a\n> row group.\n> - I'm not sure how these parameters will be exposed within `COPY TO`.\n> Or maybe the extension implementing the `COPY TO` handler will\n> allow this configuration?\n\nYes. But adding support for custom COPY TO options is\nout-of-scope in the first version. We will focus on only the\nminimal features in the first version. We can improve it\nlater based on use-cases.\n\nSee also: https://www.postgresql.org/message-id/20240131.141122.279551156957581322.kou%40clear-code.com\n\n> - Regarding using file_fdw to read Apache Arrow and Apache Parquet file\n> because file_fdw is based on COPY FROM:\n> - I'm not too clear on this. file_fdw seems to allow creating a table\n> from data on disk exported using COPY TO.\n\nCorrect.\n\n> But is the newly created table still using the data on disk(maybe in\n> columnar format or csv) or is it just reading that data to create a row\n> based table.\n\nThe former.\n\n> I'm not aware of any capability in the postgres planner to read\n> columnar files currently without using an extension like parquet_fdw.\n\nCorrect. 
We still need another approach such as parquet_fdw\nwith the COPY format extensible feature to optimize query\nagainst Apache Parquet data. file_fdw can just read Apache\nParquet data by SELECT. Sorry for confusing you.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Sun, 16 Jun 2024 06:32:20 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Columnar format export in Postgres" } ]
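Several of the concerns in this thread (keeping memory bounded during export, writing multiple files so readers can skip chunks, and controlling Parquet row-group size) can be prototyped client-side with a server-side cursor and pyarrow. The sketch below is only an illustration of those ideas: the table, columns, chunk size and file naming are made up, and it says nothing about how pg_analytica or the proposed extensible COPY FORMAT would implement the same thing.

```python
# Rough client-side sketch of a chunked columnar export, illustrating the
# memory, multiple-file and row-group points discussed above. The table,
# columns, sizes and file names are made up for illustration.
import psycopg2
import pyarrow as pa
import pyarrow.parquet as pq

CHUNK_ROWS = 100_000       # rows fetched per round trip (bounds client memory)
ROW_GROUP_ROWS = 50_000    # row-group size inside each Parquet file

conn = psycopg2.connect("dbname=analytics")
cur = conn.cursor(name="export_cur")   # named cursor => rows are streamed
cur.execute("SELECT id, event_time, payload FROM events ORDER BY id")

chunk_no = 0
while True:
    rows = cur.fetchmany(CHUNK_ROWS)
    if not rows:
        break
    ids, times, payloads = zip(*rows)
    table = pa.table({
        "id": pa.array(ids, type=pa.int64()),
        "event_time": pa.array(times),            # pyarrow infers timestamp
        "payload": pa.array(payloads, type=pa.string()),
    })
    # One file per chunk lets a reader skip whole files by id range later.
    pq.write_table(table, f"events_{chunk_no:05d}.parquet",
                   row_group_size=ROW_GROUP_ROWS)
    chunk_no += 1

cur.close()
conn.close()
```

A server-side COPY-based implementation would push the chunking and encoding into the backend, but the knobs that matter here (files per table, rows per row group) are the same ones discussed in the thread.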